| id (string, 3–9 chars) | source (1 class) | version (1 class) | text (string, 1.54k–298k chars) | added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25) | created (date, 1-01-01 00:00:00 to 2024-07-31 00:00:00) | metadata (dict) |
|---|---|---|---|---|---|---|
| 500590 | pes2o/s2orc | v3-fos-license |
Non-A Hepatitis B Virus Genotypes in Antenatal Clinics, United Kingdom
Serostatus for viral e antigen is no longer accurate for inferring potential infectivity of pregnant virus carriers.
Hepatitis B virus (HBV) infection remains a major health problem worldwide, and mother-to-infant transmission represents one of the most efficient ways of maintaining hepatitis B carriage in any population. Intervention to prevent this route of infection is a key part of the global program of hepatitis B control. Although there are 3 routes of transmission of HBV from infected mothers to their infants, including transplacental and postnatal, most transmission is likely to occur perinatally at the time of labor and delivery (1). HBV e antigen (HBeAg) in maternal serum is associated with high infectivity; in the absence of intervention after delivery, including both passive and active immunization, 90% of babies born to carrier mothers whose serum contains HBeAg will become chronically infected with HBV (2,3). Babies born to mothers whose serum contains antibody to HBeAg (anti-HBe) become infected far less frequently (4). However, babies who are infected may be at risk of developing fulminant hepatitis B (2).
The prevalence of HBV infection in the United Kingdom is low (0.4%) (5). In the late 1990s, the World Health Organization (WHO) recommended introduction of global universal hepatitis B immunization programs (6); by March 2002, a total of 151 countries, including 34 in Europe, had introduced HBV vaccine within their national immunization programs. However, current control of mother-to-infant HBV transmission in the United Kingdom is based on selective hepatitis B immunization of infants at risk. A recent WHO survey in Europe indicated that 8 other countries also used this approach (7). This requires routine antenatal screening for HBV infection (8,9), offered by 34 countries in Europe, with infants born to all hepatitis B-infected mothers being offered immediate postnatal active immunization with hepatitis B vaccine. In the United Kingdom, babies at highest risk for infection, those born to mothers whose serum does not contain anti-HBe, are offered additional passive immunization prophylaxis (10) with 200 IU of hepatitis B immunoglobulin (HBIg) within 24 hours of delivery. In this protocol, detection of anti-HBe is used to infer low infectivity.
Despite full prophylaxis for neonates, a small proportion of infants still become persistently infected (11)(12)(13) and are at risk of developing sequelae of chronic HBV infection and increasing the HBV reservoir. Although the causes for these failures could be many, we noted that in management of HBV-infected healthcare workers, inference of infectivity is now based upon plasma viral load for HBV rather than HBe markers. Until 2001 in the United Kingdom, fitness of an HBV-infected healthcare worker to undertake invasive procedures was predicated upon absence of HBeAg, a protocol that was found to enable transmission to patients (14). All transmissions involved viruses with precore premature stop codons, reflecting changes in viral genotypes caused by increased migration among UK healthcare workers. To investigate potential inappropriate categorization of infection risk through continued use of HBe markers in the antenatal setting, we undertook a study to relate HBe markers to HBV DNA levels and genotypes as predictors of potential infectivity.
Patients
As part of routine antenatal care, screening for HBV infection is offered to all pregnant mothers at the University College London Hospital. Pregnant HBV carriers who came to the hospital from September 1989 through September 2004 were identified. Serum samples from 114 HBV-infected mothers were available for further testing. Ethnic origin of mothers was not recorded.
Serologic Tests
Serum was separated and stored at -20°C in the Department of Virology, University College London Hospital, in accordance with laboratory policy to archive samples from carriers because of the long incubation time to clinical expression of HBV-related chronic liver disease. Samples would have been tested at initial collection for HBsAg by using a range of commercial assays and had reactivity confirmed by neutralization tests. Further testing for HBeAg, anti-HBe, antibody to hepatitis B virus core antigen (anti-HBc), and immunoglobulin M to HBc would have been performed routinely to determine the need for HBIg and confirm carrier status.
Quantitative PCR and Sequencing
Viral load for HBV DNA was measured as described (15). Briefly, HBV DNA was extracted from serum by using the QIAamp Virus BioRobot 9604 and QIAamp96 Virus Kit reagents (QIAGEN, Hilden, Germany) in accordance with the manufacturer's instructions. Twenty microliters of extract was used for input into a TaqMan-based assay for HBV DNA in an ABI Prism 7000 sequence detection system (Applied Biosystems, Foster City, CA, USA). Serum samples containing >100 IU/mL of viral DNA were selected for sequencing. Five microliters of extract was used for nested amplification of the entire virus surface antigen gene as described (16). We amplified precore and basal core promoter (BCP) regions of HBV DNA from anti-HBe-positive serum samples that contained >10⁴ IU/mL of HBV DNA. Briefly, 5 μL of extracted HBV DNA was amplified by using primers H4072, 5′-TCTTGCCCAAGGTCTTACAT-3′, and C outer (outer antisense), 5′-TCCCACCTTATGAGTCCAAG-3′, in the first round and primers H4072 (primer sequence as above) and C inner, 5′-CAGCGAGGCGAGGGAGTTCTTCTT-3′, in the second round. Conditions for amplification were the same for both rounds: 94°C for 4 min; 35 cycles at 94°C for 30 s, 55°C for 30 s, and 72°C for 1 min; and a final extension at 72°C for 5 min. Amplicons were sequenced with CEQ 8000 Genetic Analysis Systems (Beckman Coulter, Fullerton, CA, USA) in accordance with the manufacturer's instructions.
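The sample-selection rules above amount to a small decision procedure. The following Python sketch encodes them; the function name and structure are ours, purely illustrative, and not part of the study's workflow.

```python
def sequencing_targets(viral_load_iu_ml: float, anti_hbe_positive: bool) -> list[str]:
    """Regions selected for sequencing under the thresholds described
    in the Methods (illustrative helper, not the authors' software)."""
    targets = []
    if viral_load_iu_ml > 100:
        # Entire surface antigen gene sequenced for samples >100 IU/mL
        targets.append("surface_antigen_gene")
    if anti_hbe_positive and viral_load_iu_ml > 1e4:
        # Precore and BCP regions amplified only from anti-HBe-positive
        # samples containing >10^4 IU/mL HBV DNA
        targets.append("precore_and_BCP")
    return targets

print(sequencing_targets(5e4, anti_hbe_positive=True))
# ['surface_antigen_gene', 'precore_and_BCP']
```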
Generated nucleotide sequences were assembled and analyzed by using the SeqMan program (DNASTAR Inc., Madison, WI, USA). Alignments of nucleotide sequences were conducted to determine phylogenetic relationships between different isolates of HBV by using the MegAlign program (DNASTAR, Inc.). Data were used to construct a phylogenetic tree. Further analysis was also conducted with HBV STAR analysis, which assigns HBV genotypes by using a position-specific scoring matrix (www.vgb.ucl.ac.uk/star.shtml). Statistical significance was determined by using the Fisher exact test in the Arcus Quickstat package (www.camcode.com/arcus.htm).
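The paper ran the Fisher exact test in the Arcus Quickstat package; as a minimal sketch of the same test in Python (scipy is our substitution, and the 2 × 2 counts below are invented placeholders, not study data):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: genotype A vs non-A by maternal HBe status
table = [[3, 14],    # genotype A: HBeAg-positive, anti-HBe-positive
         [10, 81]]   # non-A:      HBeAg-positive, anti-HBe-positive
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")
```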
Results
Thirteen (11.4%) of 114 HBsAg-positive serum samples contained detectable HBeAg, 95 (83.3%) contained anti-HBe, and 6 (5.3%) did not contain HBeAg or anti-HBe. HBIg had been recommended only for babies born to 13 mothers whose serum contained HBeAg and to 6 mothers whose serum did not contain HBeAg or anti-HBe.
Discussion
This study investigated the continuing use in the United Kingdom of maternal HBeAg markers as predictors for enhanced neonatal HBIg prophylaxis in addition to neonatal vaccine. Among 51 countries in Europe, the United Kingdom, along with 14 others, has elected not to introduce routine neonatal HBV immunization at this time (7), opting instead for selective screening in antenatal clinics and targeted prophylaxis for infants born to infected mothers. This policy requires efficient HBV screening in clinics. We recognize that resources required for implementing this policy are not available in many countries. This policy has the advantage of enabling the addition of HBIg to prophylaxis for infants born to mothers with high infectivity, although how widespread this practice is in Europe is not known. HBIg is a costly intervention and is limited by availability. It is also a blood product that carries the risk for transmission of prion disease through inclusion of donations from persons with variant Creutzfeldt-Jakob disease in the plasma pool.
Serum samples from 114 hepatitis B carrier mothers were examined. Thirteen (11.4%) contained HBeAg, with concentrations of HBV DNA ranging from 7.8 × 10⁵ to 1 × 10⁸ IU/mL. All infants born to these mothers would have been at high risk of acquiring HBV and should have been offered active immunization with the HBV vaccine, as well as passive prophylaxis with HBIg. Six serum samples did not contain detectable HBeAg or anti-HBe. Although HBV DNA levels were low in all these samples, infants of these mothers would still have been given HBIg in accordance with guidelines, probably unnecessarily. Eighty-five of 95 serum samples with anti-HBe had HBV DNA levels <10⁴ IU/mL, and infants of these mothers would have received only active immunization. Ten (10.5%) of 95 serum samples had HBV DNA concentrations >10⁴ IU/mL, and 2 (2.1%) of these had high viral loads >10⁵ IU/mL (110,000 IU/mL and 8,690,000 IU/mL, respectively). The infants of these mothers would not have been offered HBIg on the basis of maternal anti-HBe as a marker of low infectivity. It is not known whether such infants are more likely to become infected, as they had received only vaccine prophylaxis.
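The UK triage rule discussed in this paragraph, vaccine for every infant of a carrier mother with HBIg added unless the mother is anti-HBe positive, can be written as a two-line decision function. This sketch is ours, for illustration only; it deliberately reproduces the policy's blind spot that the study criticizes.

```python
def uk_neonatal_prophylaxis(hbeag_positive: bool, anti_hbe_positive: bool) -> list[str]:
    """Prophylaxis implied by the pre-existing UK policy described above
    (illustrative encoding, not an official algorithm)."""
    prophylaxis = ["hepatitis B vaccine"]      # offered to all at-risk infants
    if not anti_hbe_positive:                  # HBeAg-positive or marker-negative mothers
        prophylaxis.append("HBIg 200 IU within 24 h")
    return prophylaxis

# The study's concern: an anti-HBe-positive mother with >10^4 IU/mL HBV DNA
# still falls into the vaccine-only branch.
print(uk_neonatal_prophylaxis(hbeag_positive=False, anti_hbe_positive=True))
```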
In the late 1970s in Japan, use of anti-HBe as a marker for low infectivity had been based on the observation (17) that anti-HBe-seropositive carriers were unlikely to transmit hepatitis B sexually or to their infants. This belief was verified by observations in genitourinary medicine clinics (18) and included in Department of Health policy in the United Kingdom that allowed hepatitis B carriers to conduct exposure-prone procedures if their serum did not contain HBeAg. In retrospect, it seems likely that at the time of promulgation of these guidelines, most infections with hepatitis B virus in the UK workforce would have been with genotype A. This Department of Health policy continued until description of several surgical transmissions from HBV-infected healthcare workers (14) and the recognition that some carriers whose serum contained anti-HBe had high viral loads. After this episode, estimation of plasma HBV DNA load was introduced to manage infected healthcare workers (19). Most of the surgeons involved had been born in HBV-endemic countries outside Europe and would have been infected by a genotype other than genotype A. All viruses transmitted had premature stop codons in the precore region, changes not commonly seen in genotype A infections. Dominance of non-genotype A infections among antenatal women in the United Kingdom, with genotype A accounting for only 15%, is explained by the recent observation that a net of ≈6,000 HBV carriers immigrate annually to the United Kingdom (5) from areas such as eastern Europe, where non-A viruses predominate. This immigration will undoubtedly change the clinical expression of HBV carriage in the United Kingdom and provides an example of reemergence of an old viral disease with different characteristics. Flaring (an increase in alanine aminotransferase levels caused by immune-mediated destruction of hepatocytes) and late escape (elevated levels of viral DNA) of virus from host-dependent modulation (innate or adaptive immune responses to infection with HBV) are seen more frequently with non-A viruses than with European genotype A HBV. All but 1 of the viruses in serum samples from the 10 anti-HBe carrier mothers who had high viral loads were non-A, and all carried changes associated with enhanced virus replication. Five had changes in the precore region, 2 had changes in the BCP, and 3 had changes in both regions.

Figure. Box and whisker plots of hepatitis B virus (HBV) load in 3 groups of mothers whose serum contained hepatitis B virus e antigen (HBeAg), antibody to HBeAg (anti-HBe), or neither of these markers (e Neg). Boxes are middle quartiles, horizontal lines are medians, whiskers are ranges, and dots represent 10 anti-HBe-seropositive mothers whose serum contained >10⁴ IU/mL HBV DNA. Thirty-three anti-HBe-seropositive mothers and 1 mother whose serum did not contain either marker did not have detectable HBV DNA (<50 IU/mL).
BCP mutations at nucleotide positions 1762/1764 and precore mutation G1896A, which results in a premature stop at codon 28, reduce or prevent expression of HBeAg. Both mutations are likely the result of virus evolution and selection of the fittest strains (20) during host immune responses. BCP changes result in decreased transcription of precore/core mRNA, reduced secretion of HBeAg (21), and enhanced virus production in vitro (22,23). These changes have been detected more often in viruses with genotypes A and C than in those with genotypes B, D, and E (24). However, in our study, BCP mutations were seen in viruses with genotypes A, C, D, and E. These mutations are thought to arise before precore changes (25). The premature stop precore mutation is restricted to HBV genotypes containing a thymidine at nucleotide position 1858, which is required for stabilizing the stem-loop structure (26). This mutation, which is found in viruses with genotypes B, D, E, and G and some strains with genotypes C and A (27), explains the high prevalence of premature stop variants in Asia and the Mediterranean region, where the predominant genotypes are B, C, and D, and their previous low prevalence in the United Kingdom. Our study demonstrates changing phenotypes of virus infections caused by population movement. These changes are unlikely to be limited to the United Kingdom and have wider implications for infectious diseases globally.
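As a minimal sketch of the two sequence checks implied here (codon numbering follows the text's convention; the helpers are ours and simplified, since real analyses work on aligned reference coordinates):

```python
def has_precore_stop(precore_orf: str) -> bool:
    """True if codon 28 of the precore ORF is the TAG stop created by
    G1896A (TGG, Trp -> TAG, stop)."""
    codon_28 = precore_orf[27 * 3 : 27 * 3 + 3].upper()
    return codon_28 == "TAG"

def stop_variant_permissive(nt_1858: str) -> bool:
    """The stop variant is largely restricted to genotypes carrying T at
    position 1858, which pairs with position 1896 in the stem loop."""
    return nt_1858.upper() == "T"
```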
Our study demonstrates that reliance on HBV serologic markers alone leads to misclassification of HBV carrier mothers. A proportion of presumed low-infectivity carriers had high levels of virus in plasma, but their infants would not have received optimal enhanced prophylaxis with postnatal HBIg. This policy could allow avoidable breakthrough infections in infants. In view of the influx of immigrant HBV carriers into the United Kingdom, a new HBV antenatal screening strategy is needed to identify and offer adequate protection to infants at risk of acquiring HBV infection. Quantification of HBV DNA is a more objective, direct measure of potential infectivity and brings this procedure in line with management of HBV-infected healthcare workers (19). However, the cut-off level of HBV DNA needed to define potential infectivity has yet to be established. Finally, given the emerging pattern of an overall increase in HBV carriage in the United Kingdom, consideration should once again be given to a national program of immunization of infants.
Dr Dervisevic is a clinical virologist at University College London Hospitals, London. His research interests include viral hepatitis and other bloodborne viruses.
| 2014-10-01T00:00:00.000Z | 2007-11-01T00:00:00.000 |
{
"year": 2007,
"sha1": "b82f52530058f643ea5a00323323a1353327f583",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3201/eid1311.070578",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b82f52530058f643ea5a00323323a1353327f583",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
| 218938123 | pes2o/s2orc | v3-fos-license |
A Comparative Evaluation of Two Commonly Used GP Solvents on Different Epoxy Resin-based Sealers: An In Vitro Study
ABSTRACT Aim: This study evaluated epoxy resin-based sealers after their final set, immersed in Endosolv-R or xylene for 1–2 minutes, to assess softening for easier mechanical removal. Materials and methods: Sixty Teflon molds were divided into three groups of 20 samples each for the three commercially available sealers, i.e., AH 26, AH Plus, and Adseal. The sealers were placed in the molds after manipulation according to the manufacturers' instructions and were allowed to harden for 2 weeks at 37°C in 100% humidity. From the 20 set specimens of each sealer, two subgroups of 10 samples each were formed, A (xylene) and B (Endosolv-R), according to the solvent in which they were immersed for 1 and then 2 minutes. The data obtained were subjected to Mauchly's test, one-way ANOVA, and two-way ANOVA for analysis. Results: For all sealers immersed in solvents, there was a significant reduction in mean Vickers hardness as time increased. There was a significant difference in initial hardness between the sealers, with AH Plus showing the highest value, followed by AH 26, and Adseal the lowest. AH Plus and Adseal were significantly softened by xylene after 2 minutes relative to their initial microhardness (p < 0.001); the least effect was seen on AH 26. After 2 minutes, Endosolv-R significantly softened all three sealers relative to their initial microhardness (p < 0.001). Conclusion: Endosolv-R was more effective in softening epoxy resin-based sealers than xylene after 2 minutes of exposure. How to cite this article: Tyagi S, Choudhary E, Choudhary A, et al. A Comparative Evaluation of Two Commonly Used GP Solvents on Different Epoxy Resin-based Sealers: An In Vitro Study. Int J Clin Pediatr Dent 2020;13(1):35–37.
Introduction
The success rate of root canal treatment ranges from 86 to 93%, and the most common cause of failure is microbial infection of the root canal system. 1 Root canal-treated teeth can be retreated by either orthograde or retrograde retreatment. There are various reasons for endodontic failure, such as missed canals, inappropriate cleaning, under-/overobturation, an inefficient hermetic seal, and bacterial microflora in the root canal. 2 When resin-based sealers are used, retreatment and removal of the gutta percha (GP) is not easy. Therefore, different solvents can be used along with the mechanical method to avoid complications like alteration of the original canal shape, canal straightening, or perforations. [3][4][5] This study was designed to evaluate two GP solvents on three commercially procured epoxy resin-based sealers.
Specimen Preparation
Sixty Teflon disks, 12 mm in diameter and 2 mm in height, each with a central well 1.5 mm deep and 6.0 mm in diameter, were fabricated. The molds were divided into three groups of 20 samples each. The sealers were placed in the molds after manipulation according to the manufacturers' instructions. They were allowed to harden for 14 days at 37°C in 100% humidity. From the 20 set specimens of each sealer, two subgroups of 10 samples each were formed, A (xylene) and B (Endosolv-R), according to the solvent in which they were immersed for 1 and then 2 minutes.
Measuring the Softening of the Sealer Surface
The Mitutoyo microhardness testing machine with a Vickers indenter was used to determine the Vickers microhardness (HV) of all specimens. Each specimen was subjected to a load of 10 g for 10 seconds at three different, predetermined points, and the indentations were measured under the microscope at 100× magnification. The mean was calculated for each sample.
Specimens were immersed in the respective solvents for 60 seconds, air-dried after retrieval, and reassessed for microhardness. Each specimen was then immersed in its corresponding solvent for a further 1 minute.
A total of 10 specimens from every group were thus assessed for microhardness after 1 and 2 minutes of solvent immersion. Data were collected and tabulated to obtain means and standard deviations.
Two-way analysis of variance (ANOVA) was performed to assess mean hardness across the groups. Data were also subjected to one-way ANOVA, followed by pairwise comparison using Tukey's post hoc analysis.
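A minimal sketch of this analysis pipeline in Python, assuming scipy and statsmodels in place of whichever statistics package the authors actually used; the hardness readings are invented placeholders, not study data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder Vickers hardness (HV) readings per sealer group
ah26    = np.array([18.2, 17.9, 18.5, 18.1])
ah_plus = np.array([22.4, 23.0, 22.1, 22.8])
adseal  = np.array([15.1, 14.8, 15.4, 15.0])

f_stat, p = f_oneway(ah26, ah_plus, adseal)   # one-way ANOVA across sealers
print(f"F = {f_stat:.2f}, p = {p:.4f}")

values = np.concatenate([ah26, ah_plus, adseal])
groups = ["AH 26"] * 4 + ["AH Plus"] * 4 + ["Adseal"] * 4
print(pairwise_tukeyhsd(values, groups))      # Tukey's post hoc pairwise test
```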
Results
With time, hardness reduced considerably for all the sealers and solvents. Tables 1 and 2 show mean and standard deviation of Vickers microhardness of root canal sealers immersed in the solvents for 1 and 2 minutes.
The highest reduction in mean hardness (HV) was seen in the AH Plus sealer compared with the other two, and it was most evident with Endosolv-R. Among the three groups, subgroups A and B showed a considerable difference in mean hardness after 1 and 2 minutes, but in subgroup B the result remained constant after 1 minute. After 2 minutes, the mean hardness (HV) of group I differed considerably from that of groups II and III, while the means of groups II and III showed no variation.
After 60 seconds, Endosolv-R was most effective in dissolving Adseal, then AH 26, and least effective against AH Plus, whereas xylene was most effective in dissolving AH Plus, then Adseal, and least effective against AH 26. After 2 minutes, Endosolv-R was most effective against AH Plus, followed by Adseal and AH 26, while xylene was most effective against AH Plus (79.1%), followed by Adseal (65.1%), and least effective against AH 26 (7.6%).
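The percentages quoted above are, as far as we can tell, percent reductions relative to initial microhardness; a one-line helper makes the arithmetic explicit (the HV values in the example are invented):

```python
def percent_softening(hv_initial: float, hv_after: float) -> float:
    """Percent reduction in Vickers hardness after solvent exposure."""
    return 100.0 * (hv_initial - hv_after) / hv_initial

# e.g. a sealer dropping from HV 22.0 to 4.6 has softened by ~79.1%
print(f"{percent_softening(22.0, 4.6):.1f}%")
```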
Discussion
In any retreatment case, complete removal of the sealer and the gutta percha is crucial in order to allow antimicrobial agents, disinfectants, and medicaments to reach the canal and thereby ensure success. 6,7 Whenever resin-based sealers are used, retreatment and removal of the gutta percha becomes difficult. Therefore, different solvents can be used along with the mechanical method to avoid complications like alteration of the original canal shape, canal straightening, or perforations.
Xylene, chloroform, pine needle oil, eucalyptol oil, turpentine oil, etc., are commonly used solvents in nonsurgical retreatment cases for easier removal of root canal fillings.
About 60-70% of the gutta percha can be removed easily within 2-3 minutes, but some firmly adhered remnants of sealer and gutta percha attached to the canal dentin walls are difficult to remove; therefore, alongside solvents, various mechanical methods have been well documented, such as files, Gates Glidden drills, heated pluggers, and ultrasonics, for complete removal of root canal fillings. 5,8,9 Also, the "wicking action" of solvents suggested by Ruddle is most effective in removal of the gutta percha in retreatment cases. 10 In this study, resin-based sealers were used because they adhere firmly to dentin walls and are more difficult to remove than nonresin-based sealers. 11 Various authors have noted that these resin-based sealers are biocompatible, radiopaque, and adhere firmly to both gutta percha and dentinal walls; they are therefore difficult to remove in retreatment cases. [12][13][14] The study compared three sealers after immersion in two different solvents (Endosolv-R and xylene) for 1 and 2 minutes. It was found that Endosolv-R was the most effective softener for all three sealers in less time. 15 Owing to their hydrophobic properties, Endosolv-R and xylene have the capacity to break through the 3D lattice structure that epoxy resin-based sealers form after the setting reaction. 15 The combined use of Endosolv-R with rotary files for removal of the gutta percha from the apical third in less time has been well reported by various authors. 6,16 An evident reduction in microhardness of the enamel and the dentin, along with a reduction in the binding force of resin-based endodontic sealers, has also been noted with the use of xylene. 17,18 The U.S. Food and Drug Administration has barred chloroform for its carcinogenicity and cytotoxicity. 17,19 An endodontic solvent like orange oil is popular because of its safe and biocompatible nature, even though some authors have suggested orange oil to be less effective than chloroform and xylene. 20 Xylene is an aromatic hydrocarbon with the capability to dissolve the gutta percha and the sealer; when used along with mechanical methods, it can facilitate easy removal of filling materials. 16 Endosolv-R, which contains formamide (66.5 g) and phenylethyl alcohol (33.5 g), is more effective for removal of resin-based sealers. 21 The Occupational Safety and Health Administration has stated the adverse effects of xylene, which include hypersensitivity of the mucous membrane and the eye; when ingested it causes gastrointestinal discomfort, when inhaled it causes air-space hemorrhages and chemical pneumonitis, and if extruded periapically it causes a cytotoxic reaction. 22 Chutich et al. reported that the quantity of xylene that leaches out of the apical foramen is far below the permissible dose. 23 The biological acceptability of Endosolv-R is questionable, as it is known to have fetotoxic properties. 24

Conclusion

It was concluded that Endosolv-R was more effective for softening the epoxy resin-based sealers than xylene after 2 minutes of exposure. Further studies are required with long-term trials and varying parameters simulating clinical conditions.
| 2020-05-21T00:05:04.398Z | 2020-01-01T00:00:00.000 |
{
"year": 2020,
"sha1": "323856d8b253e069827a7a16aa8a04d4ea3de723",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5005/jp-journals-10005-1741",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1d800a155a61ebc661f6c45e0cb53b9733cc87b8",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
| 216328851 | pes2o/s2orc | v3-fos-license |
Developing the Knowledge Workers Model for Core Competencies Management in Iraqi Higher Education Institutions
This paper aims at developing the knowledge workers model for core competencies management by identifying the dimensions of knowledge workers that are possibly related to core competencies management. The primary motivation for the current research lies in the gaps in the scientific and experimental studies in this scope, as well as in the purpose of increasing knowledge in this field. This research contributes to the development of a knowledge workers model, based on its dimensions, for core competencies management in Iraqi higher education institutions. The research used a quantitative approach: data were collected by questionnaire from a research community represented by a sample of Iraqi higher education institutions, yielding 256 returned questionnaires (about 80%), distributed individually. Spearman's correlation coefficient and the regression coefficient were computed using SPSS v24, and knowledge-based institutional theory was relied on in explaining the results. The empirical analysis used Cronbach's alpha to test the consistency and validity of the scales; the questionnaire showed high consistency and validity. The results largely supported the research model, indicating that knowledge workers have a good correlation and influence relationship with core competencies management. Hence, this research could be of great use to researchers, academics, professionals, and policy makers.
INTRODUCTION
Relying on the development of business knowledge alone is no longer enough; the basic skills and employee competencies that depend on invention and human direction also play a crucial role. This applies to those privileged individuals who are crucial for organizational development, i.e., knowledge workers (Igielski, 2017: 11). The rise in the number of knowledge workers is the distinctive type of work of the 21st century, which calls for a new management approach with a high level of work autonomy for knowledge workers (Farkas & Torok, 2011: 76). Most previously published studies of knowledge workers and their abilities were either theoretical or case studies, highlighting that there is little or no research in the literature identifying abilities that lead to strategies for more productive workers versus the rest of the staff (Shukla & Sethi, 2004: 1-2). Despite the competencies of knowledge workers, organizations face the challenge of creating a knowledge workers model for managing their organizations' capabilities; this challenge, on the other hand, is an inevitable key to success. Competencies management, full commitment to work, and the new management systems require a focus on the knowledge workers model and on providing knowledge workers with the opportunity to achieve self-realization and development (Igielski, 2017: 12-15).
Previous researchers have argued that basic competencies can apply to knowledge workers and that social and complementary competencies are of critical importance, especially for knowledge workers (Farkas & Torok, 2011: 71-72). The knowledge workers basic-ability approach, however, ignores core organizational competencies. Developments based on resources and company strategy have specified the core human competency of knowledge workers as a specific resource of the company (Shukla & Sethi, 2004: 3). We address a gap in the research on knowledge workers for core competencies management in organizations. Thus, we present a literature review of the core competencies set, based on the approach of developing the knowledge workers model.
Our aim in this research is to develop a knowledge workers model related to core competencies management, specifically by identifying the knowledge workers dimensions that may be associated with core competencies, in accordance with the observations made by previous studies on organizations. Such a perspective directs attention to several problematic questions: Can a knowledge workers model for core competencies management be developed? How should higher education institutions invest in knowledge workers for core competency management? Will they succeed in doing so? This paper is a study of the conceptual problems of the nature of the relationship between knowledge workers and core competencies management, and of the cognitive interrelation between research on these two subjects.
The labor force, especially, can no longer be understood only as a factor of production but must be valued as the strategic basic competence of any institution (Shukla & Sethi, 2004: 1). Some organizations work on aligning the components of knowledge workers identified by psychologists with the elements of communities of practice, of which increasing core competencies is one component (Chu & Khosla, 2012: 2391). The self-directed knowledge worker should cultivate many core competencies, such as thinking skills and continuous learning (Awad & Ghaziri, 2007: 438). However, little is known about the knowledge workers dimensions associated with core competencies management in institutions; the primary motivation for our current research lies in those gaps ignored by previous literature. Increasing evidence suggests that most previous studies neglected the development of a knowledge workers model for core competencies management. Therefore, from a theoretical perspective, an accurate understanding of the characteristics or dimensions of knowledge workers for core competencies management is a necessary and important issue, and even the rare prior experimental studies on harnessing knowledge workers for core competencies management were built in the context of core commercial activities. We therefore developed a conceptual model, induced from the literature, as the knowledge workers model for core competency management, considering the concepts and dimensions of knowledge workers and core competencies in the organization, to verify their relationship and their support for core competencies management. Based on the suggested model, we propose that knowledge workers, through (knowledge acquisition, intellectual capabilities, challenge and achievement, and excellence), are fundamentally capable of core competencies management in their institutions.
Nevertheless, this theoretical research contributes to the development of the knowledge workers model, based on its dimensions, for core competencies management in Iraqi higher education institutions, besides giving experimental evidence to support the relationship between knowledge workers and core competencies management. Accordingly, the current research includes, in the following sections, a literature review, methodology, testing of the research model, conclusions, and recommendations.
Knowledge-Based Institutional Theory
This theory is basically a modern expansion of the resource-based institutional view, a systematic management theory formulated by professor Birger Wernerfelt in 1984 (Wernerfelt, 1995). The theory attributes performance differences between organizations to the fact that an organization with superb performance possesses and controls resources, and thereby gains competitive advantages, that are not available to others; it also views the organization as a bundle of resources (Gaya et al., 2013: 2050). The main idea behind knowledge-based institutional theory is that an organization can continue to exist owing to its ability to manage its knowledge resources at a higher level of efficiency; in another sense, organizations are no more than community entities that store and use knowledge, competencies, and the important and vital abilities needed for the survival, growth, and success of the institution. Hence, success or failure is determined by the ability of the organization to discover, acquire, and absorb knowledge resources in the local and external environment (Miles, 2012: 186). Accordingly, knowledge-based institutional theory explains the importance of the research variables (knowledge workers and core competencies management), which are the field of this study, and their relation to each other. Hence the importance of a specialist administration for the knowledge of the organization, one that seeks to fully exploit its knowledge resources and core competencies.
Knowledge Workers Concept
Business organizations have supported the compound and complex processes that apply knowledge and provide direct core support, whether through the formation of new knowledge (research and development) or the preparation of knowledge workers through training in various fields. Liu and Chai (2011: 3) pointed out that knowledge workers are a new group formed in the age of the knowledge economy, on which many researchers, such as Peter Drucker and Arthur Andersen, have conducted studies. They are the individuals who develop, use, and circulate knowledge (Stair & Reynolds, 2010: 444). In the dictionary, they are those who collect, analyze, and deal with information for the production of goods and services (Mishra, 2011: 4). From a similar perspective, Liu and Chai (2011: 3) explained that knowledge workers are individuals who use professional knowledge acquired through their own experience and outstanding achievements, skills associated with production and management, and knowledge-application activities in a particular project; generally, they have strong knowledge, capability, or talent. Kreitner and Kinicki (2007: 12), on the other hand, see knowledge workers as individuals who add value by using their brains rather than exhausting themselves with muscular effort. Or they are the individuals who develop, use, and circulate knowledge to add value; they are usually professionals in science, engineering, or business, work in offices, and belong to professional organizations (Stair & Reynolds, 2010: 444). Lei and Lan (2013: 61) describe knowledge workers as individuals who have the ability to accomplish goals, excel in the production of knowledge, creativity, and innovation, and apply them in the organization. Finally, Grainne et al. (2011: 610) preferred the definition of Swart (2007: 452), who sees knowledge workers as workers who apply their required knowledge and valuable skills, gained through practical experience, to complex and innovative problems in environments that provide rich collective knowledge and relational resources. As a result, this concept ranges from simple and narrow to broad and complex.
Knowledge Workers Dimensions
Researchers' opinions have differed in determining the dimensions and characteristics of knowledge workers; Table 1 summarizes these dimensions.
For the purposes of this research, the following dimensions have been relied on as a scale of knowledge workers, given the implicit agreement among most researchers on them: knowledge acquisition, intellectual capabilities, challenge and achievement, and excellence.
Core Competencies Concept
The core competencies are considered the real source of competitive advantage, involving interest in intangible assets as well as in tangible assets and aptitudes (Spendlove, 2007: 409). Scholars, researchers, and writers in strategic management and human resource management have addressed the term core competencies through many concepts, but all of these different concepts share an intellectual thread: they originate from within the organization and rely on its resources, and they represent strengths that enable business organizations to compete and survive. However, researchers have not agreed on a unified definition of core competencies, it being a concept with many interpretations coming from the various knowledge disciplines that use it. Some competencies are called core because each of them is a tool that the knowledge worker certainly needs to use in the work (Awad & Ghaziri, 2007: 438). Nobre and Walker (2011: 337) point out that, in determining the core competence concept, the question should be whether the competence gives the organization a unique advantage over competitors and helps it to make profits; if so, the concept can be identified. Daft (2010: 80) sees core competencies as what the organization accomplishes distinctly and individually compared with its competitors. This is confirmed by Silber and Kearny (2010: 112): organizational skills and knowledge without which the organization would not exist make an organization work better than any other organization and thus make it unique. Hitt et al. (2003: 81) and Hill and Jones (2008: 67) agree that the basis of core competencies and distinctive capabilities is the unique resources of the organization, but believe that distinctive capabilities are embodied in the complementarity of these resources; once these resources have achieved the sustainable competitive characteristic, they become core competencies.
From a wider perspective (Coulter, 2010: 77), the attainment of distinctive aptitudes comes as an advanced stage that is achieved when the organization possesses core competencies as its foundation; such competencies arise through the availability of accumulated learning in the organization, and it is these core competencies that create superiority. The first case is represented by the long-term benefits that distinguish the organization and give it greater potential than its competitors by achieving sustainable competitive advantage over them (Silber & Kearny, 2010: 112). This will enable it to achieve a prestigious competitive position and gain a large market share. In the short term, core competencies add value for the organization's customers by increasing the quality of its products and reducing its costs, which helps it retain customers while possibly gaining new ones (Barringer & Ireland, 2008: 175).
Criteria for Core Competencies
Organizations attempt to determine which core competencies they need to possess in order to rely on them in developing their competitive characteristics (David, 2011: 120). Organizations usually have diverse resources and capabilities, but not all of them achieve competitive advantages. Competitive advantages require unique resources that are not available to other competitors, in addition to core competencies possessed by the organization alone (Johnson et al., 2005: 118).
Although organizations have a huge amount of resources and capabilities, it has become difficult for them to determine precisely where their competencies lie within that amount of resources and capabilities, and whether these competencies are core or not. To answer this question, some writers and researchers in the field of strategic management cite a number of criteria which, in turn, organizations use in determining which core competencies they already have: value, rarity, non-imitability, non-substitutability, exploitability, durability, dynamism, extendibility, non-transferability, and appropriability.
Core Competencies Dimensions
The core competencies dimensions are many, according to the writers and researchers in the field of strategic management and organization. Table 2 summarizes these dimensions.
The current research adopts four main dimensions: resources, shared vision, teamwork, and empowerment, which were the focus of most writers and researchers.
The Role of Knowledge Workers in the Core Competencies Management
Researchers' opinions have differed about the nature of the relationship between knowledge workers and core competencies, and it is still a controversial issue. We therefore present the views of researchers on the role of knowledge workers in core competencies. Macmillan and Tampoe (2000: 122) indicate that core competencies are generated by the accumulated knowledge, skills, and self-potential of individuals (task skills, common skills, specialized knowledge, and mentality), and that these resources relate exclusively to individuals, especially in service organizations, owing to the nature of the work the organization requires. Jingfang et al. (2009: 2) point out that knowledge workers should possess the competencies and capabilities that contribute to the productivity of their work and add value to the organization. Chu and Khosla (2012: 2391) describe the adaptation of the knowledge workers' elements identified by psychologists to those of communities of practice, of which increasing core competencies is one; unifying the knowledge workers' components with communities of practice shows the vivid role of knowledge workers in these communities and their role in increasing core competencies. There is thus a clear indication of the role of knowledge workers in the core competencies management of organizations, and the first signs of the existence of this relationship. Figure 1 shows this role.
These arguments and others mentioned above support the development of the current research model of knowledge workers for core competencies management in higher education institutions.
RESEARCH METHOD

a. Research Model
The methodological treatment of the study problem, in the light of its theoretical framework and its field implications, requires the design of a research model (Figure 2), which refers to the logical relationships between the research variables and expresses the solutions proposed by the researchers to answer the problematic research questions raised. The research model represents a set of hypotheses built on:
- The independent variable: knowledge workers, including (knowledge acquisition, intellectual capabilities, challenge and achievement, and excellence).
- The dependent variable: core competencies management, including (resources, shared vision, teamwork, and empowerment).
b. Method of Data Collection
The questionnaire was the main tool for data collection in the current research. It was prepared based on the studies of (Mohanta et al., 2006) and (Cheng & Zhang, 2008) for the independent variable, knowledge workers, and on (Hafeez & Essmail, 2007) and (Agha et al., 2012) for the dependent variable, core competencies management. The tool was adapted in accordance with the objectives and directions of the current research using a five-point Likert scale (1-5: strongly agree, agree, neutral, disagree, strongly disagree). The questionnaire comprised two sections: the first covers knowledge workers (knowledge acquisition, intellectual abilities, challenge and achievement, excellence) and was assigned 28 items, seven per dimension; the second covers core competencies management (resources, shared vision, teamwork, and empowerment) and was allocated 16 items, four per dimension. To confirm the validity of the questionnaire in measuring the research variables, it was subjected to validity and consistency tests using Cronbach's alpha. The alpha coefficient was 0.921 at the overall level of the variables and 0.854 at the level of the sub-variables of knowledge workers and core competencies management, while it was 0.793 at the level of the two main variables. Such ratios are acceptable in management studies, and the scale used has a good degree of consistency. As for validity, it equals the square root of consistency (0.891), which indicates a great degree of validity for the scale used.
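For readers unfamiliar with the reliability statistic used here, the following Python sketch computes Cronbach's alpha from an item-score matrix and the validity figure as its square root; the Likert responses are invented for illustration, and the paper's own computation (SPSS v24) is not reproduced.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical five-point Likert responses (rows = respondents)
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4]])
alpha = cronbach_alpha(scores)
print(round(alpha, 3), round(alpha ** 0.5, 3))  # alpha and sqrt(alpha) "validity"
```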
c. Community and Sample
The current research community was represented by five Iraqi universities (Mustansiriya, Iraqi, Kufa, Anbar, Fallujah), and the sample consisted of university leaders (deans' assistants, heads of departments).
TESTING THE RESEARCH MODEL
For the purpose of identifying the nature of the relationship between the independent variable, knowledge workers, and the dependent variable, core competencies management, in the researched institutions, we verify the validity of the research model using Spearman's rank correlation coefficient and the regression coefficient, chosen to analyze the research variables. To complete the descriptive and diagnostic processes based on the descriptive analysis data, the correlation relationships between the study variables were identified, as shown in Table 3.
• Correlation relations emerged between knowledge acquisition and the core competencies management dimensions (resources, shared vision, teamwork, and empowerment). The correlation coefficients were (0.445**, 0.448**, 0.384**, 0.316**), respectively, at the (0.01) level, indicating a significant correlation between knowledge acquisition and the core competencies management dimensions. The correlation coefficient between knowledge acquisition and total core competencies management was (0.502**), showing a positive relationship between knowledge acquisition and core competencies management.

• A correlation relationship was found between intellectual capabilities and the core competencies management dimensions; the correlation coefficients were (0.536**, 0.493**, 0.624**, 0.239**), respectively, indicating a correlation between intellectual capabilities and the core competencies management dimensions. The correlation coefficient between intellectual capabilities and total core competencies management was (0.635**), confirming a correlation relationship between intellectual capabilities and core competencies management.

• A correlation relationship emerged between the challenge and achievement dimension and the aforementioned core competencies management dimensions. The correlation coefficients were (0.368**, 0.521**, 0.312**, 0.410**), serially, indicating a positive correlation between challenge and achievement and the core competencies management dimensions. The correlation coefficient between challenge and achievement and total core competencies management was (0.523**), confirming a strong correlation relationship between them.

• Correlation relationships were shown between the excellence dimension and the core competencies management dimensions (resources, shared vision, teamwork, and empowerment). The correlation coefficients were (0.207**, 0.315**, 0.433**, 0.339**), serially, at the (0.01) significance level, indicating a significant correlation between excellence and the core competencies management dimensions. The correlation coefficient between excellence and total core competencies management was (0.496**), confirming a significant, though the weakest, correlation relationship between excellence and core competencies management.

• Correlation relationships were found between total knowledge workers and the core competencies management dimensions mentioned before. The correlation coefficients were (0.572**, 0.586**, 0.601**, 0.503**), serially, indicating a strong positive correlation between total knowledge workers and the core competencies management dimensions.

• The correlation relationship between total knowledge workers and total core competencies management was significant, as proved by the correlation coefficient value (0.671**) at the (0.01) significance level, indicating a vivid and strong correlation between the two variables.
The results of the regression analysis of the research variables, combined and individually, presented in Table 4 below, reinforce the correlation results mentioned above: the calculated value of F reached (41.396), which is greater than the tabular value of F of (6.630). This indicates a significant influence of knowledge workers on core competencies management. The coefficient of determination (R²) was (0.439), which means that knowledge workers explain (43.9%) of the changes in core competencies management, while the remaining (56.1%) is due to the effect of other variables not present in the model.
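As a hedged sketch of the two statistics reported in this section, Spearman's rank correlation and a regression R²/F pair, using scipy (our substitution for SPSS) on invented composite scores:

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder composite scores, not the study's data
kw  = np.array([3.8, 4.1, 3.2, 4.5, 3.9, 2.8, 4.0, 3.5])  # knowledge workers
ccm = np.array([3.6, 4.3, 3.0, 4.4, 3.7, 3.1, 3.9, 3.4])  # core competencies mgmt

rho, p = spearmanr(kw, ccm)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")

# Simple (one-predictor) regression: R^2 and its F statistic with (1, n-2) df
n = len(kw)
r2 = np.corrcoef(kw, ccm)[0, 1] ** 2
f = r2 / (1 - r2) * (n - 2)
print(f"R^2 = {r2:.3f}, F = {f:.2f}")
```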
These results support the acceptance of the research model, with its hypotheses and questions expressed through the model, at a confidence level of (99%).
IMPLICATION
The research results proved a good, statistically significant relationship between knowledge workers and its dimensions, combined and individually, and core competencies management at the level of the researched universities, reflected by the significant correlation and regression coefficients. Therefore, this study would be useful for supporting and encouraging Iraqi universities to pay extraordinary attention to the knowledge workers dimensions in order to create great opportunities for their core competencies management in favor of the work, as well as for enhancing university management of the variables that showed good and clear results, such as intellectual capabilities, challenge and achievement, and knowledge acquisition, in favor of core competencies management and its sub-dimensions, to ensure superiority in performance levels.
CONCLUSIONS

1. Employing the knowledge workers dimensions for core competencies management in higher education institutions in particular, and service institutions in general, shows the ability of knowledge workers to manage their core competencies.

2. The scientific research contributes to stimulating the interest of the researched institutions in the importance of the current research variables and their role in raising the level of readiness to provide educational services of high quality and to enter the field of competition.

3. The researched universities were able to employ the knowledge workers dimensions in activating core competencies management; this was clearly shown in teamwork and shared vision, then in resources at a weaker level, followed by empowerment. This is due to the actual circumstances identified in the Iraqi universities.

4. The results showed that the universities' employment of the excellence dimension was modest in spite of its importance and did not rise to the level of the other knowledge workers dimensions; this may be due to the exceptional circumstances experienced by Iraqi universities.

5. The research results proved a good, statistically significant relationship between knowledge workers and its dimensions, combined and individually, and core competencies management at the level of the researched universities, reflected by the significant correlation and influence coefficients.

RECOMMENDATIONS

1. Enhancing university management of the variables that showed good and clear results, such as intellectual capabilities, challenge and achievement, and knowledge acquisition, in favor of core competencies management and its sub-dimensions, to ensure superiority in performance levels.

2. The necessity of increasing the universities' interest in their core competencies management, specifically raising the level of attention to the empowerment dimension, and working to connect them with strategic performance to increase the level of achieving sustainable competitive advantage.

3. The necessity for universities to pay more attention to the excellence dimension through the adoption of programs and models that distinguish university performance, to achieve the desired success of their core competencies management.

4. The necessity of supporting and encouraging Iraqi universities to pay extraordinary attention to the knowledge workers dimensions, to create great opportunities for their core competencies management in favor of the work.

5. The possibility of conducting the research in other sectors and adding other variables to support the results achieved by the current research and to work on generalizing them.
| 2020-03-12T10:26:30.677Z | 2020-03-02T00:00:00.000 |
{
"year": 2020,
"sha1": "476f376629b64ab4674fba62110f4c1f4324c129",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.6000/1929-7092.2020.09.17",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ed1529a6b649195188b7bd3a6dc6e5909aa3aa08",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Business"
]
}
| 222175089 | pes2o/s2orc | v3-fos-license |
Assigning single clinical features to their disease-locus in large deletions: the example of chromosome 1q23-25 deletion syndrome
Aim: Assigning a disease locus within the shortest regions of overlap (SRO) shared by deleted/duplicated subjects presenting a disease is a robust mapping approach, although the presence of different malformation traits, each occurring in only a part of the affected subjects, can hinder the interpretation. To overcome the problem of incomplete penetrance, we developed an algorithm that we applied to the deletion region 1q23.3-q25, which contains three SROs, each contributing to the abnormal phenotype without clearly distinguishing between the different malformations. We describe six new subjects, including a healthy father and his daughter, with 1q23.3-q25 deletions of different sizes. The aim of this study was to correlate specific abnormal traits to the haploinsufficiency of specific genes/putative regulatory elements. Methods: Merging our cases with those in the literature, we considered four traits, namely intellectual disability (ID), microcephaly, short hands/feet, and brachydactyly, and conceived a mathematical model to predict with what probability the haploinsufficiency of a specific portion of the deletion region is associated with one of the four traits.
INTRODUCTION
The usual method to identify the shortest regions of overlap (SRO) in contiguous gene syndromes relies on the graphical identification of the area of minimal overlap between deletions in patients sharing the same phenotype. Although this approach is very efficient when dealing with traits present in all the subjects who share the deletion region, it is much less productive when the trait is shared by only some of the patients. The usual way to overcome this uncertain correlation is to attribute incomplete penetrance to the trait, a definition that may hide multiple factors, such as the influence of other genetic factors necessary for the manifestation of the trait, differences in the breakpoints of the deletion involving different dynamics of chromatin interaction between enhancers and promoters, environmental factors, or, more simply, inaccurate assignment of phenotype. Obviously, "non-penetrant" deletions may either overlap the disease locus (DL) or not include it, so they constitute a limitation to defining SRO boundaries. However, they still strongly modulate the probability profile of the DL location along the SRO, i.e., the probability for the DL to map at a given position, considering the whole body of experimental data (i.e., all the deletions, either penetrant or non-penetrant, overlapping a given genomic position inside the SRO). In fact, the trait(s) considered in a given genomic region are often de novo and present in restricted numbers of subjects, so that the exclusion of even a single case can really be limiting to a correct locus assignment. Therefore, it is highly desirable to find a probabilistic model that, by considering also the "non-penetrant" cases, makes the assignment of specific traits to specific genomic portions more reliable. For this purpose, we propose a new genotype-phenotype correlation approach, applying our statistical procedure to interstitial deletions of 1q23.3-q25, of which more than 30 cases have been reported, the imbalance being mainly de novo with the exception of three subjects who inherited the deletion from an affected mother (Patients P10 and P17 [1], Patient A [2], and Case 1 [3]). These deletions are associated with a complex malformation condition consisting of proportionate pre- and postnatal growth deficit, cardiac malformations, small hands and feet with brachydactyly, intellectual disability (ID) of various degrees, and craniofacial dysmorphisms; microcephaly, micrognathia, short nose with bulbous tip, dysplastic ears, elongated upper lip, and small chin have been reported in most subjects [1,3]. The relationship between the size and localization of the copy number variants and the phenotypic abnormalities in thirty-five patients [1][2][3][4][5][6][7][8][9] allowed the identification of three non-overlapping regions whose haploinsufficiency seemed crucial for the manifestation of some specific characteristics [1]. The SRO associated with growth and developmental delay has been progressively narrowed from 1.9 Mb [4] to a 179-kb region (chr1:172,281,412-172,460,683, hg19) [2].
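The paper's own algorithm is detailed in its Methods; purely to illustrate the idea that non-penetrant deletions reshape the probability profile, here is a toy likelihood sketch under assumptions that are ours, not the authors': the trait appears with penetrance P_PEN when the disease locus is deleted and with background frequency P_BG otherwise.

```python
# Toy model: likelihood that the disease locus (DL) sits at each candidate
# position, using both penetrant and non-penetrant deletions.
P_PEN, P_BG = 0.7, 0.05   # assumed penetrance / background trait frequency

def dl_profile(positions, deletions):
    """deletions: list of (start, end, has_trait) tuples in toy coordinates."""
    profile = []
    for x in positions:
        likelihood = 1.0
        for start, end, has_trait in deletions:
            deleted_here = start <= x <= end
            p_trait = P_PEN if deleted_here else P_BG
            likelihood *= p_trait if has_trait else (1.0 - p_trait)
        profile.append(likelihood)
    return profile

# Two penetrant deletions and one non-penetrant deletion (toy coordinates):
dels = [(10, 40, True), (30, 70, True), (50, 90, False)]
print(dl_profile(range(0, 100, 10), dels))
# The non-penetrant case depresses positions 50-90 without excluding them outright.
```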
A subregion of 2.5 Mb (chr1:164,501,003-167,022,133 hg19), located proximally to the SRO (SRO-P), adds further complexity to the observed phenotype, being more commonly associated with cardiac and renal malformations. A third, distal region of 2.7 Mb (chr1:178,514,910-181,269,712 hg19; SRO-D) could also contribute to intrauterine and postnatal growth retardation [1]. Finally, deletions involving SERPINC1 (chr1:173,872,942-173,886,516; MIM: 107300) result in low antithrombin-III activity, a risk factor for thrombophilia. We present the detailed phenotypic and molecular description of six new cases whose partially overlapping 1q24q25 deletions were identified by chromosome microarray analysis (CMA). Four cases, each encompassing at least one of the three critical regions, were sporadic and identified by studying unrelated patients with syndromic intellectual disability. The fifth case, with a 1q24.3q25.2 deletion that did not involve any of the three critical regions, was ascertained in a newborn baby after the unexpected detection of the same deletion in her healthy father. The latter had been studied as the parent of a previous child carrying two CNVs, neither of them located on chromosome 1, both of which later turned out to be inherited from the healthy mother.
Clinical data
All six subjects are described in more detail in the following section and their clinical characteristics are summarized in Table 1. Patient photographs are shown in cases where parents have consented to publication.
Case 1
The patient was a 17-year-old male born after a previous miscarriage. At his birth, his mother and father were 29 and 31 years old, respectively. He has a younger healthy brother. Family history was remarkable for cognitive delay, not otherwise specified, in the paternal lineage and bipolar disorder in the maternal one. The delivery was at term, with fetal distress consisting of decreased heart rate patterns and meconium-stained amniotic fluid. Birth weight was 3,450 g (50th percentile), length was 51 cm (50th percentile), and cranial circumference (OFC) was 35 cm (25th percentile). Apgar scores were 8/9 at 1'/5', respectively. The perinatal period was remarkable for hypotonia and limb dyskinesia. Early motor milestones were slightly delayed: he sat between 7 and 8 months, crawled at 12 months, and walked autonomously at 18 months. Language and learning difficulties were noticed early. He started babbling at 18 months and language development was delayed. At the age of 13 years, Griffiths scale scores revealed moderate ID (overall IQ: 54) with pragmatic and narrative language difficulties and impaired social skills. Physical examination revealed craniofacial dysmorphisms including a short neck with slight pterygium colli and hypoplastic auricles with a flat helix. His OFC was 53.8 cm (25th-50th percentile). In addition, ligamentous laxity, hyperextensibility of the finger joints, bilateral flat foot with sandal gap, bilateral genu valgum, spinal kyphosis, and decreased lumbar lordosis were observed. A supraclavicular cartilage cyst (3-mm diameter) was noted. Brain magnetic resonance imaging, kidney ultrasound, urinalysis, and blood tests were normal. Thyroid function tests gave normal results. At the age of 16 years, he was classified as suffering from severe intellectual disability with marked repetitive movements, obsessive-compulsive traits, apathy, and abulia with episodes of coprolalia and soliloquy, without any self- or hetero-aggressive behavior. Treatment with Risperdal or Abilify was recommended. At the age of 18 years and 2 months, his height was 171 cm (< 5th percentile), weight was 59.7 kg (10th-25th percentile), and OFC was 55.4 cm (25th-50th percentile). Array-CGH revealed a 4.2-Mb deletion of 1q23.3q24.2.
Case 2
The male child was born to a 40-year-old primigravida and her 43-year-old partner. Due to the father's oligospermia, the couple underwent two cycles of in vitro fertilization (IVF) with ICSI (intracytoplasmic sperm injection), which led to the conception of the patient. The delivery was normal after an unremarkable 40-week pregnancy, with a birth weight of 2,380 g (3rd percentile), length of 45 cm (-2 SD), and cranial circumference (OFC) of 33 cm (-1.75 SD). Apgar scores were 10/10 at 1'/5', respectively. At 5.5 months, he began experiencing recurrent episodes of non-febrile seizures when falling asleep or waking up. Electroencephalogram (EEG) recording showed rare bilateral paroxysmal slow abnormalities in the fronto-temporal region. Therapy with levetiracetam achieved a reduction in seizure frequency. At the age of 9.5 months, his height was -2.5 SD. On clinical examination, facial dysmorphisms including prominent forehead, hypertelorism, saddle nose, micrognathia, smooth philtrum with vermilion upper lip, small ears with hypoplastic helix, slight neck pterygium and sparse hair [Figure 1A], as well as micropenis, were observed. Audiological testing revealed mild sensorineural hearing impairment. At the age of 16 months, he started walking alone; expressive speech was absent. At the age of 22 months, his height was 74 cm (-3 SD), weight was 10 kg (-2 SD), and OFC was 47 cm (-2 SD). His hands and feet were broad with brachydactyly [Figure 2A]. X-ray showed a bone age delayed by 1.3 years. EEG displayed focal epileptiform abnormalities (spike-wave complexes over the left hemisphere). The patient was stable on levetiracetam therapy (2 × 90 mg tablets). Brain MRI showed an enlarged third ventricle. Aarskog-Scott syndrome (OMIM 305400) was excluded following normal results of FGD1 gene mutation analysis. By 4.5 years of age, his height had decreased to -3.5 SD. His thyroid function and insulin-like growth factor-1 (IGF-1) level were normal. At the last evaluation, at the age of five years, Griffiths scale scores revealed moderate ID (IQ: 50), language was absent, and the previously friendly behavior was now characterized by aggression and hyperactivity. Karyotype was normal and array-CGH revealed a 10.3-Mb deletion of 1q24.1q25.2 [Supplementary Figure 1A].
Case 3
The patient was born to an 18-year-old mother and a 20-year-old father after a pregnancy characterized by threatened miscarriage. He was delivered vaginally at 36 weeks with weight, length, and OFC far below the 3rd percentile. He was admitted to the Neonatal Intensive Care Unit for prematurity. Peculiar dysmorphic features and hypotonia were noted. A diagnosis of Aarskog syndrome was suggested but not confirmed by molecular analysis of the FGD1 gene. He had a normal karyotype, 46,XY. His medical history was positive for failure to thrive and psychomotor delay. He started to walk unsupported at the age of three years and never developed verbal language. At the age of four years, he underwent surgical correction of unilateral cryptorchidism. At the last evaluation, at the age of seven years, he showed severe psychomotor delay. His weight was 15 kg (< 3rd percentile), height was 99 cm (-4 SD), and OFC was 44 cm (-4.2 SD). Physical examination revealed a high frontal hairline, down-slanting palpebral fissures, hypertelorism, depressed nasal bridge, mild malar hypoplasia, anteverted ears, deep philtrum, and macrostomia [Figure 1B]. His hands were small with short fingers, bilateral clinodactyly of the fifth finger, and bilateral single palmar creases. His feet were small with short toes, broad halluces, and bilateral "sandal gap" [Figure 2B]. Other findings included mild hypotonia and joint laxity. Griffiths scale scores revealed severe ID (IQ: 34; performance: 34) with absent language. EEG recordings showed an excess of fast rhythms, particularly over the anterior areas. During sleep, bursts of paroxysmal slow abnormalities were present, bilateral, diffuse, and prevalent over the left anterior areas. Non-epileptic myoclonus was present both during wakefulness and sleep. His behavior was characterized by impulsiveness. Array-CGH revealed a 13.7-Mb deletion of 1q24.2q25.3 [Supplementary Figure 1B].
Case 4
The 10-year-old patient was the first child born to 34-year-old healthy, non-consanguineous parents. A family history of cleft lip/palate and deaf-mutism was recorded. He has a younger, healthy nine-year-old brother. The patient was delivered by caesarean section at 43 weeks of gestation after a pregnancy characterized by IUGR and poor fetal movements. His birth weight was 2,900 g (-3 SD); length and OFC were not recorded. Cleft lip/palate was surgically corrected at the age of one year. At the age of two years, his psychomotor and language development was moderately delayed and characterized by inattentive-hyperactive behavior. At the same age, left-sided cryptorchidism was surgically corrected. Mild growth hormone deficiency was documented, but without the need for pharmacological treatment. When evaluated at the age of 10 years, his weight was 24 kg (< 10th percentile), height was 120 cm (3rd-10th percentile), and OFC was 49 cm (-3 SD) [Figure 1B].
Cases 5 and 6
The pedigree is shown in Figure 3. The index patient (III.2) was a four-year-old child born to a 27-year-old mother who, during pregnancy, suffered from preeclampsia and was treated with anticoagulant drugs (aspirin and heparin) for thrombophilia and with Eutirox (levothyroxine) for hypothyroidism. The delivery was induced at 36 weeks of gestation for oligohydramnios. His birth weight was 2,800 g (10th-50th percentile). The perinatal and neonatal period was unremarkable apart from feeding difficulties characterized by gastroesophageal reflux until the age of nine months. He crawled at 10 months and walked alone at the age of 18 months. His speech development was delayed and, at the age of four years, he was able to pronounce only incomplete words. His behavior was characterized by low frustration tolerance associated with hetero-aggressivity and bruxism. A diagnosis of autism spectrum disorder (ASD) was made (QS: 75, F84.0 ICD-10, 299.00 ICD-9). CMA revealed two CNVs, neither of them located on chromosome 1, and parental testing unexpectedly identified a 1q24.3q25.2 deletion in the healthy father (Subject II.1, Case 5), who showed neither facial dysmorphisms [Figure 1C] nor other abnormal features except for mild ligamentous hyperlaxity of the fingers [Figure 2C]. His height was 164 cm, at the 25th percentile for the Sardinian population [10], and his cranial circumference (OFC) was 54 cm (25th percentile). The father's 1q24.3q25.2 deletion was established while his wife (Subject II.2) was 27 weeks pregnant with Subject III.4 (Case 6) and undergoing therapy for gestational diabetes as well as platelet aggregation inhibitors, owing to a previous miscarriage (III.1) at the 10th week of pregnancy and a subsequent intrauterine fetal death (IUFD) at the 39th week (III.3). This stillborn male weighed 2,850 g (10th percentile), with a cranial circumference (OFC) of 29 cm (-3 SD) and a length of 45 cm (< 3rd percentile). The morphological examination did not reveal any congenital malformation, while microscopic examination at autopsy revealed macerated internal organs and venous thrombosis of the umbilical cord, leading to a diagnosis of IUFD consistent with mild-moderate chorioamnionitis and fetoplacental thrombotic vasculopathy. DNA analysis was not performed. CMA on the mother's blood revealed two deletions, of 124 kb at chromosome 8q24.3 and of 58.9 kb at Xp22.2 [Supplementary Figure 2].
Patient III.4 (Case 6) was a female delivered by caesarean section at 37 weeks of gestation because of growth retardation (IUGR) and poor fetal movements. Her birth weight was 2,170 g (3rd percentile), length was 45 cm (10th percentile), and cranial circumference (OFC) was 30 cm (-2 SD). Apgar scores were 10/10 at 1'/5', respectively. The perinatal period was unremarkable although, owing to her inability to attach to the breast, she was fed on infant formula. The first neuropediatric assessment occurred at three months of age, showing an OFC of 35.2 cm (-3 SD), weight of 4,600 g (25th percentile), and length of 58 cm (50th-75th percentile). At the same age, cerebral ultrasound gave normal results. At the last evaluation, at the age of 8 months, her OFC was 39 cm (-3 SD) and weight was 7.5 kg (10th-25th percentile). Minor facial dysmorphisms were noted [Figure 1D]. CMA, performed at birth in light of her father's CMA finding, highlighted the same 1q24.3q25.2 deletion of 5.8 Mb [Supplementary Figure 1D]. At the age of 41 days, routine chromogenic plasma testing revealed low antithrombin III activity (32%; normal 80%-120%), similar to what was documented in the father (45%; normal 70%-130%) [3].
Molecular investigations
After obtaining informed consent, as approved by the research ethics committees of the corresponding institutions, DNA samples were prepared from the blood of all six subjects and their parents. The study was conducted in accordance with the Declaration of Helsinki and national guidelines.
Gene content analysis
The gene content of each SRO was analyzed taking into account the haploinsufficiency (HI) and loss-of-function intolerance (pLI) scores. The HI score is the predicted probability that a gene is more likely to exhibit haploinsufficiency (0%-10%) or more likely not to exhibit haploinsufficiency (90%-100%), based on differences in characteristics between known haploinsufficient and haplosufficient genes (https://decipher.sanger.ac.uk/).
The pLI score represents the probability that a gene is extremely intolerant of loss-of-function variation (pLI ≥ 0.9). Genes with low pLI scores (≤ 0.1) are loss-of-function tolerant. This score is based on protein-truncating variants in the gnomAD database (https://gnomad.broadinstitute.org/). Moreover, following the gnomAD gene-constraint suggestions for evaluating highly likely haploinsufficient genes, we also used the observed/expected (o/e) score.
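To make this triage concrete, the short Python sketch below flags genes in a deleted interval using the three constraint metrics just described. The thresholds for HI (≤ 10%) and pLI (≥ 0.9) follow the definitions above, but the per-gene scores and the o/e cutoff are placeholder assumptions, not values taken from DECIPHER or gnomAD.

```python
# Hypothetical constraint scores: gene -> (DECIPHER HI %, gnomAD pLI, LoF o/e).
# The gene symbols come from the deletion region; the numbers are invented.
SCORES = {
    "PBX1":   (2.1, 1.00, 0.05),
    "ATP1B1": (9.8, 0.98, 0.12),
    "TDRD5":  (55.0, 0.00, 0.95),
}

def is_dosage_sensitive(hi_pct, pli, oe, oe_cutoff=0.35):
    """Flag a gene if any metric suggests haploinsufficiency:
    low HI rank (<= 10%), high pLI (>= 0.9), or low observed/expected LoF."""
    return hi_pct <= 10 or pli >= 0.9 or oe <= oe_cutoff

for gene, (hi, pli, oe) in SCORES.items():
    status = "candidate" if is_dosage_sensitive(hi, pli, oe) else "tolerant"
    print(f"{gene}: HI={hi}% pLI={pli} o/e={oe} -> {status}")
```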
Whole-exome sequencing analysis
Whole-blood samples of all available family members [Figure 3], except for the newborn baby (Case 6), were collected for WES analysis, which was performed by an external service provider (BGI Genomics, Hong Kong). According to the provider's description, whole-exome enrichment was carried out using an Illumina kit, and sequencing was performed on the DNBSEQ-500 to generate 100-bp paired-end reads, which were aligned to the human genome (UCSC GRCh38) at an average coverage of 150×.
Probability profiling of genomic regions linked to selected traits
To computationally infer the genomic segments most likely to be associated with selected clinical features, we assumed that a specific trait was predominantly the outcome of hemizygosity for a specific DL, either a protein-coding gene or a putative regulatory element, rather than the synergistic effect of the haploinsufficiency of several genomic elements.
Given this assumption, the probability for a DL to map at a given genomic location essentially depends on the penetrance of its haploinsufficiency and on the causative and non-causative deletions that overlap the genomic position. Briefly, molecular data from patients in whom the clinical status for a specific trait was assessed were grouped and analyzed independently. Clearly, as not all patients were evaluated for every trait, the number of individuals in each group varied. In the first step of the procedure, we identified SRO regions, taking into account only overlaps between deletions associated with the trait. By definition, these SROs have probability 1 of containing the DL. The next step was to estimate the probability distribution inside the SRO(s). To this end, we used a Bayesian approach to calculate, for each non-overlapping sliding window (Δ) of 1 kb within the SRO, the posterior probability of intersecting the DL, conditioned on the experimental data (i.e., all the deletions overlapping the specific window inside the SRO). In this regard, we assumed that the a priori probability P(Δ overlaps DL) was inversely proportional to the SRO size and that the best estimator for the penetrance of the DL was the value that maximizes the likelihood function P(experimental data given that Δ overlaps DL) (see Supplementary Materials, Mathematical Model).
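The exact model is given in the Supplementary Materials; since those details are not reproduced here, the Python sketch below is only a minimal reconstruction of the procedure as described above, not the authors' software. It assumes a uniform prior over the SRO, counts the non-penetrant deletions overlapping each 1-kb window, re-estimates the penetrance per window by maximum likelihood, and normalizes the resulting posterior; the coordinates and toy deletions are invented.

```python
# Minimal sketch of the window-wise Bayesian profiling described above.
def posterior_profile(sro, deletions, win=1_000):
    """sro: (start, end); deletions: list of (start, end, has_trait)."""
    start, end = sro
    n_windows = (end - start) // win
    prior = 1.0 / n_windows                     # uniform: inversely proportional to SRO size
    k_pos = sum(1 for *_, t in deletions if t)  # penetrant deletions all overlap the SRO
    scores = []
    for i in range(n_windows):
        w0, w1 = start + i * win, start + (i + 1) * win
        # non-penetrant deletions overlapping this window
        m = sum(1 for s, e, t in deletions if not t and s < w1 and e > w0)
        f = k_pos / (k_pos + m)                 # ML estimate of the penetrance
        lik = (f ** k_pos) * ((1 - f) ** m)     # likelihood of the observed carriers
        scores.append(prior * lik)
    z = sum(scores)                             # normalize to a probability mass
    return [s / z for s in scores]

# Toy example: one non-penetrant deletion covering the left half of the SRO.
profile = posterior_profile((0, 10_000), [(0, 10_000, True),
                                          (0, 10_000, True),
                                          (0, 5_000, False)])
print([round(p, 3) for p in profile])
```

In this toy example the posterior mass shifts toward the half of the SRO that the non-penetrant deletion does not cover, which is the behaviour the text describes for "non-penetrant" cases.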
In the last phase of the procedure, for each clinical feature (intellectual disability, microcephaly, kidney malformations, dysplastic ears, hypertelorism, short hands and feet, hypotonia, brachydactyly, micro-retrognathia, speech delay, and walking delay), custom UCSC tracks were automatically built to visualize, in their genomic context, the set of deletions and the probability profiles, calculated in either absolute or log scale. The software is available on request.
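The paper does not state which custom-track format was used; bedGraph is one natural choice for a per-window probability signal, and the hedged sketch below writes such a track. The track name, chromosome, start coordinate and profile values are illustrative only.

```python
# Hedged sketch of exporting a probability profile as a UCSC custom track.
def write_bedgraph(path, chrom, start, probs, win=1_000, name="DL_probability"):
    with open(path, "w") as fh:
        fh.write(f'track type=bedGraph name="{name}" '
                 f'description="posterior P(window contains DL)"\n')
        for i, p in enumerate(probs):
            w0 = start + i * win
            fh.write(f"{chrom}\t{w0}\t{w0 + win}\t{p:.6g}\n")

# Example values only; not a profile computed from the study's data.
write_bedgraph("id_profile.bedGraph", "chr1", 172_281_000,
               [0.02, 0.05, 0.41, 0.39, 0.13])
```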
Parental origin analysis
The parental origin of the deletion was determined in three subjects: paternal in two (Cases 3 and 5) and maternal in one (Case 2) [Supplementary Table 1]. The parental DNA samples of the remaining subjects (Cases 1 and 4) were unavailable.
Whole-exome sequencing
Exome sequencing of Subject II.1 and his parents (trio analysis) did not provide a strong candidate variant likely to be relevant for ASD. Taking advantage of the whole-exome sequencing data on the mother, we ruled out the possibility that the paternally inherited 1q24.3q25.2 deletion in Patient III.4 might have unmasked a recessive allele lying on the maternal chromosome. We also explored the hypothesis that variants in genes involved in the coagulation cascade or in fibrinolysis could cause an inherited predisposition to thrombophilia, possibly linked to the recurrent miscarriages observed in the mother. Interestingly, while we did not identify any candidate variants in genes already associated with thrombophilia, WES analysis demonstrated in the mother two missense variants (NM_001061.6:c.796C>T:p.R266W and c.1279G>A:p.A427T) in the TBXAS1 gene (MIM 274180) [Supplementary Figure 3]. These variants were in trans, as only one of them (c.796C>T) was identified in the son (Subject II.1); both are rare (AF < 0.001 in several databases) and predicted to impact protein function by in silico analysis. Both variants were technically verified by Sanger sequencing. The TBXAS1 gene encodes the enzyme thromboxane synthase (TXAS), which catalyzes the conversion of prostaglandin H2 to thromboxane A2 (TXA2), a potent vasoconstrictor and inducer of platelet aggregation [14]. Biallelic missense mutations in TBXAS1, accounting for decreased TXAS activity, have been associated with Ghosal hematodiaphyseal syndrome (MIM 231095), a disease characterized by abnormal bone remodeling and anemia.
Shortest regions of overlap
According to the genotype-phenotype correlations emerging from our study, we defined three new SROs, each associated with specific phenotypic traits, such as ID, microcephaly (MCH), and skeletal anomalies including short hands and feet and brachydactyly. Their details are summarized in Table 2 and visualized in Figure 5.
Probability profiling of genomic regions linked to selected traits
Computational prediction of DL localization allowed us to identify, across the eleven traits examined, a total of 26 SROs with sizes ranging from 0.038 to 22 Mb (mean: 3.5 Mb) [Supplementary Table 2].
Importantly, genomic intervals having a cumulative probability to contain the DL > 85% are considerably shorter than their corresponding SROs, reducing the number of high-priority candidate genes. Striking examples of this reduction concern SROs related, respectively, to ID (Peaks 1 and 2), brachydactyly (Peaks 1 and 2), and microcephaly (Peak 1) [Supplementary Table 2].
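A plausible way to extract such a cumulative-probability interval from a window posterior is to accumulate windows in decreasing order of probability until the chosen mass is exceeded, as in the following sketch; this is an assumed reading of the procedure, not code from the paper.

```python
def credible_windows(probs, threshold=0.85):
    """Return indices of the smallest set of windows whose summed
    posterior probability exceeds `threshold`."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    chosen, mass = [], 0.0
    for i in order:
        chosen.append(i)
        mass += probs[i]
        if mass > threshold:
            break
    return sorted(chosen)

# Toy posterior over five 1-kb windows (invented values).
print(credible_windows([0.02, 0.03, 0.45, 0.40, 0.10]))  # -> [2, 3, 4]
```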
To check whether the intrinsic limitation of CMA in precisely mapping breakpoints could affect the results, we performed the analysis taking into account, for each rearrangement, either the smallest or the largest deletion region defined by CMA. Interestingly, while nine out of eleven trait-related analyses remained unaffected, the results concerning the traits "dysplastic ears" and "speech delay" differed markedly in the localization and size of the intermediate SRO [Supplementary Figures 4 and 5], essentially owing to a different set of overlapping deletions involved in the definition of that SRO. This finding suggests that caution should be applied in drawing firm conclusions when interpreting the results, as uncertainty about exact breakpoint localization, as well as inaccuracy of clinical assessment, may lead to erroneous SRO localization and probability profiles.
DISCUSSION
In this paper, we describe six individuals with deletions scattered within chromosome 1q23.3q25, where three specific SRO deletion syndromes have been reported: a proximal one of 2.5 Mb (SRO-P), an intermediate one of 179 kb (SRO-I) and a distal one of 2.7 Mb (SRO-D). In particular, two SROs (SRO-I and -D) are described as associated with IUGR resulting in short stature down to -5 SD, microcephaly down to -4 SD, small hands and feet with fifth finger clino-brachydactyly, and a variable degree of ID, in addition to peculiar facial dysmorphisms [1,2,4,6].
Indeed, our subjects whose deletions include both SRO-I and SRO-D (Figure 4, Cases 2-4) exhibited all these features [Figures 1 and 2, Table 1]. On the contrary, our Case 5, with a de novo 1q24.3q25.2 deletion (5.9 Mb, chr1:172,667,560-178,548,677) that did not contain SRO-I and only partly included the proximal portion of SRO-D [Figure 4], has no ID or microcephaly, works in a qualified profession, and has a stature at the 25th percentile for the Sardinian population [7]. He also does not show any of the dysmorphic features [Figure 3A and Table 1] reported for the overlapping 1q24q25 deletions. However, his nine-month-old daughter (Case 6), with the same 1q deletion [Figure 4], had a history of IUGR and showed mild craniofacial dysmorphisms, including microcephaly (-3 SD), micro-retrognathia, and short neck [Figure 3B], as observed in our other cases [1].
[Figure 5 legend, in part: the light blue vertical boxes represent the new SROs 1-3 defined by this study (see Table 2 for details); the bottom graph shows the estimated probability distribution of the genomic location of the disease loci associated with the traits.]
This deletion region is common to 17 cases, including our Case 1, listed with an asterisk in Figure 5A, with deletions ranging from 276 kb to 14.1 Mb. All have moderate to severe ID but not microcephaly [Figure 5B]. The probability mass distribution in this region, computationally calculated as described in the methods [Supplementary Table 2], shows the highest values for ID, kidney abnormalities, dysplastic ears, hypotonia, and speech delay [Figure 6A, Supplementary Figures 4 and 5].
Consistent with these findings, de novo, deleterious PBX1 sequence variants result in a highly variable syndromic form of intellectual disability, which includes external ear abnormalities and congenital defects of the kidney and urinary tract. In fact, most patients with 1q deletions including PBX1 have renal abnormalities, and the association between CAKUT syndrome and PBX1 variants/deletions is well demonstrated [15,16]. PBX1 alterations may also contribute to severe behavioral traits such as autism and obsessive-compulsive disorder [15]. Indeed, in addition to moderate ID, the phenotype of our Case 1 was characterized by repetitive movements and psychiatric traits resembling Tourette syndrome, such as obsessive-compulsive behavior with episodes of coprolalia and soliloquy [Table 1]. Interestingly, the PBX1 gene has recently been identified among pleiotropic risk loci that play important roles in the neurodevelopmental processes associated with psychiatric disorders [17].
Within this region, the ATP1B1 gene [Figure 6C] shows the highest pLI value, with observed/expected scores indicating haploinsufficiency intolerance [Supplementary Table 3]. ATP1B1 (OMIM 182330) encodes the β1 subunit of the Na,K-ATPase family, responsible for the homeostasis of the electrochemical gradients of Na and K ions across the plasma membrane. While mutations of the Na,K-ATPase α-subunits have already been associated with neurological diseases (reviewed by Clausen et al. [18]), no confirmed mutations in any of the β-subunits have yet been correlated with human disorders. Interestingly, ATP1B1, as part of a Na,K-ATPase multiprotein complex, interacts with the calcium channel TRPV4 (MIM *605427) [19], whose heterozygous de novo or inherited missense variants are associated with skeletal disorders. Similar to the features of our patients (Cases 2 and 3), those of TRPV4-related skeletal dysplasias include short stature, small hands and feet, and brachydactyly [20-22], suggesting a role for ATP1B1 in these disorders.
Deletions within SRO-I (chr1:172,460,683-172,281,412) show a high probability of including a disease locus for ID, microcephaly, and, as previously defined by Lefroy et al. [2], skeletal anomalies including short hands and feet and brachydactyly [Figures 5 and 6A].
Specifically, SRO-I includes Dynamin-3 (DNM3, OMIM *611445), a gene harboring a 7.9-kb antisense transcript for the miR199-214 genes [23]. These two miRs are involved in vertebrate skeletogenesis [24,25], suggesting a role in the skeletal phenotype of 1q24-deleted patients [2,6]. Indeed, Cases 2-4, whose 1q24q25 deletions fully include DNM3 with its two guest miR199-214 genes [Figure 6D], share significant pre- and postnatal growth deficiency, microcephaly, and small hands and feet with fifth finger clino-brachydactyly [Figures 1A and B and 2A and B]. BRINP2/FAM5B is the only gene of the region that is intolerant to haploinsufficiency, but its role is unknown [Figure 6E and Supplementary Table 4].
In contrast, our probability distribution profiles indicate that deletions of the distal portion of SRO-D are significantly associated only with ID and brachydactyly [Figures 5A and 6A], although it should be noted that the microcephaly probability area does not encompass this region, owing to the dubious effect of the deletion in our Patient 5. This region includes the CEP350, RALGPS2, TDRD5, and XPR1 genes, which are intolerant to haploinsufficiency; all have low brain expression, and none of them is thus far recognized as a disease gene [Supplementary Table 4]. In addition, LIM homeobox 4 (LHX4, OMIM 602146), a gene implicated in the etiology of congenital hypopituitarism [26] (OMIM #262700), was previously evoked as a possible candidate gene for growth deficiency [7,27].
Altogether, we have to assume that deletion of the proximal region 1q23.3q24.1 (SRO-P [1]) is associated with kidney anomalies, highly penetrant upon total PBX1 loss [Supplementary Figures 4 and 5], and with ID, but not with microcephaly [Figure 5]. Microcephaly is fully associated with deletions spanning more than one SRO (SRO-2, -I, and -D; Figures 6 and 7), and the most favorable new candidate gene is ATP1B1 in SRO-2.
Cases 5 and 6 are puzzling because of their apparently different phenotypes in the presence of an identical deletion. Indeed, Case 5 is a healthy adult, while Case 6 is still a newborn with a nuanced disorder who would perhaps have been considered healthy if we had not incidentally identified the deletion in her father. Among the genes mapping in their 5.9-Mb deleted region, between positions 172,667,560 and 178,548,677 [Supplementary Table 5], at least two are associated with systemic diseases: TNFSF4 (MIM *603594), related to systemic lupus erythematosus (OMIM #152700), and DARS2 (MIM *610956), involved in recessive leukoencephalopathy with brainstem and spinal cord involvement and lactate elevation (MIM #611115). Neither of these conditions was consistent with the clinical presentation of our Cases 5 and 6. We reasoned that the deletion might have unmasked a maternally inherited recessive variant in that region. However, WES analysis of the mother did not support this hypothesis, suggesting that other genetic or environmental factors may modulate the phenotype associated with this deletion. Indeed, microcephaly, the main feature observed in our Case 6, is a neurological sign that may be caused by a multitude of disease genes with recessive or dominant inheritance [28]. However, by WES we did not highlight any possibly pathogenic variant, with a frequency < 1%, in the genes associated with microcephaly [28] that could explain this trait in our Case 6.
Similarly, IUGR, which characterized the prenatal life of our Case 6, is the end result of various etiologies that include maternal, placental, fetal, and genetic factors [29]. Looking at the clinical history of this family [Figure 3], IUGR was documented not only in the child carrying the 1q deletion, but also in her brother without this deletion (Subject III.2), and even in the IUFD at the 39th week of gestation (Subject III.3). Indeed, in the WES data we identified in the mother's genome two heterozygous missense variants, c.796C>T:p.R266W and c.1279G>A:p.A427T, in TBXAS1 [Supplementary Figure 3], a gene with a possible role in thrombotic events [30,31]. Since both variants are rare (AF < 0.001), predicted deleterious in several databases, and almost certainly in trans, only one being present in the first son of the couple, the two variants most likely represent a risk factor for the recurrent pregnancy losses and IUGR observed in this family.
Taken together, the link between 1q deletion identified in this family and the phenotype in our patient remains elusive. A long-term clinical follow-up of our newborn patient will help to clarify whether this deletion represents a benign CNV or a rearrangement showing incomplete penetrance.
In conclusion, we confirmed and identified several genes whose haploinsufficiency appears crucial in the manifestation of the main phenotypic abnormalities associated with 1q23.3q25.2 deletions [Supplementary Table 6]. In particular, PBX1, in addition to its well-known role in kidney abnormalities, is strongly associated with ID and contributes to behavioral traits and psychiatric disorders. DNM3 and LHX4 are hereby confirmed as responsible for growth retardation [7,27], while ATP1B1 represents a new candidate gene for microcephaly.
It should, however, be underlined that, apart from SRO-1, the other three SROs contain genes belonging to different TADs, some of which are interrupted by the deletions (http://3dgenome.org).
We cannot therefore rule out that some phenotypic abnormalities are due to an altered expression of some of the non-deleted genes following the breakdown of the TADs, rather than the haploinsufficiency of specific genes [32] .
Finally, we propose a method to computationally predict the probability that a given DL lies in a specific genomic segment. Although this approach may be hampered by long-range position effects of regulatory elements, synergistic cooperation of several genes, and incomplete clinical assessment, it can be useful, especially for contiguous gene syndromes that show a complex pattern of clinical characteristics. Obviously, functional approaches are needed to confirm its reliability.
STUDY OF GRIDHRASI AS SCIATICA AND ROLE OF THE SCIATIC NERVE (GRIDHRASI)
Gridhrasi is an entity enumerated among the eighty types of Nanatmaja Vataj Vyadhies. It is characterized by a distinct pain that emerges from the buttock and travels towards the heel of the affected limb. On the basis of the symptom complex, it can be broadly correlated with the disease sciatica in modern science. Ayurveda is a simple, practical science of life, and its principles are universally applicable to each individual in day-to-day life. Ayurveda speaks of every element and fact of human life. Each and every human being desires to live a happy and comfortable life, but this is not always possible owing to multiple factors related to changing lifestyle, environmental factors, etc. With the advancement of busy professional and social life come improper sitting postures in offices and factories, continuous overexertion, and jerky movements during travelling and sports. All these factors create undue pressure on the spinal cord and play a chief role in producing low back ache and sciatica.
INTRODUCTION
Gridhrasi is derived from the name of a bird, Gridhra (vulture). The Gridhra is fond of meat and eats the flesh of an animal in such a fashion that it pierces its beak deep into the flesh and then draws it out forcefully; the pain in Gridhrasi is of this type, hence the name [1]. Further, in this disease the patient walks like the Gridhra, his legs becoming tense and slightly curved, so the name also reflects the resemblance of the gait to that of a vulture. According to Acharya Sushruta, Gridhrasi is a condition in which Vata, invading the Kandaras of the ankle and toes, produces Kshepana in the thighs. According to Acharya Charaka, in Gridhrasi the Nitamba (gluteal region), Kati (lumbar region), Purushtha (posterior of thigh), Uro (knee), Jangha (calf) and Pada (foot) are affected [2].
Nidana of Gridhrasi
In the classics of Ayurveda, diseases are grouped under two main headings:
1. Samanaja
2. Nanatmaja
A Nanatmaja disease results from the vitiation of one particular Dosha, and Gridhrasi is such an entity, enumerated among the eighty types of Nanatmaja Vataj Vyadhies [3]. As far as the Nidana are concerned, no specific description is available; but since Gridhrasi is said to be a Nanatmaja Vata Vyadhi, the general Vataprakopaka Nidana can be taken as the Nidana of Gridhrasi [4]. The cardinal signs and symptoms of Gridhrasi are Ruk (pain), Toda (pricking sensation), Muhuspandhana (tingling sensation) and Stambha (stiffness) in the Sphik, Kati, Uru, Janu, Jangha and Pada, in that order, together with Sakthikshepingraha, i.e., restriction in upward lifting of the lower limb [5].
Chikitsa of Gridhrasi
According to Acharya Charaka, Basti, Siravedha and Agnikarma have been mentioned as the line of treatment [6]. Acharya Sushruta has mentioned the general Vatavyadhi Chikitsa, and many oral preparations have been described in the classics [7], while Chakrapani has mentioned a surgical procedure for Gridhrasi [8]. Along with all these, Snehana-Svedana and Virechana are also indicated for the management of the disease Gridhrasi. On the basis of the symptom complex, the disease Gridhrasi can be correlated with the disease sciatica in modern science [9]. The symptomatology of sciatica is the same as that given in the Charaka Samhita. Sciatica is a syndrome rather than a disease, resulting from neuritis of the sciatic nerve [10]. Acharya Charaka has mentioned that in Gridhrasi there is severe pain from the Kati-pradesha to the Padanguli (foot) [11]. In the various Samhitas of Ayurveda there are many references to Gridhrasi, elaborated as a separate disease with specific management. Sciatica is the term given to pain down the leg where the nerve passes through and emerges from the lower bones of the spine, i.e., the lumbar vertebrae; the causative factors of sciatica are mostly degenerative arthritis and disc prolapse. Irritation of the 4th and 5th lumbar and 1st sacral roots, which form the sciatic nerve, causes the sciatic syndrome, the main pathological lesions lying in the intervertebral discs of the lumbosacral region [12]. The severity of the pain makes an individual wretched. The sciatic nerve is the largest nerve in the human body. Previously this disease was known as Cotugno's disease.
Tumour of the cauda equina, protrusion of an intervertebral disc, Pott's disease, spondylosis, osteomyelitis, fracture of a lumbar vertebra, neurofibroma, tuberculosis, gluteal bursitis, neoplasms of the sacrum and pelvic bones, and penetrating injury to the sciatic nerve are the chief known causes of sciatica [13].
Sciatic Nerve
The sciatic nerves are the largest as well as the longest nerves in the body, reaching about the diameter of a thumb and running down the back of each leg. Each sciatic nerve is composed of five smaller nerves that leave the spinal cord from the lower spinal column, join together, and then travel down the leg [14]. Each then divides into many smaller nerves that travel to the thigh, knee, calf, ankle, foot and toes.
When these nerves are irritated or affected by inflammation of nearby soft tissues, doctors refer to this as sciatica. The sciatic nerve, a branch of the sacral plexus (L4, L5, S1, S2, S3) and the largest nerve in diameter in the body, measures about 2 cm in breadth at its commencement. It passes through the greater sciatic foramen below the piriformis, descends between the greater trochanter of the femur and the tuberosity of the ischium, and runs along the back of the thigh to about its lower one third, where it divides into two large branches, the tibial and common peroneal nerves. The sciatic nerve also gives off articular and muscular branches.
Tibial nerve: The larger terminal branch of the sciatic nerve, the tibial nerve arises in the lower third of the thigh. It runs downward through the popliteal fossa, lying first on the lateral side of the popliteal artery, then posterior to it, and finally medial to it. The popliteal vein lies between the nerve and the artery throughout its course. The nerve enters the posterior compartment of the leg by passing beneath the soleus muscle. Its branches are as below:
Cutaneous: The sural nerve descends between the two heads of the gastrocnemius muscle and is usually joined by the sural communicating branch of the peroneal nerve. Numerous small branches arise from the sural nerve to supply the skin of the calf and the back of the lateral malleolus, and it is distributed to the skin along the lateral border of the foot and the lateral side of the little toe.
Muscular: Muscular branches of the tibial nerve supply both heads of the gastrocnemius as well as the plantaris, soleus and popliteus.
Articular: These branches supply the knee joint.
The smaller terminal branch of the sciatic nerve, the common peroneal nerve, arises in the lower third of the thigh. It runs downwards through the popliteal fossa, closely following the medial border of the biceps muscle. It leaves the fossa by crossing superficially the lateral head of the gastrocnemius muscle. It then passes behind the head of the fibula, winds laterally around the neck of the bone, pierces the peroneus longus muscle and divides into two terminal branches: 1) Superficial peroneal nerve.
2) Deep peroneal nerve.
Sciatic nerve distribution: L4-L5 (S1-S2-S3).
Motor distribution: the hamstrings in the thigh; the superficial and deep muscles of the calf through the medial popliteal and posterior tibial nerves; the muscles of the sole through the medial and lateral plantar nerves; the peronei through the superficial peroneal nerve.
Extraneural disease
Sacroiliac joints: subluxation, tuberculosis and non-tubercular arthritis, ankylosing spondylitis and other spondyloarthropathies. Sacrum and pelvic bones: primary and secondary neoplasms. Soft tissue: gluteal bursitis.
Prodromal symptoms: The onset of sciatica may be preceded by recurrent attacks of pain in the lumbar region, often of sufficient severity to produce locking of the back in the flexed position.
Symptoms of sciatica: Lumbago (lumbar pain); the onset is subacute, and the disease is often preceded by lumbar pain due to injury, strain or a fall; there may be a latent interval of days or even weeks.
After two or three days of pain in the lumbar spine, the pain radiates down the back of one leg from the buttock to the ankle. Pain in the back is aching in character and intensified by spinal movements; pain deep in the buttock and thigh is also aching or gnawing in character and influenced by the posture of the limb; pain radiating to the leg and foot is momentarily increased by coughing and sneezing. When the first sacral root is compressed, the pain radiates to the outer border of the foot; when the pressure is upon the fifth lumbar root, pain spreads from the outer aspect of the leg to the inner border of the foot. In general, pain is intensified by stooping, sitting and walking, the patient usually being most comfortable lying in bed on the sound side with slight flexion of the affected leg at the hip and knee. There is often a feeling of numbness, heaviness or deadness in the leg, especially along its outer border. There are muscular hypotonia and slight wasting, not only of the muscles supplied by the sciatic nerve but also of the glutei and sometimes of all the muscles of the lower limb. There is tenderness on pressure in the buttock and thigh; straight leg raising is limited by pain, and stretching the sciatic nerve by extending the knee with the hip flexed causes severe pain (Lasègue's sign).
Protrusion of the disc between L5 and S1 or between L4 and L5 is common; hence the effects of compression of these nerve roots are listed below:
Conservative Treatment
a) Complete rest in bed, supine, for 3 to 6 weeks.
b) When pain is relieved, a plaster jacket to immobilize the lumbar spine for 3 to 6 months.
c) A lumbar corset worn at all times during the day.
CONCLUSION
In sciatica there is pain in the distribution of the sciatic nerve, which begins in the lower back and radiates through the posterior aspect of the thigh and calf to the outer border of the foot. Gridhrasi is included among the 80 types of Nanatmaja Vata Vikara. Sushruta has emphasized the involvement of the Antara Kandara of the Gulpha in producing the disease Gridhrasi. Acharya Sushruta has described Vatavyadhi Chikitsa as the treatment. Acharya Charaka has described Siravyadha, Basti Karma and Agnikarma in the management of Gridhrasi.
Sciatica pain (Gridhrasi) is a painful condition, and mainly Vata Vyadhi Chikitsa has been advocated for it. Gridhrasi is commonly seen in society as a prominent problem. The sciatic nerve, located in the buttock behind the hip joint, is responsible for sciatic pain; it may be affected anywhere along its course down the leg, and the clinical features are mainly low back ache and radiculopathy along the distribution of the sciatic nerve. Conservative treatment or surgery is indicated in modern medicine, and Vata Vyadhi Chikitsa is indicated in Ayurveda. The lumbar spine is the site of the most expensive orthopaedic problems in the world, and it is the seat of miracles: the central nervous system as well as the autonomic nervous system work through the spine and the entire nervous system. Sciatica, or the sciatic syndrome, a condition described in modern medicine, resembles Gridhrasi.
Red fluorescence increases with depth in reef fishes, supporting a visual function, not UV protection
Why do some marine fishes exhibit striking patterns of natural red fluorescence? In this study, we contrast two non-exclusive hypotheses: (i) that UV absorption by fluorescent pigments offers significant photoprotection in shallow water, where UV irradiance is strongest; and (ii) that red fluorescence enhances visual contrast at depths below −10 m, where most light in the ‘red’ 600–700 nm range has been absorbed. Whereas the photoprotection hypothesis predicts fluorescence to be stronger near the surface and weaker in deeper water, the visual contrast hypothesis predicts the opposite. We used fluorometry to measure red fluorescence brightness in vivo in individuals belonging to eight common small reef fish species with conspicuously red fluorescent eyes. Fluorescence was significantly brighter in specimens from the −20 m sites than in those from −5 m sites in six out of eight species. No difference was found in the remaining two. Our results support the visual contrast hypothesis. We discuss the possible roles fluorescence may play in fish visual ecology and highlight the possibility that fluorescent light emission from the eyes in particular may be used to detect cryptic prey.
The second, alternative explanation derives from the fact that fluorescence emits photons at longer wavelengths following light absorption at shorter wavelengths [21]. Hence, by adding light to the long-wavelength range, fluorescence acts as an additive colour mechanism. This feature is unique to fluorescence and other forms of luminescence (e.g. chemi- or bioluminescence [21]). Virtually all animal colours, however, merely reflect or transmit light that is not absorbed. Consequently, they display a down-sampled subset of the ambient spectrum, which is why they are called subtraction colours. This applies to pigments as well as structural colours [22,23]. The key question is under what conditions additive fluorescent coloration can be significant for colour vision, given the evolutionary success of subtraction colours.
(a) The role of fluorescence in colour vision
Natural luminescence, whether fluorescence or chemiluminescence, has one drawback: it is weak compared with the ambient sunlight in the same spectral range. As a consequence, the bioluminescent eyes of flashlight fish, for example, are only functional in twilight or darkness [24]. Such restrictions also apply to animal fluorescence. Although short-wavelength light is required to induce it, there should be little if any ambient light at the longer wavelengths where the fluorescent light is emitted. Hence, whenever the ambient spectrum covers the full visual spectrum, as is the case in terrestrial environments, fluorescence in animals may usually be insignificant relative to subtractive colours, explaining why the latter are usually the mechanism of choice [25] (see [10,26-31] for exceptions). This reasoning can be extended to clear shallow aquatic habitats [32,33]. We call these environments 'euryspectral' (i.e. they are characterized by an ambient spectrum that is so broad that it exceeds the visual spectrum of most animals at both ends of its range).
Conditions change in favour of fluorescence when descending further down the water column. In addition to getting darker, the spectrum quickly narrows in width because water absorbs long wavelengths (580-700 nm) particularly efficiently [23,34,35] (electronic supplementary material, S1; figure 1). Whereas total irradiance in the blue 450-500 nm range is balanced relative to that in the red 600-650 nm range just below the surface (blue/red = 0.856), this rapidly changes to a ratio of 186.4 at −20 m. The depth range in which the sunlight spectrum is narrower than the visual spectrum of many of its inhabitants will be called the 'stenospectral' zone hereafter. Near reefs the stenospectral zone starts between −10 and −25 m, depending on conditions such as waves, time of day, cloud cover and turbidity [36].
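This collapse of the blue/red ratio follows directly from exponential attenuation. The back-of-the-envelope Python sketch below combines the just-below-surface ratio quoted above with rough, assumed attenuation coefficients for clear ocean water; it illustrates the Beer-Lambert logic only and is not a calculation from this study's irradiance data.

```python
import math

K_BLUE = 0.02   # m^-1, ~475 nm (rough assumed value for clear ocean water)
K_RED = 0.28    # m^-1, ~620 nm (rough assumed value)
surface_blue_to_red = 0.856  # just-below-surface ratio quoted in the text

def blue_red_ratio(depth_m):
    """Blue/red irradiance ratio at a given depth below the surface,
    via E(z) = E0 * exp(-K * z) applied to each band."""
    return surface_blue_to_red * math.exp((K_RED - K_BLUE) * depth_m)

for z in (0, 5, 10, 20):
    print(f"{z:>2} m: blue/red = {blue_red_ratio(z):7.1f}")
# At 20 m depth this gives a ratio on the order of 10^2, consistent with the
# roughly 186-fold excess of blue over red light reported above.
```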
The stenospectral zone is ideal for a visual function of fluorescence [21]: while it still offers sufficient light to induce fluorescence, there is little ambient light in the 580-700 nm emission range. As a consequence, even weak red fluorescence may become visible to an observer with the appropriate sensitivity. This is due to the way in which eyes perceive chromatic contrast; it depends on the ratios of cone photoreceptor types that are stimulated by light coming from an object compared with an adjacent object or background [25,37]. Hence, even when quantitatively weak, red fluorescent structures could produce a perceptible colour contrast against the cyan background of the stenospectral zone. In the shallow euryspectral zone, the contrast of a fluorescing structure would be insignificant against the broad spectral background.
Based on these considerations, and as a non-exclusive alternative to the photoprotection hypothesis, the visual contrast hypothesis proposes that fluorescence is used to generate patterns for long-wavelength vision in the stenospectral zone, as recently proposed for green fluorescence in midwater animals [13] and red fluorescence in barnacles [9].
(b) Fluorescence in fish: photoprotection or visual contrast?
Recently, we described the presence of red fluorescence in several reef fish species [8] (see [12] for a further expansion). Many of these show a concentration of fluorescence in the head region and around the eyes, particularly in small fishes with an otherwise rather transparent body (figure 2). Tissues close to the eyes or the brain are particularly sensitive to photo-damage [38]. This is further substantiated by the fact that the ocular media of many reef fishes block UV [38,39]. All this indicates that the photoprotection hypothesis may be a valid explanation for fluorescence in fish, at least in species where fluorescence is located in sensitive structures. The visual contrast hypothesis, however, offers an attractive alternative for fish in the stenospectral zone. Many marine fishes possess photoreceptors with sensitivities extending into the long-wavelength part of the ambient spectrum, including families with many fluorescent representatives such as wrasses [40], pipefish [41] and gobies [8,42]. Hence, such species seem ideally adapted to use and perceive red fluorescence, as already suggested for the neon pygmy goby [8] and shown experimentally in the fairy wrasse Cirrhilabrus solorensis [43]. Although these hypotheses are non-exclusive, their relative effect can nevertheless be assessed because of the opposite predictions they make. Under the photoprotection hypothesis, fish should fluoresce more brightly in the euryspectral zone. Under the visual contrast hypothesis, fish are expected to fluoresce more brightly in the stenospectral zone. This allowed us to use a simple sampling design to competitively test which of the two hypotheses is more plausible: by measuring the brightness of red fluorescence in the eyes of eight different marine fish species from three fish families at −5 and −20 m (figure 3), we directly assessed whether fluorescence is linked more to the euryspectral or the stenospectral zone.
Material and methods
(a) Focal fish species
Data were collected at sites in the Mediterranean Sea, Red Sea and Eastern Indian Ocean (see §2b). We selected species based on three criteria: (i) the presence of fluorescence in the iris, (ii) small size and benthic lifestyle to facilitate collection, and (iii) sufficient abundance at both sampling depths. Based on prior knowledge regarding the presence of red fluorescence and depth distribution [8] (N.K.M. 2007-2013, personal observation), we focused on eight species from three fish families. Gobies (family Gobiidae) are the most species-rich marine fish family, with correspondingly great diversification in terms of distribution, ecology and morphology [44]. They are mostly tropical and sub-tropical. The free-swimming redeye goby (Bryaninops natans [45]) usually forms groups of five to more than 50 individuals around compact Acropora coral heads, where they feed on plankton. The brightness of red fluorescence in B. natans irides is among the strongest recorded to date [8] (figure 3). The remaining four study species from this family (figure 3), the spotted pygmy goby Eviota guttata [46], the pygmy goby Eviota zebrina [46], Michel's ghost goby Pleurosicya micheli [47] and a sand goby, Fusigobius cf. duospilus [48], represent a species-rich guild of small bottom-dwelling predators that forage individually or in loose groups on benthic and planktonic prey. While E. guttata, E. zebrina and P. micheli primarily live on live hard corals (e.g. Porites boulders) and adjacent bare reef rock, F. cf. duospilus prefers the sediments at the reef base. All four species share reasonably strong fluorescence in the iris, with additional fluorescence on the head and upper flank in the two Eviota species (figure 3).
We included the black-breasted pipefish Corythoichthys nigripectus (cf. [49]) to represent the family Syngnathidae. This species inhabits sediment-rich reefs in coastal lagoons and seaward reefs, often in loose pairs or groups. Fluorescence is known from several members of this genus [8,12], with C. nigripectus displaying fluorescent patterns on the upper iris and to a variable extent along the upper body (figure 3).
Finally, triplefins (family Tripterygiidae) are mostly cryptobenthic, predatory blennioids with a worldwide distribution in tropical and temperate waters [50]. The black-faced blenny, Tripterygion delaisi [51], and H. striata represent this family in our sample.
(ii) Indo-Pacific Ocean
The triplefin H. striata was collected at Hoga Island in the Wakatobi archipelago off the southeast Sulawesi coast, Indonesia, in September 2011. Collection took place in the context of a general permit of Operation Wallacea to conduct scientific and educational projects on the reefs at Hoga (sampling registered accordingly). We sampled fish along the wall of an exposed reef (the 'Pinnacle') that slopes down to below −40 m.
(c) Fish collection and maintenance
Fish were collected on scuba with hand nets, after partially anaesthetizing individuals using clove oil where required (5% clove oil in 5% ethanol and 90% seawater [8]). Every dive focused on a single species. We usually reached our goal of approximately 10 individuals at each target depth on a single dive with two to four divers. We sampled at or below −20 m during the first half of the dive and at or above −5 m during the second half of the dive. After brief transportation in perforated 50 ml Falcon tubes or 1 l plastic bags (for C. nigripectus), fish were maintained in aerated containers at 24-26°C for 1-8 h. All individuals were measured on the collection day and released in their natural environment within 24 h. Sample size in F. cf. duospilus was initially 15 and 13 for the −5 and −20 m sites, respectively, but had to be reduced to 5 and 13 due to the inadvertent presence of 10 individuals of a sibling species (F. neophytus) in the −5 m sample, which we only discovered a posteriori when analysing the photographs.
(d) Spectrometry and photographic documentation of fish fluorescence
We employed a standardized work flow in which each individual fish was (i) put in a plastic bag with a small amount of seawater and placed in ice water for about 1 min to tranquilize it, (ii) placed in 1 cm of approximately 20°C seawater in a large glass Petri dish lined with non-fluorescent black cloth for spectrometric measurements for less than 5 min (details below), (iii) moved into 2 cm of approximately 20°C seawater in a photography chamber for standardized fluorescence pictures for less than 5 min (details below), and (iv) returned to a recovery tank with 20 l of aerated seawater at room temperature. Spectrometric measurements were taken with an Ocean Optics QE65000 spectrometer for fluorescence and a bifurcated OceanOptics QR600-7-UV125BX fibre optics cable with a single saltwater-proof tip, in which six peripheral bundles of glass fibres emit the excitation light and one central bundle of glass fibres collects the emitted light. Excitation light was generated using a green laser (ThorLabs CPS532, a 532 nm laser diode module with an AHF narrow-band laser clean-up filter ZET 532/10) and guided into the illumination arm of the bifurcated fibre. With this excitation illumination, the fluorescent signal is maximized when the submerged probe is held at a distance of 4.5-5 mm. At this distance, the viewing angle of the central, light-accepting fibre has a diameter of 1.51-1.67 mm (area 1.79-2.19 mm²). The fibre guiding the accepted light to the spectrometer included a filter holder with a Semrock EdgeBasic 532R-25 long-pass filter to eliminate reflected laser light.
Each new fish measurement series included a control measurement of a Labsphere Spectralon Fluorescence Standard (type USFS-336-010) to check for fluctuations in measurement sensitivity. Spectrometer integration times were usually 800 ms, but were adjusted to shorter integration times when emission intensities exceeded the dynamic range of the spectrometer (e.g. in B. natans). All final measurements were uniformly expressed as counts ms−1 nm−1. The basic set-up at Hoga Island (for H. striata) was similar, but used a bifurcated Avantes 7UV200 fibre optics cable and generated excitation light with a different green 532 nm laser pointer (Conrad, part number 776301-62). The set-up at Stareso (for T. delaisi) used another green laser pointer (Z-Bolt Scuba-1/Dive Laser) as the excitation source, but was otherwise identical. As a consequence of these differences between set-ups, the readings for the three sites cannot be compared quantitatively. However, because our goal was to examine differences in fluorescence brightness within species, this limitation does not affect the interpretation of our results.
For the actual fluorescence measurements, the tip of the spectrometer probe was handheld by one person, pointing at the fish with the tip submerged at the optimal focal distance (approx. 0.5 cm) and at an angle of approximately 45° to the fish, which was held in an upright position in the Petri dish. Both eyes were measured. The emission signal fluctuations inherent to this type of spectrometry were mitigated by repeating measurements up to 10 times per individual fish eye. To exclude sequence or handling effects, fish were measured in a randomized order with respect to depth of origin, with the person doing the measurements blind to fish origin. All measurements were taken in a dark room, only dimly illuminated with 450 nm LEDs (invisible to the spectrometer set-up).
(e) Statistical analysis
We calculated fluorescent emission brightness as the integrated area under the emission curve (counts ms⁻¹ nm⁻¹; 'total brightness' [53,54]) with the highest fluorescent signal from either of the two eyes for each individual fish, limited to the focal 'red' emission range between 580 and 750 nm (figure 4). Although 'counts' are closely and linearly related to 'quanta' (the Ocean Optics QE65000 has a 90% quantum efficiency in the target emission range), we did not actually measure the excitation curves and quantum yields of the fluorescent pigments and therefore need to treat these measurements as 'arbitrary units', which are useful for relative comparisons within the same dataset, but not between datasets obtained with different (artificial or natural) excitation sources. Given that these measurements tended to show left-skewed distributions with inhomogeneous variances between depths, we performed our analysis using log10-transformed values. Alternative measures of fluorescence brightness (maximum or mean peak emission per fish, or mean integrated total emission per fish) yielded qualitatively identical results.
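A compact sketch of this brightness metric, assuming each eye's best (highest-signal) emission curve is already available as numpy wavelength/intensity arrays; names and the integration routine are ours, not the authors' code:

```python
import numpy as np

RED_RANGE_NM = (580.0, 750.0)  # focal 'red' emission range from the text

def total_brightness(wavelengths_nm, emission):
    """Integrate a normalized emission curve (counts ms^-1 nm^-1) over
    580-750 nm; the result is in arbitrary units."""
    lo, hi = RED_RANGE_NM
    mask = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
    return np.trapz(emission[mask], wavelengths_nm[mask])

def fish_log_brightness(wavelengths_nm, left_eye, right_eye):
    """Per-fish statistic: the brighter of the two eyes,
    log10-transformed as in the analysis described above."""
    brightest = max(total_brightness(wavelengths_nm, left_eye),
                    total_brightness(wavelengths_nm, right_eye))
    return np.log10(brightest)
```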
Figure 4. Fluorescent emission spectra of the eight study species, averaged and sum-normalized across all measured individuals using the maximum curve for each individual. All species show a peak emission in the spectral range where absorption by water increases rapidly (cf. figure 1).

Because of the differences in ecology and behaviour of the species tested, it is possible that a visual contrast or a photoprotection function applies differently to each of them. For this reason, we refrained from carrying out an overall statistical analysis and analysed the data for each species independently. Our primary analysis compares fluorescence brightness between individuals caught in shallow (−5 m) and deep (−20 m) water for each species. In addition, our analysis took individual body length into account as a covariate (ANCOVA). In five species (B. natans, C. nigripectus, E. guttata, E. zebrina and F. cf. duospilus), individuals from shallow water tended to be larger on average than those from deeper water (Welch's t-test, all p < 0.05), with the reverse pattern in T. delaisi (p = 0.026) and no difference in P. micheli and H. striata (all p > 0.42). This non-independence between our main factor (depth) and the covariate (body length), however, did not qualitatively affect our results: first, fluorescence brightness was independent of body length in seven species (linear regression, all p > 0.11) and only showed slight positive covariation in H. striata (adj. R² = 0.139, F1,14 = 3.42, p = 0.086). Second, we found no heterogeneity in covariate regression slopes between the two depths (ANCOVA, interaction body length × depth, all p > 0.54) except for H. striata (p = 0.03). Finally, visual inspection of the data (electronic supplementary material, figure S2) confirms that the reported depth effects on fluorescence brightness are not confounded by covariation with body length within the body size range of our measured fish. All our findings are robust to inclusion or exclusion of body length as a covariate, as well as to alternative non-parametric testing. Statistical analyses were performed in R (v. 3.0.1, R Development Core Team).
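The original analyses were run in R (v. 3.0.1); as a minimal illustration of the same model structure, here is a Python/statsmodels sketch for one species, with hypothetical column and file names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-species table with columns:
#   log_brightness : log10 integrated red emission (arbitrary units)
#   depth          : 'shallow' (-5 m) or 'deep' (-20 m)
#   body_length    : standard length of the fish
df = pd.read_csv("species_measurements.csv")

# Slope-homogeneity check: is the body length x depth interaction significant?
interaction = smf.ols("log_brightness ~ C(depth) * body_length", data=df).fit()
print(interaction.summary())

# Main ANCOVA: depth effect on brightness, adjusted for body length.
ancova = smf.ols("log_brightness ~ C(depth) + body_length", data=df).fit()
print(ancova.summary())
```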
Results
The irides of all eight species sampled showed a fluorescence emission peak in the range of 600–620 nm (figure 4). There was conspicuous individual variation in fluorescence brightness in some, but not all, species (figure 3).
In six out of eight species, fluorescence was significantly brighter at −20 m than at −5 m (table 1 and figure 5). The effect was particularly strong in the gobies E. guttata and P. micheli, the two triplefins H. striata and T. delaisi, and the pipefish C. nigripectus. It was less pronounced but still significant in the goby E. zebrina. No effect was found in the two gobies B. natans and F. cf. duospilus.
Body size did not contribute significantly to variation in fluorescence brightness in seven species, and only marginally so in one, H. striata (see Material and methods; electronic supplementary material, figure S2).
Discussion
Fluorescence brightness differed significantly between the euryspectral and stenospectral sampling zones for six out of eight species, indicating that fluorescence is adjusted to depth. In all six species, the fluorescing irides were significantly brighter at greater depth, and none of the species examined showed the opposite pattern. Although we used species from three fish families and seven genera, fluorescent peak emission was very similar at 600–620 nm in all eight species. Our findings are consistent with the hypothesis that the six species of fish studied here have red fluorescence mainly for visual, contrast-enhancing functions. This does not imply that photoprotection is irrelevant, but it is likely to be of secondary importance in the species sampled here. Whether a corresponding depth effect is absent in B. natans and F. cf. duospilus because both mechanisms act simultaneously, or because those fish lack depth-based adaptations, is currently not clear.

The role of fluorescent pigments in photoprotection has previously been investigated for corals [16-19], but, to our knowledge, not for vertebrates. We now provide indirect evidence that photoprotection is probably not the primary function in at least some marine fish. We suspect that the observed difference between the two depths involves differences in the number of melanophores covering the iris, the number of fluorescent chromatophores and/or the concentration of fluorescent pigment within the fluorescent chromatophores. Because we did not correct for the size of the fluorescent patch, we cannot exclude that fluorescent patch size may also have contributed to the observed effect.

The differences in fluorescence brightness could originate from phenotypic plasticity during development or in the adult stage, or from local genetic adaptation. Tripterygion delaisi is known to exhibit high levels of self-recruitment [55] and population genetic substructure [56], but only when its rocky-shore habitats are isolated by large discontinuities of sand or deep water at a scale of kilometres. While quantifications of depth-related population substructure are missing, this renders small-scale local adaptation, as known for other fish [57], unlikely. In the adult stage, however, all investigated fish inhabit spatially limited, benthic home ranges, with adult dispersal of T. delaisi estimated at just a few dozen metres [56]. The resultant depth-range fidelity may offer sufficient time to phenotypically adjust the machinery controlling iris fluorescence to the local light conditions. The contributions of plasticity and genetic differentiation to the observed differences in fluorescence are subjects of current research.
The suggestion that fluorescence has a visual function in marine fishes fits well with the recent discovery that males of the fairy wrasse Cirrhilabrus solorensis respond to the deep red fluorescence typical of this species in a mirror-image stimulus experiment [43]. It also adds to a small but growing collection of corresponding case studies in other animal systems. Fluorescence has been proposed to have a signalling function in mantis shrimps [6], jumping spiders [26] and budgerigars [28]. In deep-sea dragonfish, it is used to transform green bioluminescent light into red light [58]. We expect that visual functions of fluorescence may be widespread in animals with well-developed colour vision living under spectrally skewed ambient light.
Our study does not answer the underlying question of why reef fish may benefit from increasing visual contrast using red fluorescence. Observations across many marine fish species show that red fluorescence can be present on various parts of the body and in a variety of patterns, suggesting visual functions in intra- and interspecific signalling, camouflage or warning [8,12]. Fluorescence around the eyes is often found in small, highly cryptic, benthic, predatory fish [8] (figures 2 and 3), suggesting a functional link. Bruce [59] proposed that fluorescence around the eyes may not act as a signal to an observer, but as an active light source used by the sender. Being close to the pupil makes fluorescent irides ideally positioned to generate reflections in the eyes of cryptic prey. Under stenospectral conditions, such reflections generated using red fluorescence would contrast strongly with the cyan visual background. This idea has striking analogies with a similar mechanism described for nocturnal, bioluminescent fish [24] and deserves more attention in future research.
Conclusion
Fluorescence brightness increased with depth in six out of eight marine fish species. This is opposite to the pattern expected if long-wavelength fluorescence were to primarily serve photoprotection. Our data are, however, consistent with the alternative hypothesis, which states that fluorescence can serve a visual contrast function when the wavelengths emitted by fluorescence are rare or absent from the ambient light. Visual contrast enhancement offers an intriguing new adaptive function for fluorescent pigments in marine environments, which calls for investigations of the physical properties, perceptive abilities and behavioural consequences of signalling using locally rare colour hues.
Perinatal asphyxia and hypothermic treatment from the endocrine perspective
Introduction: Perinatal asphyxia is one of the three most important causes of neonatal mortality and morbidity. Therapeutic hypothermia represents the standard treatment for infants with moderate-severe perinatal asphyxia, resulting in a reduction in mortality and major neurodevelopmental disability. So far, data in the literature focusing on the endocrine aspects of both asphyxia and hypothermia treatment at birth are scanty, and many aspects are still debated. The aim of this narrative review is to summarize the current knowledge regarding the short- and long-term effects of perinatal asphyxia and of hypothermia treatment on the endocrine system, thus providing suggestions for improving the management of asphyxiated children.

Results: Involvement of the endocrine system (especially glucose and electrolyte disturbances, adrenal hemorrhage, non-thyroidal illness syndrome) can occur in a variable percentage of subjects with perinatal asphyxia, potentially affecting mortality as well as neurological outcome. Hypothermia may also affect endocrine homeostasis, leading to a decreased incidence of hypocalcemia and an increased risk of dilutional hyponatremia and hypercalcemia.

Conclusions: Metabolic abnormalities in the context of perinatal asphyxia are important modifiable factors that may be associated with a worse outcome. Therefore, clinicians should be aware of the possible occurrence of endocrine complications, in order to establish appropriate screening protocols and allow timely treatment.
Introduction
Perinatal asphyxia (PA) is defined as a critical reduction in the oxygenated blood supply to the fetus occurring around the time of birth because of a variety of events, including maternal or fetal hemorrhage, intermittent or acute umbilical cord compression, uterine rupture, or dystocic delivery. PA represents one of the three most important causes of neonatal morbidity and mortality (1).
Most asphyxiated babies recover successfully from the hypoxic insult, but some patients experience permanent damage of both vital and non-vital organs, especially the brain, heart, kidney, and lungs. Hypoxic brain damage may result in hypoxic ischemic encephalopathy (HIE), which has an incidence of 1-8 per 1000 live births in developed countries and is the most common cause of long-term disability in full-term infants (1,2). The endocrine system plays a critical role in coordinating metabolic, respiratory and vasomotor responses to hypoxia (3). Moreover, in a small but not negligible percentage of cases, PA may be associated with endocrine dysfunctions including electrolyte and glucose disturbances, adrenal insufficiency (AI), thyroid hormone abnormalities, and damage to the pineal gland.
Hypothermia treatment (HT) represents the standard treatment for near-term infants with moderate-to-severe HIE (4). Despite leading to a clinically relevant reduction in major neurodevelopmental disability and cerebral palsy (CP), hypothermia is only partially effective and may in turn cause organ damage leading to endocrine disturbances (4-6). HT has become routine care relatively recently, and while its effects on the nervous, cardiopulmonary, and renal systems have been thoroughly investigated (7-9), the endocrine and metabolic effects have not yet been sufficiently considered.
Endocrine alterations during PA represent important modifiable factors that can be associated with increased mortality and worse neurodevelopmental outcome; clinicians therefore need to be aware of their possible occurrence. Screening strategies for endocrine complications are essential to ensure timely diagnosis and therapeutic intervention, as well as to improve neuroprotection.
The aim of this review is to summarize the main studies evaluating the effects of perinatal asphyxia and of HT on the endocrine system, with a focus on the pathogenic mechanisms and on the monitoring and treatment strategies for asphyxiated children.

During hypoxic-ischemic injury, blood flow redistribution is driven by carotid chemoreceptor activation, causing massive catecholamine release with peripheral vasoconstriction. Despite this adaptive process, prolonged hypoxia forces the brain into anaerobic metabolism, causing lactate production and metabolic acidosis (1). Similarly, in case of prolonged hypoxemia, cardiac output fails to maintain myocardial oxygenation, resulting in metabolic acidosis, myocardial failure, and shock (9). Moreover, nephron activity is depressed and the kidneys show elevated susceptibility to reperfusion injury, resulting in decreased excretory function with both electrolyte and pH imbalance (1,2). Other non-endocrine organs possibly affected by hypoxic injury are the liver, with hyper-transaminasemia and coagulopathy, and the lungs, with pulmonary hypertension and hemorrhage (1,2).
Hypoxic brain damage mainly results in HIE, which is the most common cause of long-term disability in full-term infants.
The pathogenesis of neonatal HIE involves an early phase of energy failure, followed after at least six hours by reoxygenation and reperfusion injury, with depletion of the antioxidant defense system due to oxidative stress and subsequent tissue damage (10). The severity of HIE is commonly classified as mild, moderate or severe according to the Sarnat grading, which correlates with the degree of neuronal damage and is predictive of adverse neurodevelopmental outcomes (6).
The interval between the two pathophysiologic phases represents the therapeutic window for HT in infants with moderate-to-severe HIE. Such treatment has been shown to reduce mortality and the risk of long-term neurodevelopmental disability (4) by reducing cerebral metabolism and by attenuating pro-inflammatory pathways that lead to necrosis and neuronal apoptosis, including the release of excitatory amino acids and the production of free radicals and nitric oxide (11). However, in severe cases, despite maximal care, only limited improvement is observed, with important repercussions on the family, the health care system, and society. Indeed, when the severity or duration of the neuronal insult exceeds the capacity of the CNS to repair the damage, depending on susceptibility characteristics that the genome confers on neuronal tissue, inflammation persists and the damaged brain tissues lose the support of neurotrophic factors (12,13). In this respect, experimental evidence in animal and/or human models indicates that more prolonged administration of additional treatments, including growth factors, stem cells, antioxidants, substances reducing excitotoxicity and local inflammation, and antiapoptotic agents, may improve the therapeutic efficacy of HT by preventing more severe neuronal and synaptic injury and potentiating repair and regeneration of the damaged brain tissue (12-14).
Eligibility criteria for HT are: (a) gestational age of at least 35 weeks and weight of at least 1.8 kg; (b) age of less than 6 hours; (c) asphyxia, as defined by the presence of at least two of the following: Apgar score of less than 6 at 10 minutes or persistent need for resuscitation at 10 minutes, and any acute perinatal event associated with cord arterial pH <7.0 or base excess ≤ −12 mmol/L obtained within the first hour of life; and (d) moderate/severe HIE according to Sarnat staging (15). HT is not recommended in case of oxygen requirement greater than 80%, major congenital abnormalities, severe uncontrolled coagulopathy or low probability of survival (15).
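For illustration only, and certainly not as a clinical tool, these criteria can be read as a simple decision rule; the function and parameter names below are ours, and real eligibility assessment rests on clinical judgement:

```python
def meets_ht_eligibility(ga_weeks, weight_kg, age_hours,
                         apgar10, resus_at_10min,
                         cord_ph, base_excess_mmol_l,
                         moderate_severe_hie):
    """Toy encoding of the HT eligibility criteria summarized above."""
    asphyxia_markers = sum([
        apgar10 < 6 or resus_at_10min,                 # marker one
        cord_ph < 7.0 or base_excess_mmol_l <= -12.0,  # marker two (first hour)
    ])
    return (ga_weeks >= 35 and weight_kg >= 1.8  # (a)
            and age_hours < 6                    # (b)
            and asphyxia_markers >= 2            # (c) at least two markers
            and moderate_severe_hie)             # (d) Sarnat staging
```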
The HT protocol consists of 72 hours of hypothermia (core temperature around 33.5°C), followed by a gradual rewarming phase (15). All children undergoing HT should undergo brain MRI within the first month of life, which allows early recognition of cerebral abnormalities and helps predict neurodevelopmental outcomes (16).
Endocrine effects of perinatal asphyxia
The clinical spectrum of PA encompasses several endocrine manifestations that can lead to acute decompensation and even life-threatening events (Figure 1). The variability of the clinical presentation depends on several factors, such as gestational age, the severity and/or duration of hypoxia, and the use of HT (17,18).
Glucose homeostasis
Oxidative metabolism accounts for almost all glucose uptake by the brain (19). Under hypoxic conditions, excess lactate serves as a substrate for gluconeogenesis, which is in turn stimulated by the release of glucocorticoids and catecholamines (20,21). The brain increases its utilization of glucose, and reactive vasodilation increases the glucose available for anaerobic glycolysis; nevertheless, worsening acidosis is ultimately associated with impaired cardiac function, decreased glucose release, loss of cerebrovascular autoregulation, and depletion of local glucose stores. The neonatal brain compensates initially by lowering cerebral energy requirements and enhancing the ability to utilize lactate as an alternative energy source; however, asphyxia leads to a failure of these compensatory mechanisms, with an impaired antioxidant system and worsening of the encephalopathy (19,20).
Both hypoglycemia and hyperglycemia are known to occur more frequently in high-risk categories of newborns, including asphyxiated neonates (22,23). In fact, almost half of infants receiving HT for HIE may experience at least one episode of hypoglycemia, with the first episode usually occurring within the first 24 hours and a progressive reduction in frequency over the following days (23-25). Symptoms of neuroglycopenia in the neonatal period are highly non-specific and may include lethargy, cyanosis, irregular breathing, hypotonia, irritability, abnormal cry, feeding problems, seizures, myoclonic jerks, coma, and apnea. Thus, they can easily be masked by the symptoms of HIE (22).
The pathogenesis of hypoglycemia in PA is multifactorial, involving severe glycogen depletion secondary to catecholamine release, a reduced glucagon response, increased insulin release, and reduced adiponectin (which promotes insulin sensitivity) (23,26-28). In response to increased insulin signaling, activation of the insulin receptor (IR) results in increased expression of lipogenic genes and inhibition of the expression of gluconeogenic genes, through coordinated activation of specific transcription factors such as cyclic AMP-responsive element-binding protein (CREB), Forkhead box O (FOXO), and sterol regulatory element-binding protein 1 (SREBP1) (29). While the stimulating effects of insulin on lipogenesis are largely mediated by SREBP1 and CREB, IR-mediated phosphorylation of FOXO leads to the exclusion of the protein from the nucleus, with reduced transcription of genes involved in gluconeogenesis and glycogenolysis (30). Signaling through CREB and FOXO is also crucial for β-cell survival, as well as for insulin gene transcription and glucose-mediated insulin exocytosis (29,30). These processes are disrupted in the context of perinatal asphyxia, owing to the marked sensitivity of pancreatic β cells to oxidative stress. In fact, under hypoxic conditions, activation of hypoxia-inducible factors stimulates anaerobic glycolytic flux independently of blood glucose concentrations, resulting in higher insulin and lower glucose concentrations in fasting conditions, and an impaired insulin response to postprandial hyperglycemia (31). In a few cases, inappropriate insulin secretion becomes clinically relevant enough to require specific medical treatment (26) (Table 1). Indeed, a recent retrospective study showed that PA and greater-than-standard resuscitation accounted for 3% and 33% of the causes of perinatal stress-induced hyperinsulinemic hypoglycemia, respectively (26).
Hyperglycemia has also been reported in up to 50% of neonates with encephalopathy (7). Parmentier et al. showed that 35.4% of 223 infants receiving HT had hypoglycemia, which was severe in 22.4% (19). In this study, 80% of the infants with hypoglycemia also had later episodes of hyperglycemia, which might result either from interventions aimed at increasing glucose levels or from hypoxia-related hepatic and pancreatic islet dysfunction (19,25).
Several studies have shown that hypoglycemic episodes correlate with the severity of both asphyxia and HIE (20,50) and that glucose instability is predictive of adverse neurodevelopmental outcome in neonates with HIE (25,51-53). Indeed, Montaldo et al. (25) reported that 35% of infants with an unfavorable outcome had out-of-range glucose values, compared with only 18% in the group with a favorable outcome; moreover, a longer duration of hypoglycemia and a greater area under the hypoglycemic curve were associated with adverse neurodevelopmental outcomes at 18-24 months. Similarly, Basu et al. reported that infants with HIE who experienced hypoglycemia or any glucose derangement during the early postnatal period had a 3- to 6-fold increased risk of unfavorable outcomes (death or severe neurodevelopmental disability at 18 months) compared with normoglycemic infants (50). In a recent study, hypoglycemia predicted lower motor and cognitive scores at preschool age (19), after adjustment for severity of HIE.
Conversely, Pinchefsky et al. documented that in neonates with HIE on 3-day continuous glucose monitoring (CGM), periods of hyperglycemia, but not of hypoglycemia, were associated with worse background EEG scores, reduced sleep-wake cycling, and seizures (54). Similarly, in a more recent study, Kamino et al. found that maximum, but not minimum, glucose concentrations over the first 48 hours predicted basal ganglia and watershed injury in neonates suffering from HIE (55). Glucose concentrations above 10.1 mmol/L during the first 48 hours of life predicted a higher composite outcome of severe disability or death, higher Child Behavior Checklist T-scores, worse neuromotor scores, and a higher risk of cerebral palsy at 18 months of life (55).
A growing body of evidence suggests that glucose derangement (especially hypoglycemia) makes certain areas of the brain more vulnerable to hypoxic-ischemic injury. While neonatal hypoglycemia has classically been linked to parieto-occipital injury, the pattern of hypoglycemia-related injury in neonates with HIE seems to include involvement of the corticospinal tract, the basal ganglia, the sensorimotor cortex, and watershed areas (23). In a more recent study of neonates with HIE undergoing HT, even higher peak glucose concentrations on day one of life were associated with changes on MRI spectroscopy in many areas other than those associated with hypoglycemia (anterior and posterior white matter, corpus callosum, lentiform nucleus, pulvinar, and optic radiations) (56). So far, no clear-cut explanation has been found for the susceptibility of these specific brain areas to dysglycemia in asphyxiated neonates. Various mechanisms have been proposed, including altered patterns of regional perfusion, hypoglycemia-induced excitatory neurotoxins active at cell-type-specific N-methyl-D-aspartate receptors, increased mitochondrial free-radical generation and initiation of apoptosis (57). Additional hypotheses include immaturity of the white matter in some infants and reduced myelin fiber formation, due to inhibited proliferation, migration, and differentiation of oligodendrocyte precursors and accelerated oligodendrocyte apoptosis induced by hypoglycemia (58).

Figure 1. Endocrine manifestations of birth asphyxia.
These observations indicate that proactive avoidance of glucose instability is a neuroprotective strategy in the context of neonatal encephalopathy (25,32). The Pediatric Endocrine Society advises that a "safe target" during the first 48 hours should be close to the mean for healthy newborns and above the threshold for neuroglycopenic symptoms (50 mg/dl; 2.8 mmol/L) (59). This can be achieved through regular monitoring of glucose levels every 4 to 6 hours during HT and within the 48 hours of rewarming, together with an adequate energy supply via the enteral or parenteral route (32,59) and correction of hypoglycemia (32,33,59) and hyperglycemia episodes (34,35) (Table 1). However, reference ranges in healthy full-term newborns may not be appropriate in infants at risk of impaired metabolic adaptation, as individual susceptibility to brain injury can vary depending on comorbid conditions and an infant's ability to produce and use alternative substrates (32).
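As a quick check of the quoted threshold, the mg/dl and mmol/L figures are linked by the molar mass of glucose (about 180.16 g/mol); a one-line conversion, given here purely as an illustrative snippet of our own:

```python
MGDL_PER_MMOLL = 18.016  # 180.16 g/mol glucose -> 18.016 mg/dl per mmol/L

def glucose_mgdl_to_mmoll(mg_dl):
    """Convert a glucose concentration from mg/dl to mmol/L."""
    return mg_dl / MGDL_PER_MMOLL

print(round(glucose_mgdl_to_mmoll(50), 1))  # neuroglycopenia threshold -> 2.8
```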
Despite being considered the gold standard in infants, intermittent glucose monitoring might miss low glucose concentrations or, conversely, might detect only transient hypoglycemia, leading to unnecessary treatment. CGM has the advantage of tracking glucose levels continuously, potentially improving clinical management; however, the accuracy and functioning of sensors during cooling remain to be determined (25). In all patients who have recurrent episodes of hypoglycemia during weaning from parenteral nutrition, or higher-than-normal glucose requirements to maintain euglycemia (>8 mg/kg/min), evaluation of insulin concentrations at the time of hypoglycemia is mandatory (33). Incremental introduction of enteral feeds by orogastric tube during HT and the rewarming period, possibly with maternal breast milk, is safe, may be beneficial for glucose metabolism and early stabilization of the gut microbiome, and may also have a neuroprotective effect (59). Non-nutritive enteral feeding (10 ml/kg/day) can be started immediately during HT, with small increases if well tolerated. At the end of hypothermia, oral feeding can be re-established with caution if suction is good; otherwise, the orogastric tube should be continued (59).
Syndrome of inappropriate antidiuretic hormone release
A marked increase in serum ADH and copeptin has been reported after vaginal delivery, triggered by activation of the hypothalamic-pituitary axis and the sympathetic nervous system in response to stress (60). Copeptin is the C-terminal part of pre-proADH, released in a 1:1 ratio with ADH. Its in vitro stability makes it an ideal surrogate marker of peripheral ADH release (60). The reasons for this increase are unknown, but ADH seems to exacerbate brain edema, vasoconstriction, disruption of the blood-brain barrier, and neuroinflammation during ischemic brain injury (61).
Newborns with HIE are at risk of developing hyponatremia due to acute kidney injury, overload of administered fluids, SIADH, urinary sodium loss related to decreased tubular sodium reabsorption, or treatments such as hypothermia (62,63). Hypothyroidism and hypocortisolism may contribute to hyponatremia, especially in premature newborns (64,65). Multifactorial SIADH may occur in newborns with HIE due to hypoxic brain injury, pain, vomiting or drugs with an ADH-like effect (especially anticonvulsants). Strict monitoring of fluid balance and daily electrolyte assessment (every 8 hours in the first 24-48 hours) is mandatory in neonates affected by HIE (36).
In the case of hyponatremia, the fluid administration rate and sodium content should be evaluated first. The biochemical evaluation should include measurement of TSH, fT4, ACTH, cortisol, serum osmolality, urine osmolality, urinary sodium and, if available, serum copeptin.
SIADH should be considered in hyponatremia associated with low serum osmolality (<275 mOsm/kg), urine osmolality higher than 100 mOsm/kg, urinary sodium higher than 40 mmol/L, low urinary output and, when available, copeptin inappropriate for serum osmolality (generally above the normal range) (37).
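This biochemical pattern translates naturally into a screening rule; the sketch below is illustrative only, with our own names and simplifications, and is not a validated diagnostic algorithm:

```python
def pattern_suggests_siadh(serum_osm, urine_osm, urine_na_mmol_l,
                           low_urine_output):
    """Check the SIADH-compatible laboratory pattern described above
    in a hyponatremic neonate. Osmolalities are in mOsm/kg."""
    return (serum_osm < 275
            and urine_osm > 100
            and urine_na_mmol_l > 40
            and low_urine_output)
```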
Fluid restriction should be considered the first-choice measure in case of SIADH: 50-70 ml/kg/day is initially considered appropriate for neonates on the first day of life, with a further daily increase of 20 ml/kg/day to be evaluated for each subsequent day (37,38,66) (Table 1). Fluid restriction, however, is often difficult to achieve or maintain and may not always be effective.
Under these conditions, or when SIADH becomes chronic with potentially severe neurological symptoms, a class of aquaretic agents called vaptans can be used (39) to drive electrolyte-free polyuria. A low-dose titration approach should be used at treatment initiation, and subsequent dose modulation should be based on monitoring of serum sodium and urinary output. Fluid restriction should gradually move towards normal daily fluid intake. Vaptans are still considered off-label in pediatric age in both Europe and the USA and, in addition to monitoring of fluid and electrolyte balance, assessment of liver enzymes during treatment is also required. Given the negative effects exerted by ADH during ischemic brain injury, a role has been proposed for vaptans in mitigating the neurological consequences of ischemic stroke in adults (67). Rapid correction of hyponatremia should be avoided: the correction rate should be 4-6 mmol/L in the first 4-6 hours, 10-12 mmol/L in the first 24 hours, and <18 mmol/L in the first 48 hours (37). Finally, in the presence of concomitant biochemical features of overt hypothyroidism or AI, the decision to initiate treatment should be considered on a case-by-case basis (37).
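The quoted correction-rate ceilings can be stated compactly; a toy check of the rise in serum sodium over time (illustrative only, not a substitute for clinical judgement):

```python
def sodium_correction_within_limits(rise_6h, rise_24h, rise_48h):
    """All arguments are cumulative rises in serum sodium (mmol/L)
    since the start of correction, at roughly 6 h, 24 h and 48 h."""
    return rise_6h <= 6 and rise_24h <= 12 and rise_48h < 18
```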
Central diabetes insipidus
HIE has been associated with central diabetes insipidus (CDI) in anecdotal cases (67,68). The supraoptic and paraventricular hypothalamic nuclei are resistant to hypoxia and reduced blood supply because of their many neurosecretory cells, more than 90% of which must be destroyed before CDI occurs. Moreover, the posterior pituitary gland receives its blood supply from the inferior hypophyseal artery, which functions under high pressure and is thus relatively protected from low-pressure or hypoxic damage (69). Finally, glucocorticoid-dependent mechanisms of brain tolerance to hypoxia and the HT-related decrease in energy demand may further protect the hypothalamic nuclei from hypoxic damage (70). Thus, only severe hypoxic/ischemic insults can interfere with the integrity of ADH release (67,68). In case of hypernatremia, the fluid balance and the sodium content of the administered fluids should first be carefully evaluated. Biochemical assessment requires measurement of serum and urine osmolality and, if available, a serum sample for copeptin. Hypernatremia, high serum osmolality (associated with a low copeptin level), low urinary osmolality and high urinary output are diagnostic for CDI (39). During hypernatremia correction, hypotonic solutions should be avoided and the daily sodium intake should be provided. Desmopressin administration should be started at the lowest possible dose (1 mcg/kg/day), with further dose adjustments based on fluid balance and serum sodium levels (Table 1) (39).
Hypocalcemia
Hypocalcemia, defined as a total serum calcium (Ca) level <4 mEq/L in term newborns and <3.5 mEq/L in preterm newborns, or an ionized serum Ca level <2.0 mEq/L in term newborns and <1.75 mEq/L in preterm newborns, is common in asphyxiated children and is often associated with other electrolyte disturbances, such as hyponatremia, hyperkalemia, hypomagnesemia, and even hyperphosphatemia (71). Hypocalcemia has also been reported in 17% of newborns undergoing HT, even though, in contrast to what has been observed for hyponatremia, the implementation of HT seems to have led to a reduction in its incidence despite lower Ca intakes, suggesting positive effects of hypothermia on Ca metabolism (40).
Asphyxia-related hypocalcemia is explained by several possible mechanisms, including a slow PTH secretory response of the parathyroids to the postnatal fall in plasma Ca concentration, an increased phosphate load due to tissue catabolism or excess parenteral supply, low Ca intake due to delayed feeding, excess bicarbonate therapy, renal failure and, finally, increased calcitonin concentrations (62,71). While several studies have indicated a close correlation of hyponatremia and hypokalemia with the severity of asphyxia, the results for hypocalcemia are still conflicting. Some authors have failed to find a correlation between hypocalcemia and the severity of hypoxic-ischemic encephalopathy, while others have reported that severely asphyxiated children are more prone to develop severe Ca impairments requiring prompt medical intervention (41).
Symptoms of hypocalcemia are non-specific and can be masked by the asphyxiated condition, being related to alterations of neuromuscular and CNS activity (irritability, agitation, apnea, lethargy with poor sucking, seizures) and of cardiac rhythm (arrhythmia, with even an increased risk of sudden death).
Ca influx into neurons and glial cells due to glutamate-mediated excitotoxicity results in activation of calcium-dependent lytic enzymes, oxidative stress, mitochondrial dysfunction, cytotoxic edema, and apoptosis (6). Thus, serial evaluations of ionized Ca, the biologically active fraction of Ca, are required to adequately diagnose hypocalcemia, avoiding unnecessary or prophylactic calcium administration (62).
Acute treatment of symptomatic hypocalcemia involves the use of intravenous (iv) 10% Ca gluconate at a dose of 100 mg/kg (1 mL/kg), infused slowly over 10-20 minutes under close ECG monitoring to avoid arrhythmias (72,73) (Table 1). Alternatively, iv Ca chloride (20 mg/kg or 0.2 mL/kg) can be given, a more rapidly metabolized preparation that may be preferable in life-threatening situations. If symptoms persist after the initial dose, the dose of Ca can be repeated after 10 minutes. After acute treatment, maintenance Ca gluconate can be administered at an iv dose of 100 mg/kg (1 mL/kg) elemental Ca daily (72,73). If enteral feedings are tolerated, oral Ca glubionate can be given at a dose of 30-50 mg/kg/day in four divided doses, although its high osmolality and sugar content may cause gastrointestinal irritability or diarrhea. Alternatively, 10% Ca gluconate (up to 500 mg/kg/day) can be used, divided over four to six feedings (72,73). Serum Ca concentrations usually improve within 1-3 days of treatment; Ca supplements should be withdrawn gradually once serum Ca levels have normalized and the newborn is able to feed sufficiently for its needs (74).
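The arithmetic behind the quoted acute dose is simple because a 10% solution contains 100 mg/mL; a small illustrative calculator (our own naming, not a prescribing tool):

```python
def ca_gluconate_10pct_bolus(weight_kg):
    """Acute dose of 10% Ca gluconate as quoted above:
    100 mg/kg, i.e. 1 mL/kg of a 100 mg/mL solution,
    infused slowly over 10-20 min under ECG monitoring."""
    dose_mg = 100.0 * weight_kg
    volume_ml = dose_mg / 100.0  # 10% solution = 100 mg/mL
    return dose_mg, volume_ml

print(ca_gluconate_10pct_bolus(3.2))  # hypothetical 3.2 kg neonate -> (320.0, 3.2)
```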
To enhance Ca absorption, vitamin D3 at 400-800 IU/day should be added, depending on the gestational age and the vitamin status of the neonate and the mother. Calcitriol at a dose of 0.08 to 0.1 mcg/kg/day may represent an alternative in case of hepatic or renal failure or immaturity. In case of concomitant hypomagnesemia, the latter should be treated before correcting the hypocalcemia, with 50% magnesium sulfate at a dose of 50-100 mg/kg (0.4-0.8 mEq/kg/day) divided into 2 doses, given iv over at least 2 hours or intramuscularly (IM), until the serum magnesium concentration is >1.5 mg/dL (0.62 mmol/L) (72,73). In infants with associated hyperphosphatemia, breast milk is preferable for its correct Ca/P ratio; alternatively, a low-phosphate formula should be used, even though the differences in phosphate concentrations among the various formulas are small and may not be clinically significant (72,73).
Hypercalcemia
Hypercalcemia occurs rarely in asphyxiated newborns and is most often iatrogenic. It can, however, be encountered during HT, mainly in association with subcutaneous fat necrosis (SFN) (75), even though this condition may be normocalcemic, or even hypocalcemic in a much lower percentage of cases due to an immature PTH response (76). The median onset of SFN is around day 6 of life, but onset as late as 270 days has been reported (76). Its incidence has decreased over time because of improved skin care (42) and is currently estimated at around 1% of cases undergoing HT (4,75). Neonatal fat has a relatively high concentration of saturated fatty acids (palmitic and stearic acids), whose high melting point predisposes the adipose tissue to crystallization during hypothermia. Other possible mechanisms contributing to SFN are hypoxic damage, mechanical pressure with subsequent worsening of hypoperfusion, and the localization of brown adipose tissue at specific sites (76). Clinically, SFN is characterized by multiple indurated plaques or nodules, with or without erythema, on the cheeks, posterior trunk, buttocks, and limbs (43,76).
Hypercalcemia occurs, usually within the first month, in 36-56% of affected neonates and may be life-threatening. It is likely due to extrarenal production of 1,25-dihydroxyvitamin D3 by inflammatory skin cells expressing high levels of 1-alpha hydroxylase. Alternatively, direct release of calcium from the skin lesions has been suggested (76).
Only 50% of these neonates show classic symptoms of hypercalcemia (poor feeding, vomiting, failure to thrive, constipation, muscular hypotonia, lethargy, irritability, convulsions, hypertension); routine screening for hypercalcemia is therefore recommended for neonates with, or at risk of developing, SFN (76).
SFN is a self-limiting panniculitis; however, when it is complicated by hypercalcemia, several treatment options are indicated on a case-by-case basis, along with ECG monitoring for possible arrhythmias (Table 1) (43). The first steps are iv hyperhydration and a low-Ca formula, avoiding vitamin D supplementation. A Ca-losing diuretic, such as furosemide, and/or corticosteroids represent the next step in case of persistent hypercalcemia. Finally, bisphosphonates have recently been proposed as first-line treatment in symptomatic newborns (especially iv pamidronate, in one or more doses of 0.25-0.5 mg/kg) (43,44,76).
Prevention strategies for fluid and electrolyte disturbances
During HT there is a reduction in trans-epidermal water loss due to skin vasoconstriction, in urinary output, and in respiratory water losses due to mechanical ventilation (77). The likelihood of fluid retention therefore increases, and some authors recommend systematic fluid and sodium restriction to avoid hyponatremia (63). Current recommendations suggest that, from birth and within the first 24 hours of HT, infusion should start at 40-50 ml/kg/day, adjusting fluid intake according to the fluid balance. It is recommended to start with an isotonic glucose solution containing no sodium or potassium, but with the addition of Ca (at a maintenance value of 6 ml/kg/day).
However, it has been suggested that a systematic approach of fluid restriction during HT should be avoided in the absence of overt SIADH, so as not to cause further end-organ damage to the kidneys and brain (38). On the other hand, fluid overload may worsen hyponatremia and lead to cerebral or pulmonary edema. Moreover, there are no evidence-based data supporting or refuting an effect of a systematic fluid-restriction approach following PA on mortality or morbidity (77-79).
Electrolytes should be added after 24 to 48 hours, in the absence of severe dyselectrolytemia, once electrolytes and renal function are stable. It is recommended to avoid potassium supplementation during cooling, because of the risk of hyperkalemia during rewarming. Sodium, potassium and Ca should be checked every six hours during the 72 hours of HT, and every twelve hours during the 48 hours of rewarming (38).
Careful skin care, by changing the neonate's posture several times a day during HT, has been suggested to reduce the risk of HT-related adiponecrosis (75). If a rigid mattress is used for systemic hypothermia, it may be helpful to place a sheet between the newborn and the mattress. Regular skin inspection is required in the first two months of life in children receiving HT or born after a traumatic delivery or shoulder dystocia, to identify late adiponecrosis or very small lesions (43,76). Moreover, families should be informed of the possible occurrence of such lesions, as well as of the possible symptoms of hypercalcemia (especially vomiting and poor weight gain or feeding) (43). The concentrations of total and ionized Ca, together with the urinary Ca/creatinine ratio, should be checked weekly in the first month after detection of the lesions, especially if they are large (76), and monthly thereafter, or in case of symptoms of hypercalcemia, within the following months (43,76). In those infants who develop hypercalcemia, determination of serum 1,25(OH)2D3 and parathyroid hormone (PTH) allows confirmation of PTH-independent hypercalcemia, while regular renal US monitoring is required to detect nephrocalcinosis (76).
Adrenal gland

Adrenal hemorrhage and adrenal insufficiency
Neonatal adrenal hemorrhage (AH) occurs in up to 3% of live births (45) and may be due to PA in a variable percentage of cases. In the largest series described so far (80), AH was more frequently associated with well-known risk factors for PA, such as vaginal delivery (95.9%), macrosomia (21.6%), and fetal acidosis (31%). In another study (81), among 37 cases of AH diagnosed over a 4-year period, 10.8% had HIE, while 10.8% and 18.9% were associated with traumatic delivery and the need for resuscitation soon after birth, respectively. The vulnerability of the neonatal adrenal gland to hemorrhage may be explained by its large size and peculiar vascular structure, characterized by a large arterial supply that drains into a few veins at the corticomedullary junction and eventually into a single adrenal vein with a thick muscular wall (46). AH during PA may result from a marked decrease in perfusion pressure causing ischemic necrosis of vessels at the corticomedullary junction, or from reperfusion injury (82). Alternatively, AH may be related to venous vasoconstriction and platelet aggregation favored by massive release of ACTH and/or catecholamines (82,83), or to the marked increase in venous and arterial perfusion pressure associated with traumatic delivery. The latter factor may also explain the male predominance of AH, likely due to higher birth weight (45).
The right adrenal gland is involved in about 70% of cases, because the right adrenal vein flows directly into the inferior vena cava, making it more susceptible to venous pressure fluctuations or to compression between the liver and the spine (46). Bilateral AH accounts for about 10% of cases (80).
AH may be identified incidentally or be symptomatic (46). Symptoms are non-specific and may include pallor, feeding difficulties, vomiting, a palpable abdominal mass, indirect jaundice, hypothermia, tachypnea, hypotonia, or lethargy. In three independent series (80,81,84), jaundice was the most common symptom, being reported in 50-85% of cases. Conversely, Fedakar et al. found hypotonia and lethargy to be the most frequent symptoms (35.7% of cases) (85). Rarely, blood can leak through the retroperitoneal space, causing swelling and a bluish discoloration of the scrotum, mimicking acute scrotal disease (80).
Since AH is usually unilateral, AI is infrequent. In a large series of 74 cases, AI was present in only 1 patient (1.3%) (81). AI usually develops during the first week of life, but delayed presentation may be due to gradual fibrosis of the adrenal gland or to complications of PA, such as sepsis, clotting problems, and intraventricular hemorrhage.
The symptoms of AI are highly non-specific, so clinicians should keep a low threshold of suspicion for AI in patients with bilateral AH, hemodynamic instability, hypotension, lethargy, hypovolemic shock, hyponatremia, hyperkalemia, hypoglycemia, acidosis, or cholestasis (86). The development of AI is more frequent in preterm than in full-term babies (84). The adrenal gland has good regenerative capacity, so that complete regression of hemorrhagic lesions, and even of AI, can be achieved within up to 30 months (45,85,87).
Laboratory work-up may variably show anemia, indirect hyperbilirubinemia and coagulation abnormalities, as well as the hormonal hallmarks of primary AI (46).
US is the technique of choice for the screening and follow-up of AH in neonates and provides important clues for the differential diagnosis (Table 2) (45,46), especially by documenting the typical changes in lesion appearance. Indeed, AH initially appears as a solid, echogenic lesion, but within 2 weeks it takes on a cystic appearance with mixed echogenicity and progressively resolves, with possible residual calcifications (45). Contrast-enhanced US is more accurate than Doppler in differentiating non-vascularized AH from vascularized solid lesions.

Some authors recommend abdominal US screening in all neonates with PA, especially in the presence of traumatic delivery, anemia, and prolonged jaundice (81). In patients receiving HT, routine abdominal US should be undertaken at the start of treatment and after the rewarming phase. When AH is identified, the kidneys must be scanned to rule out concurrent renal vein thrombosis (88). Follow-up US is also required to evaluate lesion resolution. Although the timing is not well defined, a sensible approach might be to repeat the US every few days in the first 2 weeks, and then monthly until resolution.
Most cases of AH require only careful monitoring of vital signs, hydration, electrolytes and blood glucose, to allow early detection of AI (46). ACTH and cortisol levels should be checked in all patients with bilateral AH at diagnosis and repeated regularly during the first month of life. AI is confirmed when baseline cortisol is <5 µg/dl (140 nmol/l), associated with ACTH concentrations more than twofold the upper limit of the reference range, decreased aldosterone, and increased renin concentrations (47). If baseline values are borderline, a corticotropin test may be required, with AI indicated by a peak cortisol <18 µg/dl (500 nmol/l) 30 or 60 minutes after administration of ACTH (250 µg/m² or 15 µg/kg). Basal cortisol concentrations >20 µg/dl exclude AI (46).
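Read as a decision rule, these thresholds can be sketched as follows (cut-offs as quoted above, names ours; illustrative only, not a diagnostic tool):

```python
def classify_adrenal_function(basal_cortisol_ugdl, acth_fold_uln,
                              stim_peak_ugdl=None):
    """Apply the cortisol/ACTH thresholds quoted above.
    acth_fold_uln: ACTH as a multiple of the upper reference limit;
    stim_peak_ugdl: peak cortisol after corticotropin, if performed."""
    if basal_cortisol_ugdl > 20:
        return "AI excluded"
    if basal_cortisol_ugdl < 5 and acth_fold_uln > 2:
        return "AI confirmed"
    if stim_peak_ugdl is not None:
        return "AI confirmed" if stim_peak_ugdl < 18 else "AI unlikely"
    return "borderline - corticotropin test required"
```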
Treatment of acute AI is based on fluid, electrolyte, and hydrocortisone replacement (Table 1). Treatment cannot be delayed until confirmatory results are received and must be started immediately after a diagnostic sample is collected (46). Hydrocortisone is given as an initial iv bolus of 50-100 mg/m², followed by 50-100 mg/m²/day in four divided doses. When the patient is clinically stable and able to take oral medications, hydrocortisone can be switched to an oral dose of 10-12 mg/m²/day in three divided doses, and fludrocortisone replacement can be added. An attempt to gradually reduce hydrocortisone should be made when the patient is clinically stable, adrenal lesions are resolved or stable, and the hormone profile is repeatedly normal. Given the possibly delayed resolution of AH, more than one attempt may be required (89).
Relative adrenal insufficiency
The hypothalamic-pituitary-adrenal axis acts as a key homeostatic regulator during PA, through different patterns of cortisol release depending on fetal characteristics and on the severity and duration of the hypoxic insult (90). In response to acute asphyxia and HT, ovine fetuses exhibit a transient rise in cortisol, which is comparable in magnitude between preterm and full-term fetuses in case of severe injury (90). Moreover, a study in humans demonstrated a more gradual decrease in cortisol concentration in neonates undergoing HT compared with normothermic neonates, with cortisol values related to the anti-inflammatory cytokine interleukin-10 (91). Despite such adaptations, some neonates with PA may experience hemodynamic instability and refractory hypotension (defined as a mean blood pressure persistently below the 10th percentile for age despite adequate inotrope and crystalloid administration), associated with relative adrenal insufficiency (48). This condition, also called critical-illness-related corticosteroid insufficiency (CIRCI), consists of cortisol secretion or activity that is inappropriately low for the extent of stress or severity of illness present (92). Thus, adrenal function must be evaluated in all neonates with symptoms of AI, regardless of the presence of AH. However, there are still very few studies conducted in neonates.
The mechanisms leading to CIRCI likely include impaired cortisol production (possibly related to reduced adrenal perfusion or impaired binding of ACTH to its adrenal receptor), as well as tissue resistance to glucocorticoids related to dysfunction of their receptors (84), with impaired translocation of glucocorticoid receptors into the cell nucleus in response to stress (93). Kashana et al. (94) found that half of the patients with HIE had circulatory collapse that improved with glucocorticoid administration, despite cortisol concentrations comparable to those of other asphyxiated neonates. Moreover, hypotensive babies had a marked increase in dehydroepiandrosterone, suggesting selective impairment of 3β-hydroxysteroid dehydrogenase enzyme activity during PA (94). In another study, asphyxiated neonates undergoing HT showed a CIRCI-compatible cortisol concentration at 24 hours of life, at least partially responsible for hypotension and multi-organ failure (91). Although neonates with refractory hypotension have an impaired response to ACTH stimulation and an inappropriately low cortisol level for the degree of stress, the diagnosis of CIRCI in this age group remains unclear, owing to the lack of a specific cut-off value (49).
Administration of hydrocortisone or dexamethasone to neonates with refractory hypotension has been shown to improve blood pressure within 2 hours, mainly by acting on vascular tone (Table 1). Furthermore, it is worth mentioning that high doses of hydrocortisone in a murine model of HIE have shown some neuroprotective effects, especially in the presence of concomitant sepsis (95).
Thyroid function
Thyroid hormones are key regulators of thermogenesis, water and electrolyte balance, and growth and development of the brain (96). Moreover, during asphyxia, they play an important role in regulating cardiac contractile function (97,98). Since the heart turns to anaerobic glycolysis for energy production during hypoxia and ischemia, triiodothyronine (T3) is important for the regulation of cellular heart metabolism. PA may be associated with a reduction of T4 and T3 levels and an increase in rT3, not caused by an intrinsic abnormality in thyroid function, especially over the first hours/days of life. This condition is known as euthyroid sick syndrome or, alternatively, as non-thyroidal illness syndrome (NTIS) (99,100). NTIS is more frequent in premature than in full-term newborns and may also be associated with severe diseases complicating PA, like respiratory distress, sepsis, cranial hemorrhage, persistent ductus arteriosus and necrotizing enterocolitis (99). Different pathogenic mechanisms have been hypothesized, such as an abnormal setting of the hypothalamic and pituitary thyroid hormone receptors, asphyxia-induced changes in iodothyronine deiodinase expression, and variations in intracellular thyroid hormone uptake (100,101).

The typical pattern of NTIS includes reduced T3 and increased rT3 concentrations in the presence of low-normal TSH and a suppressed response of TSH to thyrotropin-releasing hormone (TRH), while decreased T4 and FT4 concentrations are seen when the disease becomes more severe, correlating with a poor prognosis (96-98). Tahirovic (96) and Sak (102) demonstrated lower cord blood FT4 and T4 concentrations in full-term babies with a low Apgar score compared with matched controls. Low serum concentrations of T4, FT4, T3 and FT3 have also been reported at 18 and 24 hours of life in asphyxiated newborns (103). Contrasting results have been reported for cord blood TSH concentrations in asphyxiated newborns. Sak (102) and Gemer (104) found higher values compared with the control group, possibly due to the catecholamine increase and/or redistribution of fetal blood flow to the brain. Conversely, Tahirovic (96) and Pereira (103) found no difference in cord blood TSH concentrations, suggesting a low TRH secretion in response to hypoxia and/or stress.

Alterations in thyroid function tests in asphyxiated newborns are also likely to be influenced by HT, making the evaluation of thyroid function more complex. In adults, serum T3 and FT3 decrease when body temperature is low for a prolonged time (105). Studies evaluating thyroid hormones in infants who received HT have yielded contrasting results. In fact, while Yazici (97) found higher capillary TSH in the first 4 days of life, Kobayashi et al. (64) reported a TSH decrease to the lower limit of the normal range at 24 hours, along with low serum FT3 and FT4 over the first 96 hours of life, in asphyxiated newborns undergoing HT with abnormal MRI findings compared with a group with normal imaging, suggesting central hypothyroidism associated with moderate/severe HIE.
Moreover, the use of certain drugs may contribute to the altered thyroid function of asphyxiated neonates. In particular, the inotropic agent dopamine may inhibit TSH secretion by regulating gene expression, and may suppress T4 by acting directly on the thyroid gland (98,106).
NTIS likely represents an adaptive response to stress, in an attempt to reduce the metabolic rate and protect organs from illness-related hypercatabolism (106). In case of intact pituitary function, NTIS normalizes within 5 days of the hypoxic-ischemic event (107), or at least by discharge, following resolution of the acute disease (64).
Moreover, there is evidence that low serum thyroid hormone concentrations do not necessarily reflect tissue concentrations, which may depend upon the organ and the type of insult (108). Therefore, NTIS does not usually require therapeutic intervention.
Although there is no convincing evidence regarding the usefulness of administering thyroid hormones to neonates with HIE, in case of moderate/severe HIE it is suggested to assess thyroid function at 72 or 96 hours of life and before discharge, especially in hypoxic neonates who receive HT. Furthermore, according to current guidelines (109), screening of thyroid function must be repeated in acutely ill neonates at 2-4 weeks of life.
L-thyroxine replacement treatment should be considered in cases of overt primary congenital hypothyroidism or of persistently low FT4 in the face of low-normal TSH concentrations, suggesting central hypothyroidism.
Pineal function
The pineal gland, together with the suprachiasmatic nucleus, the hypothalamus, and the retinohypothalamic tract, plays an important role in regulating the circadian rhythm and endocrine output, thus facilitating adaptation to environmental changes. However, its role in the context of HIE is still unclear. Studies in murine models of HIE have shown dysregulated expression of major clock genes in the pineal gland, such as Clock and Bmal1, as late as 48 hours after the hypoxic insult (110,111), resulting from several mechanisms, including increased hypoxia-inducible factor (HIF)-1α and reactive oxygen species and/or activation of the hypothalamic-pituitary-adrenal axis (110). In neonates, disturbed expression of clock genes may mediate the decrease in brain metabolism during HIE, with reduced energy supply and neuronal death, and may exert detrimental effects on cardiovascular function, coagulation, and the immune system (111). Constant exposure of patients to artificial light in intensive care units may aggravate (112), while exogenous melatonin administration may ameliorate (113), such dysregulation.
Pineal cysts are a frequent finding on MRI examinations (114). Laure-Kamionowska et al. (115) revealed hemorrhagic, necrotic and cystic changes of the pineal gland in autopsied fetuses and newborns with other brain lesions, suggesting a high susceptibility to injury of the fetal and neonatal pineal parenchyma.
A key role of ischemic injury has been hypothesized in the pathogenesis of pineal cysts. Özment et al. (116) found that the prevalence of pineal cysts was higher in term babies with periventricular leukomalacia, likely due to hypoxic injury, than in healthy controls. Finally, Bregant et al. (117) found a prevalence of pineal cysts of around 36% in adolescents born near-term who had suffered HIE. An ischemic insult might lead to cavitation of the pineal gland, with damage from free radicals and toxins leading to necrotic degeneration of the intrapineal gliotic layer (115, 116).
Despite being mostly asymptomatic, pineal cysts have been associated with apoplexy, precocious puberty and headache (118). Furthermore, reduced production of melatonin decreases the infant's resistance to various harmful environmental agents and could be related to psychomotor retardation (119). Indeed, melatonin receptors have been found in central and peripheral tissues from the early stages of intrauterine growth (120), when melatonin plays important roles in implementing the genetic program for the development of the brain and other organs (121). In addition, this hormone and its metabolites regulate biological rhythms and act as potent endogenous anti-inflammatory, anti-apoptotic and antioxidant agents, directly or indirectly, by inducing the synthesis of antioxidant enzymes (122). Indeed, chronic hypoxia may induce adaptations in the fetal metabolism of tryptophan and serotonin (precursors of melatonin) involved in the regulation of synaptogenesis, so as to prevent inflammation and neuronal death (123, 124).
Melatonin is currently recommended only for regulating the sleep-wake rhythm (125); however, preclinical studies have shown neuroprotective effects additive to those of hypothermia, allowing for a reduction in infarct size and preservation of neurons (126, 127). A recent randomized placebo-controlled trial (128) confirmed the positive effects of intravenous melatonin, administered to infants in the early phase of HIE, on cognitive outcome at 18 months of life, with a good safety profile. Further studies are needed to clarify the benefits of melatonin in HIE and to assess the efficacy of enteral administration.
Pituitary function
To our knowledge, there are no reports in the literature of anterior pituitary deficits developing during the early course of perinatal asphyxia or during HT. Indeed, in other periods of life, pituitary ischemic necrosis mainly occurs when the gland is enlarged by a tumor or non-tumor process, which disrupts the vascular microarchitecture and increases blood supply requirements (69). Nevertheless, pituitary defects potentially related to PA may occur beyond the immediate postnatal period, highlighting the need for long-term follow-up of asphyxiated infants, including regular evaluation of growth and pubertal development.
Growth hormone deficiency
Several, although not all (129), studies have reported a higher prevalence of PA among children with isolated growth hormone deficiency (GHD) or multiple pituitary hormone deficiencies (MPHD) than in the healthy population (130-133), even though prevalence data vary greatly between studies.
An Italian study reported that 19/48 (39.6%) children with GHD had a perinatal history of breech delivery and/or prolonged asphyxia (134). A similar prevalence of asphyxia (15/42, 36%) was also reported in Japanese patients with idiopathic GHD (135). Conversely, Dasai et al. reported PA in only 7/75 (9.3%) children with idiopathic GHD (136), while a Japanese survey conducted from 1986 to 1998 on 23,110 patients with idiopathic GHD found a prevalence of asphyxia at delivery of 12.3% (137). Interestingly, children with more severe growth hormone deficiency showed the highest prevalence of PA (up to 21.8%) (137). More recently, another large series of 19,717 Japanese children with GHD, treated from 1996 to 2015 (138), reported a prevalence of asphyxia of 6.9%, with a gradual decline over the study span. Taken together, these observations raise the question of whether PA can be considered a cause of hypothalamic-pituitary dysfunction and/or of GHD later in life, in some cases otherwise defined as idiopathic.
Birth asphyxia has also been associated with growth impairment in children with pre-dialysis chronic kidney disease, suggesting that a history of asphyxia could help clinicians identify those children who might benefit most from timely GH treatment (139).
The mechanisms underlying the association between PA and GHD are still poorly understood. Magnetic resonance imaging (MRI) studies (140, 141) have shown a higher prevalence of a history of PA or breech delivery in GHD patients with an ectopic posterior pituitary, compared with individuals with a normal pituitary gland, reaching 100% in cases of severe isolated GHD (141) or MPHD (142). These findings have led to the hypothesis that pituitary abnormalities and dysfunction may arise from a traumatic-ischemic insult.
Finally, it is worth mentioning that a role for GH in mitigating the neurodevelopmental sequelae of PA has also been postulated. Devesa et al. reported that, in a 10-year-old girl with a history of PA but without GHD, GH treatment combined with neurorehabilitation significantly increased cognitive abilities, memory, language competence index and IQ score (143). These data, even if referring to a single case and associated with rehabilitation treatment, are in line with the positive effects of GH on neurocognition observed in other conditions, such as Prader-Willi syndrome (144).
Central precocious puberty
Central precocious puberty (CPP) may result from acquired brain abnormalities, including neonatal HIE and CP (145-148). Previous data documented that children with a neurodevelopmental disability are 20 times more likely to show premature pubertal changes than the general population (149). In a prospective study of 161 girls with HIE (150), early sexual maturation was documented in 4.3% of cases (almost 7-fold more than in the general population). Interestingly, about half of the girls with early puberty had no physical disability (150). Although the exact mechanism by which brain lesions not involving the hypothalamus may trigger CPP is not known, it has been hypothesized that severe brain damage and the use of antiepileptic medication may affect several neurotransmitter pathways involved in the control of gonadotropins, inducing an activation of the HPG axis (151, 152). Worley et al. evaluated 207 children with CP and demonstrated that both girls and boys appeared to enter puberty earlier than the general population, although the former tended to mature over a longer period of time while the latter followed more regular patterns (148). More recent data from a retrospective case-control study, comparing the pubertal patterns of children with both CPP and CP with those of two other groups (CP without CPP, and CPP without CP), confirmed that CPP in CP seems to progress rapidly, supporting the hypothesis of a more intense activation of the HPG axis (146). Moreover, blunted growth can make the diagnosis of CPP more difficult in patients with CP (146).
Adipokines
Data regarding the role of adipokines during PA are scarce. In a recent study, El Mazari et al. (28) reported lower concentrations of adiponectin, and higher concentrations of leptin, compared to healthy controls, which were not related to anthropometric parameters or insulin concentrations as normally observed. Such results may reflect hypoxia-related adipose tissue damage, peripheral tissue resistance, or alteration of the endocrine, paracrine and autocrine mechanisms that control adipokine release (28). These changes may represent a metabolic adaptation to hypoxia. Indeed, in vitro and in vivo studies have shown neuroprotective effects of leptin in ischemic brain injury, consisting of increased neuronal density and reduced apoptosis (153-155).
Conclusions
The implications of PA and HT from the endocrine perspective are not yet well defined. The relationship between PA and the endocrine system is multifaceted. Indeed, if on one side PA is normally accompanied by a marked endocrine and paracrine neuroendocrine response, on the other side hypoxic-ischemic injury, as well as the failure of compensatory physiological mechanisms, may lead to several endocrine complications. HT exerts only partial neuroprotective effects and may in turn cause endocrine derangement. Therefore, alternative or additive strategies to improve neuroprotection are desirable.
Of note, some manifestations (i.e. hypercalcemia, GHD and CPP) may occur beyond the immediate postnatal period, highlighting the need for long-term follow-up of asphyxiated infants. Given the delicate balance between the various medical conditions possibly occurring in the context of PA and HT, a multidisciplinary approach is desirable to identify the best case-by-case management. Adequate monitoring of the various endocrine functions, and prevention of secondary injury by ensuring optimal glucose and electrolyte homeostasis, are essential to improve outcome and prevent life-threatening events.
TABLE 1
Acute management and follow-up of major endocrine features.
Diabetes insipidus (40, 41). Management: desmopressin at a starting dose of 1 mcg/kg/day, then adjust the dose according to water balance and electrolytes. Monitoring: water balance; electrolytes every 8 hours over the first 24-48 hours, and then regularly.

Hypocalcemia (40, 41). Possible mechanisms: altered PTH response or increased calcitonin, increased phosphate or bicarbonate load, low calcium intake, acute renal injury. Management of mild, asymptomatic hypocalcemia: if possible, oral supplementation with calcium gluconate 10%, or calcium gluconate 10% iv at 1.5-2 ml/kg continuously or in divided doses every 6-8 hours. Management of severe, symptomatic hypocalcemia: calcium gluconate 10% iv infusion at 0.5-1 ml/kg slowly over 10 min, followed by calcium gluconate 10% iv at 1.5-2 ml/kg every 6-8 hours. Monitoring: calcium measurements every 6-8 hours, especially during and after weaning of calcium supplements.
TABLE 2
Differential diagnosis of neonatal adrenal hemorrhage.
ALGORITHM OF DECISION MAKING PROCESS BY CORPORATE MANAGEMENT AND WAYS OF RESOLVING CRISIS SITUATIONS CAUSED BY ACCOUNTING, FINANCIAL AND ECONOMIC RISKS
The scientific paper presents an algorithm and methods of strategic decision making by top management and ways of eliminating and resolving crisis situations caused by accounting, financial and economic risks. The algorithm deals with a business's lifetime stages and presents the risks, as well as the methods to analyse such risks, based on both external and internal factors of the managerial environment. The aim is to define the risks and ways of eliminating them. The paper includes results of the EP 7260 (Brno, 1998–2000), GA MSM 431100007 (Brno, 2000–2001) and EP 12/2001–2003 (Brno, 2001–2002) research projects. The methodology is based on analytical-synthetic methods, comparison, controlled interview, strategic decision making, crisis management methods and selected methods of accounting, financial and economic analysis. The paper also follows up on work published at conferences and in the scientific journals FŠI ŽU Žilina (2000), SPU FEM Nitra (2000–2002), PEF ČZU Praha (2000–2001) and IAES (Vienna, 1999; Montreal, 1999; South Carolina, 2000; Paris, 2002). Results of the research have been verified on selected enterprises in the process of dealing with crisis situations which afflicted these enterprises owing to unsuitable reactions to changes in the managerial environment.
Introduction
Successful management of any business requires meeting various prerequisites. One crucial prerequisite is the decision-making process of the business's management, which is now a much more demanding task than ever before. Currently, the managerial environment finds itself in a very turbulent period, shaped by the oncoming effects of the globalisation process taking effect all over the world. This situation calls for an optimal combination of measures dealing with the individual aspects of this globalisation process and with many other regional issues. In this respect, transformation management and crisis management are becoming more and more important.
Changes experienced in the external business environment, and the ability to accommodate them, demand many things of managers. First, the ability to prepare their businesses and staff to cope with such changes, both in terms of mental strength and business organisation. Second, the ability to take appropriate action. The changes might well concern business strategy, processes either in manufacturing or in the information flow environment, and many other things.
Transformation management is dealt with by many authors, both at a general level and at the level of practical applications, e.g. by Hron [5], Gozory [4], Šimo [13] and others. Drucker [2] states that no century in the history of mankind has seen so many radical changes as did the twentieth century. In his publication Management at the Time of Great Changes, Drucker presents practical experience from business and concrete approaches which are available to us to cope with the given situations and learn practical lessons from them. Drdla and Rais [1] give advice important for the successful management of a firm, as to which changes to opt for and how these changes should be scheduled. They also present some model situations for transformation management, methods of avoiding conflicts and ways of addressing possible problems. Conclusions made by many authors suggest that changes are speeding up, bringing many benefits to consumers, but large troubles to effective business management. The conclusions also suggest that only those businesses that are able to respond to these changes adequately will survive.
As supported by our research, successful management of the transformation process requires that various analyses be drawn up, most of them chiefly analyses of accounting, financial and economic information. What all these analyses have in common are the fundamental principles that are based on accounting and have a major impact on other areas as well. The principles concerned include, in particular, the chronological recording of changes in accounting, a systematic approach (either synthetic or analytical), double-entry records, the documentary and replica principle, the principle of caution, etc. Provided that a company follows the above principles in practice, the economic and financial data, besides respecting the company's concrete management environment, give a true picture of the company's situation and of the changes in its management process.
Objective and Methodology
The objective of this paper is to present the results of the research projects EP 7260 (Brno, 1999–2000), GA MSM 431100007 (Brno, 1999–2000) and EP 12/2001–3 (Brno, 2001–2002) on crisis management and transformation management with respect to corporate management, including specification of feedback action, i.e. formulating benefits which might lead to better fulfilment of the company's long-term goals using accounting, financial and economic data. The research results prove how crucial marketing analyses, in particular, are for the decision-making process of a company's management. The paper also includes results of the application of company and product lifetime analyses, of analyses monitoring what customers demand of a product, and of assessments of the degree of saturation of customers' needs. Other analytic approaches were applied as well, such as BCG and SPACE analyses, marketing research and market research analyses, and an algorithm for forming, implementing and modifying business strategies. Results of these approaches are presented, too. The managerial decision making process is based on information systems with supporting programs in a computer network and on important analyses of accounting, financial and economic data.
Transformation and crisis management follow up on an already developed methodology and its practical use in the businesses under research, namely how business strategies are formed, implemented and modified, and the need to clarify these strategies as a result of the turbulent managerial environment and other major influences.
A detailed analysis has been applied to a group of selected businesses. The results presented are for Bioveta (joint stock company) based in the town of Ivanovice na Hané, for Zemspol Studénka (joint stock company) and other businesses.
Results and Discussion
The EP 7260 research (Brno, 1997–2000) has developed the basic methodology consisting of the following partial steps:
- determine the current lifetime phase of the business concerned using the following scale: establishment, childhood, adulthood, decline, revival, and crisis situation involving an overall threat to the company;
- determine the fundamental options for the business or its part to develop (or decay) with respect to general strategic alternatives, i.e. stabilisation, expansion, reduction or a combination;
- analyse the crucial factors of the managerial environment by applying the Ishikawa cause-and-effect chart;
- carry out economic and financial analyses according to the standard known as the European Standard;
- based on the analysis, specify expected situations using a simulation model, developed by the author during the research, to analyse output, its costs and profitability;
- specify business strategies for a new business, or formulate changes in business strategies for already established businesses, as well as spelling out methods to assess these strategies;
- implement the new or modified strategies;
- consider the option to apply vertical integration processes;
- apply strategies involving the management process to control the required changes as a response to managerial environment changes.
We gained much positive experience when verifying the above methodology in the practical operation of the selected businesses. Our experience helped us draw realistic conclusions in the process of responding to changes, particularly in the external managerial environment. The strategic management decision-making process determines the space open to dealing with other decision-making processes, which control the processes at further stages.
The above methodology algorithm of formulating, implementing and modifying business strategies clearly shows that changes can concern all lifetime phases, as the actual lifetime of a business is not bound to follow a standard line, and the homeostasis, i.e. a corresponding accord between the external and internal managerial environments, may be rapidly broken as a result of changes in the managerial environment, and in the external managerial environment in particular.
We can split the changes into two groups, in relation to their causes and nature: a change may either just occur, or be scheduled beforehand, i.e. planned and controlled. Changes of the first group come unexpectedly, i.e. without any plans. The research has come across many changes like that, occurring particularly in external factors. Most of those changes pose a threat. The following can be classified among them:
- a new, so far unknown, potential competitor is introduced to the market, which results in a significant reduction of current sales;
- a market is lost, with the various causes being of either an economic or a political nature;
- conditions on the financial and capital markets worsen significantly and therefore the financial resources open to businesses become limited; plus some other factors.
Many of the surveyed businesses, particularly those operating in agriculture, had found themselves in a crisis caused by the above factors and had to apply the methods of crisis management.
An approach capable of coping with the crisis can be divided into three basic phases:
- crisis degree analysis (1);
- setting up a crisis strategy, i.e. reducing or eliminating the degree of crisis (2);
- implementation of the crisis strategy (3).
The above approach was applied to deal with the crisis at Zemspol Studénka, joint stock company. The crisis degree analysis (1) follows up on the PEST and SWOT managerial environment analyses. Based on these methods, factors were specified which put the business under threat and can further do so, as they are likely to take effect, i.e. factors called threats with respect to the external environment and weaknesses with respect to the internal environment.
Setting up crisis strategies (2) follows up on the crisis degree analysis by defining the effects which would take place if a given crisis factor occurred, i.e. defining the kinds of practical economic impact on the business. Having drawn up a crisis probability analysis including possible impact, we obtain what we call a crisis matrix, which gives the crisis factors and their occurrence probability in rows and the impact caused by the factors in columns. This method reveals four basic combinations:
- high degree of threat probability plus over-average to large economic impact (I);
- high degree of threat probability plus under-average to small impact (II);
- medium to low degree of threat probability plus large economic impact (III);
- medium to low degree of threat probability plus small economic impact (IV).
The crisis matrix is an outline for the management, specifying possible ways of addressing the crises:
- situations I and III require elimination of the most critical issues by: not undertaking the activity; reducing and eventually cancelling the activity; or formulating an alternative solution;
- situation II requires: reducing the activity; or trying to find an alternative solution to the problem;
- situation IV allows addressing the crisis by: trying to find an alternative solution; or applying common operating measures.
This classification logic is sketched in the code below.
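As a minimal sketch of the quadrant logic just described (the thresholds, factor names and probability/impact values are hypothetical assumptions, not data from the research):

```python
# Illustrative sketch: classifying crisis factors into the four quadrants
# (I-IV) of the crisis matrix described above. Thresholds and factor data
# are hypothetical.

def crisis_quadrant(probability: float, impact: float,
                    p_threshold: float = 0.5, i_threshold: float = 0.5) -> str:
    """Return the crisis-matrix quadrant for a factor.

    probability and impact are normalized to [0, 1]; the thresholds split
    'high' from 'medium/low' probability and 'large' from 'small' impact.
    """
    high_prob = probability >= p_threshold
    large_impact = impact >= i_threshold
    if high_prob and large_impact:
        return "I"    # eliminate: stop, reduce/cancel, or find an alternative
    if high_prob and not large_impact:
        return "II"   # reduce the activity or seek an alternative solution
    if not high_prob and large_impact:
        return "III"  # treated like I: eliminate the most critical issues
    return "IV"       # alternative solution or common operating measures

# Hypothetical crisis factors: (name, occurrence probability, economic impact)
factors = [
    ("new competitor enters the market", 0.7, 0.8),
    ("loss of a minor market", 0.6, 0.3),
    ("capital market conditions worsen", 0.3, 0.9),
    ("short supply disruption", 0.2, 0.2),
]

for name, p, i in factors:
    print(f"{name}: quadrant {crisis_quadrant(p, i)}")
```

In practice the two inputs would come from the crisis degree analysis, i.e. from the PEST/SWOT-derived threat list and the estimated economic impact of each factor.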
To deal with the crises, a crisis plan had been established which, when followed, contributed to regaining the balance between the external and internal managerial environments.
The other group of changes, i.e. planned changes, concerns both the external and internal managerial environments. These changes can be described as gradual, long-term, large-scale, and bringing major changes into the business activity proper, human resources, financing methods, market segment modification, market share modification and other things. Many authors deal with these changes. Moreover, some models have been developed at the general level, such as Lewin's model of change, to address and manage such changes.
The research has revealed that both groups of changes are very important for a business to do well. The ability to respond to sudden, and often unpredictable, changes in the external managerial environment is important in order to maintain the business's fundamental long-term functions and aims. Planning major changes is very important for a business to ensure its further development and prosperity in response to changes occurring particularly in the external managerial environment.
It is crucial to monitor the key external and internal factors according to the business's current lifetime phase and to answer these factors with suitable managerial tools. The above clearly shows that, in order to have effective transformation management, the business needs to have available a quality information system which is capable of monitoring the current managerial environment, analysing data and processes, and allowing specification of the prerequisites for possible changes in the external and internal managerial environments.
The research supports the assumption that the results of analyses are significantly influenced by the extent to which the accounting statements, and accounting as such, give a true picture, in terms of the factors below:
- Accounting statements as such have some weaknesses, which are due to the fact that the principle of reliability is preferred to the principle of data relevance for data users. These weaknesses of the balance sheet include the practice of using historical prices for accounting purposes. This means that the original input prices of assets are shown in the statements, disregarding any market value increase (apart from some exceptions, such as financial investments). Asset depreciation is shown through amortisation (its value is often just a rough estimate based on the expected usable life) and adjustments, respectively, but no possible price increase appears in the balance. Therefore, such a possible increase would be reflected in economic results no sooner than at the time of sale of the asset. Moreover, the balance sheet does not show the liability value in the case of a property lease through financial leasing. After the leasing is over, ownership rights to the property transfer to the lessee for a symbolic amount and this value is included in the lessee's assets, which, again, results in asset undervaluation. The analyst can draw correct conclusions only if he or she has access to the data included in the appendices to the accounting statements. Another weakness of the balance sheet is that it shows the assets and resources as at the date of the final accounts and often does not correspond with the development during the accounting period, when significant fluctuations may occur. This can be adjusted only partially by means of averaging the input data or making internal analyses at shorter intervals.
- The use of accounting methods, such as the write-off plan, and strict obedience of some accounting principles, such as the caution principle, are reflected in the formation of adjustments to assets (decisions whether to form adjustments, from which base, at what amount, and how frequently the assets are revalued), in the (non-)formation of other, particularly non-tax, reserves, and in strict accruals, etc. Drawing correct conclusions on the financial situation of a business, or achieving comparability across businesses, can only be successful if one knows the accounting policies of the individual accounting units. For instance, the caution principle is reflected in the formation of adjustments to assets, the methodology for reserve formation, etc. Therefore, accounting data should be interpreted with respect to how the given accounting unit constructed the data shown in the statements. There should be knowledge, for instance as far as short-term assets are concerned, of the methodology for the formation of adjustments to receivables and inventories, because, with different degrees of caution applied to identical assets, different indicators of financial analysis (liquidity, assets, etc.) are revealed.
To achieve comparability of accounting units (or to make comparisons to some standards), we can adjust the accounting data in the following ways:
- Rid the indicators of the different ways of forming adjustments by applying some other, better-constructed ratio indicator. For liquidity, the commonly used indicators, which compare short-term assets to short-term liabilities, could be replaced with an indicator showing cash flow solvency (constructed by the indirect method). This indicator would then be divided by the difference between foreign short-term debts and financial resources (money and bank accounts). The information obtained is then no longer distorted by the different methodologies used for adjustment formation; a sketch of such an indicator is given after this list.
- Another option is to transform the balance sheet data into a financial balance sheet. Here, an identical methodology for adjustment formation is used for the businesses under comparison (classification such as by time elapsed after the maturity date, period of inventory turnover, etc.). To make this adjustment, one has, of course, to know the applicable methodology (internal guidelines of the business).
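As a minimal sketch of the cash-flow-based liquidity indicator described in the first option above (the function name, field names and figures are hypothetical assumptions, not taken from the paper):

```python
# Illustrative sketch: a cash-flow-based solvency indicator as an
# alternative to classic liquidity ratios. All inputs are hypothetical.

def cash_flow_solvency(operating_cash_flow: float,
                       short_term_debts: float,
                       financial_resources: float) -> float:
    """Operating cash flow (indirect method) divided by the difference
    between foreign short-term debts and financial resources
    (cash and bank accounts)."""
    net_short_term_exposure = short_term_debts - financial_resources
    if net_short_term_exposure <= 0:
        # Financial resources already cover the short-term debts.
        return float("inf")
    return operating_cash_flow / net_short_term_exposure

# Hypothetical example: cash flow 1.2m, short-term debts 4.0m, cash 1.5m.
print(cash_flow_solvency(1_200_000, 4_000_000, 1_500_000))  # 0.48
```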
The transformations described above should be applied to all the data analysed, i.e. to the balance sheet as well as the profit and loss account. Two approaches can be employed:
1. The approach of the English-speaking countries: it is useful to order the assets by liquidity (the ability to be changed into money) from the most liquid to the least liquid, which is to divide the assets into first-class assets (cash, money in bank accounts, short-term marketable securities), second-class assets (receivables), third-class assets (inventories) and fourth-class assets (fixed assets), respecting some rare exceptions such as easily realisable inventories or fixed assets. The resources for asset coverage must be analogically adjusted, with liabilities classified by date of maturity, from those with the shortest maturity to those with the longest, respecting the rule that long-term assets should be financed from long-term resources. Data adjusted in this way are used to construct the ratio indicators of financial analysis. The profit and loss account must be transformed so that revenues from manufacturing and trade activities are coupled with the corresponding costs, followed by revenues and costs related to financial and extraordinary activities. The profit (loss) results revealed are adjusted by income tax and, after distribution of profit to the owners, we get the undistributed profit. Having made the above adjustments, it is easy to identify which activity (operational, financial or extraordinary) was a priority for the business and which changes, if any, occurred in comparison to the previous year. This information can also be used to construct the ratio indicators.
2. The French approach: we try to rid the final accounts of individual businesses of differing accounting policies (formation of adjustments, long-term asset write-offs, formation and drawing of reserves, etc.). The accounting balance sheet is transformed into an economic balance sheet: all assets are given in gross values, and the depreciation of assets by means of write-offs and adjustments is given among the liabilities proper as a source of finance, as are the reserves. The capital proper is divided into internal capital, formed by the activity of the business (including the already mentioned amortisation and adjustments), and external capital (put in by the owners or other entities). Adjusting the profit and loss account is a much more complex process and its clarification lies beyond the scope of this paper.
Accounting statement users, however, quite often carry out methodologically correct calculations using data that do not correspond to reality. This might be the result of insufficient knowledge on the part of accountants, or the data may have been intentionally distorted: main accounting principles were broken and the management is trying to manipulate the data so that they show the results the management wants them to (in relation to, for instance, a credit application, or a wish to present better results than those actually achieved). Sometimes assets are recorded that bring no economic profit for the business, technical appreciation is replaced by adjustment and vice versa, the caution principle is not observed, the expected usable life is incorrectly estimated, or long-term liabilities are intentionally recorded as short-term ones in order to appear better in terms of liquidity indicators, and vice versa. Also, economic operations are carried out in such a way that the required results can be recorded in accounting (e.g. accepting a short-term credit only as at the balance sheet day). Financial investments open up rather significant space to manipulate the profit (loss) results. One can influence the profit (loss) data already in the phase of financial investment acquisition, when the relevant fact for classification into either fixed or current assets is the purpose of acquisition at the time of purchase. In both cases, these assets have been valued at purchase price since 1 January 2002, but the price is converted to the actual value as at the date of the final accounts (in defined cases). For long-term financial investments, the difference in relation to the purchase price is an increase or decrease in the capital proper. For marketable securities, however, this difference shows in the profit (loss). For short-term financial investments, which were valued strictly at historical prices until 31 December 2001, an accounting unit selling identical securities purchased at different prices had an option as to which price it would use to value the decrease, and thus the unit was able to influence the economic results through the sale. For inventories, the accounting unit has an option as to whether it includes, in line with applicable procedures, credit interest in inventory prices or not, which also influences the recorded results. Moreover, the profit (loss) can be influenced by the choice of method for valuing the decrease of inventories acquired at different unit prices. For record-keeping reasons, it is practically impossible to value the decrease at actual purchase prices, and that is why the accounting unit opts for the FIFO approach or the weighted mean approach (constructed either continuously or periodically); a sketch contrasting the two is given below. In fact, the FIFO approach follows the physical flow of inventories. In comparison with actual prices, this approach results in overestimating the profit in an inflationary environment, because higher revenues are coupled with lower operational costs. The weighted mean approach minimises the fluctuation of prices but, on the other hand, costs are not coupled with the corresponding revenues accurately. As for inventories, operations are often carried out with the aim of significantly influencing the profit (loss) of the accounting unit.
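As a minimal sketch contrasting the FIFO and periodic weighted mean approaches to valuing an inventory decrease (the purchase lots and issued quantity are hypothetical):

```python
# Illustrative sketch: valuing an inventory decrease under FIFO versus the
# periodic weighted mean. Purchase lots and the issued quantity are made up.

purchases = [(100, 10.0), (100, 12.0), (100, 15.0)]  # (quantity, unit price)
issued_qty = 150  # quantity taken out of inventory

def fifo_cost(lots, qty):
    """FIFO: consume the oldest lots first."""
    cost, remaining = 0.0, qty
    for lot_qty, price in lots:
        take = min(lot_qty, remaining)
        cost += take * price
        remaining -= take
        if remaining == 0:
            break
    return cost

def weighted_mean_cost(lots, qty):
    """Periodic weighted mean: one average price over all purchases."""
    total_qty = sum(q for q, _ in lots)
    total_cost = sum(q * p for q, p in lots)
    return qty * (total_cost / total_qty)

print(fifo_cost(purchases, issued_qty))           # 1600.0 (100*10 + 50*12)
print(weighted_mean_cost(purchases, issued_qty))  # 1850.0 (150 * 12.33...)
```

With rising unit prices, FIFO couples the higher revenues to the lower, older costs (1600 versus 1850 here), which is why it overstates the profit in an inflationary environment, as noted above.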
Such operations include, for instance, mutual (circular) transactions, when two or more accounting units sell to each other and purchase back completely identical stock with a profit margin agreed in advance, which, of course, manipulates the accounting profit. Similarly, it is possible to evade the need to adjust the valuation of unmarketable inventories by means of selling and buying back. A very important area is off-balance-sheet financing, which takes effect particularly where the financial lease of assets is concerned. The majority of lessees have no long-term lease liability recorded in their respective balance sheets and record just the realised leasing payments and the actual amount of long-term resources. For analytical purposes, foreign resources must be increased by the amount of the total payments (given in an appendix to the reporting sheets). Moreover, such an entity, because the leased long-term assets are not recorded in the balance sheet during the lease, may appear as a business with an outdated method of management and a high degree of wear and tear.
Another area vulnerable to "creative" (misleading) accounting is long-term intangible assets (records and write-offs of installation expenses, goodwill, etc.).
As shown by the research results, corporate accounting plays a significant role in providing important financial and economic information, particularly when increasing risk is recognised in time, enabling the management to deal with a crisis in its early stage. The things concerned chiefly include analysing the fundamental report sheets, i.e. the balance sheet, the profit (loss) account and the cash flow statement. The analysis gives a clear picture of how important it is to observe all accounting principles. Our conclusions are in line with the conclusions drawn by some other authors, such as Krupová [6] and Rezková [8].
The point of the analysis should be to recognise possible threats or the initial stages of a financial crisis. Persons taking part in such an analysis include not only the management and the internal inspection body, but also those external users who belong to the business's external interest groups.
Our research has identified major risks in the process of analysing accounting data. The risks include:
- weaknesses following from how the reports are constructed, such as the difficulty of giving a true representation of things which are hard to quantify, disregard for the fluctuation of an indicator during the year, application of historical prices without regard for input price increases (silent reserves), etc.;
- problems with the comparability of data across businesses, or with comparing data to some standards, due to different methodologies for recording accounting transactions;
- the credibility of the recorded data, as they might be misleading because of intentional or unintentional distortion.
As shown by the research, unintentionally distorted data are usually a result of unprofessional work or insufficient qualification and can cause a lot of problems. Intentional manipulation puts a business under several threats; some of them are identified in a short time, but others may well take effect in the long term.
As shown by the analysis carried out, each enterprise trying to do well should, through analysing its accounting, identify any risks sufficiently in advance and recognise the early stages of a possible financial or economic crisis.
Conclusion
The paper deals with the significance of accounting, financial and economic data for effective business management and, in particular, for analysing crisis situations and proposing solutions. The paper presents the results of the GA MSM 431100007 and 12/EP/2001-2003 research projects, with respect to analyses of accounting, financial and economic data and the use of these data in analysing crisis situations in businesses and formulating measures that might lead to eliminating and resolving such situations. The research proves that it is important to follow accounting principles, which have a direct impact on the quality of all financial and economic data. The methodology is based on managerial environment analysis, analysis of accounting, financial and economic data, and strategic decision making. The conclusions have been verified on selected enterprises.
First-principles study of point defects in LiGaO2
The native point defects are studied in LiGaO2 using hybrid functional calculations. We find that the relative energy of formation of the cation vacancies and the cation antisite defects depends strongly on the chemical potential conditions. The lowest energy defect is found to be the Ga_Li^2+ donor. It is compensated mostly by V_Li^-1 and in part by Li_Ga^-2 in the more Li-rich conditions. The equilibrium carrier concentrations are found to be negligible because the Fermi level is pinned deep in the gap, and this is consistent with insulating behavior in pure LiGaO2. The V_Ga has high energy under all reasonable conditions. Both the Ga_Li and the V_O are found to be negative-U centers with deep 2+/0 transition levels.
I. INTRODUCTION
Recently, there has been interest in ultra-wide-band-gap semiconductors such as β-Ga2O3 because of their potential in pushing high-power transistors to the next level of performance. 1,2 An important figure of merit for such applications is the breakdown field, and the latter is directly correlated with the band gap. Here we draw attention to an even higher band gap material, LiGaO2. LiGaO2 has a wurtzite-derived crystal structure 3,4 and a band gap of ∼5.3-5.6 eV (at room temperature) based on optical absorption, 5-8 but potentially even as large as 6.25 eV (at T = 0) based on quasiparticle self-consistent (QS) GW calculations, 9 with G the one-particle Green's function and W the screened Coulomb potential. It can be thought of as a I-III-VI2 ternary analog of wurtzite ZnO, in which each group-II Zn atom is replaced by either a group-I Li or a group-III Ga in a specific ordered pattern with the Pbn2_1 space group. In this structure the octet rule is satisfied because each O is surrounded tetrahedrally by two Li and two Ga. The prototype for this crystal structure is β-NaFeO2. LiGaO2 can be grown in bulk form by the Czochralski method 3 and, because of its good lattice match, has been explored as a substrate for GaN. It can also be grown by epitaxial methods on ZnO, and vice versa. Mixed ZnO-LiGaO2 alloys have been reported. 10,11 It has been considered for its piezoelectric properties, 12-14 and is naturally considered a wide gap insulator. However, Boonchun and Lambrecht 15 suggested it might be worthwhile to consider it as a semiconductor electronic material and showed in particular that it could possibly be n-type doped by Ge. That study only used the 16-atom primitive unit cell of LiGaO2 and thus considered rather high (25%) Ge_Ga or Mg_Li doping. It did not study the site competition or native defect compensation issues. Here we study the native point defects by means of hybrid functional supercell calculations.
II. COMPUTATIONAL METHOD
Our study is based on density functional calculations using the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional. 16,17 The calculations are performed using the Vienna Ab-initio Simulation Package (VASP). 18,19 The electron-ion interactions are described by means of the projector augmented wave (PAW) method. 20,21 We use a well-converged energy cutoff of 500 eV for the projector augmented plane waves. We performed the calculations with a supercell of 128 atoms (corresponding to 2 × 2 × 2 primitive unit cells), and a single k-point shifted away from Γ is employed for the Brillouin zone integration. The valence configurations used were 2s^1 for Li, 3d^10 4s^2 4p^1 for Ga, and 2s^2 2p^4 for O. In the HSE functional, the Coulomb potential in the exchange energy is divided into short-range and long-range parts with a screening length of 10 Å, and only the short-range part of the exact Hartree-Fock non-local exchange is included, by mixing it with the generalized gradient Perdew-Burke-Ernzerhof (PBE) functional with a mixing fraction α = 0.25. The band gap obtained in this way (E_g = 5.10 eV) is still slightly lower than the experimental value.
III. RESULTS
The energy of formation of the defect D^q in charge state q is given by

E_f(D^q) = E_tot(C:D^q) - E_tot(C) - Σ_i Δn_i μ_i + q(ε_F + ε_v + V_align) + E_corr,

where E_tot(C:D^q) is the total energy of the supercell containing the defect and E_tot(C) is the total energy of the perfect crystal supercell. The chemical potentials μ_i represent the energy for adding or removing atoms from the crystal to a reservoir in the process of making the defect, and Δn_i is the change in the number of atoms of species i. Likewise, the chemical potential of the electron determining the charge state is ε_F + ε_v + V_align, with ε_v the energy of an electron at the valence band maximum (VBM) relative to the average electrostatic potential in the bulk, and ε_F the Fermi energy in the gap measured from the VBM. The alignment potential V_align represents the alignment of the average electrostatic potential in the supercell, far away from the defect, relative to that in the bulk. It is calculated using the approach of Freysoldt et al. 22,23 The final term, E_corr, is the image charge correction, which corrects for the Madelung energy of the periodic array of net defect point charges in the uniform background that is added to ensure overall charge neutrality when considering a locally charged defect state. It is closely related to the alignment potential, and including these corrections allows one to extrapolate the energy of formation to the dilute limit of an infinitely large supercell.
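The formula above can be expressed directly as a small function; a minimal sketch, in which every numerical argument is a placeholder rather than a computed value:

```python
# Illustrative sketch: the defect formation energy defined above. All
# arguments are placeholders to be filled with computed supercell energies
# and chosen chemical potentials; none are values from this work.

def formation_energy(e_defect: float, e_bulk: float,
                     sum_dn_mu: float, q: int,
                     e_fermi: float, e_vbm: float,
                     v_align: float, e_corr: float) -> float:
    """E_f(D^q) = E_tot(C:D^q) - E_tot(C) - sum_i dn_i mu_i
                  + q (e_F + e_v + V_align) + E_corr (all in eV)."""
    return (e_defect - e_bulk - sum_dn_mu
            + q * (e_fermi + e_vbm + v_align) + e_corr)

# Hypothetical 2+ donor: its formation energy rises by 2 eV per eV of e_F.
print(formation_energy(-1050.0, -1060.0, -8.0, +2, 1.0, 0.0, 0.05, 0.2))
```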
The chemical potentials are written as μ_i = μ_i^0 + μ̃_i, where μ_i^0 are the chemical potentials of each species in its reference state, namely the phase it occurs in at standard pressure and room temperature, and μ̃_i are the excess chemical potentials. The latter are viewed as tunable parameters reflecting the growth conditions, but must obey certain restrictions based on thermodynamic equilibrium. These include

μ̃_Li + μ̃_Ga + 2μ̃_O = ΔH_f(LiGaO2),

where ΔH_f(LiGaO2) is the energy of formation of LiGaO2, which we calculated to be -8.55 eV. Each of the excess chemical potentials on the left must be less than zero, μ̃_i ≤ 0, in order to avoid precipitation of the bulk elements Li and Ga or evolving O2 gas. For example, μ_Li^0 corresponds to metallic body-centered-cubic Li, and thus μ̃_Li = 0 corresponds to the assumption that the crystal with the defect is in equilibrium with bulk metallic Li as the reservoir. Similarly, μ̃_Ga = 0 corresponds to equilibrium with metallic bulk Ga, and μ̃_O = 0 corresponds to O in the O2 molecule. However, we also need to consider further restrictions imposed by the competing binary compounds Ga2O3 and Li2O.
These restrictions determine the region of chemical potentials in which LiGaO2 is stable relative to the competing binaries and elements. It is bounded by

2μ̃_Ga + 3μ̃_O ≤ ΔH_f(Ga2O3) and 2μ̃_Li + μ̃_O ≤ ΔH_f(Li2O),

and is represented in the phase diagram shown in Fig. 1. The temperature and pressure dependence of the oxygen chemical potential is given by

μ̃_O(T, p) = μ̃_O(T, p_0) + (1/2) k_B T ln(p/p_0),

where μ̃_O(T, p_0) is the oxygen chemical potential at the standard pressure p_0 = 1 atm, k_B is Boltzmann's constant, and T is the temperature in kelvin. In the growth experiment of Ref. 8, mixed Li2CO3 and Ga2O3 powders were compressed into tablets and then calcined at 1200 °C for 20 h in air. 8 We therefore choose an annealing temperature of 1200 °C and an oxygen partial pressure of 0.21 atm, which represents the fraction of oxygen gas in the ambient environment. These growth conditions, at an annealing temperature of 1200 °C and an oxygen partial pressure of 0.21 atm, are represented by the dashed line EF in Fig. 1. The defects considered are the vacancies V_Ga, V_Li and V_O and the antisites Li_Ga and Ga_Li. The effects of spin polarization were included for cases with unpaired electrons in defect levels. Interstitial defects will be considered in the future, but comparison with II-IV-N2 semiconductors suggests that they would be of high energy. 25,26 The defect energies of formation are shown for the six chemical potential points A, B, C, D, E and F in Fig. 2.
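As a worked check of the oxygen chemical potential expression above, a minimal sketch evaluating only the pressure- and temperature-dependent shift at the stated annealing conditions:

```python
# Illustrative sketch: evaluating the shift (1/2) k_B T ln(p/p0) of the
# oxygen chemical potential at 1200 degrees C and p_O2 = 0.21 atm. This is
# only the shift relative to mu_O(T, p0), not the full chemical potential.

import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def delta_mu_O(T_kelvin: float, p_atm: float, p0_atm: float = 1.0) -> float:
    """Pressure-dependent part of the O chemical potential per O atom (eV)."""
    return 0.5 * K_B * T_kelvin * math.log(p_atm / p0_atm)

T = 1200 + 273.15  # annealing temperature in K
print(f"{delta_mu_O(T, 0.21):.3f} eV")  # about -0.099 eV relative to p0
```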
First, we see that Ga_Li is the lowest energy defect for ε_F = 0 in all cases. It is a double donor, which is in the 2+ charge state over most of the gap. Still, it has a well-defined 2+/0 transition, making it a negative-U system. In Fig. 3 we can see that, while for the neutral charge state the O atoms around Ga_Li move outward, they move inward for the 2+ charge state, with an in-between outward relaxation for the 1+ state. The additional stabilization by outward motion of the O when adding two electrons rather than one causes the negative-U behavior, whereby the 1+ charge state is never the lowest energy one for any Fermi level position. It is thus not behaving like a simple shallow donor, consistent with the relatively deep donor binding energy of 0.74 eV below the conduction band minimum (CBM). We thus do not expect it to be an effective n-type dopant. We can see that this defect has a negative energy of formation at ε_F = 0 in most cases. This reflects that, even in the most Ga-poor case, this defect is hard to avoid, because we cannot make the system poor enough in Ga without reaching the stability limit imposed by Li2O. On the other hand, a Fermi level ε_F = 0 is not expected to be realistic, as discussed later. The Li_Ga antisite, on the other hand, is a double acceptor which can occur in the 0, -1 and -2 charge states. It is the lowest energy defect in its 2- charge state near the CBM in cases A, D and E. These are the cases richest in Li.
As for the vacancies, V_Li occurs in the 0 and -1 charge states, while V_Ga occurs in the 0, -1, -2 and -3 charge states. We can see that V_Ga^0 has a high energy of formation in all cases. Although its negative charge states have significantly lower energy for ε_F close to the CBM, it never becomes the lowest energy defect and therefore does not play a role in determining the Fermi level. The V_Li is more interesting. Although it has high energy in the Li-rich case D (which is somewhat unrealistic and O-poor), it has low energy in the Li-poor cases B, C and F. Even in case E, its intersection with the Ga_Li^2+ occurs close to that of the intersection of the latter with Li_Ga^2-. We thus expect that both these acceptors may play a role in compensating the Ga_Li^2+. Turning now to the O vacancies, there are two non-equivalent sites for the oxygen in LiGaO2: on top of Li (O1) or on top of Ga (O2). We find that both V_O1 and V_O2 are only stable in the neutral and 2+ charge states (with V_O2 slightly lower in energy than V_O1), with the (2+/0) transition level at 2.48 eV above the VBM, or 2.62 eV below the CBM. This is a quite deep donor level and indicates that the vacancy is also a negative-U center. In Fig. 4 one can see that in this case, too, the relaxations are strongly charge-state dependent. This figure shows the relaxations near a V_O2, but similar results hold for V_O1. In the neutral charge state, the Ga atoms move inward, while the Li move outward. In the 2+ state both move strongly outward. This is similar to the V_O in ZnO, 15,27 although the level here is even deeper and close to midgap. We find that the V_O^2+ energy of formation is negative for Fermi levels close to the VBM for points C, D, E and F. It becomes positive in the O-rich limits (A, B). Its energy of formation is always higher than that of the Ga_Li^2+, and thus it is not expected to play a significant role in the charge balance.
Using the charge neutrality condition between the free electron concentration n_e(T, ε_F), the free hole concentration n_h(T, ε_F) and the charged defect concentrations

c(D^q) = N_D g(q) exp(-E_f(D^q)/k_B T),

where N_D is the number of available sites per cm^3 and g(q) a degeneracy factor depending on the charge state, we can find the equilibrium Fermi level and the defect concentrations for a given temperature, following the procedure of Ref. 25. For the electron and hole concentrations we use parabolic bands with effective density-of-states masses m*_e ≈ 0.4 and m*_h ≈ 1.8 (as obtained from the calculated hybrid functional band structure by averaging over directions). For a temperature of T = 1500 K, close to the growth temperature, we find that under chemical potential conditions C the equilibrium Fermi level is ε_F = 3.815 eV, close to the intersection of the Ga_Li^2+ and V_Li^-1 formation energies; the Ga_Li^2+ is still mostly compensated by V_Li^-1, but partially also by Li_Ga^2-. The electron concentration, n_e = 3.2 × 10^12 cm^-3, is then only slightly higher than the hole concentration, n_h = 1.6 × 10^11 cm^-3, but both free carrier concentrations are in fact negligible under both chemical potential conditions considered. Even under the most Ga-poor conditions (point A), Ga_Li^2+ is the dominant defect and is compensated mostly by V_Li^1-. In this case ε_F = 1.92 eV is closest to the VBM and the material would then be slightly p-type, with n_h = 9.7 × 10^13 cm^-3. It is instructive to compare the defect physics in this system to that in II-IV-N2 semiconductors like ZnGeN2. 25 The similarity is that in both cases the antisites play a crucial role. However, the dependence on the chemical potentials of the elements is more important here, because a wider region of stability occurs. Furthermore, the Ga_Li antisite is here not a shallow but a deep donor, and is thus not expected to lead to unintentional n-type doping. This is consistent with the insulating behavior of LiGaO2. However, it does not exclude the possibility of n-type doping by Si, Ge or Sn, which will be studied separately.
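A minimal sketch of this charge-neutrality procedure, assuming non-degenerate Boltzmann carrier statistics for parabolic bands and entirely made-up defect parameters (not the values computed here):

```python
# Illustrative sketch: solving the charge neutrality condition for the
# equilibrium Fermi level by bisection. All defect formation energies and
# site densities below are placeholders, not the computed values.

import math

K_B = 8.617333e-5  # eV/K

def carrier_density(m_eff: float, T: float) -> float:
    """Effective density of states (cm^-3) for a parabolic band:
    N = 2 (2 pi m* k_B T / h^2)^{3/2}, i.e. 2.5094e19 at m*=1, T=300 K."""
    return 2.5094e19 * (m_eff * T / 300.0) ** 1.5

def net_charge(ef: float, T: float, e_gap: float, defects) -> float:
    """Total charge density: holes + donor charges - electrons - acceptors."""
    n_e = carrier_density(0.4, T) * math.exp(-(e_gap - ef) / (K_B * T))
    n_h = carrier_density(1.8, T) * math.exp(-ef / (K_B * T))
    rho = n_h - n_e
    for q, e_form_at_vbm, n_sites, g in defects:
        e_form = e_form_at_vbm + q * ef  # E_f is linear in the Fermi level
        rho += q * n_sites * g * math.exp(-e_form / (K_B * T))
    return rho

# Hypothetical defects: (charge q, E_f at the VBM in eV, sites/cm^3, g(q))
defects = [(+2, -1.0, 1e22, 1),   # donor antisite, e.g. Ga_Li^2+
           (-1,  4.0, 1e22, 1),   # acceptor vacancy, e.g. V_Li^-1
           (-2,  8.0, 1e22, 1)]   # acceptor antisite, e.g. Li_Ga^-2

# Bisection: net_charge decreases monotonically with ef.
lo, hi = 0.0, 5.10
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if net_charge(mid, 1500.0, 5.10, defects) > 0:
        lo = mid
    else:
        hi = mid
print(f"equilibrium Fermi level ~ {0.5*(lo+hi):.3f} eV above the VBM")
```

Because every donor term falls, and every acceptor and electron term rises, monotonically with the Fermi level, the net charge crosses zero exactly once and bisection is sufficient.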
The main defect transition levels in the gap are summarized in Table I and in Fig. 5.
IV. CONCLUSIONS
In this paper we have studied the native defects in LiGaO2. We find that the relative energies of formation of the vacancies and antisites depend strongly on the chemical potential conditions. The Ga_Li antisite is the dominant donor defect. However, it has a rather deep 2+/0 donor level and is a negative-U center. It is thus not expected to lead to significant n-type doping. It furthermore becomes compensated mostly by V_Li^1- and in part by Li_Ga^2-, depending on how rich the system is in Li. The V_O is found to be an even deeper double-donor negative-U center. The defect transition levels all lie relatively deep in the gap, with no truly shallow levels.
Heritage Stone 9. Tyndall Stone, Canada’s First Global Heritage Stone Resource: Geology, Paleontology, Ichnology and Architecture
Tyndall Stone is a distinctively mottled and strikingly fossiliferous dolomitic limestone that has been widely used for over a century in Canada, especially in the Prairie Provinces. It comprises 6–8 m within the lower part of the 43 m thick Selkirk Member of the Red River Formation, of Late Ordovician (Katian) age. It has been quarried exclusively at Garson, Manitoba, 37 km northeast of Winnipeg, since about 1895, and for the past half-century extraction has been carried out solely by Gillis Quarries Ltd. The upper beds tend to be more buff-coloured than the grey lower beds, as a result of groundwater weathering. Tyndall Stone, mostly with a smooth or sawn finish, has been put to a wide variety of uses, including exterior and interior cladding with coursed and random ashlar, and window casements and doorways. Split face finish and random ashlar using varicoloured blocks split along stylolites have become popular for commercial and residential buildings, respectively. Tyndall Stone lends itself to carving as well, being used in columns, coats of arms and sculptures. Many prominent buildings have been constructed using Tyndall Stone, including the provincial legislative buildings of Saskatchewan and Manitoba, the interior of the Centre Block of the House of Commons in Ottawa, courthouses, land titles buildings, post offices and other public buildings, along with train stations, banks, churches, department stores, museums, office buildings and university buildings. These exhibit a variety of architectural styles, from Beaux Arts to Art Deco, Châteauesque to Brutalist. The Canadian Museum of History and the Canadian Museum for Human Rights are two notable Expressionist buildings. The lower Selkirk Member is massive and consists of bioturbated, bioclastic wackestone to packstone, rich in crinoid ossicles. It was deposited in a low-energy marine environment within the photic zone, on the present-day eastern side of the shallow Williston Basin, which was part of the vast equatorial epicontinental sea that covered much of Laurentia at the time. Scattered thin bioclastic grainstone lenses record episodic, higher energy events. Tyndall Stone is spectacularly fossiliferous, and slabs bearing fossils have become increasingly popular. The most common macrofossils are receptaculitids, followed by corals, stromatoporoid sponges, nautiloid cephalopods, and gastropods. The relative abundance of the macrofossils varies stratigraphically, suggesting that subtle environmental changes took place over time. The distinctive mottles—‘tapestry’ in the trade—have been regarded as dolomitized burrows assigned to Thalassinoides and long thought to have been networks of galleries likely made by arthropods. In detail, however, the bioclastic muddy sediment underwent a protracted history of bioturbation, and the large burrows were mostly horizontal back-filled features that were never empty. They can be assigned to Planolites. The matrix and the sediment filling them were overprinted by several generations of smaller tubular burrows mostly referrable to Palaeophycus due to their distinctive laminated wall linings. Dolomite replaced the interiors of the larger burrows as well as smaller burrows and surrounding matrix during burial, which is why the mottling is so variable in shape.
INTRODUCTION
Tyndall Stone is a highly fossiliferous dolomitic limestone quarried northeast of Winnipeg, in southern Manitoba. It is arguably Canada's best recognized building stone, thanks to its unique composition and appearance and its widespread use in prominent buildings across the country. Its distinctive 'tapestry' is due to a striking colour mottling that is not exhibited by any other building stone in Canada, or indeed elsewhere in the world. Tyndall Stone is a trade name that has been in use since the early 1900s, soon after numerous quarries were opened in the village of Garson beginning in 1895, because the stone was shipped by rail from nearby Tyndall. The name is now trademarked by Gillis Quarries Ltd., which is the sole remaining quarry operator.
Tyndall Stone belongs to the Selkirk Member of the Upper Ordovician (Katian) Red River Formation. It was deposited on the northeastern side of the Williston Basin, part of a shallow, tropical epicontinental sea that covered most of North America some 450 million years ago. It has been studied in detail owing to its conspicuously fossiliferous nature and the distinctive diagenetic dolomitization that was related to burrows made by infaunal invertebrates.
Stone from the Selkirk Member that is similar to Tyndall Stone was first used for masonry purposes in the construction of Lower Fort Garry, near Selkirk, which began in 1832. Subsequently, Tyndall Stone was used extensively in western Canada, notably for the Saskatchewan and Manitoba legislative buildings, completed in 1912 and 1920, respectively (Fig. 1A, B), but also in many other government buildings such as courthouses, town and city halls, and post offices, as well as banks, department stores, train stations, hotels and so forth, in a variety of architectural styles. It was used to spectacular effect in the interior of the rotunda of Confederation Hall in the House of Commons, Ottawa, completed in 1922 (Fig. 2A-D). In recent decades, its use has expanded to other commercial buildings, museums, hospitals, universities and churches, as well as to residential applications, both exterior and interior. Tyndall Stone has been used for several public buildings in the USA and for Canada House (Kanada Haus) in Berlin, which houses the Embassy of Canada to Germany, completed in 2005. Upon our nomination, Tyndall Stone was formally designated as a Global Heritage Stone Resource by the International Union of Geological Sciences in November 2022.
This paper aims to bridge geology and architecture. It reviews the geological attributes and use of the Tyndall Stone and explores in detail the nature and origin of the mottling and the burrow fabrics that were overprinted by dolomitization during burial diagenesis.
Heritage Stones
Building stone has long been the purview of quarry workers, architects, masons, tilers and interior designers, especially in North America, but in recent years recognition has grown among geoscientists and the lay public that building stones and dimension stones are noteworthy components of both historical and modern constructions, and that they have considerable cultural, historical, archaeological, educational and scientific significance. Some stones, such as the Carrara Marble of Tuscany, have been extracted for thousands of years. For certain stratigraphic units whose quarries have been exhausted, existing dimension stones represent a critical geological and historical record. The desire to enhance recognition of the importance of building stones led to the establishment of the Heritage Stone Subcommission of the International Commission on Geoheritage of the International Union of Geological Sciences (Pereira and Page 2017; Kaur 2022). The focus of the companion Heritage Sites and Collections Subcommission is on 'geodiversity', especially via the designation of key 'geosites'. The task of the Heritage Stone Subcommission is to encourage nominations for formal designation as Global Heritage Stone Resources.
To date, 22 stones belonging to a wide range of lithologies have been formally recognized, such as Carrara Marble (Primavori 2015), Tennessee Marble from the United States (Byerly and Knowles 2017), Larvikite from Norway (Heldal et al. 2014) and Makrana Marble from India (Garg et al. 2019), the last having been used to build the Taj Mahal. In turn, there has been media coverage of heritage stone recognition, for example of the Makrana Marble. Many others have been documented and await formal nomination and approval (e.g. Hannibal et al. 2020). We submitted a formal nomination of Tyndall Stone for heritage status in July 2022 and the proposal was ratified by the Executive Committee of the International Union of Geological Sciences in October 2022. It is the first and only Canadian stone to be nominated and receive this recognition.
Building Stone in Canada
Canada, being a comparatively young country and originally heavily forested in proximity to sites of early colonization, does not have a long tradition of building with stone, and Indigenous groups in southern Canada did not employ it for permanent structures before the arrival of Europeans. Some of the earliest stone buildings include a number of windmills, houses, towers, mills and forts in Quebec City and in the Montreal area, from the late 1600s and early 1700s. Notre-Dame de Québec church in Quebec City dates from 1647, and a stone chapel was built in Montreal in 1675. The early 1700s saw construction of the Fortress of Louisbourg in Nova Scotia and the striking Prince of Wales Fort on the shore of Hudson Bay by Churchill, Manitoba. With population growth in the 1800s, stone was used more frequently, especially in expanding urban areas like Montreal, Kingston, Ottawa and Hamilton, where there was ready access to nearby strata, mostly Middle Ordovician limestone units in eastern Ontario and adjacent Quebec, and Silurian dolostone and sandstone beds in the Niagara region (for examples of different building stone use, visit https://raisethehammer.org/authors/197/gerard_v_middleton). As the means of transportation evolved, stones were imported from further afield.
The situation was somewhat different after Canada became a dominion in 1867 and Manitoba joined the Canadian Confederation in 1870, followed later by the Northwest Territories, which included the areas that would become the provinces of Saskatchewan and Alberta. Aided by the completion of the Canadian Pacific Railway in 1885, the late 1800s saw a large influx of settlers arriving on the Prairies, and the corresponding growth of several cities, especially Winnipeg, Manitoba, which in 1911 was the third largest city by population in Canada. Like southern Ontario and Quebec, but unlike many other places on the Prairies, suitable building stone was at hand near Winnipeg, primarily Upper Ordovician dolostone and dolomitic limestone belonging to the Selkirk Member of the Red River Formation. This stone, quarried along the Red River north of Winnipeg at Saint Andrews and East Selkirk, was first used for the walls of Lower Fort Garry in the 1840s. It was later used as blocks for foundations in Winnipeg and elsewhere, and as finished stone in structures such as the Stony Mountain Penitentiary (1877) and Holy Trinity Church, Winnipeg (1884). By contrast, overlying dolostone units from the Stony Mountain and Stonewall formations were used for some foundations and walls (Young et al. 2008), but this was limited due to the difficulty of shaping these tough stones. As public and commercial building increased in the 1890s, the Tyndall Stone quarries at Garson were opened. Stone from the Selkirk Member, and Tyndall Stone in particular, was used in numerous other buildings especially as exterior cladding, and often carved for ornamentation.
Relatively few other Canadian limestone and dolostone units have been extracted for similar purposes. Light grey Missisquoi Marble was quarried during the first half of the 20th century at Philipsburg, Quebec, by Lake Champlain. It belongs to the Strites Pond Formation of late Cambrian age (Salad Hersi et al. 2002). It has been used for cladding, but it takes a good polish so it was mostly used as an indoor dimension stone, including in the Centre Block of the House of Commons and several provincial legislature buildings (Lawrence 2001; Burwash et al. 2002; Ledoux and Jacob 2003; Brisbin et al. 2005). The light grey to buff Adair limestone (actually dolostone) and the strikingly laminated, grey to brown Eramosa Formation are two Silurian dolostone units extracted from the southern Bruce Peninsula, Ontario.
Quarry Location
Tyndall Stone proper was first quarried at the village of Garson in about 1895. Garson is the only place where the distinctive stone is extracted, and it has been quarried there for over a century (Fig. 3A, B). At the end of the 1800s it was known as Garson stone, from the name of the person who opened the first quarry and whose name lent itself to the village. It was also called Manitoba limestone, Manitoba Tapestry limestone and Winnipeg limestone. In the early days the stone was transported on spur lines using small steam locomotives to the village of Tyndall, about 2 km east of the quarries, where there was a freight depot on the Canadian Pacific Railway. Thus, it became better known as 'Tyndall Stone', i.e. stone shipped from Tyndall (https://www.tyndallstone.com). Tyndall Stone was extracted from several adjacent quarries owned by a number of companies in the early years (Goudge 1933, fig. 7), but most notably since 1925 by family-owned Gillis Quarries Ltd., which was incorporated in 1922. Gillis Quarries Ltd. has been the exclusive producer since 1969. Exposures of equivalent strata to the north, beyond Grand Rapids and The Pas and into adjacent east-central Saskatchewan (Nicolas et al. 2010), while containing similar fossils and burrows, lack the visually contrasting mottling against a limestone matrix because they are fully dolomitized. These rocks are utilized only for aggregate.
Quarry Operation
Tyndall Stone is extracted using standard methods for stratified limestone (Fig. 4A-D). The stone is cut vertically, using either an eight-foot (2.44 m) diameter saw or a nine-foot (2.74 m) long belt saw mounted on one hundred-foot (30.5 m) tracks. It is then split into 6-8 tonne blocks using a jackhammer and wedges inserted by hand parallel to bedding; the blocks are then moved using front-end loaders. Gillis Quarries Ltd. operates a large finishing plant with an area of about 4000 m². Stone is processed along advanced cutting lines that feature three primary saws and three gantry saw/line stations, four saw/profiler stations, as well as a tile line and lathe, allowing it to be cut into a variety of sizes, shapes and finishes (Fig. 4E, F) as specified by the architects (https://www.tyndallstone.com). These finished pieces are delivered to the customer and no further fabrication is required. Even though Tyndall Stone is extracted from just a single, privately owned and operated quarry, the property is large and the Selkirk Member is widely distributed in the Garson area. Thus, there is no prospect of stone supplies running out in the foreseeable future.
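As a rough cross-check on the figures quoted above, the short Python sketch below converts the imperial saw dimensions to metres and estimates the volume of the 6-8 tonne blocks. The bulk density of 2.6 t/m³ used here is an assumed round value for dolomitic limestone, not a published quarry specification.

    # Minimal sketch (illustrative, not from the paper): convert the quoted
    # imperial saw dimensions to metres and estimate block volumes from the
    # stated 6-8 tonne masses. ASSUMED_DENSITY_T_PER_M3 is a typical round
    # value for dolomitic limestone, not a Gillis Quarries figure.
    FT_TO_M = 0.3048
    ASSUMED_DENSITY_T_PER_M3 = 2.6

    for feet in (8, 9, 100):
        print(f"{feet} ft = {feet * FT_TO_M:.2f} m")

    for mass_t in (6, 8):
        print(f"{mass_t} t block ~ {mass_t / ASSUMED_DENSITY_T_PER_M3:.1f} m^3")

On this assumption, a 6-8 tonne block corresponds to roughly 2.3-3.1 m³ of stone.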
TYNDALL STONE GEOLOGY
Stratigraphy
The Upper Ordovician to lower Silurian succession cropping out in west-central to southeastern Manitoba consists of nearly flat-lying limestone and dolostone beds dipping imperceptibly to the west (Fig. 5). These strata originated as carbonate sediment deposited on the eastern side of the Williston Basin, a shallow epeiric (epicontinental) sea in the centre of Laurentia, the early Paleozoic North American craton (Fig. 6A, B). Tyndall Stone is formally part of the Selkirk Member of the Red River Formation (Fig. 7). In the early 20th century, before modern stratigraphic nomenclature was established, it was called the Upper Mottled Limestone (Dowling 1900). Tyndall Stone occurs within the lower half of the 43 m thick member; the lowest horizon in the Garson quarries is about 10 m above the top of the underlying Cat Head Member (Goudge 1944; Cowan 1971; Young et al. 2008). In terms of North American Late Ordovician chronostratigraphy, the Selkirk Member is Maysvillian to early Richmondian (~ 450 Ma) in the Cincinnatian Series (Young et al. 2008), which is equivalent to the middle part of the global Katian Stage. The Red River Formation correlates with the Surprise Creek Formation of the upper part of the Bad Cache Rapids Group across the Severn Arch in the Hudson Bay Basin (Jin et al. 1997; Lavoie et al. 2022). Tropical, shallow-water conditions were present across much of Laurentia at this time and correlative strata are widely distributed, from west Texas and New Mexico to the Arctic Islands and northwest Greenland (e.g. Sweet and Bergström 1984; Holland and Patzkowsky 2009; Jin et al. 2012, 2013; Cocks and Torsvik 2021). The invertebrate biota defines the Red River-Stony Mountain Faunal Province due to its similarity over the whole region, which contrasts with that of parts of eastern North America (Elias 1981, 1991; Young et al. 2008). Broadly similar limestone units of Middle and Late Ordovician age were deposited in other areas of the world, notably in the Baltic Basin, exposed in southern Sweden, southern Finland, Estonia and the St. Petersburg area of western Russia (e.g. Nestor et al. 2007); these strata exhibit fabrics that bear some similarity to those in the Upper Ordovician units in Manitoba.
Tyndall Stone is dolomitic limestone. Dolomite is secondary, having replaced limestone during burial. Most of it is concentrated in and around burrows; this gives the rock its characteristic mottled appearance. The relative proportion of dolomite to calcite is therefore variable. According to Goudge (1933, 1944), chemically it is 83.21-89.26% CaCO3 and 9.43-14.91% MgCO3. According to Parks (1916), the light-coloured matrix averages 94% calcite and the darker coloured mottles average 71% CaCO3, the rest being MgCO3. Silica makes up 1.5% in both. There is a slight increase in iron oxide and clay in the burrows. The former could be due to a small amount of pyrite or iron enrichment in the dolomite, or both; it may also reflect contamination from iron-bearing groundwaters. The cream to light-buff colour of much of the Tyndall Stone is likely due to the effects of groundwater flow during Quaternary interglacial episodes, which affected the surface deposits by oxidizing trace amounts of iron in both the matrix and the dolomitic mottles. Some beds, particularly those lower in the quarries, retain a greyish colouration.
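To relate the quoted MgCO3 analyses to dolomite abundance, the following Python sketch converts MgCO3 weight percent to an approximate dolomite weight percent, assuming all magnesium resides in stoichiometric dolomite, CaMg(CO3)2. This is an illustrative assumption, not a calculation made by Goudge or Parks.

    # Hedged sketch: estimate dolomite content from bulk MgCO3 (wt%),
    # assuming all Mg is held in stoichiometric dolomite, CaMg(CO3)2.
    M_MGCO3 = 84.31      # g/mol, MgCO3 component of dolomite
    M_DOLOMITE = 184.40  # g/mol, CaMg(CO3)2

    def dolomite_wt_pct(mgco3_wt_pct: float) -> float:
        """Convert MgCO3 wt% to approximate dolomite wt%."""
        return mgco3_wt_pct * (M_DOLOMITE / M_MGCO3)

    # Goudge's bulk analyses, plus ~6% MgCO3 for Parks' 94%-calcite matrix
    # and ~29% MgCO3 for his darker mottles (71% CaCO3):
    for x in (9.43, 14.91, 6.0, 29.0):
        print(f"{x:5.2f}% MgCO3 -> ~{dolomite_wt_pct(x):4.1f}% dolomite")

On this basis the bulk rock works out to roughly 21-33% dolomite, the light matrix to roughly 13% and the darker mottles to roughly 63%, consistent with the description of the mottles as incompletely dolomitized burrow fills.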
Lithology
Tyndall Stone is a massive dolomitic limestone; a pseudo-bedding is locally imparted by horizontal stylolites. It is an abundantly fossiliferous, bioclastic packstone and locally wackestone (in Dunham terminology). Intraclasts are rare, confined to the bases of some grainstone lenses. Stylo-bedding surfaces exposed by splitting and removal of overlying rock are lumpy due to the differences between the limestone matrix and the dolomite and may show the scattered areal distribution of robust fossils such as large stromatoporoid demosponges (Fig. 8A). Sawn quarry walls show that the stromatoporoids and colonial corals are typically concentrated at certain horizons (Fig. 8B) but other macrofossils seem to be more sporadically distributed (Fig. 8C-F), except where they have been collected together by redeposition during high-energy events (Fig. 8C, D).
In thin section, the matrix around the macrofossils is a biomicrite (in Folk terminology). Bioclasts, as whole and fragmented shells and skeletons of a wide range of sizes, occur in variable amounts and are surrounded by a microcrystalline calcite matrix (Fig. 9A-D). These small fossils are mostly not visible on rock surfaces and include a variety of taxonomic groups, such as crinoid ossicles, trepostome bryozoan skeletons, gastropods, brachiopods, dasycladalean calcareous algae, tetradiids, small solitary rugose corals and problematical skeletons of unknown but possible algal affinity. The variable orientation of the bioclasts within the matrix indicates that the sediment was mostly completely mixed due to bioturbation.
Some preferential alignment of nautiloid conchs and solitary rugose corals is apparent (Wong 2002), which suggests the presence of comparatively weak, west-east oscillatory currents. There is an upward increase in the presence of planar-laminated, normally graded, fossiliferous bioclastic grainstone lenses and remnants of lenses that escaped complete bioturbation (Fig. 8C, D), which have been interpreted as sediment deposited by occasional storms (Westrop and Ludvigsen 1983; Wong 2002). If so, this suggests a gentle shallowing such that the seafloor came within ambient storm wave base, or that storms became more frequent and/or stronger. Alternatively, if these record weak tsunami effects (cf. Pratt and Bordonaro 2007), then they may reflect episodic faulting, likely in the basin centre where syndepositional fault movements are recorded in overlying laminated facies by synsedimentary deformation structures (El Taki and Pratt 2012).
Paleontology
Fossils are commonly visible on sawn surfaces of Tyndall Stone. There is no comprehensive taxonomic listing, but the most complete one is in Young et al. (2008). Among the most conspicuous of the macrofossil biota are the molluscs. They include hyperstrophic gastropods belonging to Maclurina (Fig. 10A; also Fig. 11E) and turbinate gastropods probably belonging to Hormotoma (Fig. 10B). These are commonly preserved as shell moulds filled with dolomite microspar from replacement of microcrystalline calcite that records lime mud that infiltrated the cavities. Nautiloids are represented by a diverse assemblage that includes the straight-shelled actinocerid Armenoceras (Fig. 10B, E), straight-shelled endocerids possibly belonging to Cameroceras (Fig. 10C) and cyrtoconic nautiloids with curving conchs such as the discosorid Winnipegoceras (Fig. 10D, F). Often the septa and conch walls have been either abraded or dissolved, or both, with partially preserved moulds filled with dolomitized microcrystalline calcite, and all the primary shell material that remains is the heavily calcified axial siphuncle. The segmented, beaded siphuncles of the actinocerids, whether exposed on glacially transported boulders or in the walls of buildings, are commonly misidentified by casual observers as vertebrate backbones. The dolomite that fills shell moulds points to dissolution of aragonite at and just under the sediment surface.
Corals include horn-shaped solitary rugose corals, most of which belong to Grewingkia (Fig. 11A), colonial rugose corals belonging to Crenulites (Fig. 11C, D) and tabulate corals, which are colonial, belonging to the chain corals Catenipora (Fig. 10C) and Manipora (Fig. 11B), the honeycomb corals such as Saffordophyllum, and the common, domical Calapoecia (Fig. 11F). Distinctive on many Tyndall Stone surfaces are white laminar fossils with a dense microstructure that cannot be discerned even with a hand lens. These are the tabulate corals Ellisites and Protrochiscolithus, which were obligate encrusters, especially on stromatoporoids (Fig. 11E), but are difficult to distinguish on sawn surfaces. They often preferentially exhibit vertical borings termed Trypanites, which are absent in other shells and skeletons, apart from generally lesser numbers in solitary rugose corals and stromatoporoids (Elias 1980; Stewart et al. 2010). Labechiid stromatoporoids are also common, forming tabular to dome-shaped masses of varying diameter and height, with internal growth lamination and ragged margins reflecting episodic lateral expansion and contraction (Figs. 8C, E, 11E). Siliceous sponges are rare.
Receptaculitids assigned to Fisherites are distinctive fossils in both plan and vertical views of Tyndall Stone (Fig. 12A-C).
They are circular in plan view, but in vertical view are tabular to undulating to gently domical, and they may appear variable depending on the plane of horizontal section. Receptaculitids are composed of individual, interlocking skeletal elements termed meroms with a spiral orientation, which is why they have been called 'sunflower corals'. Specimens with the meroms partially disaggregated are also observed. They are replaced mostly by blocky calcite, which is suggestive of a primary aragonite composition, yet they do not appear to have been leached and infiltrated with lime mud like the molluscs. The affinity of receptaculitids is unknown, but they are commonly regarded as having been a form of calcareous algae (Nitecki et al. 1999). This is supported by their absence in deeper water deposits in the Saskatchewan subsurface (Kendall 1976).
Figure 9. Thin section photomicrographs of biomicrite matrix of Tyndall Stone. Plane-polarized light, greyscale; oriented perpendicular to bedding. A. Possible calcareous alga (lower) overlain in turn by wackestone (middle) and packstone (upper) consisting mostly of crinoid ossicles plus lime mud, with a trepostome bryozoan (lower left). Scale bar is 2 mm. B. Wackestone to packstone matrix with abraded solitary rugose coral (right), dasycladalean green alga (centre left), and smaller bioclasts, many of which are crinoid ossicles. Scale bar is 2 mm. C. Wackestone to packstone containing common crinoid ossicles, dasycladalean algae (lower right) and other bioclasts. Scale bar is 1 mm. D. Wackestone overlain by crinoidal packstone with a small gastropod (lower right) and abraded fragment of gastropod shell (upper centre). Scale bar is 1 mm.
Figure 10. Fossil molluscs in Tyndall Stone, sawn parallel to bedding. Apart from calcitic siphuncles, shells were dissolved and the moulds are filled with dolomudstone. Scale bars are 5 cm. A. Two large hyperstrophic gastropods with a flattened base belonging to Maclurina. Polished, memorial wall to students who fell in the Second World War, Department of Geological Sciences, University of Saskatchewan. B. Large high-spired gastropod belonging to Hormotoma (upper centre) and part of actinoceratid nautiloid cephalopod, belonging to Armenoceras, preserving mostly the beaded siphuncle, with some septa (lower right). Same location as A. C. Endoceratid nautiloid (middle) preserving septa and siphuncle (left) but dissolved towards the aperture (right), with chain coral belonging to Catenipora (upper centre). Sawn surface, Gillis Quarry. D. Cyrtoconic nautiloid with dolomudstone-filled siphuncle, with rugose coral probably belonging to Grewingkia (lower left). Honed finish, exterior of TCU Financial Group building, Saskatoon. E. Siphuncle of actinoceratid nautiloid belonging to Armenoceras, preserving a few septa (left of centre). Honed finish, interior presentation wall, same location as D. F. Partially burrowed endoceratid nautiloid (upper centre) and possible cyrtoconic nautiloid (lower left) encrusted with a thin Protrochiscolithus coral (white lamina). Same location as D.
Paleoecology
Receptaculitids are the most common macrofossil type (Wong 2002; Brisbin et al. 2005; Young et al. 2008). Collectively the corals rival the receptaculitids in abundance, but in terms of individual groups the next most common fossil is solitary rugose corals. Cephalopods, stromatoporoids and gastropods are the next most common groups. Trilobites and brachiopods are present (Westrop and Ludvigsen 1983; Jin and Zhan 2001) but are difficult to identify on sawn surfaces. A striking feature of the biota is that many groups tend to be large in comparison with the same or related taxa in correlative strata elsewhere, such as in eastern Ontario (Young et al. 2008; Jin et al. 2012). The reason for this 'gigantism' is uncertain but may reflect abundant food resources at the base of the food chain, which were passed on to some of the higher trophic groups. Alternatively, some aspect of seawater temperature may have been conducive to enhanced growth rate or longevity. Stable environmental conditions also would have permitted organisms such as corals to grow to larger size, in comparison with environments with frequent disturbance of the seafloor.
In the Gillis Quarry, there is an overall upward increase in the abundance of stromatoporoids, receptaculitids, Protrochiscolithus, and colonial rugose corals, and increased abrasion of the solitary rugose corals, which suggests a gradual shallowing (Wong 2002). On the other hand, the relative abundances of solitary rugose corals and nautiloids decrease upward (Wong 2002). Other elements like tabulate corals show no obvious trends. Both Maclurina and Hormotoma appear to show an upward increase in size, whereas average nautiloid size remains more or less constant (Wong 2002). The relative proportion of tabular to domical coral and stromatoporoid growth forms is similar throughout (Wong 2002).
Near the base of the quarry section, more than half the solitary rugose coral skeletons are abraded, the proportion rising to more than 80 percent in the overlying strata. This interval shows a decrease in the number of stromatoporoids and receptaculitids, and the proportion of solitary rugose corals, and is thought to record a slight deepening (Wong 2002) and a small increase in sedimentation rate (Young et al. 2008). Other skeletons and shells, however, do not exhibit a similar degree of abrasion, although tabulate corals may also be broken and some actinocerid siphuncles are broken transversely. Breakage of small bioclasts is evident. It seems that physical reworking does not explain all these observations, and thus the cause is unclear.
The seafloor substrate was apparently soft, as indicated by the distribution of the skeletons of large benthic fossils, the abundance of lime mud and the absence of evidence for distinct firmground or hardground surfaces. At the same time, it was somewhat consolidated: it was able to support large skeletons, the visible burrows retained their shape, and moulds of dissolved molluscs did not collapse before they were infilled with lime mud. Shells and skeletons comprised the only hard substrates, and as a result they were encrusted by a variety of organisms (Young et al. 2008). Common examples included obligate encrusters such as the corals Protrochiscolithus and Ellisites, and also stromatoporoids, the coral Calapoecia and bryozoans. The occurrence of 'stacks', consisting of the skeletons of several different organisms that grew sequentially on top of one another, demonstrates that hard substrates were sporadically developed, and some of them were exposed on the seafloor for considerable lengths of time (Young et al. 2008). Many of these hard substrates also exhibit macroborings (Elias 1980; Stewart et al. 2010), a further indication of long-term exposure on the seafloor. Calcite cementation began under relatively shallow burial. This may have been below ~ 20 cm, as suggested by the vertical extent of burrows and the presence of intraclasts only in some grainstone lenses, eroded and redeposited during the stronger scouring events.
Combining the macrofossil biota with petrographic observations indicates that the muddy seafloor had large quantities of small shells, skeletons and fragments mixed in, on which grew meadows of crinoids, representing the upper-tier suspension feeders. Lower-tier suspension feeders were tabulate corals, stromatoporoids, bryozoans, siliceous sponges and brachiopods, as well as Maclurina (Novack-Gottshall and Burton 2014). Rugose corals may have been microcarnivores. The trilobites were mobile detritus feeders and suspension feeders and/or scavengers. The turbinate gastropods were mobile deposit feeders or herbivores. Delicate photosynthetic dasycladalean calcareous algae were rooted in the muddy sediment. Receptaculitids may have been sessile photosynthesizers. The nautiloids were likely nektobenthic predators. A variety of infaunal organisms burrowed the sediment. Undoubtedly there were soft-bodied animals and possibly green algae that are not preserved. An important point is that, despite the presence of common corals and stromatoporoids, and the propensity of some corals and stromatoporoids to encrust one another, there are no framework reefs or bioherms in the Tyndall Stone, or anywhere in the outcrop belt of the Red River Formation, although there are some small patch reefs at the top of the Selkirk Member equivalent in the Saskatchewan subsurface (Pratt and Haidl 2008).
Tyndall Stone's abundantly fossiliferous nature has inspired museum reconstructions of the paleoecological setting of the Late Ordovician tropical seafloor, such as the exhibit at the Manitoba Museum, which blends the Selkirk Member and Stony Mountain Formation (Fig. 13A; Young et al. 2008, fig. 5). The display in the Stonewall Quarry Park, Stonewall (Fig. 13B, C) is supposed to reflect the biota in the Selkirk Member rather than the younger, less fossiliferous Stonewall Formation, which is the interval exposed in the quarry. Nevertheless, as is usual with such reconstructions, there is some artistic license taken, especially in the unrealistic crowding of the various biotic elements and the seafloor topography. In older dioramas receptaculitids were portrayed as globular (Fig. 13A), but a lower domical shape is more likely (Fig. 13C). Reconstructions in the USA are based on approximately correlative strata from the Cincinnati, Ohio area, which consist of a different facies (e.g. https://www.priweb.org/blog-post/vanishedworlds; https://lsa.umich.edu/paleontology/resources/beyond-exhibits/life-through-the-ages.html), and that in the Redpath Museum, McGill University, Montreal, Quebec, reflects the Upper Ordovician of the Saint Lawrence Lowlands, which also consists of different facies (https://www.mcgill.ca/redpath/article/ordovician-diorama).
TYNDALL STONE MOTTLING
Bioturbation
Description
The limestone exhibits brownish mottling due to the presence of dolomite that has incompletely replaced the original limestone. This unique and aesthetically desirable 'tapestry' of Tyndall Stone is shown to best effect on surfaces sawn parallel to bedding (Fig. 14A-D). From a distance the margins of the mottles appear sharp, but in detail they may be somewhat diffuse; in no case do they exhibit a distinct wall at their margins. Locally the sharpness has been enhanced in vertical view by pressure solution and subhorizontal stylolite formation (Fig. 15A).
In horizontal view, the dolomitic mottles range from irregular to roughly circular and lobate patches, to elongate and seemingly branching forms, to commonly crudely reticulate networks. Occasionally there are strikingly long, straight to curvilinear, sinuous features up to ~ 50 cm in length (Fig. 14C). The width of these domains is variable, up to ~ 4 cm. Where linear mottles are well defined, they are typically ~ 1-2 cm and occasionally up to 3 cm wide. In vertical view, the dolomitic domains are also variable in shape, from similarly circular to lobate to branching both vertically and horizontally (Fig. 15A-E).
Where grainstone lenses are interbedded, mottles are typically concentrated in the matrix just under them (Figs. 8C, 15E). Larger mottles intersect these lenses subvertically from the top, and they range from cylindrical to irregularly lenticular in shape and some penetrate the whole layer. Narrow, horizontally oriented, cylindrical mottles are also present.
The interiors of the mottles exhibit a swirly aspect imparted by various shades of brown and greyish brown; multiple generations of cross-cutting, cylindrical to tubular burrows can be discerned (Fig. 16A-F). These darker coloured, more distinctly defined curvilinear to irregularly sinuous burrows, are dominantly roughly horizontal (Fig. 16A-D) but also locally oblique and rarely vertically oriented (Figs. 15D, 16F). These burrows possess darker coloured linings. Their diameter is 5-13 mm. Many have a core 2-10 mm wide, cemented by dolomitic blocky microcrystalline calcite that is often leached leaving linear pores (burrow porosity); in some cases, there is geopetal dolomite on the bottoms of these pores. In addition, the dolomite that fills mollusc shell moulds commonly contains similar curvilinear burrows. Branching burrows 1 mm in width are locally preserved in the matrix inside nautiloids. Packstone-filled burrows that are not dolomitized are also locally visible in the limestone matrix.
In thin section, the mottles are seen to consist of brownish, variably dolomitized biomicrite in which bioclasts are still typically evident in the dolomicrite (Fig. 17A, B), although fewer in number than in the matrix; the larger, mostly robust particles like crinoid ossicles have escaped replacement (Fig. 17C). The interiors of the mottles typically show one or more horizontal burrows exhibiting the same features as those visible on sawn surfaces, that is, dolomitic calcite-cemented tunnels surrounded by brownish, crudely concentric laminae and haloes. The margins of the dolomite mottles are not confined to these burrows and, rather, extend beyond them. Whereas the biomicrite matrix is clearly churned, in that bioclasts are variably oriented, in places straight to curvilinear burrows are recognizable. In cross-section some of these also have concentric linings and are filled with biomicrite in which the bioclasts range from variably to crudely concentrically oriented (Fig. 17D).
Comparable Facies
While the distinctive colouration and dolomitic mottling selectively overprinting limestone are unique to Garson, comparable burrow types are common to other Ordovician limestone and dolostone occurrences deposited in a similar low-energy, subtidal setting, including in equivalent dolostone beds of the Red River Formation nearby and far to the north on the northeastern side of the basin (Fig. 18A), and in broadly correlative limestone units in the Hudson Bay Basin across the Severn Arch (Fig. 18B). The former show well-defined, cross-cutting burrows that are 1-1.5 cm wide. More detailed fabrics are not visible, however, due to the complete dolomitization. The Gunton Member of the Stony Mountain Formation, which is younger than the Selkirk Member, also shows dense burrow patterns including long and reticulate features (Elias et al. 2013, fig. 29). In limestone of the Chasm Creek Formation of the Hudson Bay Basin, dolomite mottles are slightly narrower but well delineated, indicating a close morphological relationship with the original burrow fabrics (Fig. 18B). Besides some vertically oriented burrows, the networks may exhibit primary branching, unlike those in Tyndall Stone, suggesting a somewhat different behaviour, although this is not certain. Middle
Ordovician limestone units of the Baltic Basin show comparable features, but dolomitization of burrows is patchier, so only parts of burrows are replaced (Fig. 18C). Many younger fine-grained limestone beds deposited in low-energy subtidal conditions exhibit similar fabrics of backfilled branching and intersecting burrows (Fig. 18D). Local iron staining in the Stony Mountain Formation, while in a different facies than the Selkirk Member, shows the bioturbation fabrics in striking detail, suggesting that burrowing in Tyndall Stone was likely much more complex than is readily apparent. Short curvilinear burrows with concentric linings, 0.2-0.3 cm in diameter, overprint the churned matrix, which exhibits some narrower, seemingly mostly vertical burrows with curvilinear parts lacking linings (Fig. 19A, B). Some of these might be U-shaped. The lined burrows include many circular cross-sections as seen on surfaces cut parallel to bedding, and they only locally cross-cut each other. Brachiopod valves and trilobite sclerites are unoriented in the matrix. Larger burrows, 0.5 cm wide, have a lining and meniscate backfilling (Fig. 19B). Longer horizontal burrows 0.5-1 cm wide without linings are present and are cross-cut by many smaller burrows (Fig. 19C). Also present are indistinct horizontal burrows lacking walls but exhibiting a poorly defined meniscate backfilling. These are cross-cut by the smaller unlined and lined burrows.
Interpretation
Although they were initially regarded as plant and algal fossils and termed 'fucoids' (Wallace 1913) due to their vague resemblance to shoreline-inhabiting seaweed belonging to Fucus (which was a common view of such features at the time), it was later recognized that the mottles in Tyndall Stone reflect bioturbation by infaunal invertebrates, especially worms (Birse 1928). The apparent burrow networks were compared to gallery systems belonging to Spongeliomorpha (Kendall 1977), which in much younger rocks (and modern sediments) are ascribed to excavating crustaceans (e.g. Gibert and Ekdale 2010). Later, these kinds of burrows in Ordovician limestone were referred to Thalassinoides, which also consists of galleries (Sheehan and Schiefelbein 1984; Myrow 1995; Eltom and Goldstein 2023), and this identification has persisted for the mottles in Tyndall Stone and in subsurface equivalents and correlative units (Pak and Pemberton 2003; Cherns et al. 2006; Young et al. 2008; Jin et al. 2012, 2013). Sheehan and Schiefelbein (1984) suggested that these burrow systems reached a depth of one metre, although this is not apparent in the Gillis Quarry or in rock samples, where the vertical expression seems to be no more than ~ 10 cm. The presence of trilobites associated with burrows led Cherns et al. (2006) to suggest that they had created the galleries, which later became filled with biomicrite. No such relationship with dolomite mottles has been observed in Tyndall Stone. Kendall (1977) reconstructed the burrows as empty galleries made by arthropods that excavated before, during and after sediment cementation, such that these burrows as well as aragonite shell moulds were filled after lithification and below the depth of active tunnelling, by lime mud that was then available to be burrowed by worms.
[Displaced figure caption, evidently for Figure 15 (sawn vertical surfaces showing mottles; cf. Fig. 8C): A. relatively large lobate mottles, with stylolites common near top; B. elongate and variably shaped mottles, some sharply abutting a colonial rugose coral belonging to Crenulites; C. branching mottles with the edge of a solitary rugose coral calice possibly belonging to Grewingkia; D. small mottles including one with a vertical orientation, with an oblique cut through a receptaculitid (also Fig. 16F); E. massive packstone with two lenticular, planar-laminated grainstone beds burrowed from above, the lower showing mostly vertical and subvertical dolomitized burrows ~ 1 cm wide. Scale bars are 3 cm; E in centimetres.]
[Displaced figure caption, evidently for Figure 16 (burrows within mottles): A. (start of description lost) Agriculture Building, University of Saskatchewan, scale bar 5 cm; B. narrow, mostly linear mottles conforming to Palaeophycus, including some with calcite microspar-filled tubes, with gastropods; C. wide mottles with numerous burrows conforming to Palaeophycus; D. narrow mottles with numerous burrows; E. horizontally and vertically oriented mottles containing horizontally oriented burrows conforming to Palaeophycus (close-up of left side of Fig. 15B); F. horizontally and vertically oriented mottles containing horizontally and vertically oriented burrows, the latter a vertically oriented Palaeophycus or possibly Skolithos (close-up of right side of Fig. 15D). Scale bars are 2-3 cm.]
The pervasive presence of
large, empty galleries in lithifying sediment, however, is contradicted by the absence of sharp boundaries of the large burrows, their variable width, the presence of smaller burrows cross-cutting the margins of the mottles, and the sparse biomicrite filling large burrows. The multiple generations of smaller burrows in the biomicrite matrix and rarity of biomicrite intraclasts, present only in some grainstone lenses, also argue against excavation of cementing or cemented matrix. Some burrows in the correlative Yeoman Formation in the Saskatchewan subsurface (Kendall 1976, plate VIIIB; Pak and Pemberton 2003, figs. 11, 15) may be an exception to this. In these cases, it is possible that the matrix may have been lightly cemented, because the smaller burrows within do not penetrate the margins. These burrows may have been empty due to winnowing, before being infiltrated with lime mud. Washed-out burrows are present in the Gunn Member. There, the upper surface of some grainstone interbeds is a scoured surface showing grooves ~ 2 cm wide with smoothed margins. Comparable surfaces seem to be absent in Tyndall Stone. Moreover, it is difficult to envisage a complex system of interconnected, three-dimensional galleries continually being created by winnowing followed by filling due to sediment infiltration.
The small, millimetre-sized, curvilinear burrows in the biomicrite matrix and the dolomitic mottles mostly correspond to the ichnogenus Palaeophycus (Pak and Pemberton 2003). This taxon is distinguished from Planolites by the lining of the burrow walls (Pemberton and Frey 1982; Keighley and Pickerill 1995). Palaeophycus appears to be common in lower Paleozoic limestone units, but in some cases the lining may instead be a diagenetic halo (Pak and Pemberton 2003; Pratt and Bordonaro 2007). Small, unlined, backfilled burrows corresponding to Planolites are not prominent in the Tyndall Stone but are present in the correlative Yeoman Formation in the Saskatchewan subsurface (Pak and Pemberton 2003). Pak and Pemberton (2003) identified a number of other ichnogenera in cores from the subsurface Yeoman Formation, including Asterosoma, Rhizocorallium, Trichophycus, Skolithos and Chondrites. However, Asterosoma is a radiating trace fossil and nothing resembling it is evident on horizontal surfaces of Tyndall Stone. Rhizocorallium consists of a looping horizontal burrow with curving spreiten in between, and it is also not apparent on horizontal surfaces. Burrows attributed to Trichophycus may be unusually wide mottles. Tyndall Stone exhibits rare vertical burrows that might be Skolithos, but without a three-dimensional view it is also possible that they are vertically oriented Palaeophycus. It is unlikely that they are Arenicolites, as no U-shaped burrows have been identified in vertical section and pairs of circular burrow openings are not apparent on horizontal surfaces. Small branching burrows belonging to Chondrites are rare in Tyndall Stone. In the Stony Mountain Formation, Zheng et al. (2018) identified Palaeophycus, Planolites, Nereites, Phycosiphon, Chondrites, Teichichnus, Rhizocorallium and Balanoglossites. Parts of this unit exhibit a complex ichnofabric with seemingly abundant Planolites and small bioturbation features that are not readily discernible in Tyndall Stone. Knaust (2021, p. 18) identified the dolomitic mottles in Tyndall Stone as Balanoglossites, which are empty galleries associated with firmgrounds.
However, no firmground surfaces were observed in the Selkirk Member or in correlative dolostones farther north. Other, more sharply defined burrow systems in Ordovician limestones have also been assigned to Balanoglossites, including those in correlative Upper Ordovician units in Laurentia that lack evidence of seafloor cementation. Balanoglossites was identified in the Stony Mountain Formation (Zheng et al. 2018). While that unit does have evidence for early lithification in some beds, such as intraclasts in grainstone, and corroded or encrusted erosion surfaces that may represent firm- or hardgrounds, the burrows appear to have been made before consolidation and erosion. Partially pyritized grooves ~ 1 cm wide on the surfaces were interpreted as tracks (Zheng et al. 2018), whereas similar grooves were named Sulcolithos by Knaust (2020) and interpreted as burrows or borings made on firmground and hardground surfaces. However, those in the Stony Mountain Formation represent relatively large burrows that were exhumed by the high-energy events that deposited the grainstone beds. Burrows attributed to Balanoglossites in the Stony Mountain Formation appear to be similar to the larger burrows in the Tyndall Stone and similarly have variable shapes. On the other hand, in dolomitic limestone of the Hudson Bay Basin, the larger burrows may be interconnected, but it is unclear if they were ever empty gallery systems.
In Tyndall Stone, the variably well-defined, large curvilinear burrows containing biomicrite, oriented dominantly horizontally, were likely created by deposit-feeding worms ranging up to about 1 cm in diameter, which backfilled the burrows as they moved through the sediment. Examples of apparent branching mostly represent false branching due to crisscrossing burrows created by other worms active at the same time, as well as multiple generations of worms. The fact that the reworked sediment in the burrows is still a mixture of lime mud and bioclasts, although containing fewer of the larger grains such as crinoid ossicles, means that it does not contrast strongly in texture with the matrix. The large burrows and matrix were reburrowed by generations of smaller worms that produced Palaeophycus. Calcite microspar cement commonly fills an empty tube or the upper part of a tube with a geopetal micrite floor. Thus, the wide curvilinear and unlined burrows belong to neither Thalassinoides nor Balanoglossites, but can be assigned to Planolites, albeit a very large form.
As is typical for shallow-marine carbonate rocks in shelf and epicontinental seas, dolomitization is a diagenetic phenomenon that post-dated microcrystalline and blocky calcite cementation and took place during burial, followed locally by crystal size increase due to neomorphism (Zenger 1996a, b), rather than by near-surface biogeochemical reactions as proposed by Gingras et al. (2004). In Tyndall Stone, the brown dolomite is only crudely selective, in that it is a replacement of mostly microcrystalline calcite and small bioclasts in the larger burrows as well as Palaeophycus and some of the surrounding matrix. Thus, the mottles are not confined to discrete burrows, which is part of the reason why the mottles range so widely in size and shape. By contrast, in completely dolomitized carbonate rocks of the Red River Formation to the north, the outlines of the larger burrows are more distinct.
Aesthetics and Uses
Tyndall Stone is rarely used as a polished dimension stone, with notable exceptions including the lobby floor and staircases of the Banff Springs Hotel, a feature wall in the former Royal Alberta Museum and the memorial wall in the Geology Building, University of Saskatchewan, commemorating the geology students who fell in the Second World War. For the latter two, slabs particularly rich in macrofossils were selected.
Tyndall Stone used for cladding on both interior and exterior walls is usually sawn parallel to bedding (Fig. 20A, B), and a smooth finish (rubbed or honed) is most common. In some cases the original sawn surface is retained. Other surfaces can be prepared. In older buildings, surfaces that were bush-hammered to give the stone a texture were popular for stone at eye level. Uniformity in hue is selected for individual projects. Because large, conspicuous fossils are variably present, slabs with numerous fossils were typically discarded in earlier years when they were deemed visually undesirable because they interrupted the uniform appearance. In recent years, slabs with fossils have more often been used for cladding. The unique paleontological content is increasingly being recognized as worth showcasing in some situations. For example, two bank buildings, one in Saskatoon and the other in Regina, have feature walls using eye-catching fossiliferous slabs. In the foyer of the Manitoba Museum are two walls with the fossils labelled and interpreted.
Split face finish (broken perpendicular to bedding) is increasingly being used for exterior walls (Fig. 20C). Ashlar (wall consisting of dressed stone) utilizing blocks with rustic ranch finish (split parallel to bedding) typically has hues that are mixed for a mosaic effect (Fig. 20D). This is popular especially for houses and other residential buildings. Machine-shaped decorative elements like string courses, window casements, doorways and buttresses are used, especially in collegiate gothic buildings at the University of Saskatchewan (Fig. 20E). There, Berea Sandstone was used before the First World War, then Indiana Limestone was employed, but in recent decades Tyndall Stone has been used exclusively. Tyndall Stone is also now used for indoor flooring and, besides large slabs (Fig. 20F), roughly one foot x two foot (297 mm x 500 mm) rectangular tiles are manufactured with a honed or polished finish, and thin veneer products have recently been introduced.
Tyndall Stone also lends itself to carving, although it is a much harder stone than Indiana Limestone and some stones popular in other countries. In earlier years, government buildings like provincial legislatures and courthouses were especially well decorated (Figs. 2A-D; 21A-C, F-H). Numerous public and commercial buildings have been adorned with carved scenes (Figs. 20A, 21E). Hand-carved elements are still occasionally produced (Fig. 21D). There are sculptors who have utilized large blocks of Tyndall Stone.
While Tyndall Stone is a particularly durable material, it is still a carbonate rock with a hardness much less than that of granite, and it is soluble in acidic water. In rare situations where the stone is under some stress, such as in exterior staircases, cracks may develop in stone that has been in place for many years (Fig. 22A). Gradual etching of surfaces close to the ground may occur due to rain splash and from salt spread on sidewalks during winter (Fig. 22B). In a few cases, receptaculitids have popped out of blocks or cladding due to water infiltration and freeze-thaw cycles (Fig. 22C). Rare chalky-textured chert has also been a problem in external walls of some older buildings due to differential weathering, but slabs exhibiting this impurity have long been avoided. In locations where there is excess moisture, surfaces may be stained somewhat by black fungal or microbial growth (Fig. 22C). Probably the most visible 'damage' is done by repairs such as patching with cement or using stone with a different size or hue (Fig. 22D). On the other hand, Tyndall Stone cladding has been recovered from some demolished buildings and re-used.
Geotechnical Specifications
Comparison of the physical properties needs to take into account that some measurements conform to American Society for Testing and Materials (ASTM) standards, while others are based on other testing procedures; this, along with differences from European (EN) standards, makes comparisons with European dimension stones difficult. Tyndall Stone has properties similar to those of many other fairly hard limestone and marble examples, such as Tennessee Marble (a bioclastic limestone) and Georgia Marble (a true marble), but it is less dense and has greater water absorption due to the presence of minor porosity (Table 1; Parks 1916; Goudge 1933). The porosity is probably related to leaching by groundwater. Tyndall Stone is slightly denser than Indiana Limestone, which is a softer stone that is easier to work.
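Where only weight-percent water absorption is reported (as in ASTM C97-style testing), the corresponding water-accessible porosity by volume can be approximated from the bulk density. The Python sketch below illustrates the conversion; the input numbers are hypothetical placeholders, not values from Table 1.

    # Hedged sketch: approximate water-accessible porosity (vol%) from
    # weight-percent water absorption and bulk density. The inputs below
    # are illustrative placeholders, not Table 1 values.
    WATER_DENSITY_G_CM3 = 1.0

    def apparent_porosity_vol_pct(absorption_wt_pct: float,
                                  bulk_density_g_cm3: float) -> float:
        """Pore volume filled by absorbed water, as % of bulk volume."""
        return absorption_wt_pct * bulk_density_g_cm3 / WATER_DENSITY_G_CM3

    # e.g. 1.0 wt% absorption at an assumed bulk density of 2.55 g/cm3:
    print(apparent_porosity_vol_pct(1.0, 2.55))  # ~2.6 vol% porosity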
Examples of Buildings
In the early years, Tyndall Stone was used almost exclusively in the Prairie Provinces of western Canada. Besides the Saskatchewan and Manitoba legislative buildings, it has been used in many other government buildings such as courthouses, post offices, land titles buildings, and city and town halls, as well as banks, department stores, train stations, office buildings, schools, and hotels. It was used to striking effect in the interior of the rotunda of Confederation Hall in the House of Commons, Ottawa (Fig. 2A-D). The stone was used sporadically across the rest of the country prior to the Second World War. In the mid-20th century, its use expanded to other commercial buildings, museums, art galleries, concert halls, hospitals, universities and churches, as well as residential uses both exterior and interior. Tyndall Stone has been used for several buildings in the USA and for Canada House (Kanada Haus), which is the Embassy of Canada to Germany, in Berlin. It was often used as an accent in buildings mainly constructed with red brick.
Architectural styles have varied over time as taste and construction methods evolved. Before the First World War, the most common were Neo-classical and Beaux Arts styles. A number of Art Deco-inspired buildings were constructed in the 1930s during the Depression. In the 1960s, Modernist style was commonly adopted for public buildings like museums and art galleries, as well as larger banks and other commercial buildings. Recent decades have seen a number of forays into Brutalist, Contemporary classical, Postmodern and Expressionist styles.
In addition to the iconic legislative buildings in Regina and Winnipeg, monumental buildings using Tyndall Stone that were constructed in the first decades of the 20th century are distinctive elements in the centres of these and other cities and towns. These buildings, constructed when the Prairie Provinces were growing rapidly in population prior to the Depression, have stood the test of time and are in good condition, lending a sense of permanence. To many they are aesthetically more pleasing than commonplace brick, concrete, or glass and steel buildings. In modern times, many of them have been repurposed. Cities and larger towns now have historical or heritage societies and, in collaboration with various levels of government, many of these buildings have been designated as heritage properties and are protected.
CONCLUSIONS
Tyndall Stone is an iconic building stone in Canada. It has been used since the beginning of the 20th century, especially in the Prairie Provinces. It is spectacularly fossiliferous, and slabs sawn parallel to bedding give an unparalleled snapshot of a tropical, shallow seafloor of Late Ordovician age. Conspicuous fossils include receptaculitids, corals, stromatoporoids, nautiloids and gastropods. What makes Tyndall Stone unique above all is the tapestry of brownish mottles composed of dolomite on the light grey to cream limestone background. These mottles represent dolomite replacement of burrows created by infaunal invertebrate animals, along with some of the adjacent matrix. Long thought to have originally been empty galleries and assigned to Thalassinoides, they were actually backfilled burrows likely made by large worms, and are more reasonably assigned to Planolites. They are one component of several bioturbation phenomena, including churning of the bioclastic muddy sediment, and multiple generations of smaller burrows, most of which have linings on their margins and are referable to Palaeophycus, also made by worms.
Tyndall Stone is a versatile, durable stone that has been used in a variety of ways for many buildings, especially in the Prairie Provinces, including the legislative buildings of Manitoba and Saskatchewan, courthouses, land titles buildings, city and town halls, banks, stores, office buildings, train stations, hotels, schools, museums, universities and churches, as well as residential buildings. Many architectural styles have been adopted, ranging from Beaux Arts to Brutalist, Neo-classical to Postmodern. Given its spectacular paleontological content, Tyndall Stone is also a unique educational tool, at hand in most Canadian cities. In October 2022 it was designated a Global Heritage Stone Resource by the International Union of Geological Sciences Subcommission on Heritage Stones. This was ratified and, as of late 2022, Tyndall Stone is an IUGS Heritage Stone, Canada's first (Fig. 31).

ACKNOWLEDGEMENTS

[...]tion by the Heritage Stones Subcommission, and more recently Colin Sproat for collaborative effort on the Upper Ordovician of southern Manitoba. Special thanks to Donna Gillis for quarry access over many years, encouragement in the heritage stone nomination and for comments on quarry operations and products. Abigail Auld commented on historical and architectural aspects. We thank the Manitoba Museum and Stonewall Quarry Park for permission to illustrate dioramas, and Carlton
MATHEMATICS LEARNING PROCESS AND RESULTS OF ELEMENTARY SCHOOL STUDENTS IN LIMITED FACE-TO-FACE LEARNING
The aim of this research is to analyze the learning process and the learning outcomes of students in limited face-to-face learning. The research method used was descriptive qualitative; the research subjects were the fifth grade teacher and fifth grade students. The research instruments used were interviews, observation, and documentation. The validity of the data obtained was tested using triangulation, and the analysis process used the Miles and Huberman model, namely data reduction, data presentation and conclusion drawing. The results of the study show that the implementation of limited face-to-face learning at the elementary school is carried out in compliance with health protocols. In the classroom learning process, the teacher explains the material briefly, concisely and clearly, and then gives students examples and practice questions. In the learning activities the teacher uses the talking stick learning method, lectures and explanations of sample questions. The learning activities in the class are quite active, and the learning outcomes achieved by students have reached the specified KKM (Minimum Completeness Criteria). However, the learning process is still teacher-centered, so it needs to be developed into a student-centered one so that learning is more meaningful for students and students are given the freedom to explore knowledge independently, in greater depth, and to improve their self-quality.
INTRODUCTION
For two years Indonesia has been faced with the Covid-19 pandemic. The Covid-19 virus has taken many lives, and not only that: all aspects of human life in business, the economy, tourism, and education have also been affected. In education, the learning process shifted to online learning. However, mathematics learning carried out online after the arrival of the Covid-19 outbreak made the teaching process somewhat hampered because of problems in explaining material that could not be explained directly (Onde et al., 2021). It is this limitation of learning activities that causes students' understanding of the material to decrease, because alternatives involving online learning media face obstacles including teachers' lack of readiness in using the media, inadequate network access, quota availability, limited smartphone ownership, lack of parental assistance, and lack of experience in managing online-based classes.
Online learning connects students with learning resources that are physically separated but can communicate, interact or collaborate with each other. An internet network with flexibility, connectivity, accessibility and interactivity is needed in the online learning process (Sadikin & Hamidah, 2020). Students experience problems when learning online, which lowers their learning motivation and makes them bored. Mathematics learning that does not involve students actively will prevent students from using their mathematical abilities optimally in solving mathematical problems.
Student motivation decreased during learning in the pandemic, which then affected learning outcomes (Cahyani, Listiana, & Larasati, 2020). During online learning there was little progress in student learning, and disadvantaged students affected by learning loss were seen to be more prominent (Engzell, Frey, & Verhagen, 2021). According to The Education and Development Forum, learning loss occurs when students experience academic gaps in knowledge and skills (Pratiwi, 2021). When learning loss occurs during a pandemic, children's learning experiences change: longer time holding mobile phones, less social interaction, increased stress, and reduced physical activity (Bao, Qu, R, & Hogan, 2020).
In the online learning process, teachers use WhatsApp, Google Classroom, Zoom, and YouTube as media for delivering material. In using these media, teachers tend to burden students with independent assignments, so skills development is needed in using online media in learning mathematics (Sulistyaningrum, Sutama, & Desstya, 2021). In the online learning period, teachers are expected to implement a learning process that goes well, supported by a sense of responsibility and professionalism (Fauzi & Khusuma, 2020).
According to Miswar (2017), learning is a conscious effort that aims to gain experience in the cognitive, psychomotor and value domains, shaping student attitudes. In carrying out learning activities students are expected to be motivated, active and happy, so that the learning results obtained are positive, functional, directed and active (Nurintiyas, 2020). Learning is said to be successful if students are physically, mentally and socially active. The success of learning is also influenced by family environment factors and student character; with supportive surroundings and character, learning outcomes will be more optimal. This applies especially to learning mathematics, because mathematics is a lesson that requires detailed explanation in the process of solving problems. Mathematics is knowledge of logic that discusses shape, composition, quantity, and related concepts (Wandini & Banurea, 2019). Mathematics learning that does not involve students actively will prevent students from using their mathematical abilities optimally in solving mathematical problems. For this reason, it is very important for mathematics learning to involve media to develop students' understanding, so careful planning is needed.
Learning mathematics is synonymous with formulas and numbers, so it requires explanatory models and media that support explanations. The mathematics subject equips students to think critically, logically, analytically, systematically, creatively and collaboratively; however, because the learning process occurred online, several problems arose. Problems that arise in learning mathematics when the learning process is online are that students have difficulty understanding learning material, learning motivation decreases, and mastery of lessons is not good (Fadilla, Relawati, & Ratnaningsih, 2021).
Students' mathematics learning outcomes declined in the online learning period. This is because students did not understand the material during online learning. Unstable learning outcomes indicate that online learning has a less than optimal impact on delivering mathematics learning material, since most of it consists of formulas, calculations, and operational explanations of problem solving, so the online learning process leaves the material incompletely delivered (Mira et al., 2021).
To deal with this, the government took new action, namely holding limited face-to-face learning (Dewi, 2020). Limited face-to-face learning is a teaching and learning process between teachers and students in one room during the Covid-19 pandemic (Kemendikbudristek, 2021). Implementation of limited face-to-face learning requires caution: it can be implemented if all education staff are vaccinated and the 5M health protocol is followed (wearing masks, washing hands, keeping distance, staying away from crowds, and reducing mobility). There are three things considered in limited face-to-face learning, namely students, teachers and education staff increasing their immunity, and infrastructure that complies with health protocols. Re-opening the limited face-to-face learning process provides fresh air for the world of education. This background motivated the researchers to study the processes and results of learning mathematics for fifth grade elementary school students in limited face-to-face learning.
THEORETICAL SUPPORT

Learning Process
Education seeks to form students who have knowledge. The interaction between those who teach and those who learn is called the teaching and learning process, or simply learning (Herawati, 2018). To achieve a good learning process, it must be carried out consciously and in an organized way.
There are goals to be achieved in the learning process. That goal is learning outcomes, and learning outcomes show changes in behavior that are permanent, functional, positive and conscious. Learning outcomes, according to Dakhi (2020), are the academic achievement of students, accompanied by active asking and answering of questions. Learning outcomes are always related to the evaluation of learning. Therefore, effective techniques and procedures for evaluating learning are needed.
Learning Outcomes
Mathematics learning outcomes are the abilities possessed by students after following the mathematics material (Muslina, 2018). In accordance with the objectives of learning mathematics, mathematics learning outcomes are very useful for students in developing their potential in cognitive, affective, and psychomotor terms.
Motivation in learning is also very necessary for every student. Without motivation in learning, it is impossible for the knowledge taught by each teacher to be accepted by students. Motivation is an encouragement that can come from within the student. Motivation is defined as a person's strength that can raise the level of will in carrying out an activity. Motivation comes from within (intrinsic motivation) and from outside (extrinsic motivation); how strong student motivation in learning is will determine the quality of learning and its outcomes. Therefore, teachers are required to be able to encourage and increase student motivation in learning.
Learning motivation is an individual's desire to learn. A student can learn more efficiently if he tries his best; that means he motivates himself. Learning motivation can come from within (intrinsic), as in diligently reading books and having high curiosity about a problem. Motivation to learn can also be generated, enhanced and maintained by external conditions (extrinsic), such as the presentation of lessons by teachers with varied media, appropriate methods and dynamic communication (Gunawan et al., 2018).
Face to Face Learning
Face-to-face learning is a teaching and learning event that takes place directly between teachers and students (Kemendikbudristek, 2021). In the new normal era of Covid-19, the learning process occurs in a limited way and must comply with the circular of the Office of Education and Culture No. 420/04/60728 regarding the implementation of teaching and learning activities. The regulations are as follows: 1. All classes attend, with only 1 lesson hour per subject; 2. One break with a duration of 15 minutes; 3. A class has a maximum of 16 students, and if there are more than 16 students, the class is divided into shifts; 4. Student seating is at least 1 meter apart (Nissa & Haryanto, 2020)
METHOD
The research method used is the descriptive qualitative method. The object of research is the mathematics learning process and outcomes of fifth grade students in limited face-to-face learning. The research subjects were the fifth grade teacher and 15 fifth grade students. The research was carried out at an elementary school in April-May 2022. Data collection was done by interviews, observation and documentation. First, the researcher observed the learning model and the implementation of learning from start to finish. Second, interviews are question and answer activities with sources regarding the topic being researched; the interviews were carried out with the fifth grade teacher and fifth grade students. Third, documentation is the activity of collecting and recording existing data (Hardani et al., 2020). The documentation collected covers the implementation of learning activities and mathematics learning outcomes.
The validity of the data was tested using the credibility test. The credibility test is carried out by triangulation; triangulation is checking data obtained from sources in various ways and at various times (Sugiyono, 2015). Data analysis uses the Miles and Huberman model, which is carried out interactively and continuously until the data are saturated (Sugiyono, 2015). The data analysis steps are data reduction, data presentation and conclusion drawing.
RESULTS AND DISCUSSION

Learning Process
The learning process at the elementary school is carried out from 07.30 to 11.00, complying with health protocols such as wearing masks, taking one break, keeping a seating distance of 1 meter between students, having fewer than 16 students in one class so that the learning process is not divided into two shifts, and providing a place to wash hands. This is in accordance with the circular of the Department of Education and Culture No. 420/04/60728 regarding the implementation of face-to-face teaching and learning for the 2021/2022 school year. Before starting learning activities the teacher prepares a Learning Implementation Plan (RPP). The RPP refers to the syllabus and curriculum used by public elementary schools and is modified according to conditions in the education unit. Based on the results of interviews with the class teacher, the learning process during the pandemic was different from the previous normal period: because the duration of learning is shortened, the material is condensed, and learning activities can only be carried out in class, except for Physical Education and Sports. Therefore the teacher prepares a Learning Implementation Plan in accordance with current conditions. The implementation of learning during the pandemic was carried out by explaining the main subject matter and giving practice questions to see students' abilities (Dewi, 2020).
The steps of the learning method carried out by the teacher during the mathematics learning process are as follows. The teacher opens the class with a prayer, then does an apperception of the previous learning material and relates it to the material to be studied that day. After the prayer activities, the teacher conditions students to create a conducive learning environment by calming students who are still noisy and admonishing students who are not tidy. This is done so that students are mentally and physically ready to participate in learning. The teacher then explains the meaning or concept of the material, writing the mathematics learning material on the blackboard accompanied by examples. Then students work on the exercises given by the teacher; the purpose of giving practice questions is to gauge students' comprehension after paying attention to the teacher's explanation.
The learning method used in the learning process for fifth grade elementary school is the lecture method, combined with fun through games, namely the talking stick, accompanied by math practice exercises. The learning process does not use special media in delivering material to students; this depends on the material to be delivered. Implementation of the talking stick learning method in the mathematics learning process uses the help of a stick: the student holding the stick is required to answer a question from the teacher after the explanation of the subject matter has been taught (Fitria & Fitriana, 2019).
This talking stick method can train psychomotor and cognitive abilities, so the learning atmosphere becomes fun and students are active in participating in the learning process. Mathematics, often considered a boring and scary lesson, becomes an exciting lesson that is easily absorbed by students because of the teacher's management of learning. In this elementary school, teachers apply the model using a ballpoint pen or pencil as a substitute for the stick and children's songs as accompaniment to the activity. Using this method, the teacher and students in the class become enthusiastic and active in answering the questions asked.
In the learning process using the talking stick method, there is interaction between students and teachers. The teacher randomly poses questions to students, and the student who receives a question must answer it. If a student answers incorrectly, he asks a friend about solving the problem so that the correct answer is obtained. When the learning process is almost complete, the teacher gives some practice questions to students to hone their cognition and measure their understanding of the subject matter presented.
Learning Outcomes
According to Maulidya & Nugraheni (2021), mathematics learning outcomes are obtained when students have taken a test that aims to measure ability and understanding of the material after participating in learning within a specified time. Learning outcomes may exceed the KKM or fail to exceed it; outcomes that do not exceed the KKM are assumed to reflect students' lack of mastery of the material, because learning outcomes are related to the cognitive domain of brain activity and students' thinking orientation.
In the limited face-to-face learning period, the fifth grade mathematics learning process at the elementary school was carried out in a systematic, solid and structured manner. The teacher conveys the subject matter during the lesson hour. Even though the learning time is limited, the teacher is able to manage learning in a fun way by using the talking stick method to inspire students' enthusiasm for learning, and students answer the exercise questions with enthusiasm. The public elementary school has a KKM score of 75. The KKM, or Minimum Completeness Criteria, is the threshold for passing daily exams or school exams. This value is determined through teacher deliberations based on the intake, complexity, and carrying capacity of the school (Mardapi, Hadi, & Retnawati, 2015), so the KKM in each school differs from other schools.
The following are the mathematics learning outcomes of the fifth grade elementary school students for the odd semester of the 2021/2022 academic year during limited face-to-face learning. Students were able to ask teachers and friends directly about the material being studied, supported by a learning atmosphere that makes students' enthusiasm for learning grow again. This process makes students' understanding better when learning takes place face-to-face in a limited way.
DISCUSSION
Based on the results of research in the fifth grade of the elementary school, learning takes place with the teacher explaining the material using the lecture method, sometimes using the talking stick, and explaining the material on the blackboard. In the teaching and learning process, the teacher explains and, after finishing, first asks students about their understanding of the material that has been explained. Then students work on exercises in accordance with the material described. The learning process is still teacher-based: the teacher is the center of learning and students listen, pay attention, and have their learning dictated by the teacher. When students do the tasks ordered by the teacher, some students experience difficulties with the results of multiplication and division, which becomes an obstacle when answering questions.
Teacher-based learning does not explore students' insights and knowledge, so a paradigm shift is needed from a teacher-centered process to student-centered learning. With a student-centered approach, it is hoped that students will be actively encouraged to build knowledge, attitudes and behavior. A student-centered learning process will give students the opportunity and the facilities to build their own knowledge, so that they gain a deep understanding and are finally able to improve their quality.
The choice of approach is due to the fact that such learning makes students enthusiastic about existing problems so that they want to try to solve them. Steps that can be taken in the student-centered learning process are to create study groups consisting of 3-4 students, role play, discovery learning, and contextual learning.
As for students' mathematics learning outcomes during this limited face-to-face learning period, student scores were satisfactory. This is encouraged because learning activities that bring teachers and students together directly in one place make interaction and the encouragement to learn greater.
CONCLUSIONS AND RECOMMENDATIONS
The learning process that is carried out does not divide student study time into shifts because there are only 15 students in one class; the school complies with health protocols in the education unit and region; and the mathematics learning outcomes of the fifth grade public elementary school students in limited face-to-face learning can exceed the KKM that has been determined. Things that support student learning outcomes in achieving the KKM are the active attitude of the teacher in conveying the subject matter and students who actively ask questions and cooperate in helping friends who do not understand the material. In the learning process the teacher conveys the material briefly, concisely and clearly, adjusting to lesson hours during limited face-to-face learning, so that the set learning objectives are achieved efficiently. Apart from that, outcomes are also supported by the enthusiasm that has grown again after the learning loss experienced during the online learning period, and by delivery of material that is clear and easy for students to understand.
However, the learning process is still teacher-centered; this needs to be developed into a student-centered process so that learning is more meaningful for students, and so that students are given the freedom to explore knowledge independently, in greater depth, and to improve their self-quality.
Role of abatacept in the prevention of graft-versus-host disease: current perspectives
Administration of abatacept following transplantation has been reported to inhibit graft rejection and graft-versus-host disease (GvHD) in mouse models of allogeneic hematopoietic stem cell transplant (HSCT). This strategy has recently been adopted in clinical practice for GvHD prevention in human allogeneic HSCT and offers a unique approach to optimizing GvHD prophylaxis following alternative donor HSCTs. When combined with calcineurin inhibitors and methotrexate, abatacept has been shown to be safe and effective in preventing moderate to severe acute GvHD in myeloablative HSCT using human leukocyte antigen (HLA) unrelated donors. Equivalent results are being reported in recent studies using alternative donors, in reduced-intensity conditioning HSCT, and in nonmalignant disorders. These observations have led to the hypothesis that, even in the setting of increasing donor HLA disparity, abatacept given with traditional GvHD prophylaxis does not worsen general outcomes. In addition, in limited studies, abatacept has been protective against the development of chronic GvHD through extended dosing and in the treatment of steroid-refractory chronic GvHD. This review summarizes the limited reports of this novel approach in the HSCT setting.
Introduction
Graft-versus-host disease (GvHD) is still a major problem in patients undergoing allogeneic hematopoietic stem cell transplantation (HSCT). 1 It results from immune reactions of donor T-cells toward dissimilar host histocompatibility antigens. 2 Traditionally, unrelated or partially HLA-mismatched HSCT has been reported to result in an increased risk of severe GvHD, in addition to graft failure, profound immune dysregulation, and non-relapse mortality, hence limiting the use of alternative donors. 3 Thus, strategies to prevent GvHD are essential to ensure successful results of unrelated allogeneic HSCT. Conventionally, GvHD prophylaxis includes a calcineurin inhibitor (CNI) combined with a short course of methotrexate (MTX). 4 Anti-lymphocyte antibodies, either polyclonal (anti-thymocyte globulin) or monoclonal (alemtuzumab), are also used as GvHD prophylaxis due to their effects on T-cell surface antigens or in vivo T-cell depletion by depleting CD4 lymphocytes. [5][6][7] Recent progress in GvHD pathophysiology research has supplied comprehensive knowledge of the associated signaling pathways, leading to the development of targeted agents which are under study (Phase II and III trials). 8 Primarily, aGvHD is mediated by alloreactive T-lymphocytes. Therefore, several treatment approaches have been developed to target donor T-cell activation, which is achieved through two stimulatory signals (Figure 1). The first signal happens through the T-cell receptor (TCR). The TCR recognizes the antigen and is HLA restricted, but this is not enough to ensure complete activation of the T-cells. The second stimulatory signal, also known as co-stimulation, is mediated by various molecules, especially those expressed on antigen-presenting cells (APCs), such as adhesion molecules like LFA-1, the TNF receptor, and the B7-CD28 family. 9 This signal is necessary to stimulate cytokine secretion, T-cell proliferation, and effector function after TCR activation, and is controlled by various inhibitory molecules such as programmed death-1 (PD-1) and cytotoxic T-lymphocyte antigen 4 (CTLA-4). 10 Since co-stimulation is fundamental to most functions of T-cells, incomplete or improper activation can make T-cells unresponsive or die due to programmed cell death (apoptosis). 10 Therefore, regulation of co-stimulatory and co-inhibitory signals presents novel approaches to the prevention of GvHD, such as blocking the CD28/CTLA-4 axis. 11 Abatacept is a recombinant soluble fusion protein that inhibits antibody-dependent, cell-mediated cytotoxicity and/or complement fixation. 12 It consists of the extracellular domain of human cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) connected to the modified Fc (hinge, CH2, and CH3 domains) of human immunoglobulin G1. The CTLA-4 domain binds to CD80 and CD86 (co-stimulatory receptors) on APCs with a higher affinity than CD28 (their native co-stimulatory ligand). This binding attenuates T-cell activation, offering the underlying principle for abatacept as GvHD prophylaxis, given the abundance of evidence that GvHD is driven by the activities of CD4+ and CD8+ T-cells 4,11,12 (Figure 1). The effect of abatacept treatment on T-cell subset populations has been investigated in patients with rheumatoid arthritis (RA). [13][14][15][16][17] However, these studies are limited and conflicting, due to the different time points at which T-cell subset frequencies were analyzed and the different patient cohorts. Picchianti et al.
analyzed the frequency of T-cell subsets and T regulatory cell (Treg) inhibitory function in 20 RA patients who did not respond to a TNF-α blocking agent and then received abatacept with methotrexate. Immune studies were done before and 6 months after therapy. Abatacept therapy was able to rescue immune function and led to an effective and safe clinical outcome. 13 In an observational cohort study, Conigliaro et al. reported their findings on 48 RA patients treated with abatacept. All clinical data were collected at baseline and after 3 months of treatment. The percentage and the absolute number of CD3+ CD4+ CD45+ (helper) T-cells did not show any significant difference after the treatment, but the percentage and absolute number of CD3+ CD8+ CD45+ (cytotoxic) T-cells significantly decreased after 3 months of abatacept treatment. 14 Alvarez-Quiroga et al. 15 described an enhanced suppressive ability of Treg cells isolated from the periphery after abatacept therapy; in contrast, Pieper et al. 16 [...]. 19 This article will address abatacept's development and clinical applications in GvHD treatment. This article will also review the results of various clinical trials studying this treatment approach in HLA-matched and mismatched allogeneic HSCT.
Preclinical studies
Blazar and colleagues were the first to show that in vivo infusion of recombinant soluble CTLA4 linked to the Fc of Ig could prevent effective activation of T-cells, thereby reducing the severity of GvHD. 20 The Ig heavy chains serve as a substitute ligand to block CD28/CTLA-4 co-stimulation. In the study, lethally irradiated B10.BR recipients of major histocompatibility complex disparate C57BL/6 donor grafts received intraperitoneal injections of human CTLA4-Ig (hCTLA4-Ig) or murine CTLA4-Ig (mCTLA4-Ig) at different doses and schedules after undergoing bone marrow transplantation (BMT) (on day -1 or day 0). The mice injected with CTLA4-Ig showed survival rates of up to 67% at three months after BMT, while untreated recipients had a 0% survival rate. There was no difference between recipients of hCTLA4-Ig and those injected with mCTLA4-Ig. Thymic flow cytometry analysis did not show any reduction in the absolute number of mature CD3+ CD4+ CD8− T-cells. In addition, flow cytometry studies showed that CD8+ T-cell repopulation was not inhibited by hCTLA4-Ig injection. CD8+ T-cells were the predominant T-cell population at all time periods post-BMT in hCTLA4-Ig-treated mice, even though the donor spleen used to generate GvHD has twofold more CD4+ than CD8+ T-cells. They concluded that CTLA4-Ig consistently and significantly decreases lethal GvHD in murine recipients of fully allogeneic donor cells. However, because GvHD prevention was incomplete, Blazar et al. 20 suggested combining CTLA4-Ig administration with other agents that block co-stimulatory ability to optimize the effects of CTLA4-Ig in preventing GvHD.
Comparable results were reported by Wallace et al., 21 who used CTLA4Ig to increase the survival rate of lethally irradiated (C57BL/6 X DBA/2) F1 recipient mice after injection of parent C57BL/6 bone marrow and spleen cells. They found that short courses of CTLA4Ig extended the survival of recipients after BMT, even when the treatment was delayed until 6 days post-BMT. Wallace and colleagues concluded that the severity of aGvHD seems to be more reliant on the CD28/CTLA-4 co-stimulation pathway.
Furthermore, Miller et al. 22 [...] achieve stable chimerism without increasing cytoreductive toxicities in the host. [23][24][25][26] All the above preclinical studies formed the basis for the hypothesis that blocking this co-stimulatory signal using short courses of treatment with CTLA4Ig (abatacept) can reduce the incidence of acute GvHD (aGvHD) in patients following HSCT.
Clinical transition of abatacept in treatment of GvHD: initial studies
Koura et al. 27 carried out a feasibility study in humans and documented promising results using traditional GvHD prophylaxis with abatacept in 10 pediatric and adult patients with leukemia. All patients underwent unrelated HSCT. Six donor-recipient pairs were 7/8 HLA-matched, while four were 8/8 HLA-matched (HLA-A, HLA-B, HLA-C, and HLA-DRB1 loci). Subjects were conditioned with either total body irradiation (TBI) (1200 cGy) + cyclophosphamide (Cy) (120 mg/kg); busulfan (Bu) (900-1300 µmol·min/L for each of 16 doses) + Cy (120 mg/kg); or fludarabine (Flu) (125 mg/m2) + melphalan (Mel) (140 mg/m2). Cyclosporine was started 3 days prior to transplant, with doses titrated to maintain a trough level of 100 to 300 ng/mL, and continued at full dose up to 100 days after the HSCT. Methotrexate was given at 15 mg/m2 on day +1 and 10 mg/m2 on days +3, +6, and +11. Abatacept was given intravenously over 30 min at 10 mg/kg (maximum dose, 100 mg) on days -1, +5, +14, and +28. In their results, Koura and colleagues noted that the median time to neutrophil engraftment was 16.5 days. Patients had a reduced rate of aGvHD, with a 20% rate of grades II to IV and an impressive 10% rate of grades III and IV, even with robust immune reconstitution. No graft failures, no deaths due to infection, and no cases of transplant-associated mortality were recorded. Seven out of 10 patients survived to a median follow-up of 16 months. Koura et al. further observed that blocking co-stimulation using abatacept could impact the activation and proliferation of CD4+ T-cells after transplantation. They concluded that using abatacept against aGvHD in individuals undergoing unrelated-donor HSCT was feasible and encouraging. 27 This report served as proof of concept for further studies in patients with hematologic malignancies and those with nonmalignant hematologic diseases.
Extended studies: malignant
Watkins et al. further explored Koura et al.'s 27 proof-of-concept observations. In a Phase II trial (ABA2, NCT01743131), they investigated the role of abatacept in reducing aGvHD after unrelated donor HSCT in malignant disorders. 28 The study involved pediatric and adult patients with hematologic malignancies grouped into two categories: a randomized, double-blind, placebo-controlled group with 8/8-HLA-matched unrelated donors (MUD) and a single-arm group with 7/8-HLA-mismatched unrelated donors (MMUD). [...]

In a retrospective study, Khandelwal et al. 33 explored the role of adding abatacept to reduce the severity of aGvHD in 32 children with beta-thalassemia major transplanted at their institution. All patients received a myeloablative conditioning regimen comprising busulfan given daily for four days according to pharmacokinetic-targeted dosing, fludarabine, and thiotepa intravenously. In the study, they compared the clinical outcomes of eight patients who received standard GvHD prophylaxis, which included a calcineurin inhibitor combined with corticosteroids (1 mg/kg/d from 1 day after the HSCT to day +28), with 24 patients who received abatacept at a dose of 10 mg/kg (maximum dose 100 mg) intravenously on days −1, +5, +14, and +28 following stem cell infusion in addition to their standard GvHD prophylaxis. Donor types were similar in both groups (63% related donors and 37% unrelated donors). With no difference in platelet and neutrophil engraftment between the groups, the rate of aGvHD was 50% in the standard GvHD prophylaxis group versus 0% in the group receiving standard prophylaxis with abatacept (p = 0.001); rates of chronic GvHD (25% versus 25%, p = 1) and viral reactivation (62.5% versus 83%, p = 0.3) were similar. Overall survival at 1 year was 62.5% in the standard GvHD prophylaxis group versus 100% in the group that also received abatacept (p = 0.007). They therefore concluded that adding abatacept to routine GvHD prophylaxis can reduce the incidence of aGvHD post-HSCT with durable engraftment and improved survival. 33 In 2017, Jaiswal et al. 34 reported their experience using abatacept in severe aplastic anemia (SAA) following HLA-mismatched haploidentical HSCT. They rationalized that in haploidentical transplants, adding abatacept prior to graft infusion would eliminate the predominant alloreactive T-cell population, and that the minority of abatacept-resistant T-cells which might be activated during the 72-hour window could be effectively eliminated by PTCy. In addition, they postulated that combining sirolimus and abatacept might enhance transplantation tolerance via Tregs. They conducted a retrospective study comparing two different GvHD prophylaxis approaches in pediatric patients. The conditioning regimen used in both groups comprised fludarabine, low-dose Cy, melphalan, and anti-thymocyte globulin (ATG). In the control group (same-site historical control), GvHD prophylaxis consisted of post-transplantation cyclophosphamide (PTCy) at 50 mg/kg on days +3 and +4 with sirolimus from day −7 (with trough levels of 8-14 ng/mL on day 0) until 9 months, in addition to cyclosporine (CSA) and mycophenolate mofetil (MMF). In the study group, CSA and MMF were replaced with the co-stimulation blockade agent abatacept (COSBL group). Abatacept was administered at 10 mg/kg on days −1, +5, +20, and +35 and then every 4 weeks until day +180.
Ten patients with a median age of 12 years were in the COSBL group, compared with 10 patients with a median age of 10 years in the control group. There was a rapid and sustained recovery of Tregs (CD4+ CD25+ CD127dim/−) in the COSBL group compared with the control group. The incidence of aGvHD was 10.5% in the COSBL group compared with 50% in the control group (p = 0.04); chronic GvHD was 12.5% versus 56% (p = 0.02) and CMV reactivation 30% versus 80% (p = 0.03). Overall survival at 1 year in the COSBL group was 88.9% versus 50% in the control group (p = 0.09). They concluded that abatacept combined with PTCy and sirolimus might augment transplantation tolerance and reduce aGvHD in children with SAA. 34 In another study from the same group, Jaiswal et al. 35 reported their experience, this time in patients with thalassemia major (TM, n = 5) and sickle cell disease (n = 5), aged 3 to 19 years. This small cohort of patients underwent pretransplant immunosuppressive therapy for ten weeks. Conditioning was myeloablative, and abatacept was given every 2 weeks during treatment, on days −1, +5, +20, and +35, and every 4 weeks thereafter for 6 months, together with sirolimus. In addition, a short course of low-dose dexamethasone was administered from day +6 for 2 weeks. Jaiswal and colleagues observed that nine patients engrafted at a median of 15 days, with one patient dying of sepsis on day +19. No acute or chronic GvHD was documented in the study. Only four patients were reported to have cytomegalovirus reactivation. All remaining nine patients are still alive and free of disease at a median follow-up of 28 months. 35 Finally, in another study highlighting the effects of abatacept in nonmalignant HSCT, Chaudhury et al. 36 reported their initial experience in an ongoing multicenter trial through the Sickle Transplant Alliance for Research (STAR), looking at the use of abatacept in pediatric patients with sickle cell disease at elevated risk of GvHD. They used a RIC combination of distal alemtuzumab, fludarabine, thiotepa, and melphalan.
The T-cell-replete bone marrow grafts were obtained from matched related (n = 8) or unrelated (n = 5) donors. Abatacept was administered at 10 mg/kg/dose intravenously on days −1, +5, +14, and +28 in addition to standard GvHD prophylaxis involving tacrolimus and methotrexate. After a median follow-up of 8 months, the first 13 recruited patients were alive with no acute or chronic GvHD reported, and three are now off immune suppression. 36
Using abatacept to treat and prevent chronic GvHD
In a preclinical study using mouse models, Via et al. 37 showed that CTLA4Ig administered early can prevent the development of acute and chronic GvHD by inhibiting the activation of donor T-cells. On the other hand, delayed administration of CTLA4Ig after the development of T-helper type 1 and 2 effector responses (day 7) had no impact on aGvHD. However, this delayed administration was noted to reverse cGvHD, as shown by fewer donor CD4 memory T-cells, reduced donor T-cell expression of CD40 ligand, normal host B cell numbers, and normal serum levels of auto-antibodies. 37 Watkins et al. 28 showed that, although abatacept reduced the incidence of aGvHD in ABA2 patients, the 4-dose schedule of abatacept did not improve cGvHD prevention. Koura and colleagues had earlier shown that abatacept-treated patients, compared with control patients, had a profound decrease in the absolute number and relative percentage of CD4+ T-cells, but not CD8+ cells, early after transplantation. This decrease was clear in both unfractionated T-effector memory and T central memory subsets. However, by day +60 post-HSCT, these differences were no longer seen between the two cohorts. There was also a decrease in the number of CD4+/CD25high/CD127low/FoxP3+ putative Treg cells in the abatacept arm compared with control. As FoxP3 can also mark proliferating and activated CD4+ T-cells, the authors could not determine whether this was due to a true difference in functional regulatory cells. In addition, this difference in FoxP3+ cells was transient and confined to early time points post-HSCT. 27 Chronic GvHD is driven in part by host-reactive T-cells stimulated by allogeneic antigens, and these early findings imply that extending abatacept beyond the 4-dose schedule may continue to suppress CD4 memory T-cells, improving cGvHD prevention. Based on this rationale, Jaiswal et al. extended abatacept dosing in the studies described above. In a recent Phase I clinical trial, abatacept was used to treat patients with steroid-refractory cGvHD. These patients were treated with two increasing doses of abatacept administered at 3 and 10 mg/kg in a 3 + 3 design, with an expansion cohort given only 10 mg/kg. The results of the study showed abatacept to be safe. The results also showed improved chronic GvHD scores (44%) and a significant reduction (51.3%) in prednisone use. The sites with considerable improvement among the 16 patients studied were the gastrointestinal tract (40%) and mouth (42%), followed by joints, skin, and lungs. Remarkably, a full recovery of grade II pulmonary cGvHD was reported in one patient. 38 In another retrospective study, 15 patients (median age 49 years) who underwent HSCT and received abatacept for cGvHD were analyzed. They reported an overall response rate of 40%, mostly in patients with lung GvHD (bronchiolitis obliterans syndrome). Abatacept was noted to produce significant, durable clinical improvement, as measured by an 89% improvement in lung severity score or lung function measured by pulmonary function testing. 39
Conclusion
Together, these clinical results show that abatacept has effectively evolved from the bench to the bedside in HSCT (Figure 2). These early clinical studies, though limited by their sample sizes, demonstrate the potential abatacept may have in helping alleviate the negative impacts associated with HLA disparity in transplantation, in both malignant and nonmalignant disorders, regardless of conditioning type. In a recent registry study, an increased incidence of GvHD and inferior outcomes was reported in patients receiving haploidentical HSCT with PTCy, tacrolimus, and mycophenolate mofetil for GvHD prevention, as opposed to matched unrelated donor HSCT with PTCy-based GvHD prevention, signaling a need for improvement. 40 Future studies should explore extensively the use of abatacept in conjunction with post-transplant cyclophosphamide, and compare it with ATG for GvHD prevention. It has also been noted that there is little to no effect on cGvHD incidence with just four doses of abatacept, suggesting that adding more doses of abatacept may also prevent moderate to severe cGvHD. Although reports from these limited studies have been convincing, they call for further studies, especially in preventing cGvHD. Several ongoing and previously completed clinical trials are focusing on expanding and bridging the knowledge gap on this novel approach of using abatacept in transplantation. Table 1 provides a summary of ongoing active clinical trials based on clinicaltrials.gov. In combination with other immunosuppressive agents, abatacept has supplied a practical and safe pharmacologic choice for GvHD prevention in malignant and nonmalignant diseases using HLA-matched or alternative donors. However, there are some limitations to its effectiveness; speculation remains on how abatacept, as an effective aGvHD prophylaxis, could impact relapse rates in malignant disorders. As successful studies of this novel approach increase, this will help ensure that effective and prompt HSCTs are accessible to everyone, including populations traditionally lacking donors, such as patients with hemoglobinopathies.
Hybrid filtering to rescue stable oscillations from noise-induced chaos in continuous cultures of budding yeast
In large-scale fermentations with oscillating microbial cultures, noise is commonly present in the feed stream(s). As this can destabilize the oscillations and even generate chaotic behavior, noise filters are employed. Here three types of filters were compared by applying them to a noise-affected continuous culture of Saccharomyces cerevisiae with chaotic oscillations. The aim was to restore the original noise-free stable oscillations. An extended Kalman filter was found to be the least efficient, a neural filter was better and a combined hybrid filter was the best. In addition, better filtering of noise was achieved in the dilution rate than in the oxygen mass transfer coefficient. These results suggest the use of hybrid filters with the dilution rate as the manipulated variable for bioreactor control.
Nomenclature
C — intracellular storage carbohydrate concentration (g L⁻¹)
D — dilution rate (h⁻¹)
e_i — key enzyme concentration for the ith pathway (g g⁻¹ biomass)
E — ethanol concentration in the bioreactor (g L⁻¹)
G — glucose concentration in the bioreactor (g L⁻¹)
G_0 — glucose concentration in the feed stream (g L⁻¹)
k_L a — oxygen mass transfer coefficient (h⁻¹)
K_i — Michaelis constant for the ith pathway (g L⁻¹)
K_O2, K_O3 — oxidative pathway oxygen saturation constants (mg L⁻¹)
O — dissolved-oxygen concentration in the bioreactor (mg L⁻¹)
O* — dissolved-oxygen solubility limit (mg L⁻¹)
r_i — biomass growth rate on the ith pathway (h⁻¹)
S_i — carbon substrate concentration for the ith pathway (g L⁻¹)
T — elapsed time (h)
u_i — cybernetic variable controlling key enzyme synthesis for the ith pathway (–)
v_i — cybernetic variable controlling key enzyme activity for the ith pathway (–)
X — biomass concentration in the bioreactor (g L⁻¹)
Y_i — yield coefficient for the ith pathway (g biomass g⁻¹ substrate)

Greek letters
α — specific enzyme synthesis rate (h⁻¹)
α* — constitutive enzyme synthesis rate (g h⁻¹)
β — specific enzyme degradation rate (h⁻¹)
φ_i — stoichiometric coefficient for the ith carbon substrate (–)
γ_i — stoichiometric coefficients for storage carbohydrate synthesis and degradation (–)
μ_i — specific growth rate of biomass on the ith substrate (h⁻¹)
μ_i,max — maximum specific growth rate on the ith substrate (h⁻¹)
Introduction
Biochemical and metabolic processes within cells, and transport of nutrients and products across cell walls, are closely linked with observations of sustained oscillations in continuous cultures of the budding yeast Saccharomyces cerevisiae (Beuse et al., 1993; Duboc et al., 1996; Wolf et al., 2001). The occurrence and type of oscillations depend on the operating conditions, mainly the dilution rate and the rate of transport of oxygen into the culture broth (Beuse et al., 1993; Jones & Kompala, 1999).
Although it is possible to maintain prolonged oscillations of a particular type in well-controlled disturbance-free laboratory-scale bioreactors, under more realistic conditions the infiltration of noise from the environment distorts the oscillations. Oscillatory behavior in such situations then shows fluctuations around the (unobservable) deterministic (noise-free) profiles. The intrinsic stable oscillations are not just camouflaged by the fluctuations but may even be driven to chaotic behavior if the noise becomes sufficiently intense. Noise carried by feed streams is common in large-scale continuous fermentations (Rohner & Meyer, 1995). The recovery of stable, observable oscillations from chaotic data is therefore important in understanding and controlling the process, and there are continuing efforts to achieve this (Sinha, 1997).
Noise filters of different kinds have been employed. They are broadly of two kinds: algorithmic and non-algorithmic. The former are more common, and many of these have been described by Nelles (2000). The performances of those applicable to bioreactors have been studied recently (Patnaik, 2003a). Algorithmic filters require a reliable model of the process, have limited adaptability to time-dependent noise and, for the latter reason, can be difficult to optimize on-line for complex biological processes. Non-algorithmic filters, mainly based on neural networks and fuzzy logic, are more flexible, do not require a model and can be programmed for automatic on-line tuning. However, algorithmic filters reflect more faithfully the key features of a (fermentation) process, whereas neural networks are 'black box' devices that can sometimes be difficult to train before the real application (Nelles, 2000).
As neural networks are more effective than algorithmic filters in retrieving stable oscillations from noise-distorted behavior (Patnaik, 2003a), it is reasonable to expect a combination of the two to combine the effectiveness of a neural filter with the mechanistic fidelity of an algorithmic filter. This concept is also motivated by its success in simulating and controlling bioreactors with imperfect mixing and inflow of noise (Patnaik, 2003b). In a hybrid model, an algorithmic and a non-algorithmic (neural) filter operate in tandem, either independently or interactively. Details about this filter and the cultivation process are provided in later sections.
In this study, a hybrid neural filter is compared with a pure neural filter and with the most common algorithmic filter, the extended Kalman filter (EKF), to assess their abilities to rescue stable noise-free oscillations from the chaotic oscillations induced by noise in continuous cultures of S. cerevisiae. As explained here, clear oscillatory behavior provides metabolic information and is of significance for the bioprocess.
Fermentation description and data generation
Many experimental studies (reviewed by Patnaik, 2003c) have reported sustained oscillations of different types in continuous fermentations with Saccharomyces cerevisiae. However, all of them have used small laboratory-scale bioreactors that are operated under well-controlled and sanitized conditions, free from the disturbances that inevitably occur on an industrial scale (Rohner & Meyer, 1995). As noise-affected realistic data were required for this study, and commercial and proprietary considerations restrict the availability and public disclosure of industrial data, data simulating industrial operation were generated by 'corrupting' a mathematical model validated with laboratory data, adding noise to the substrate feed stream. The rationale and usefulness of this approach have been described and justified in many previous studies (Simutis & Lubbert, 1997; Chen & Rollins, 2000; Patnaik, 2003a, b).
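To make the data-generation step concrete, the sketch below shows one way the noise 'corruption' might be superimposed on the feed stream during numerical integration. It is a minimal illustration, not the authors' code: the nominal values, the step size, and the reading of "5% variance" as a relative noise magnitude are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative nominal operating values (not taken from the paper)
D_nominal = 0.16   # dilution rate (1/h)
dt = 0.01          # integration step (h)
n_steps = 20_000

def noisy_feed(nominal, rel_noise=0.05):
    """Superimpose zero-mean Gaussian noise on a nominal feed value.
    '5% variance' is read here as a noise magnitude of 5% of the
    nominal value; other readings only change the scale factor."""
    return nominal + rng.normal(0.0, rel_noise * nominal)

# Inside the integration loop, the feed terms of the mass balances
# use the perturbed value at each step:
for _ in range(n_steps):
    D = noisy_feed(D_nominal)  # perturbed dilution rate for this step
    # dX/dt = (mu - D) * X ;  dG/dt = D * (G0 - G) - uptake ; ...
    # (state updates omitted -- this sketch shows only the noise injection)
```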
As in previous studies (Patnaik, 2003a(Patnaik, , 2004a(Patnaik, , 2005, a model developed by Jones & Kompala (1999) was used to generate noise-free and noise-affected performance data. These studies have shown that Gaussian noise with even 5% variance in the inflow rate of the substrate feed stream can generate chaotic oscillations, except at large dilution rates and small mass transfer coefficients of oxygen. This variance is typical of the noise present in production processes (DiMassimo et al., 1992;Rohner & Meyer, 1995), and its chaotic effect may be observed both through the concentration profiles (Patnaik, 2005) and their Lyapunov coefficients (Patnaik, 2003a(Patnaik, , 2004a. The Jones-Kompala model was solved without noise and by adding Gaussian noise with 5% variance to the inflow rate of the substrate feed stream. The Jones-Kompala model was chosen because it expresses in a simple and adequate manner most of the key features of the oscillations observed in continuous cultures of S. cerevisiae. It also departs from most other models in a fundamental way. Whereas most other models are mechanistic, that of Jones & Kompala (1999) adopts a cybernetic perspective. The cybernetic approach (Ramkrishna et al., 1987) attributes to microorganisms the ability to decide and utilize optimally the available resources so as to maximize their own survival. The optimality is usually expressed mathematically by maximization of the growth rate. In a sense, cybernetic modeling is a formalization of a well-established evolutionary concept, and it incorporates regulatory processes within the cells, which mechanistic models do not.
With glucose as the carbon source, S. cerevisiae may follow one or more of three pathways in continuous cultures (Satroutdinov et al., 1992; Duboc et al., 1996): glucose fermentation, ethanol oxidation and glucose oxidation. When sufficient glucose is present, the organism grows on glucose and produces ethanol. However, in a glucose-depleted medium, S. cerevisiae utilizes ethanol as the carbon source. The fermentative pathway is then not followed and purely respiratory oscillations occur (Keulers et al., 1996), but their time periods are of a few minutes, whereas those with glucose cover hours or days as the feed concentration of glucose increases (Satroutdinov et al., 1992; Beuse et al., 1993; Bai et al., 2004). These short-cycle ultradian oscillations are also mechanistically different from the longer circadian oscillations observed with large bioreactors. Moreover, growth on ethanol is not of industrial interest because the main objective is to produce ethanol.
Although ethanol is produced predominantly under anaerobic conditions in batch cultures, oscillating continuous fermentations generate ethanol in certain ranges of the dissolved oxygen concentration and the gas-liquid mass transfer coefficient (Satroutdinov et al., 1992; Patnaik, 2003c; Bai et al., 2004). Jones & Kompala (1999) postulated that dynamic competition among the pathways, according to the culture conditions, was the main cause of oscillations. They formulated cybernetic equations for each pathway and conditions for switching from one pathway to another, and showed that by manipulating the dilution rate and the mass transfer coefficient it is possible to change the occurrence and the type of oscillations.
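The cybernetic control can be illustrated compactly. In the standard cybernetic formulation, the variables u_i and v_i follow the "matching law": resources for key enzyme synthesis are allocated in proportion to the pathway returns r_i, and enzyme activity is scaled relative to the best-performing pathway. The sketch below assumes these standard forms and uses illustrative Monod-type rates; the full Jones-Kompala model additionally weights each rate by the key enzyme level e_i and by oxygen saturation terms, which are omitted here.

```python
import numpy as np

def cybernetic_variables(r):
    """Matching-law cybernetic variables: u_i allocates enzyme-synthesis
    resources in proportion to the growth rate r_i of pathway i, and
    v_i modulates enzyme activity relative to the best pathway."""
    r = np.asarray(r, dtype=float)
    u = r / r.sum() if r.sum() > 0 else np.full_like(r, 1.0 / r.size)
    v = r / r.max() if r.max() > 0 else np.ones_like(r)
    return u, v

# Illustrative Monod-type rates for the three competing pathways:
# glucose fermentation, ethanol oxidation, glucose oxidation.
G, E = 0.5, 2.0                        # g/L glucose and ethanol (example state)
mu_max = np.array([0.44, 0.19, 0.36])  # h^-1, illustrative only
K = np.array([0.05, 0.01, 0.001])      # g/L, illustrative only
S = np.array([G, E, G])                # substrate seen by each pathway
r = mu_max * S / (K + S)               # enzyme and oxygen terms omitted
u, v = cybernetic_variables(r)         # which pathway dominates now?
```

As the state (G, E, O) evolves, u and v shift the allocation among pathways, and it is this switching that generates the oscillations the model describes.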
Both smoothly oscillating noise-free profiles and chaotic profiles generated by a noisy feed stream were employed in this study. Although the model includes eight component concentrations, four key measurable ones were studied: biomass, glucose, dissolved oxygen and ethanol. These variables provide sufficient insight into the nature and the mechanism of oscillations (Duboc et al., 1996; Wolf et al., 2001).
Evaluation of filter performance
In recent studies (Patnaik, 2003a, 2004a), it has been shown that the Lyapunov exponent is a compact and reliable measure of the ability of a filter to remove noise and restore nearly noise-free performance. A full description of the Lyapunov exponent is available elsewhere (Elert, 2000), so only a brief introduction sufficient for the present purpose is provided here.
Consider two trajectories in time. In our application these are a pair of time-domain concentrations of any variable, one trajectory that of a noise-free culture and the other that of the corresponding noise-distorted chaotic oscillation. Let x_0 be the value of a concentration just prior to the start (initial time t = 0) of a disturbance or noise signal, and let this value be displaced by Δx(x_0, t) as time progresses. The initial displacement is obviously Δx(x_0, 0). The mean exponential rate of divergence of the two trajectories is then calculated as

λ = lim_{t→∞} (1/t) ln |Δx(x_0, t) / Δx(x_0, 0)|     (1)

The number λ is called the Lyapunov exponent, and it applies to both continuous and discrete processes.
If λ < 0, the disturbed trajectory is eventually attracted to a stable periodic orbit. For oscillating cultures of the kind analyzed here, this means the concentration profiles return to their original stable oscillations after the effect of the noise has decayed or has been removed [by methods such as the use of filters (Patnaik, 2003b, 2004a)]; in the limit λ → −∞, the system is said to be super-stable, i.e. no disturbance of any magnitude can permanently displace the oscillations. By contrast, λ > 0 denotes an unstable and chaotic trajectory, which is the subject of the present investigation.
The intermediate situation of λ = 0 signifies a neutrally stable orbit. In the present context this means the disturbed oscillations and the original deterministic oscillations stay apart by a constant mean distance for an indefinite duration until perturbed again. Such a system is said to be Lyapunov-stable.
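To make the calculation concrete, the following Python sketch estimates λ from a pair of equally sampled trajectories, following eqn (1); the function name, the small guard against ln(0) and the simple time-averaging are illustrative assumptions, not part of the original studies.

```python
import numpy as np

def lyapunov_exponent(x_ref, x_pert, dt):
    """Estimate the Lyapunov exponent from a noise-free reference
    trajectory and the corresponding noise-distorted trajectory,
    both sampled at a fixed interval dt (eqn (1))."""
    dx = np.abs(np.asarray(x_pert) - np.asarray(x_ref))
    dx = np.where(dx < 1e-12, 1e-12, dx)   # guard against ln(0)
    t = dt * np.arange(1, len(dx))         # elapsed time, skipping t = 0
    # average exponential growth rate of the separation
    return np.mean(np.log(dx[1:] / dx[0]) / t)

# lambda > 0: noise-induced chaos; lambda < 0: return to the stable
# periodic orbit; lambda = 0: neutrally (Lyapunov-) stable orbit.
```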
The extended Kalman filter (EKF) is a widely preferred algorithmic filter for bioprocess data analysis and monitoring under noise-affected conditions (Karjala & Himmelblau, 1994; Zorzetto & Wilson, 1996; Simutis & Lubbert, 1997) and it was therefore used for comparisons in the present work. The EKF performs well for Saccharomyces cerevisiae oscillations (Patnaik, 2004b), but under limited conditions and with computational rigidity. It has been shown previously (Patnaik, 2003a) that neural networks can overcome some of these weaknesses. Among different configurations, an autoassociative (AA) neural filter was selected as the best. This choice is also physically reasonable because a noise filter receives and generates the same variables after suitable processing, and an AA network has generic compatibility with this kind of processing. However, a neural filter, being essentially a 'black box' input-output mapping device, may be limited by difficulties in training, computational costs and extrapolation capability. So, to combine the advantages and reduce the weaknesses of the two kinds of filters, a hybrid filter was created by combining a neural filter and an EKF as shown in Fig. 1. Variables that have weak noise or weak influences on the fermentation can be processed by the EKF, leaving the strongly noise-affected variables to the neural filter. Figure 1 also allows information flow between the filters; the hatches across the arrows indicate that information transfer in either direction is optional. This configuration is among those recommended by Schubert et al. (1994), but the two-way internal flow of information has been added to accommodate the complexities of the intracellular biochemical reactions (Wolf et al., 2001) and to enhance flexibility.
Brief description of the EKFand the AA filter
The Kalman filter is a set of mathematical equations that provides an efficient recursive solution of the least-squares type. The filter can provide estimates of past, present and future states of a system even when a precise model is not known. This feature is useful for microbial processes under non-ideal (realistic) conditions because models developed with laboratory data may become inapplicable or imprecise under the influence of disturbances and spatial gradients (Gillard & Tragardh, 1999; Shuler & Kargi, 2002).
The Kalman filter addresses the problem of trying to estimate the state x of a discrete-time controlled process that is governed by the linear difference equation:

x_k = A x_{k-1} + B u_{k-1} + w_{k-1}     (2)

with a measurement vector z that follows:

z_k = H x_k + v_k     (3)

where u is the control input and w and v are the process and measurement noise, respectively. In principle, the EKF determines the current estimates of a set of variables by linearization, using the partial derivatives of the process and measurement functions evaluated at the (known) previous instant of time. The detailed theory and equations are given in the literature (Stephanopoulos & Park, 1992; Grewal & Andrews, 1993; Welch & Bishop, 2004). Although eqns (2) and (3) are in discrete forms, whereas most biological processes are described by continuous models, this is not an impediment because, in practice, data are sampled at discrete points in time.
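For concreteness, a minimal Python sketch of one predict/update cycle of the discrete filter of eqns (2) and (3) follows; the matrix names follow standard Kalman notation, and in the EKF the matrices A and H would be the Jacobians of the nonlinear process and measurement functions evaluated at the previous estimate. This is an illustrative sketch, not the implementation used in the study.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle for x_k = A x_{k-1} + w_{k-1},
    z_k = H x_k + v_k, with process and measurement noise
    covariances Q and R (control input omitted for brevity)."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```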
An autoassociative neural network receives a set of inputs, processes them and generates transformed outputs of the same variables. The nature of processing or transformation depends on the application. In this study, processing involved reduction of the noise in the feed stream. Although the noise is present directly in the flow rate, it also affects other variables, as eqns (A1)-(A6) show, because they are mechanistically connected to the feed stream. Moreover, a neural filter is normally used in conjunction with a neural controller (Patnaik, 2003b), which uses output information to adjust the input variables continually. The liquid feed stream (of glucose) is characterized by its concentration and flow rate, and only the flow rate of air may be adjusted as its composition is fixed. Thus, the AA filter has three neurons each in the input and output layers, and the number of neurons in the hidden layer was adjusted until the output profiles were within 2% of the input profiles. As shown in a previous study with a purely neural filter (Patnaik, 2003a), the optimum number turned out to be two, thus generating the topology shown in Fig. 2.
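The 3-2-3 autoassociative topology can be sketched as a small NumPy autoencoder; the training loop below (plain gradient descent on the reconstruction error) is an illustrative assumption, since the original training details are not reproduced here. The two-neuron bottleneck is what forces the network to retain only the dominant, low-noise structure of the three inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-2-3 topology: inputs are the glucose feed concentration, the
# substrate flow rate and the air flow rate (scaled to order one).
W1 = rng.normal(scale=0.1, size=(2, 3)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.1, size=(3, 2)); b2 = np.zeros(3)

def forward(x):
    h = np.tanh(W1 @ x + b1)     # 2-neuron hidden (bottleneck) layer
    return W2 @ h + b2, h        # linear output layer

def train(samples, lr=0.01, epochs=500):
    """Autoassociative training: the target of each sample is the
    sample itself, so the filter learns to reproduce its inputs."""
    for _ in range(epochs):
        for x in samples:
            y, h = forward(x)
            e = y - x                          # reconstruction error
            dh = (W2.T @ e) * (1.0 - h**2)     # backprop through tanh
            W2 -= lr * np.outer(e, h); b2 -= lr * e
            W1 -= lr * np.outer(dh, x); b1 -= lr * dh
```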
As both the EKF and the AA neural network allow any arbitrary variation in the sampling interval, this may be varied according to the nature of the process. For instance, the interval may be made inversely proportional to the current concentration gradient, generating closely spaced data when the variations are steep and more widely separated points during mild variations (Patnaik, 1997).
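A simple realization of such gradient-dependent sampling is sketched below; the proportionality constant and the bounds on the interval are illustrative.

```python
import numpy as np

def adaptive_sample_times(t, x, k=1.0, dt_min=0.01, dt_max=1.0):
    """Return sample times whose spacing is inversely proportional
    to the local concentration gradient: dense sampling on steep
    transients, sparse sampling during mild variations."""
    grad = np.abs(np.gradient(x, t))
    times, ti = [t[0]], t[0]
    while ti < t[-1]:
        g = np.interp(ti, t, grad)             # local gradient magnitude
        ti += float(np.clip(k / (g + 1e-9), dt_min, dt_max))
        times.append(min(ti, t[-1]))
    return np.array(times)
```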
Earlier studies (Rohner & Meyer, 1995; Patnaik, 1997) have suggested that the feed stream is a major carrier of noise in continuous and fed-batch fermentations, and that white noise is the principal component of the observed fluctuations. To generate data simulating a noise-influenced oscillating culture, therefore, the equations in the Appendix were solved with the parameter values used by Jones & Kompala (1999) (see Table 1) and white noise in the flow rate of the substrate. Data from the simulated profiles were sampled at intervals inversely proportional to the local concentration gradients.
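A minimal sketch of the noise injection, assuming the 5% variance is taken relative to the nominal flow rate (this scaling convention is an assumption, as the source papers are not explicit about it here):

```python
import numpy as np

def noisy_feed_rate(F_nominal, n_steps, rel_variance=0.05, seed=1):
    """Gaussian white noise with the stated relative variance added
    to the substrate feed flow rate, one value per integration step."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(rel_variance) * F_nominal
    return F_nominal + rng.normal(0.0, sigma, size=n_steps)
```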
Application and discussion
To maintain consistency with earlier work (Patnaik, 2004a, b), the same case studies were chosen from Jones & Kompala (1999). They considered the effects of changes in the dilution rate and the gas-liquid mass transfer coefficient of oxygen on the occurrence and the nature of oscillations. These two variables are commonly used in control policies for continuous fermentations, with the dilution rate being preferred (Henson & Seborg, 1992; Dochain & Perrier, 1997).
Studies by Jones & Kompala (1999) and others (Beuse et al., 1993; Duboc et al., 1996) have shown that oscillations decay in both amplitude and frequency as the dilution rate is increased, whereas the mass transfer coefficient has the opposite effect. These studies in laboratory-scale bioreactors were not influenced by the inflow of noise and did not show any chaotic oscillations. In the present context this observation is significant. As there was no deterministic chaos, any chaos observed in the simulated data was due to noise alone. Thus, the Lyapunov exponents indicate purely noise-induced chaos and provide reliable comparisons of filtering devices for their abilities to rescue noise-free oscillations.
The progress of the Lyapunov exponents for four variables normally monitored for fermentation performance is compared in Figs 3 and 4, the former for the dilution rate and the latter for the mass transfer coefficient. Just as the deterministic oscillations decay with increasing dilution rate and amplify with increasing oxygen mass transfer coefficient, so do the corresponding Lyapunov exponents. This implies that strongly oscillating cultures are more likely to be destabilized and eventually driven to chaos by noise in the substrate feed stream. Both sets of figures also show that the Lyapunov exponents without noise are consistently smaller than zero, whereas those with noise are positive. These results corroborate the absence of chaos in noise-free experiments (Beuse et al., 1993; Duboc et al., 1996; Jones & Kompala, 1999). The effectiveness of different filtering devices is compared in Figs 5 and 6. Representative values of the dilution rate and the oxygen mass transfer coefficient were chosen from the work of Jones & Kompala (1999). All three types of filters eliminate noise significantly, but there are also equally significant improvements from an EKF to a neural filter to a hybrid filter. The EKF was chosen among the algorithmic filters because of its popularity in bioreactor applications as well as its suitability for oscillating fermentations (Patnaik, 2004b).

Fig. 3 Variations of the Lyapunov exponents with the dilution rate for noise-free (empty circles) and noise-distorted (filled circles) cultures.

Table 1 Parameter values of the Jones-Kompala model (Jones & Kompala, 1999)

Parameter   Units    Value
α           h⁻¹      1.0
α*          g h⁻¹    0.1
β           h⁻¹      0.2
γ₁          g g⁻¹    6.0
γ₂          g g⁻¹    6.0
γ₃          g g⁻¹    0.3
μ₁,max      h⁻¹      0.44
μ₂,max      h⁻¹      0.32
μ₃,max      h⁻¹      0.31
φ₁          g g⁻¹    0.27
φ₂          g g⁻¹    1.067
φ₃          g g⁻¹    2.087
Although the EKF removes a substantial part of the noise in the feed stream (as evident from the large reductions in the Lyapunov exponents), it does not always restore stable oscillations. For dilution rates of 0.10 h⁻¹ for glucose and dissolved oxygen, 0.10 and 0.13 h⁻¹ for ethanol, and all three dilution rates for biomass, the filtered exponents still remain positive, indicating residual chaos (Fig. 5). Similar results are seen at all three mass transfer coefficients for biomass and dissolved oxygen, whereas for ethanol the culture is just marginally stable at 275 and 325 h⁻¹, implying that a small perturbation can again generate chaos (Fig. 6).
Although the hybrid model, combining an EKF and a neural filter as in Fig. 1, creates the largest improvements toward noise-free stable oscillations, these improvements are generally much smaller for noise reductions in the mass transfer coefficients (Fig. 6) than in the dilution rates (Fig. 5). Even with a hybrid neural filter, oscillations in the biomass concentration remain precariously close to a relapse into the chaotic regime.
As the biomass concentration is the most severely affected by noise (Patnaik, 2005), the rates of convergence of all three filters in trying to restore stable oscillations in this variable are compared in Fig. 7. The relative performances correlate well with their transient learning abilities. This is characterized by the mean sum of squares of errors, defined as

E = (1/N) Σ_{j=1}^{N} (X_j^e − X_j^p)²     (4)

where X_j^e and X_j^p are respectively the 'experimental' (or simulated) and predicted values of a variable at the jth point in time, and N is the total number of data. Here, too, a hybrid neural filter outperforms a pure neural filter and an EKF. The relatively inferior performances of all noise filters for the biomass concentration when compared with other variables arise because the oscillations in this variable are disrupted more strongly by the inflow of noise than are the other concentrations. Likewise, better filtering of noise is possible in the dilution rates than in the oxygen mass transfer coefficients, which enhances the suitability of the dilution rate as a manipulated variable for bioreactor control. This further supports Henson & Seborg's (1992) recommendation to employ input-output linearizing control based on the dilution rate.
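The mean sum of squares of errors of eqn (4) is a one-line computation; a minimal NumPy version:

```python
import numpy as np

def msse(x_exp, x_pred):
    """Mean sum of squares of errors between 'experimental'
    (simulated) and filter-predicted profiles (eqn (4))."""
    x_exp, x_pred = np.asarray(x_exp), np.asarray(x_pred)
    return np.mean((x_exp - x_pred) ** 2)
```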
Conclusions
It is difficult to eliminate the flow of noise into large-scale fermentations. However, it is important to generate reasonably noise-free performance to identify and act upon salient features of the process and to avoid destabilization and degeneration into chaotic behavior.
The restoration of smooth oscillations from chaos induced by noise in the feed stream to continuous fermentations with Saccharomyces cerevisiae has been explored here. Three kinds of noise filter were investigated: the extended Kalman filter (EKF), an autoassociative (AA) neural filter, and a hybrid filter, which was a combination of these. The effectiveness of each filter was measured by calculating the Lyapunov exponents from the time-domain profiles of the variables of interest. Large positive exponents denote a preponderance of chaos, and large negative exponents indicate a return to stable periodic orbits. The EKF was the least efficient, the AA neural filter was better and the hybrid filter was the best. This underlines a fundamental weakness of algorithmic filters, of which the EKF is the most commonly used.
Two controllable variables determine the nature of S. cerevisiae oscillations: the dilution rate and the mass transfer coefficient of oxygen to the liquid. The performances of all three kinds of filter were better for the dilution rate (i.e. the flow rate of the substrate) than for the mass transfer coefficient. This observation and the superiority of a hybrid neural filter strengthen the preference for the dilution rate as a manipulated variable and of hybrid neural networks for non-ideal bioreactor simulation and control.
Inclusion of the term α* in the enzyme synthesis equations (A10) is based on Turner & Ramkrishna (1988), who have shown its importance in predicting the induction of enzymes that have been repressed for long durations; the specific growth rates in the model thus also include α*. Equation (A11) expresses the rate of change of the internal storage carbohydrates that are an integral part of the metabolism (Satroutdinov et al., 1992; Duboc et al., 1996).
The φ_i are the stoichiometric coefficients for the different substrates S_i, and the γ_i are similar coefficients for carbohydrate synthesis and consumption by the cells. Jones & Kompala (1999) may be consulted for a full discussion of the model. A point not clarified there is the identification of S_1, S_2 and S_3. Reference to eqns (A1)-(A3) shows that S_1 = G, S_2 = E and S_3 = G. This identification is needed to solve the model. The values of the parameters are listed in Table 1.
Differences in pectoral fin spine morphology between vocal and silent clades of catfishes (Order Siluriformes): Ecomorphological implications
Stridulatory sound-producing behavior is widespread across catfish families, but some are silent. To understand why, we compared spine morphology and ecotype of silent and vocal clades. We determined vocal ability of laboratory specimens during disturbance behavior. Vocal families had bony (not flexible or segmented) spines, well-developed anterior and/or posterior serrations, and statistically significantly longer spines. We compared morphology of the proximal end of the pectoral spine between vocal and silent species. For vocal taxa, microscopic rounded or bladed ridges or knobs were present on the dorsal process. Most silent species had reduced processes with exclusively smooth, convoluted, or honeycombed surfaces very similar to spine-locking surfaces, or they had novel surfaces (beaded, vacuolated, cobwebbed). Most callichthyids had ridges but many were silent during disturbance. All doradid, most auchenipterid and most mochokid species were vocal and had ridges or knobs. Within the Auchenipteridae, vocal species had spines with greater weight and serration development but not length. Silent auchenipterids had thin, brittle, distally segmented spines with few microscopic serrations on only one margin and a highly reduced dorsal process lacking any known vocal morphology. Silent auchenipterids are derived and pelagic, while all vocal genera are basal and benthopelagic. This is the first phylogenetic evidence for stridulation mechanism loss within catfishes. Phylogenetic mapping of vocal ability, spine condition, and ecotype revealed the repeated presence of silent and vocal taxa, short and long spines, and ecotype shifts within clades. The appearance and loss of vocal behavior and supporting morphologies may have facilitated diversification among catfishes [Current Zoology 56 (1): 73–89, 2010].
Studies of sound-producing behavior in catfishes (Teleostei: Siluriformes) highlight the importance of sound signals in reproductive and agonistic behavioral contexts (Kaatz, 2002; Fine and Ladich, 2003). Pfeiffer and Eisenberg (1965) hypothesized that catfishes with weaponized pectoral spines produce disturbance sounds as a form of acoustic aposematism, but an experimental study of one species did not support this hypothesis (Bosher et al., 2006). Disturbance sounds are produced when a catfish is physically restrained in a way similar to an interspecific or predatory attack and can indicate the presence of stridulation signaling in undisturbed intraspecific contexts (Kaatz, 1999). Heyd and Pfeiffer (2000) observed that chemical alarm signals were weakened or absent in species that were vocal during disturbance. These findings suggest that disturbance sounds could function as a vocal substitute for a chemical alarm signal. Thus, sound production is a widespread and potentially important aspect of catfish behavior. Determining the distribution and evolutionary patterns of vocal behavior and morphology in catfishes is essential to understanding communication in these fishes.
The phylogenetic distribution of vocal swimbladder mechanisms in catfishes suggests multiple independent origins within the order (Parmentier and Diogo, 2006). Repeated, isolated origins of sound production suggest patterns of vocal mechanism acquisition, elaboration and possible loss and reacquisition. In addition to having the ability to vocalize with their swimbladders, many catfishes use pectoral spines to produce stridulation sounds, also associated with disturbance, agonistic behavior, and male courtship display (Kaatz, 1999; Pruzsinszky and Ladich, 1998; Kaatz and Lobel, 1999; Fine and Ladich, 2003). The evolution of pectoral spine stridulation in catfishes is unexplored.
The structures employed for sound production in pectoral spine stridulating catfishes are part of a synapomorphic complex of characters that define the Order Siluriformes (Alexander, 1965). The functional role of this complex in catfishes is known to serve a locking function for a passive predator defense that deters gape-limited predators (Alexander, 1981). The structures involved are the pectoral girdle groove, the spine locking processes (Gainer, 1967), and, specifically, the dorsal process of the proximal end of the pectoral spine (Fine et al., 1997). The vocal mechanism includes microscopic bony ridges on the pectoral spine proximal surfaces that articulate with the pectoral girdle (Burkenroad, 1931; Agrawal and Sharma, 1965; Goel, 1966; Schachner and Schaller, 1981; Kaatz and Stewart, 1997; Fine et al., 1997; Teugels et al., 2001; Fabri et al., 2007; Parmentier et al., in press). These ridges are hypothesized to be responsible for the production of pulsed, broad-band frequency "creaking" sounds (Tavolga, 1960; Winn, 1964; Fine et al., 1997) and are analogous to the stridulatory mechanisms of vocally communicating arthropods (Ewing, 1989). Determining whether vocal morphology, the presence of ridges, is a reliable indicator of vocal ability would allow the mapping of vocal compared to non-vocal or silent taxa and provide insight into the evolution of this stridulation mechanism.
What are the evolutionary constraints or selection pressures that might lead to loss of vocal behavior and morphology within a vocal clade? While sound production appears to be a relatively specialized behavior among fishes (Demski et al., 1973; Ladich et al., 2006; Senter, 2008), reasons for the loss of vocal ability remain enigmatic. The causes of vocal mechanism loss have been examined in anurans (Martin, 1972), and sound production within an arthropod species can be lost rapidly (Zuk et al., 2006). Based on investigations of their vocal behavior or morphology, it has been found that more than ten families of vocal fishes include silent taxa (Moulton, 1958; Nelson, 1965; Hawkins and Rasmussen, 1978; Schuster, 1984-1985; Stewart, 1986; Chen and Mok, 1988; Kaatz, 1999; Ladich and Popper, 2001; Johnston and Vives, 2003). Absence of vocal ability was shown by a lack of muscles or bones specific to the vocalization mechanism in some species. Four of these families are catfishes, suggesting either ancestral absence or evolutionary loss of the swimbladder drumming mechanism in ariids (Kulongowski, 2001), pimelodids (Stewart, 1986), pangasiids (Parmentier and Diogo, 2006), and heptapterids (Heyd and Pfeiffer, 2000), although cladistic analysis, an explicit comparison of primitive versus derived taxa, was not conducted in all cases. Among fishes in general, many authors have speculated on the types of selection pressures that may produce silent lineages. The major areas these hypotheses cover are (reviewed in Kaatz, 1999): social behavior (Protasov et al., 1965), predator-prey interactions (Hawkins and Rasmussen, 1978), sensory ability (Ladich, 1999), and ecomorphology (Marshall, 1967). In this paper, we consider possible ecomorphological factors leading to the loss of sound production in catfishes.
Ecological selection pressures, such as differences between habitats, could affect fin spine morphology. Fin spine lengths differ between pelagic and littoral habitats in freshwater sunfishes (Robinson et al., 2008) and between marine shallow and deeper water ecotypes of groupers (Carvalho-Filho et al., 2009). Among catfishes, a shift from a bottom-dwelling habit to a burrowing habit in clariids correlates with a significant reduction of the pectoral spine, even to the point of complete loss in some individuals (Adriaens et al., 2002). A sub-benthic or burrowing habit thus appears to pose a constraint on using the pectoral spine for sound production. Multiple-use anatomical structures such as pectoral fins in catfishes play important functional roles in locomotion, brood care (Ochi and Yanagisawa, 2001), and defense, as well as in sound production (Fine and Ladich, 2003). We propose that there may be structural differences in the pectoral fin spine associated with these different functions and that some roles may conflict with others, imposing constraints on vocal mechanism design. Specifically, we hypothesize that shifts in pectoral fin use between different ecotypes may alter the use of the pectoral spine in vocal behavioral display, and we evaluate this hypothesis in this paper.
Literature review of pectoral spine distal morphology: Inter-familial variation
We conducted a survey of the vocal abilities of catfishes for all extant catfish families. The vocal or silent status of catfish families was based on previous reviews of the literature (Kaatz, 1999; Heyd and Pfeiffer, 2000). We categorized all species within a family as vocal if at least one species was known to be vocal. In order to determine whether there were any differences between the gross morphology of pectoral spines of vocal and silent catfishes, we conducted a literature review of gross spine morphology by obtaining measurements of pectoral spine lengths from descriptions of type specimens in the literature for 351 references and 993 species (Teugels, 1996; All Catfish Species Inventory Database, Sabaj et al., 2003-2006). We recognized the 34 living families cited in Ferraris (2007) and four additional living families (Auchenoglanididae, Heteropneustidae, Lacantuniidae, and Horabagridae) identified by molecular techniques (Sullivan et al., 2006; Lundberg et al., 2007).
From these publications we extracted the quantitative and qualitative morphology of the anterior-most lepidotrichium of the pectoral fin, henceforth referred to as the "pectoral spine", for each species. We noted pectoral spine length, fish specimen standard length (SL), and the location and development of the anterior and posterior serrations on the pectoral spine for each species. For most species only one data point (i.e., the holotype) was obtained, but when available, we also used the range for paratypes as reported in the literature. We also noted the predominant habitat specializations or ecotypes for the majority of species within each family as described in the literature.
We applied and extended the comparative technique and classification scheme of Fernandez (1980) for ranking families by pectoral spine length as a percentage of standard length (Appendix 1). Differences in spine length between vocal and silent taxa were evaluated with an analysis of covariance, with standard length as the covariate to account for differences in body size. This covariance analysis was performed using the JMP 5.0.1.2 statistical package (SAS Institute, Cary, NC).
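For illustration, an analysis of covariance of this form can be expressed with the statsmodels formula API; the data frame below is hypothetical, and only the model structure (vocal status as the factor, standard length as the covariate) reflects the analysis actually performed in JMP.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical rows: spine length (mm), standard length SL (mm),
# and the vocal status of the family (values invented for illustration).
df = pd.DataFrame({
    "spine_length": [28.0, 12.5, 43.0, 7.0, 35.5, 9.8],
    "SL":           [110.0, 95.0, 215.0, 88.0, 160.0, 120.0],
    "vocal":        ["vocal", "silent", "vocal", "silent", "vocal", "silent"],
})

# ANCOVA: spine length modeled on vocal status with SL as the covariate
model = smf.ols("spine_length ~ C(vocal) + SL", data=df).fit()
print(model.summary())
```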
Vocal disturbance behavior for laboratory specimens
The disturbance behavioral context in fishes, which simulates a predation attack and releases many fishes' agonistic vocal repertoires, provides a valuable tool for sampling fish sounds (Fish and Mowbray, 1970; Kaatz, 1999; Lin et al., 2007). In total, we evaluated 143 species in 23 families (Appendix 2). Disturbance sounds for vocal members of 81 species (for sample sizes see Kaatz, 1999) were recorded with a VHS Panasonic video camera while the fish was held by hand underwater within 3 to 6 cm of a suspended hydrophone (left side facing the hydrophone), either in the field or in a glass aquarium (see Kaatz, 1999; Kaatz and Lobel, 2001). For all remaining species, individuals were held in the air, and only the presence of disturbance stridulation was noted. For these remaining 62 species, one to 57 individuals per species were evaluated (mean 13 ± 12 SD). Standard length (cm) and weight (g) were recorded for each individual immediately after recordings or observations were made.
Thirteen vocal species representing four families that were studied for disturbance behavior were also studied in undisturbed social settings. Observations of vocal behaviors associated with reproductive and agonistic interactions in aquaria demonstrated that the presence of disturbance stridulation correlated with the use of the same vocal mechanism in undisturbed contexts (Kaatz, 1999). Thus, when we observe disturbance sounds in a catfish species, it is a likely indicator of the presence of another vocal communication context that employs this mechanism. Lack of disturbance sounds is not necessarily an indicator of total silence, as some vocal fishes, such as cichlids, are not known to produce any disturbance sounds. For one additional species, Ageneiosus magoi, we also monitored behavior for seven individuals, including both adult males and females (n = 34, 10-20 min observations). In order to determine the extent of vocal ability in catfishes we particularly focused our survey of vocal disturbance behavior within several clades: (1) Mochokidae (18 species, 3 genera); (2) the doradoids, which include the Doradidae (24 species, 17 genera) and Auchenipteridae (12 species, 8 genera); and (3) Callichthyidae (48 species, 9 genera). To determine whether or not vocal behavior was evolutionarily derived for the species we sampled within each of the above families, we referred to genus-level phylogenetic hypotheses for all families except the mochokids, for which cladograms representing all the taxa we evaluated are lacking (Ferraris, 1988; Higuchi, 1992; Reis, 1998).
Microscopic analyses of pectoral spine proximal morphology for laboratory specimens
In order to determine whether there were any differences in the surface structures on the proximal end of the pectoral spine, we conducted a microscopy survey of these structures (1-22 individuals per species; Appendix 2) for the same individuals whose social and disturbance behaviors were documented. Experimental fishes were euthanized following standard techniques (ASIH, AFS, and AIFRB, 1988) and skeletonized by water maceration. The cleaned bones were then air-dried. Morphology of the pectoral spine base was observed for 14 mochokid, 16 doradid, 12 auchenipterid, and 48 callichthyid species. We observed an additional 34 species in 19 other catfish families. Spine morphology was studied with a scanning electron microscope (JEOL 5800, 15,000-20,000×) or a stereoscopic microscope (Leica Zoom 2000, 30×-45×). Scanning electron microscopy samples were sputter-coated with gold-palladium. Lateral surfaces of the pectoral spine dorsal process were imaged and documented for surface morphology patterns. The locking surfaces of the dorsal spine as well as the locking anterior and ventral processes of the pectoral spine were viewed for callichthyid, mochokid, doradid, and auchenipterid specimens (1-3 species per family, n = 12). We compared pectoral spine length (measured with digital calipers to the nearest 0.01 mm) and weight (electronic microbalance to the nearest 0.001 g) for two vocal (Liosomadoras morhua and Trachelyopterus cf. galeatus, n = 8 individuals for each species) and three silent species (Ageneiosus spp., n = 10 individuals) within the Auchenipteridae; differences in length and weight were statistically analyzed using ANOVA with Statistica (Ver. 6.0).
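A minimal sketch of such a spine-weight comparison as a one-way ANOVA in SciPy; the individual weights below are invented for illustration (they merely echo the ranges reported in the Results) and are not the study's data.

```python
from scipy import stats

# Hypothetical spine weights (mg) for the silent Ageneiosus and the
# two vocal auchenipterid genera compared in the Results.
ageneiosus      = [6, 12, 18, 25, 33, 46]
trachelyopterus = [210, 340, 505, 610, 780, 918]
liosomadoras    = [28, 150, 390, 520, 640, 764]

F, p = stats.f_oneway(ageneiosus, trachelyopterus, liosomadoras)
print(f"one-way ANOVA: F = {F:.1f}, p = {p:.4g}")
```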
Historical biology of pectoral spine vocalization
In order to evaluate ecological patterns in relation to vocal ability, we mapped these character states onto the maximum parsimony siluriform phylogeny using unordered parsimony reconstruction in Mesquite (Ver. 2.5; Maddison and Maddison, 2009). We recognized the established families and general topology of Sullivan et al. (2006). Following Lundberg et al. (2007), we accepted the topology for African families whose relationships had been re-assessed relative to the new family Lacantuniidae. We did not make any changes for families in the Asian clade, as the family status of different genera of the Amblycipitidae is not yet fully resolved (Sullivan et al., 2008). Combination of the topologies for the different phylogenetic trees allowed mapping of relationships for all 19 vocal families and for a total of 37 families. Austroglanididae is not mapped in Sullivan et al. (2006) and is silent.
For comparing shortened versus lengthened catfish pectoral spines relative to a phylogenetic standard, we calculated an average based on other bony fishes. We used the average length of the anterior-most lepidotrichium of the pectoral fin or "spine" (homologous to the catfish pectoral spine) for bony fishes that do not use their fin rays for sound production but do use them for locomotion, a functional difference we were trying to contrast. This average was calculated from a review of bony fish fin lengths reported in the literature and represented a wide range of taxa: 128 species, 54 families, and 19 orders (primarily Teleostei, one Chondrostei, one Holostei). We noted "spine" length for all newly described bony fish species, excluding catfishes, published in the journal Copeia between 1992 and 2008. We found this average estimate of "spine" length to be 14.3% SL ± 6.4 SD. This value was used to map "short" (≤ 14.3%) versus "long" (> 14.3%) pectoral fin spines on a cladogram of catfish families. An alternative measure would be to use the Diplomystidae as a reference value for spine length, as it is the family most basal to the Siluroidei clade, which includes the majority of catfish families. However, diplomystid spine lengths are not an appropriate comparison for catfish families outside the Siluroidei superfamily (Sullivan et al., 2006). The range of spine standard length for diplomystids was 14.9%-21.3%, with a mean of 19.7% ± 0.03 SD (n = 6 species). This measure at its lowest estimate is very similar to the broader estimate from the "other bony fish" literature review. The upper range, above 19.7% SL, identifies eight families with very long spines relative to the Diplomystidae within the Siluroidei superfamily.
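The short/long mapping against the 14.3% SL reference reduces to a one-line rule; a trivial sketch:

```python
BONY_FISH_MEAN_PCT_SL = 14.3   # reference value from the literature review

def spine_class(mean_pct_sl):
    """Classify a family's mean pectoral spine length for the
    cladogram mapping ('long' if > 14.3% SL, else 'short')."""
    return "long" if mean_pct_sl > BONY_FISH_MEAN_PCT_SL else "short"

print(spine_class(27.8))   # e.g., Doradidae -> 'long'
print(spine_class(7.0))    # e.g., Malapteruridae -> 'short'
```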
Literature review of pectoral spine distal morphology: Inter-familial variation
Pectoral spine condition for a given species was either ossified, bony and rigid, or slender (described as filamentous in the literature) and flexible with a distally cartilaginous or segmented tip. Only three families (all silent) lacked fully ossified and serrated pectoral spines (Appendix 1): Astroblepidae, Cetopsidae and Trichomycteridae. Filamentous or flexible tips were found in some species in seven other families (Amblycipitidae, Amphiliidae, Lacantuniidae, Loricariidae, Pimelodidae, Plotosidae, Siluridae), all of which are known as silent except the loricariids, pimelodids and plotosids.
The shortest spine length was 0.8 mm for a clariid (0.2% SL), and the longest was 144.7 mm for a doradid (26.3% SL), both vocal families (Appendix 1). Species in vocal catfish families had significantly longer pectoral spines than did those in silent families (ANCOVA: df = 2, F = 1071.05, P < 0.0001). Of the eight families with the longest spines, six were vocal, with mean spine lengths ranging from 20.6% to 27.8% SL. Their rank order from lowest to highest is: Pseudopimelodidae, Mochokidae, Aspredinidae, Callichthyidae, Loricariidae, and Doradidae. Other vocal families were found interspersed among silent families.
Silent families had spines with mean % SL across the full spectrum of lengths, ranging from the longest, 36.5% ± 8.7 SD, for astroblepids, to the shortest, 7.0% ± 0.6 SD, for malapterurids (Appendix 1). Vocal families similarly were not restricted to any narrow range of spine lengths (8.0 ± 4.4 SD to 27.8 ± 7.1 SD % SL for clariids and doradids, respectively). Only nine families had short spines (< 14.3% SL) relative to our bony fish reference value, and five of these were vocal. A similar number of silent (13) and vocal (14) families had long spines (> 14.3% SL). The eight families above 19.7% SL, which have spines longer than the average diplomystid, are predominantly vocal (6 of 8). In contrast, the group of families with the shortest spines had slightly more silent families (6 of 11) based on the lower diplomystid spine value (14.9% SL). The families with intermediate spine lengths relative to diplomystids also had similar numbers of vocal (8 of 18) and silent (10 of 18) families.
Variation of serration morphology on the pectoral spine (Appendix 1) included: (1) serrations present on both sides or on only one side; (2) well-developed serrations that were regularly hook-shaped to irregularly shaped; and (3) serrations ranging from visible without a microscope to weakly developed, requiring a microscope to count. There was a relationship between spine serrations and vocal behavior. Six families (all silent) lacked serrations entirely: Malapteruridae, Trichomycteridae, Cetopsidae, Amblycipitidae, Lacantuniidae, and Astroblepidae. The silent amphiliids mostly lacked serrations, but one genus does have them. Among the silent sisorids, some lacked and some had serrations. There was also a relationship between secondary serration ornamentation on the spine and vocal behavior. Of the nine families that have both anterior and posterior strongly developed spine serrations, eight are vocal and one is silent. Six of the dual-sided serrated families also had among the highest % SL for spine length (20.6%-27.8%). A total of 23 families (except the pimelodids, which had the full range of serration variation) had only one side serrated, with the other often entirely smooth or weakly serrated.
Microscopic analyses of pectoral spines for laboratory specimens and vocal characteristics
Of the 23 families we surveyed, seven were silent and 16 were vocal (Table 1). The silent malapterurid, silurid, cetopsid, and schilbid specimens (Fig. 1) had pectoral spines that were relatively short (< 1 cm) and lightweight (< 1 mg). Malapterurids had the most reduced spine, with a proximal end that was entirely smooth, opaque bone, and whose structures were not clearly homologous to processes in any other catfish species. The dorsal process was so thin in the Schilbidae that it was translucent. In contrast, the silent erethistids, heptapterids, and sisorids had longer spines. Three silent families had dorsal process morphology unique to the silent species surveyed: (1) Ageneiosus in the Auchenipteridae, vacuolated (Fig. 1A, B); (2) Cetopsidae, cobwebbed (Fig. 1C, D); and (3) Erethistidae, beaded rows (Fig. 1E, G). Flat convolutions (Fig. 1E, F) were also exclusively present in the Schilbidae and four other silent taxa, as well as in the vocal Heteropneustidae. Other structures found in both vocal and silent taxa that were not documented with SEM were shingled teeth (e.g., loricariids and sisorids) and hemispheres (e.g., silurids and ictalurids; Table 1). Individuals of silent species in five families had limited numbers of edge knobs, but most individuals had none.
Vocal species always had ridges (Fig. 2A, B), knobs (Fig. 2C, D), or both present on the dorsal process. Honeycombed patches were present on the dorsal process to the right and left of centrally located ridges or knobs in several vocal families (Fig. 2C, D; Table 1), although only silent species had this structure covering the entire process.
Articulating surfaces of the anterior and ventral locking process of the pectoral spine proximal end, located below the dorsal process, had only either honeycombed or convoluted surface structures (Fig. 2E).Dorsal spine locking surfaces articulating with the vertebrae had only convolutions (Fig. 2F).
Vocal and silent species were both present in a group of eight families (Table 1). Ridges and/or knobs on the shelf of the dorsal process were present in all vocal species. Ridges and knobs were also present in silent species in the genera Corydoras, Tatia, Ameiurus, Noturus, and Otocinclus, while all other silent species lacked ridges entirely and had either edge knobs or predominantly convoluted or honeycombed surface morphology. Within the vocal families that we surveyed at the species level, very few silent species were found, from 8% to 11% per clade: 3 of 32 doradoids (Auchenipteridae + Doradidae), 2 of 18 mochokids, and 4 of 48 callichthyids (ridged and knobbed species scored as vocal).
Ridge and/or knob morphology was present in all individuals of eight of the vocal families (doradids, horabagrids, pimelodids, ariids, aspredinids, heteropneustids, pangasiids, and auchenoglanidids) in which vocal behavior was present in every species sampled (Table 1). Two of these families are categorized as "strongly vocal", as each individual produced disturbance stridulation sounds in great numbers, with multiple sweeps of the pectoral fin as opposed to single sweeps. Their sounds, and the sounds of most vocal species we recorded, were "creaks" whose spectrograms indicated pulsed, broad-band frequency sounds with audible, temporally evenly spaced pulses (Fig. 3A). Of the six "weakly vocal" families that had a reduced capacity for stridulation (fewer than half of the individuals tested produced sounds, and these were often few in number, sometimes only one sweep), three produced sounds audibly different from those of all other catfishes recorded: Parauchenoglanis sp., Pangasius sutchi, and Heteropneustes fossilis. The spectrograms were of frequency-modulated "squeaks" (Fig. 3B) with few broad-band pulses, and individuals rarely could be stimulated to produce any sounds during disturbance, even as adults. The "squeaks" were narrower in frequency band than the "creaks" and weakly pulsed, lacking regular spaces between pulses. Only one individual each of the Heteropneustidae and Pangasiidae produced a single "squeak" by abduction and adduction of the pectoral spine. Ariid, aspredinid, auchenoglanidid and pimelodid individuals produced "creaks" by weakly audible multiple sweeps of pectoral fin spine abduction and adduction. Raised linear ridges (rounded or bladed), extending from the edge of the dorsal process to at least half of the shelf of the dorsal process (Fig. 2A, B), were present in both "strongly vocal" families as well as in the Ariidae, Heteropneustidae, and Pimelodidae. Short knobs at the edge of the process (Fig. 2C) were present to the exclusion of ridges in auchenoglanidids and pangasiids. Knobs that reached onto the dorsal process (Fig. 2D) were present in aspredinids and pimelodids.

[Table 1 note: Ridges may be round or bladed and are linear, extending from the process edge onto more than half of the process shelf. Knobs are short ridges found only at the process edge or extending onto less than half of the process shelf. See Appendix 2 for lists of species where more than one species per family is indicated by a number in parentheses. Abbreviations: S = Silent; V = Vocal.]
Fig. 3 Spectrograms of disturbance pectoral spine stridulation sounds recorded from two species in the Mochokidae
A. "creaking", Synodontis eupterus (above), and B. "squeaking" Hemisynodontis membranaceus (below).The "creaking" species has numerous ridges that cover more than half of the dorsal process shelf, while the "squeaking" species only has short knobs at the edge of the process.
Pectoral spine variation within three clades for laboratory specimens and their ecotypes
All doradid species had both rounded ridges and knobs that reached onto the shelf of the dorsal process, and all were vocal, producing broad-band, pulsed "creaks" (Fig. 3A). Spines of doradid species were very thick, with strong serrations on both the anterior and posterior surfaces. Their spine weights ranged from 0.011 to 2.074 g (mean 0.279 ± 0.294 SD, n = 23), and lengths ranged from 10 to 58 mm (mean 28.07 ± 12.18 SD, n = 23). All species of Auchenipteridae, the sister group of Doradidae, were vocal as well, except species in the genus Ageneiosus, which did not produce pectoral stridulation disturbance sounds and lacked any known vocal morphology on the dorsal process. Ageneiosus pectoral spines were thin, translucent, and brittle, with segmented distal ends. In contrast, vocal doradid spines were solid opaque bone and non-brittle, with a sharp distal point. Spine weight (6-46 mg, mean 17 ± 13 SD, n = 10), but not spine length (12-36 mm, mean 23 ± 8 SD, n = 10), of three Ageneiosus species was significantly less (ANOVA: df = 2, F = 15.9, P < 0.0001) than that of two disturbance-stridulating genera in the same family. The spines of the two vocal genera did not differ from each other: (A) Parauchenipterus cf. galeatus, spine weight range 210-918 mg (mean 502 ± 242 SD, n = 8) and spine length range 19-35 mm (mean 29 ± 5 SD, n = 8); (B) Liosomadoras morhua, spine weight range 28-764 mg (mean 447 ± 275 SD, n = 8) and spine length range 16-36 mm (mean 27 ± 8 SD, n = 8). Ageneiosus species also lacked serrations along the anterior margin of the pectoral spine and had few (< 10), very low-aspect microscopic serrations on the posterior margin. All vocal species of both families had strongly curved, numerous, and visibly countable serrations on both margins of the spine. All doradid species were predominantly benthopelagic while active. Within the Auchenipteridae, all species are benthopelagic except Ageneiosus species, which are pelagic piscivores.
In our survey of the Mochokidae, one species, Hemisynodontis membranaceus, a pelagic zooplanktivore, produced weak, poorly pulsed and rare "squeaks" (Fig. 3B). Its pectoral spine had a largely smooth surface with a few shallow anterior and posterior serrations and was longer and heavier (20.0% SL, 43 mm, 811 mg, n = 1) than those of all Synodontis species (3-444 mg, mean 90 ± 99 SD; 8-41 mm, mean 19 ± 9 SD; n = 69). The dorsal process surface morphology was smooth except for two patches of convolutions and had knobs only on the edge. All Synodontis species were benthopelagic and vocal, and their spines had numerous serrations that were large and hook-shaped on both margins. All species in the genus Synodontis produced loudly audible, pulsed, broad-band frequency "creaking" sounds (Fig. 3A) and had a dorsal process with well-defined, rounded ridges.
A species belonging to a third genus in the family, Microsynodontis sp., was silent, but its disturbance behavior is in question, since only one specimen in poor condition was available. Its habit was benthic. Its spine was 7 mm long, weighed 5 mg, and had anterior and posterior serrations that were strongly hooked. The dorsal process surface had a smooth and convoluted surface morphology, and the process itself was strongly curled in toward the spine shaft instead of standing closer to a 90° angle from it, as in most vocal taxa examined.
Callichthyid catfishes we sampled had spine lengths that ranged from 3 to 23 mm (mean 15 ± 6 SD, n = 10) and weights from 3 to 200 mg (mean 54 ± 60 SD, n = 10). The subfamily Corydoradinae had rounded or blade-like ridges plus some areas of convolution, but within the basal genus Aspidoras, some individuals lacked ridges, having only convoluted surfaces. In the genus Corydoras, all 39 species had ridge morphology and knobs. All Corydoras with blade-like or rounded ridges produced pulsed, broad-band frequency, grating "creak" sounds. Unlike in other catfishes, disturbance stridulation was difficult to elicit even in species for which social sound communication is well documented. The majority of Corydoras species produced no sounds during disturbance. Species of the subfamily Callichthyinae typically had raised convolutions or flat convoluted surfaces. Some individuals had knob-like extensions of convolutions exposed only on the edge of the process. Only Dianema produced typical pulsed disturbance "creaks", while all individuals of the other genera were silent (e.g., Megalechis, Hoplosternum, and Callichthys). Within the entire family, pelagic, benthopelagic, and benthic species exhibited both silent and vocal behavior.
Historical biology of pectoral spine vocalizations, ecotype, and spine length
Phylogenetic patterns of sound production show repeated groupings of vocal and silent lineages within clades. Of eight well-defined clades consisting of two or more families, five included both vocal and silent families (Fig. 4). Silent families in the Loricarioid clade were basal, while the Siluroidei clade also had one silent basal family with many vocal lineages representing higher-order clades.
Spines were long (> 14.3% SL) for the majority of families. Only nine catfish families had short (< 14.3% SL) spines. Four of the short-spined families were silent, and five were vocal. Three clades showed variation in spine length, with both short (< 14.3% SL) and long (> 14.3% SL) spines representing different families within the clade.
Patterns in vocal behavior and morphology in catfish families
Catfish families with the longest spines in terms of % SL were predominantly vocal, although many vocal families had proportionally shorter spines. Silent families were in some cases represented by highly reduced spines (i.e., Cetopsidae, Malapteruridae) in terms of length, weight and serration development. In the auchenipterids we found evidence that the strong degree of ossification (i.e., weight) and the defensive morphology of the pectoral spine (i.e., the presence of secondary serrations on the spine margins) may also correlate with the presence of audible stridulation ability, supporting Pfeiffer and Eisenberg (1965), who originally observed this phenomenon. This is explicable if a locking defensive spine is an exaptation for audible stridulation, as hypothesized by Alexander (1981).
Not all catfishes are alike in disturbance sound intensity, number of vocalizations produced, or defensive morphology (Kaatz, 1999). Members of "strongly vocal" families (Table 1), as well as loud vocalizing species with strong serrations (e.g., Synodontis), could be acoustically aposematic. Many catfish families have venom gland cells in the pectoral fin tissues (Wright, 2009). Only some families include species with the ability to envenomate and cause painful symptoms in a human handler (Kaatz, pers. obs.), and all of these are both vocal during disturbance (except Noturus insignis) and found within vocal families, suggesting that sounds could have preceded envenomation. However, many envenomators were "weakly vocal" in disturbance (Table 1), implying constraints on envenomators being vocal. Other families are quiet vocalizers and produce sounds of very low amplitude (inaudible to humans underwater without a hydrophone) predominantly in social contexts and rarely during disturbance (e.g., Corydoras species; Kaatz, 1999). Catfish families currently hypothesized as "weakly vocal" in disturbance (Table 1) or silent in this study could fall into this latter category.
Microscopic vocal morphology: disturbance versus non-disturbance stridulation
The well-known microscopic vocal ridges on the dorsal process of the pectoral fin spine (Burkenroad, 1931; Schachner and Schaller, 1981; Fine et al., 1997) were found to be widely distributed among the catfish families we surveyed and present in all species that produced disturbance sounds (Table 1). Thus ridge or knob morphology could serve as a valuable morphological indicator of the presence of sound communication in a species. Recognizing morphologies associated with vocal behavior allows inferences about vocalizations where behavior cannot be observed, such as for rare species represented only by museum specimens or for fossils.
Not all taxa known to produce sounds with pectoral stridulation in intraspecific social contexts produced disturbance sounds in this study. At least five Corydoras species have been documented to produce sounds with pectoral stridulation during male courtship (Kaatz and Lobel, 1999), but the majority of Corydoras handled during disturbance were silent, although vocal ridge structures were present. Disturbance sounds might not be useful to such small fishes, which can be readily swallowed by a variety of predators. The ictalurid Ameiurus nebulosus is known to produce sounds in agonistic contexts (Rigley and Muir, 1979); individuals of this species that we tested were silent in disturbance as well but did have dorsal process ridges. For such fishes, morphology may be a more useful indicator of vocal ability than observations in the disturbance context. We infer that the following taxa may fit into this category of vocal behavior because they were silent during disturbance but have vocal ridges: Corydoras spp., Ancistrus sp., Noturus insignis, Tatia perugia, Otocinclus sp., Scleromystax barbatus, and Brochis splendens. These individuals may not have been reproductively conditioned or sexually mature enough to produce disturbance sounds. Many Corydoras species that were vocal during reproduction subsequently failed to produce disturbance sounds outside the breeding season (Kaatz, personal observation).
Catfishes may be able to produce pectoral spine stridulation sounds without the presence of either ridges or knobs. Megalechis and Hoplosternum species had flat and convoluted surfaces (Fig. 1) with no knobs or ridges, suggesting a lack of ability to produce typical "creaking" stridulation sounds. Yet Megalechis thoracata is reported to produce undisturbed stridulation sounds with the pectoral spine in social contexts (Mayr, 1987). The absence of disturbance vocalization in this species may reflect its lack of importance in predator-prey interactions.
Vocal mechanism morphology may have evolved from the friction-locking surface structures (Fig. 2), although in some taxa ridges from the primary spine shaft extend directly onto the dorsal process surface, indicating an alternative origin for vocal structures. The absence of ridges and knobs and the presence of other novel surface morphologies in some silent species suggest a functional bifurcation between sound production for the former and spine locking for the latter structure types. Convoluted or honeycombed surfaces on the dorsal process may serve some function in the binding phase of spine locking (Fine et al., 1997). Marshall (1967) observed differences in ecotype between silent and vocal fishes: swimbladder mechanisms are present in coastal and deep-sea benthic taxa and absent in bathy- and meso-pelagic taxa. Like Marshall (1967), we found that for catfishes, vocal families were associated with bottom habitats and that the vocal ability of some highly specialized pelagic species was reduced or absent. Heyd and Pfeiffer (2000) note that some vocal species are solitary and nocturnal while some silent species are pelagic, diurnal and schooling. Ladich (1997) has observed the widespread importance of agonistic sound production in fishes, which could be advantageous during territorial disputes in substrate-associated habitats. However, many silent families are benthic, more strongly restricted to the bottom than the benthopelagic species, which we found to be the more predominantly vocal ecotype (Fig. 4). The silent and benthic association is not explained by any hypothesis in the literature.
Ecomorphological implication of vocal ability in catfishes
Phylogenetic relations among the auchenipterid genera we studied (Ferraris, 1988) indicate that a pelagic habit correlated directly with an altered and reduced pectoral spine vocal mechanism for three species of Ageneiosus. This suggests differences in the functional role of the pectoral fin and its spine in the silent Ageneiosus species compared to all other auchenipterids we studied. Silent Ageneiosus species are specialized pelagic piscivores. Doradids and other auchenipterids, which are active just above the bottom or in the water column during the night, typically rest under cover on the bottom diurnally and are territorial, competing vocally and aggressively for cover sites (Kaatz, 1999). There was a notable difference in the way the silent Ageneiosus moved their pectoral fins. The locking mechanism was never observed to hold the spines at 90° to the body, as was common in vocal doradoids. Doradids and basal auchenipterids have rigid fins with fewer pectoral fin rays, used in inter- and intra-specific defensive behaviors, often "hooking" other individuals with the pectoral spine and engaging in lateral thrashing (Kaatz, pers. obs.). The pelagic Ageneiosus had pectoral fins with numerous fin rays that were highly flexible and used in locomotion, especially hovering behavior in aquarium populations; they aggregated with conspecifics and did not engage in pectoral spine "hooking" behaviors. A reduced vocal capacity in disturbance stridulation was also found in the pelagic mochokid catfish Hemisynodontis membranaceus. However, its pectoral fin was robust, and the spine was similar to those of all vocal Mochokidae species, although unlike all Synodontis species, Hemisynodontis produced only "squeak" sounds. Pangasiid individuals were silent except for one individual that produced only a single spine-sweep "squeak". Such "squeak" sounds, in these and other catfishes observed in this study, were irregularly pulsed or not pulsed at all, providing limited temporal information for a signal (Fig. 3).
Historical biology of pectoral spine morphology and vocalization ability
Knowing the phylogenetic distribution of vocal and silent catfishes (Fig. 4) allows us to better understand sound communication in this diverse and ecologically important group of fishes. Questions that can be addressed include: (1) When did vocal ability arise? Was there a single basal origin for stridulation mechanisms, or has it arisen independently multiple times? and (2) What are the patterns of vocal ability acquisition and loss? Catfishes evolved an ossified pectoral spine that locks in a defensive position, and this morphology is lacking in the most likely sister groups within the Ostariophysi, the soft-rayed Gymnotiformes and Characiformes (Fink and Fink, 1996; Saitoh et al., 2003; Peng et al., 2006). Hence, pectoral spine stridulation most likely arose within the catfishes, either once basally or multiple times independently in the evolution of this fish order.
There are two generally differing topologies for the evolutionary trajectory of catfish vocal ability, based on either morphological or molecular cladograms. Two morphological catfish phylogenies (DePinna, 1998; Diogo, 2004) identify the family Diplomystidae as the most primitive extant family. Diplomystids have a long, bony, hypertrophied pectoral spine with serrations on both margins and structures that look similar to vocal ridges on the dorsal process (Gayet and Meunier, 1998).
Whether or not the dorsal process and these structures can be used by diplomystids for vocal behavior is currently unknown. They are currently considered silent, supporting the hypothesis of a later origin for pectoral stridulation; if, however, they prove to be vocal, an unequivocal early single origin for stridulation would be indicated. The most recent molecular phylogeny, modified from Sullivan et al. (2006), identifies the vocal ability of the hypothetical ancestor for the entire catfish order as equivocal (Fig. 4). In this phylogeny the superfamily Loricarioidei is the most basal catfish group. Loricarioids have reduced spine length and are basally represented by silent families, indicating that sound production is not a basal trait. Two independent origins of stridulation mechanisms among derived families within the superfamily Loricarioidei are suggested by this cladogram. The second major clade within this phylogeny is rooted by the Diplomystidae, whose vocal status, as noted above, is currently uncertain. The basal condition for the remaining families in this catfish superfamily, the Siluroidei, is vocal, suggesting an early origin for sound, with silent families within this group having secondarily lost vocalization ability. The majority of derived lineages in the Siluroidei also form a polytomy, so it is not possible to discern a clear pattern of evolutionary radiation for the pectoral spine and associated vocal morphologies at higher levels within this clade, which includes the majority of catfish families. Spine vocalization mechanisms thus appear to have evolved independently at least three times between the two superfamilies.
Whether additional independent origins also occurred within the Siluroidei awaits better phylogenetic resolution of inter-familial relations and a more complete understanding of catfish vocal biology.
From our parsimony analysis, we can also infer repeated transitions between silent and vocal states within five distinct clades across the entire order. We also observe repeated shifts in spine length and ecotype (benthic vs. benthopelagic). This pattern points to new opportunities for studying the origin of stridulation and its loss, as well as the possible relationships between habitat and vocal abilities in catfishes.
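To make the transition-counting step concrete, here is a minimal Python sketch of Fitch parsimony for a binary vocal/silent character on a toy rooted tree; the topology, tip names and character states below are illustrative placeholders, not the phylogeny or data analysed in this study.

# Minimal Fitch parsimony: minimum number of state changes for a
# binary character (vocal = 1, silent = 0) on a rooted binary tree.
def fitch(tree, tip_states):
    """tree maps an internal node to its two children;
    tip_states maps each tip to a set containing its observed state."""
    changes = 0
    def ancestral(node):
        nonlocal changes
        if node in tip_states:            # tip: state set is fixed
            return tip_states[node]
        left, right = tree[node]
        a, b = ancestral(left), ancestral(right)
        if a & b:                         # overlap: no change forced
            return a & b
        changes += 1                      # disjoint sets force one change
        return a | b
    ancestral("root")
    return changes

toy_tree = {"root": ("n1", "tipD"), "n1": ("tipA", "n2"), "n2": ("tipB", "tipC")}
toy_states = {"tipA": {1}, "tipB": {0}, "tipC": {1}, "tipD": {0}}
print(fitch(toy_tree, toy_states))        # -> 2 changes on this toy tree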
Fig. 4 Phylogenetic relationships of catfish families based on a modified topology (Sullivan et al., 2006; Lundberg et al., 2007) showing the evolution of vocal taxa within the order Siluriformes. Black branches represent vocal taxa, white branches represent silent taxa, and gray represents equivocal cases. Family names with boxes indicate taxa that have a pectoral spine length greater than 14.3% SL (long spine); family names without boxes indicate a pectoral spine less than 14.3% SL (short spine). The column to the right of the family names indicates the predominant habitat for members of each taxon.
Origin of cells and network information
All cells are derived from one cell, and the origin of different cell types is a subject of curiosity. Cells construct life through appropriately timed networks at each stage of development. Communication among cells and intracellular signaling are essential for cell differentiation and for life processes. Cellular molecular networks establish cell diversity and life. The investigation of the regulation of each gene in the genome within the cellular network is therefore of interest. Stem cells produce various cells that are suitable for specific purposes. The dynamics of the information in the cellular network change as the status of cells is altered. The components of each cell are subject to investigation.
The genome as a blueprint
Recently, pluripotent stem cells have played an increasing role in disease and developmental models, including the challenge of generating novel organs such as intestines [1]. Stem cell differentiation is one of the mechanisms by which regenerative tissues are produced. In each cell, the genome encodes the plan for the life of the cell and the path for organizing each tissue. The gene segments travel through the genome to settle at the gene loci [2]. Variations within the genome produce individual differences. Dramatic transitions of cellular phenotypes, such as the Warburg effect, occur in disease states such as cancer [3,4]. Epigenetic alterations provide cellular identity and phenotypic diversity. RNA transcription is altered in cancer; this alteration is caused by somatic DNA translocation or mutation [5].
Variants of genes such as BRCA2 and CHEK2 increase the risk of lung cancer [6]. Genome sequencing of normal cells has revealed the accumulation of mutations and differences in each cell lineage and tissue [7]. Genome editing has recently been developed. Additionally, gene therapy using clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 is an emerging technique [8].
The construction and architecture of the genome are important for understanding the cell.
Definition of stem cells
Emerging roles for stem cells as sources for cell-based therapy remind us of the importance of the definition of stem cells [9]. Stem cells are generally defined as cells with self-renewal and differentiation potential [10]. Accumulating knowledge and insights have shown that stem cells are able to differentiate into several cell types in the body. However, a paradigm shift occurred after the discovery of induced pluripotent stem (iPS) cells, which can be created by reprogramming differentiated cells with several factors [11]. This finding implies that stem cells may be derived from differentiated cells in the body. Thus, the range of stem cells needs to be defined. Stem cells can be classified into two categories (Figure 1): (1) pluripotent stem cells, such as embryonic stem cells or iPS cells [12-15]; or (2) tissue multipotent stem cells, such as neural stem cells, hematopoietic stem cells or mesenchymal stem cells [16]. Recently, SNAI1 (SNAIL) has been reported to localize to the nucleus and to play a role in epithelial-mesenchymal transition (EMT) during the early stage of reprogramming of differentiated cells [17]. EMT and mesenchymal-epithelial transition processes may promote the reprogramming of differentiated cells toward stem cells [17]. Altered phenotypes and gene networks of stem cells have been reported, suggesting that the cells themselves have various gene dynamics during culture [18]. Cancer stem cells may be included as stem cells in cancer states. In some cases, engineered differentiated cells with gene modification or genome editing may also be included as stem cells if the cells are reprogrammed.
Cancer stem cell phenotype transition
The cell phenotype transition has been observed in cancer stem cells (CSCs) [19]. SOX2, which is a reprogramming factor, is a CSC biomarker in embryonal carcinoma cells and is related to stem-like cancer cells [20]. Genome analysis of SOX2-silenced human embryonal carcinoma cell lines has revealed that the cellular networks of these cells are enriched for microRNAs that are regulated by SOX2 and that are associated with EMT markers [20]. In contrast, an epidermal growth factor receptor exon 19-deleted lung cancer cell line was induced to exhibit CSC-like phenotypes and EMT by DDX3X transfection [21]. Moreover, DDX3X overexpression was reported to induce Sox2 up-regulation [21]. CSCs are related to chemotherapy and radiation resistance in squamous cell carcinomas (SCCs) [22]. The CSC population is diverse in SCCs; this diversity contributes to difficulty in cancer treatment [22]. Understanding the mechanisms of CSCs and EMT is important for the development of novel therapeutics.
Epithelial-mesenchymal transition
Cellular networks characterize both cells and the body, and gene combinations are critical for the presentation of phenotypes [23]. EMT is one of the mechanisms by which the cell phenotype transitions; dihydropyrimidine has been reported to induce EMT [24]. EMT is associated with metastasis in tumor progression and is induced by Notch activation and p53 deletion in mice [25].
Erythropoietin-producing hepatoma (EPH) receptors, which are receptor tyrosine kinases related to cancer, may be related to EMT signaling [26]. EPH receptor A2 induces EMT via β-catenin activation, followed by Snail expression and cadherin 1, type 1, E-cadherin (epithelial) (CDH1) suppression [26]. Wnt/β-catenin signaling is inhibited by SOX10, leading to the inhibition of the growth and metastasis of digestive cancers [27]. SRY (sex determining region Y)-box 10 (SOX10) inhibits EMT, which may be one of the possible mechanisms of cancer inhibition [27]. Frizzled2, the Wnt receptor, induces EMT and cell migration through the noncanonical pathway [28].
EMT can be monitored via cell rigidity, and human equilibrative nucleoside transporter-1 suppression induces EMT in pancreatic cancer cells [29]. Further characterization of EMT is needed for understanding cell type transition and cancer progression.
Classification of EMT features
EMT can be characterized by the following three features: (1) changes in cellular morphology; (2) increases in cellular motility; and (3) alterations in the expression of E-cadherin and N-cadherin [29]. Cellular morphological changes are typically observed in the transition from connective-like cells to mesenchymal-like cells [29]. The expression of CDH1 is usually up-regulated in connective- or epithelial-like cells, whereas the expression of N-cadherin (CDH2) is up-regulated in mesenchymal-like cells [29,30]. EMT is associated with tumor metastasis [30]. The metastasis potential or invasiveness of cancer can be measured by the mechanical rigidity of the cells [31,32]. Several genes are involved in EMT, including BMI1 proto-oncogene, polycomb ring finger (BMI1), hypoxia inducible factor 1, alpha subunit (HIF1A, HIF-1α) and twist family bHLH transcription factor 1 (TWIST1, Twist) [33]. HIF-1α, which is a key transcription factor, is up-regulated in gastric cancer. Additionally, network pathway genes, such as NFκB1, BRCA1, STAT3 and STAT1, and network hub genes, such as MMP1, TIMP1, TLR2, FCGR3A, IRF1, FAS and TFF3, have been identified [34].
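As a purely illustrative rendering of feature (3), one can score a sample by the relative expression of E-cadherin (CDH1) and N-cadherin (CDH2); the Python sketch below uses a hypothetical log-ratio threshold, not a validated cutoff, and the input values are invented.

import math

# Toy classifier for feature (3): the CDH1/CDH2 "cadherin switch" ratio.
def cadherin_call(cdh1_expr, cdh2_expr, threshold=1.0):
    """Classify by log2(CDH1/CDH2); positive values favour epithelial."""
    ratio = math.log2(cdh1_expr / cdh2_expr)
    if ratio >= threshold:
        return "epithelial-like"
    if ratio <= -threshold:
        return "mesenchymal-like"
    return "intermediate"

print(cadherin_call(200.0, 20.0))   # -> epithelial-like
print(cadherin_call(15.0, 180.0))   # -> mesenchymal-like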
Gene and molecule alterations
A large number of genes are regulated in cancer. Genes are regulated not only by transcription factors but also by microRNAs (miRNAs). miRNA-9 is up-regulated in esophageal squamous cell carcinoma, which may induce EMT and metastasis in cancer [35]. CD151, which is a regulator of laminin-binding integrin function and signaling, represses EMT and canonical Wnt signaling, leading to the inhibition of ovarian tumor growth [36]. Wnt/β-catenin signaling is involved in EMT induction by the parathyroid hormone in human renal proximal tubular cells [37]. Endothelin-1 and endothelin A receptor signaling, together with Wnt signaling, regulate EMT in epithelial ovarian cancer [38]. Endothelin/β-arrestin signaling and Wnt/β-catenin signaling may be involved in chemotherapy resistance in cancer [38]. Hypoxia-inducible factors (HIFs) play roles in Wnt signaling in human colon cancer cells [39]. HIF-1α depletion induces the reversal of EMT, and HIF-2α silencing affects the expression of stem cell markers and increases β-catenin transcriptional activity under hypoxic conditions [39]. The roles of HIFs in Wnt/β-catenin signaling and in the surrounding networks are essential for understanding cancer cell phenotypes. The silencing of β-catenin via promoter methylation is also involved in the enhancement of non-small cell lung cancer invasiveness [40]. Notch1, which is one of the important molecules in cancer signaling, is involved in Ras/phosphoinositide 3-kinase (PI3K)/Akt signaling in T-cell acute lymphoblastic leukemia (T-ALL) [41], and PI3K and Notch1 may be targets for drug resistance in T-ALL [41]. Sox2, which is one of the reprogramming factors used to produce iPS cells, may be a regulator of EMT during neural crest development [42]. The Wnt pathway induces the EMT pathway, and the inhibition of the Wnt pathway may be involved in the re-differentiation of human islet β-cells [43]. Thus, the investigation of the molecules associated with EMT and disease is of interest [44].
The gene and genome networks
Several megaprojects have been established in response to the genome projects, one of which is the ENCyclopedia Of DNA Elements (ENCODE) Project, which aims to translate the human genome sequence into biological and health mechanisms [45]. The ENCODE Project has identified functional elements in the genome (http://www.genome.gov/ENCODE/) [46,47]. The cross-cancer alteration of genes and their networks can be examined in cBioPortal, which is a cancer genomics database (http://www.cbioportal.org/public-portal/) [48,49]. The cBioPortal includes network analysis for the visualization of networks that are altered in cancer [49]. The precise information obtained through network analysis has been reported in several studies [50-53]. The sources of the networks are derived from pathways and interactions from the Human Reference Protein Database [53], Reactome [51], the Pathway Interaction Database created by the National Cancer Institute in collaboration with Nature Publishing Group (http://pid.nci.nih.gov/) [52], and the Memorial Sloan-Kettering Cancer Center Cancer Cell Map, which are all included as source information in the Pathway Commons Project (http://www.pathwaycommons.org) [50]. Pathway Commons is an open pathway resource that includes interaction information for multiple species, such as humans and model organisms [50]. The web interface called Gene Expression Commons is an interesting tool for the analysis of gene expression and microarray data, which can be analyzed with reference data to model biological relationships (https://gexc.stanford.edu/) [54]. The amount of data available in these databases is increasing and includes data from microarrays, next-generation sequencing, and clinical data.
Conclusion
The cell is the fundamental unit of life. The investigation of gene and genome regulation is critical for a deep understanding of phenotypic alterations and of the origin of cells. The transition of cell characteristics, including differentiation, reprogramming and EMT, and cell-to-cell communications requires further investigation to reveal the cell of origin.
Note on the Kato property of sectorial forms
We characterise the Kato property of a sectorial form $\mathfrak{a}$, defined on a Hilbert space $V$, with respect to a larger Hilbert space $H$ in terms of two bounded, selfadjoint operators $T$ and $Q$ determined by the imaginary part of $\mathfrak{a}$ and the embedding of $V$ into $H$, respectively. As a consequence, we show that if a bounded selfadjoint operator $T$ on a Hilbert space $V$ is in the Schatten class $S_p(V)$ ($p\geq 1$), then the associated form $\mathfrak{a}_T(\cdot, \cdot) := \langle (I+iT)\cdot ,\cdot\rangle_V$ has the Kato property with respect to every Hilbert space $H$ into which $V$ is densely and continuously embedded. This result is in a sense sharp. Another result says that if $T$ and $Q$ commute then the form $\mathfrak{a}$ with respect to $H$ possesses the Kato property.
Introduction and preliminaries
Let a : V × V → C be a bounded, sectorial, coercive, sesquilinear form on a complex Hilbert space V, which is densely and continuously embedded into a second Hilbert space H. Then a induces a sectorial, invertible operator L_H on H, and Kato's square root problem is to know whether the domain of L_H^{1/2} is equal to the form domain V. If this is the case, then we say that the couple (a, H) has the Kato property. In this short note we characterise the Kato property of (a, H) in terms of two bounded, selfadjoint operators T, Q ∈ L(V) determined by the imaginary part of a and by the embedding of V into H, respectively. We show that the Kato property of (a, H) is equivalent to the similarity of Q(I + iT)^{-1} to an accretive operator, or to the similarity of (I + Q + iT)(I − Q + iT)^{-1} to a contraction; see Theorem 2.1. The established link to different characterisations known in the literature provides an interesting connection between a variety of techniques and results, mainly from operator theory of bounded operators, harmonic analysis, interpolation theory, and abstract evolution equations.
In particular, we show that if a bounded, selfadjoint operator T on a Hilbert space V is in the Schatten class S_p(V) for some p ≥ 1, then the associated form a_T(·, ·) := ⟨(I + iT)·, ·⟩_V has the Kato property with respect to every Hilbert space H into which V is densely and continuously embedded; see Corollary 3.2. This result is in a sense sharp; see Proposition 4.1.
We conclude this introduction with some preliminaries.
1.1. Forms. Let a be a bounded, sesquilinear form on a complex Hilbert space V. Denote by a* the adjoint form of a, that is, a*(u, v) := conj(a(v, u)) for every u, v ∈ V. Then we call s := Re a := (a + a*)/2 and t := Im a := (a − a*)/2i the real part and the imaginary part of a, respectively. Note that s = Re a and t = Im a are symmetric forms on V and a = s + it. Throughout the following, we assume that a is coercive in the sense that Re a(u, u) ≥ η ‖u‖_V² for some η > 0 and every u ∈ V. This means that s = Re a is an equivalent inner product on V, and for simplicity we assume that s is equal to the inner product on V: s(u, v) = ⟨u, v⟩_V (u, v ∈ V). We shall also assume that a is sectorial, that is, there exists β ≥ 0 such that

(1.1)  |Im a(u, u)| ≤ β Re a(u, u)  for every u ∈ V.

Let H be a second Hilbert space such that V is densely and continuously embedded into H, that is, there exists a bounded, injective, linear operator j : V → H with dense range. In the sequel we identify V with j(V). The embedding j induces a bounded, linear embedding j′ : H → V′ (where V′ is the space of bounded, antilinear functionals on V) given by j′(u) := ⟨u, ·⟩_H, u ∈ H. Then we have the following picture:

V → H → V′  (via j and j′, respectively).

We write also J := j′j for the linear embedding of V into the dual space V′.

1.2. Bounded operators associated with the pair (a, H). Let (a, H) be given as above. We define two associated bounded, linear operators on V. By the Riesz–Fréchet representation theorem, there exist two unique selfadjoint operators T = T_a ∈ L(V) and Q = Q_H ∈ L(V) such that

(1.2)  ⟨u, v⟩_H = ⟨Qu, v⟩_V  (u, v ∈ V),

and, by recalling our convention that s = ⟨·, ·⟩_V,

(1.3)  t(u, v) = ⟨Tu, v⟩_V  (u, v ∈ V).

Moreover, since ⟨·, ·⟩_H is an inner product, Q is nonnegative and injective. In fact, Q = j*j, where j* : H → V is the Hilbert space adjoint of j.
Conversely, every selfadjoint operator T ∈ L(V) induces via the equality (1.3) a bounded, sesquilinear, sectorial form a on V for which Re a coincides with the inner product ⟨·, ·⟩_V, and for which Im a is represented by T. Similarly, every nonnegative, injective operator Q ∈ L(V) induces via the equality (1.2) an inner product ⟨·, ·⟩_H := ⟨Q·, ·⟩_V on V, and thus, by taking the completion, a Hilbert space H_Q into which V is densely and continuously embedded.
We say that the pair of operators (T, Q) is associated with the pair (a, H), or, conversely, the pair (a, H) is associated with the pair (T, Q).
1.3.
Unbounded operators associated with the pair (a, H). Given a pair (a, H) as above, we also define associated closed, linear operators on H and V′. First, we denote by L_H := L_{a,H} the, in general, unbounded operator on H given by

D(L_H) := {u ∈ V : there exists f ∈ H such that a(u, v) = ⟨f, v⟩_H for all v ∈ V},  L_H u := f.

Second, we denote by L_{V′} := L_{a,V′} the operator from V to V′ given by

(L_{V′} u)(v) := a(u, v)  (u, v ∈ V).

In a similar way we define the operators L_{s,H} and L_{s,V′} associated with the real part s = Re a.
Recall that a closed, linear operator (A, D(A)) on a Banach space X is called sectorial of angle θ ∈ (0, π) if its spectrum is contained in the closed sector generated by Σ_θ := {z ∈ C \ {0} : |arg z| < θ}, and if for every θ′ ∈ (θ, π) one has

sup { ‖z R(z, A)‖ : z ∉ Σ_{θ′} } < ∞.

We simply say that A is sectorial if it is sectorial for some angle θ ∈ (0, π). The numerical range of a closed, linear operator (A, D(A)) on a Hilbert space H is the set

W(A) := { ⟨Au, u⟩_H : u ∈ D(A), ‖u‖_H = 1 }.

The operator A is said to be θ-accretive for θ ∈ (0, π) if W(A) ⊆ Σ_θ, that is, if |arg ⟨Au, u⟩_H| ≤ θ for every u ∈ D(A).
If θ = π/2, that is, Re ⟨Au, u⟩_H ≥ 0 for every u ∈ D(A), we say that A is accretive. Both operators L_H and L_{V′} defined above are sectorial for some angle θ ∈ (0, π/2). Since a is assumed to be coercive, we have 0 ∈ ρ(L_H) and 0 ∈ ρ(L_{V′}), that is, both L_H and L_{V′} are isomorphisms from their respective domains onto H and V′, respectively; see, e.g., [14, Theorem 2.1, p. 58].
It is easy to check that the numerical range of L_H is contained in the sector Σ_θ with θ = arctan β, and in particular L_H is θ-accretive. As a consequence, by [8, Theorem 11.13], L_H admits a bounded H^∞ functional calculus. We refer the reader to [8] or [4] for the background on fractional powers and the H^∞ functional calculus of sectorial operators.
Characterisations of the Kato property
Let (a, H) be as above; that is, a is a bounded, sectorial, coercive, sesquilinear form on a Hilbert space V which embeds densely and continuously into a second Hilbert space H. Let L_H = L_{a,H} be defined as above. We say that the couple (a, H) has Kato's property if D(L_H^{1/2}) = V; in this case, the norms ‖L_H^{1/2} ·‖_H and ‖·‖_V are equivalent on V. According to Kato [6] and Lions [10], the coincidence of any two of the spaces D(L_H^{1/2}), D((L_H*)^{1/2}) and V implies the coincidence of all three. The main result of this section is the following characterisation of the Kato property of (a, H) in terms of the associated pair of bounded operators (T, Q).
Theorem 2.1. Let (T, Q) be the pair of operators associated with (a, H) as above. Then the following assertions are equivalent:

(i) (a, H) has the Kato property;
(ii) there exists a positive operator S on V such that SQ(I + iT) is θ-accretive for some θ ∈ (0, π/2];
(ii′) there exists a positive operator S on V such that QS(I + iT) is accretive;
(iii) the operator Q(I − iT)^{-1} is similar to an accretive operator on V;
(iv) the operator Q(I − iT)^{-1} admits a bounded H^∞(Σ_{π/2}) functional calculus;
(v) the Cayley transform C := (I − A)(I + A)^{-1} of A := Q(I − iT)^{-1} is polynomially bounded;
(vi) the operator (I + Q + iT)(I − Q + iT)^{-1} is similar to a contraction on V.

Recall that, if T, Q ∈ L(V) are selfadjoint operators, then QT is selfadjoint if and only if T and Q commute, or if and only if ⟨QTu, u⟩_V ∈ R for every u ∈ V. Therefore, the above Theorem 2.1 (ii′) gives the following sufficient condition for (a, H) to have the Kato property.
Corollary 2.2. If T and Q commute, then (a, H) has the Kato property.
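The commuting case can be illustrated by a small numerical sketch (finite-dimensional, taking S = I in condition (ii′)): if T and Q are built on a common eigenbasis, so that they commute and Q is positive, then the Hermitian part of Q(I + iT)^{-1} is positive semidefinite, i.e., the operator is itself accretive. The Python script below is an illustration under these assumptions, not part of the proof.

import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random unitary from a QR factorisation; common eigenbasis for T and Q,
# which therefore commute by construction.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
T = U @ np.diag(rng.normal(size=n)) @ U.conj().T             # selfadjoint
Q = U @ np.diag(rng.uniform(0.1, 2.0, size=n)) @ U.conj().T  # positive

A = Q @ np.linalg.inv(np.eye(n) + 1j * T)    # Q(I + iT)^{-1}
herm = (A + A.conj().T) / 2                  # Hermitian (real) part
print(np.linalg.eigvalsh(herm).min())        # >= 0 up to rounding error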
We start with auxiliary results on the operators appearing in Theorem 2.1.
Lemma 2.3. Let T and Q be selfadjoint, bounded operators on a Hilbert space V, and assume that Q is nonnegative. Then the operator A := Q(I + iT)^{-1} is sectorial of angle π/2.
Proof. By a standard argument based on the Neumann series extension, it is sufficient to show that sup_{Re z ≤ 0} ‖z R(z, A)‖ < ∞. Note that for every z ∈ C with Re z ≤ 0 and every u ∈ V, one can estimate |⟨(z + izT − Q)u, u⟩_V| from below and then, by the Cauchy–Schwarz inequality, obtain a lower bound for ‖(z + izT − Q)u‖_V of the order |z| ‖u‖_V. This inequality implies that z + izT − Q is injective and has closed range. A duality argument, using similar estimates as above, shows that z + izT − Q has dense range, and therefore z + izT − Q is invertible for every z ∈ C with Re z ≤ 0. Moreover, the above inequality shows that sup_{Re z ≤ 0} ‖z R(z, A)‖ < ∞, which proves the lemma.

Let A ∈ L(V) be a bounded, sectorial operator of angle θ ∈ (0, π/2), and let C := (I − A)(I + A)^{-1} be its Cayley transform. Then the standard identity relating the resolvents of C and A, combined with the preceding lemma, yields the following statement.
Lemma 2.4. The Cayley transform C := (I − A)(I + A)^{-1} of the operator A := Q(I + iT)^{-1} from Lemma 2.3 is a Ritt operator.
Recall that a bounded operator C on a Hilbert space V is a Ritt operator if and only if it is power bounded and sup_{n∈N} n ‖C^n − C^{n+1}‖ < ∞; see [12]. Furthermore, a bounded operator C on a Hilbert space is polynomially bounded if there exists a constant M ≥ 0 such that for every polynomial p one has

‖p(C)‖ ≤ M sup_{|z| ≤ 1} |p(z)|.

The proof of Theorem 2.1 is a consequence of the following characterisation of the Kato property by means of the boundedness of the H^∞ functional calculus for the operator L_{V′}.

Lemma 2.5. Let L_{V′} = L_{a,V′} be the operator associated with (a, H) as above. Then the following assertions are equivalent:

(i) (a, H) has the Kato property;
(ii) L_{V′} admits a bounded H^∞(Σ_θ) functional calculus for some θ ∈ (0, π).

Moreover, if (i) or (ii) holds, then L_{V′} has a bounded H^∞(Σ_θ) functional calculus for every θ > arctan β, with β as in (1.1).
For the convenience of the reader we recall the proof of this result using our notation and with slight modifications.
Proof (sketch). First of all, note that the operator L_{V′} can be expressed as L_{V′} = L_{s,V′}(I + iT), and that the square roots of L_{V′} and L_{s,V′}, defined via contour integrals over the boundary of a sector Σ_θ, can be compared. Consequently, the invertibility of I + iT implies that the graph norm of L_{s,V′}^{1/2} is equivalent to the graph norm of L_{V′}^{1/2}. For the last statement about the angle of the H^∞ functional calculus, first note that, by the Closed Graph Theorem, the operator j′ is an isomorphism from H onto D(L_{V′}^{1/2}) = j′(H) equipped with the graph norm. Since the operator L_H is (arctan β)-accretive, it is sectorial of angle arctan β, and consequently so is the operator L_{V′}; the claim then follows from the known fact that, on Hilbert spaces, a bounded H^∞ functional calculus is automatically bounded for every angle exceeding the angle of sectoriality.

Proof of Theorem 2.1. Assume that (a, H) has the Kato property. By Lemma 2.5, L_{V′} has a bounded H^∞(Σ_θ) functional calculus for every θ > arctan β. Fix θ ∈ (arctan β, π/2). By the characterisation of the boundedness of the H^∞ functional calculus [8, Theorem 11.13, p. 229], L_{V′} is θ-accretive with respect to an equivalent inner product ⟨·,·⟩_θ on V′. Let S̃ ∈ L(V′) be the positive operator such that ⟨·,·⟩_θ = ⟨S̃·,·⟩_{V′}. Then S := I_V S̃ I_V^{-1} ∈ L(V) is a positive operator on V, where I_V denotes the canonical isomorphism from V′ onto V.
First, note that I_V L_{V′} Jv = (I + iT)v and I_V Jv = Qv for every v ∈ V. Then, for every v ∈ V,

⟨L_{V′} Jv, Jv⟩_θ = ⟨S̃ L_{V′} Jv, Jv⟩_{V′} = ⟨S(I + iT)v, Qv⟩_V = ⟨QS(I + iT)v, v⟩_V.

Hence the operator QS(I + iT) is θ-accretive with respect to ⟨·,·⟩_V. Therefore, (i)⇒(ii)⇒(ii′). The implication (ii′)⇒(i) follows from a similar argument.
The equivalences (ii)⇔(iii)⇔(iv) follow from a chain of equivalences, valid for every positive operator S ∈ L(V) and every θ ∈ (0, π/2], relating the θ-accretivity of QS(I + iT) with respect to ⟨·,·⟩_V to the θ-accretivity of Q(I − iT)^{-1} with respect to an equivalent inner product, and the latter, again by [8, Theorem 11.13], to the boundedness of the H^∞(Σ_θ) functional calculus.

For (iv)⇔(v), set A := Q(I − iT)^{-1}, and note that its Cayley transform is given by

(2.1)  C = (I − A)(I + A)^{-1} = (I − Q − iT)(I + Q − iT)^{-1} = φ(A),

where φ is the conformal map φ(z) := (1 − z)(1 + z)^{-1} from Σ_{π/2} onto the unit disc {|z| < 1}. Moreover, for every polynomial p we have p(C) = (p ∘ φ)(A). Therefore, the boundedness of the H^∞(Σ_θ) functional calculus of A with θ ≤ π/2 yields the polynomial boundedness of its Cayley transform C. For the converse, by Runge's theorem, it is easy to see that A has a bounded R(Σ_{π/2}) functional calculus; here R(Σ_{π/2}) stands for the algebra of rational functions with poles outside Σ_{π/2}. Then, the boundedness of the H^∞(Σ_{π/2}) functional calculus follows again by an approximation argument and McIntosh's convergence theorem.

(b) Note that in the case when the operator Q is invertible on V or, equivalently, when the inner products on H and V are equivalent, (a, H) has the Kato property simply because L_H ∈ L(H) = L(V). It should be pointed out that, in this case, the similarity to a contraction of the operator (I + Q + iT)(I − Q + iT)^{-1}, which is stated in Theorem 2.1 (vi), can be proved in a straightforward way. Indeed, in [3, Theorem 1] Fan proved that an operator C ∈ L(V) with 1 ∈ ρ(C) is similar to a contraction if and only if it can be expressed in a suitable form involving selfadjoint operators E, F, G ∈ L(V) such that G + F and G − F are positive with 0 ∈ ρ(G − F). Therefore, in the case of Q being invertible, the above stated expression (2.1) of the operator C satisfies these conditions.
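For the reader's convenience, here is a one-line verification of the standard fact used above, namely that φ(z) = (1 − z)(1 + z)^{-1} maps the open right half-plane Σ_{π/2} onto the open unit disc:

\[
|1 - z|^2 - |1 + z|^2 = \bigl(1 - 2\operatorname{Re} z + |z|^2\bigr) - \bigl(1 + 2\operatorname{Re} z + |z|^2\bigr) = -4 \operatorname{Re} z,
\]

so that |φ(z)| = |1 − z|/|1 + z| < 1 precisely when Re z > 0; moreover, φ is an involution (φ ∘ φ = id), and hence a bijection between Σ_{π/2} and {|z| < 1}.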
Kato property and triangular operators
Recall that a bounded operator ∆ on a Hilbert space V is triangular if there exists a constant M ≥ 0 such that

| Σ_{1≤j≤k≤n} ⟨∆u_j, v_k⟩_V | ≤ M ( Σ_{j=1}^n ‖u_j‖_V² )^{1/2} ( Σ_{k=1}^n ‖v_k‖_V² )^{1/2}

for every n ∈ N and every u_1, . . . , u_n, v_1, . . . , v_n ∈ V. By a theorem of Kalton [5, Theorem 5.5], an operator ∆ on V is triangular if and only if

Σ_{n=1}^∞ s_n(∆)/n < ∞,

where (s_n(∆))_{n∈N} is the sequence of singular values of ∆. Therefore, the Schatten–von Neumann classes are included in the class of triangular operators. We refer the reader to [5, Section 5] for basic properties of triangular operators. One interest in the class of triangular operators stems from the following perturbation theorem by Kalton [5].

Lemma 3.1. Let A be a sectorial operator on V admitting a bounded H^∞ functional calculus, and let ∆ ∈ L(V) be a triangular operator such that (I + ∆)A is sectorial. Then (I + ∆)A also admits a bounded H^∞ functional calculus.

Combining this result with Theorem 2.1, we show that the Kato property of (a, H) is preserved under certain triangular perturbations of the imaginary part of a, and in particular, that for all bounded, selfadjoint operators T and Q on a Hilbert space V such that T is triangular and Q is nonnegative and injective, the pair (T, Q) has the Kato property, that is, Q(I − iT)^{-1} is similar to an accretive operator on V.
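Assuming the summability criterion displayed above, the inclusion of the Schatten classes can be verified directly: since the singular values are nonincreasing,

\[
n\, s_n(\Delta)^p \le \sum_{k=1}^{n} s_k(\Delta)^p \le \|\Delta\|_{S_p}^p,
\qquad\text{so}\qquad
s_n(\Delta) \le \|\Delta\|_{S_p}\, n^{-1/p},
\]

and hence

\[
\sum_{n=1}^{\infty} \frac{s_n(\Delta)}{n} \le \|\Delta\|_{S_p} \sum_{n=1}^{\infty} n^{-1 - 1/p} < \infty
\qquad (1 \le p < \infty).
\]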
Corollary 3.2. Let a and b be two sectorial forms on V with the same real parts, that is, Re a = Re b. Let the imaginary parts t a and t b of a and b be determined by selfadjoint operators T a , T b ∈ L(V ), respectively. Assume that (b, H) has the Kato property, and that T a − T b is a triangular operator. Then (a, H) has the Kato property, too.
In particular, if T a is a triangular operator, then (a, H) has the Kato property for every Hilbert space H into which V is densely and continuously embedded.
Proof of Corollary 3.2. Note that, by the second resolvent equation, we get

(I − iT_a)^{-1} Q = ( I + i(I − iT_a)^{-1}(T_a − T_b) ) (I − iT_b)^{-1} Q.

Therefore, since the operator i(I − iT_a)^{-1}(T_a − T_b) is triangular, the claim follows from Lemma 2.3, Lemma 3.1, and Theorem 2.1 (iii).
Alternatively, note that Arendt's result, Lemma 2.5, used in the proof of Theorem 2.1, can be applied directly to obtain Corollary 3.2. Indeed, set ∆ := L_{a,V′} L_{b,V′}^{-1} − I, so that L_{a,V′} = (I + ∆)L_{b,V′}. By our assumption, L_{b,V′} admits a bounded H^∞ functional calculus. We also recall that both L_{a,V′} and L_{b,V′} are sectorial operators. By Lemma 3.1, it is thus sufficient to show that the operator ∆ is triangular. Since Re a = Re b, we have, up to conjugation by L_{s,V′},

∆ = i(T_a − T_b)(I + iT_b)^{-1},

which is triangular because T_a − T_b is triangular and triangularity is preserved under composition with bounded operators. For the proof of the second statement, it is sufficient to apply the one just proved to a symmetric form, that is, to b with Im b = 0.
In an analogous way, by combining Lemma 3.1 with Theorem 2.1 (iii), we get the following perturbation result for the real parts of forms.
Proof. By assumption, the operator (I − iT_b)^{-1}Q_b, associated with the pair (b, H), has a bounded H^∞ functional calculus. The corresponding operator Q_a for the form a is equal to SQ_b, and T_a = ST_b. Hence the operator ∆ determined by (I − iST_b)^{-1}SQ_b = (I + ∆)(I − iT_b)^{-1}Q_b is triangular. Therefore, again by Lemma 3.1, (I − iST_b)^{-1}SQ_b has a bounded H^∞ functional calculus, which completes the proof.
Finally, for the sake of completeness, we note that a perturbation result for the operator Q generating the Hilbert space H in (a, H) holds as well; its proof follows directly from Lemma 3.1 and Theorem 2.1 (iv).

The result of Corollary 3.2 is, in a sense, sharp: there exist a selfadjoint, compact operator T on a Hilbert space V with s_n(T) ≍ a_n (n ∈ N), and a nonnegative, injective operator Q on V, such that Q(I + iT)^{-1} is not similar to an accretive operator.
In order to construct an example we adapt two related results from [5] and [2]. Recall that, in [2], the sesquilinear form a on a Hilbert space H is expressed as

a(u, v) = ⟨ASu, Sv⟩_H  (u, v ∈ D(S)),

where S is a positive selfadjoint (not necessarily bounded) operator on H, and A is a bounded, invertible, θ-accretive operator on H for some θ < π/2. (Here, we call the selfadjoint operator S on H positive if ⟨Su, u⟩_H > 0 for all u ∈ D(S) \ {0}.) Then, following Kato's terminology [6], a is a regular accretive form in H. The operator L_{a,H} associated with the form a on H is given by SAS. Note that s = Re a is an inner product equivalent to ⟨S·, S·⟩_H, and in order to put it in our setting, we additionally assume that 0 ∈ ρ(S). Then s is a complete inner product on V := D(S). In fact, since S is selfadjoint, to get the completeness of this inner product, it is sufficient that S is injective and has closed range.
For the convenience of the reader we restate two auxiliary results from [5] and [2]; they concern the operators T and Q associated, as above, with the pair (a, H) just described.

Proof. First, note that the operators T and Q can be written explicitly in terms of Re A and Im A, the real and the imaginary part of A, and of S^{-1}|, the restriction of S^{-1} ∈ L(H) to V, considered as an operator in L(V, H). These expressions give the first statements in (i) and (ii). The second assertion in (i) follows in a straightforward way from, e.g., [13, Theorem 7.7, p. 171].
To prove the corresponding assertion in (ii), assume that S^{-1} is compact with spectrum σ(S^{-1}) =: {μ_n}_{n∈N}, where μ_n → 0+ as n → ∞. Then there exists an orthonormal system {e_n}_{n∈N} in H such that

Sh = Σ_n μ_n^{-1} ⟨h, e_n⟩_H e_n  for h ∈ D(S) = {h ∈ H : Σ_n μ_n^{-2} |⟨h, e_n⟩_H|² < ∞}.

Let C : V_* → H denote the natural embedding, Cu := u for u ∈ D(S), where V_* denotes the Hilbert space (D(S), ⟨S·, S·⟩_H). Of course, C ∈ L(V_*, H) and C*C ∈ L(V_*) are compact. Moreover, note that

C*Cu = Σ_n μ_n² ⟨u, g_n⟩_{V_*} g_n,  u ∈ V_*,

where g_n := μ_n e_n (n ∈ N) is an orthonormal basis for V_*. Thus, the singular values of C are given by s_n(C) = μ_n, n ∈ N. Finally, note that s_n(S^{-1}) is equal to the n-th singular value of the embedding of V_* into H. This completes the proof.
Essential singularities of fractal zeta functions
We study the essential singularities of geometric zeta functions $\zeta_{\mathcal L}$, associated with bounded fractal strings $\mathcal L$. For any three prescribed real numbers $D_{\infty}$, $D_1$ and $D$ in $[0,1]$, such that $D_{\infty}<D_1\le D$, we construct a bounded fractal string $\mathcal L$ such that $D_{\rm par}(\zeta_{\mathcal L})=D_{\infty}$, $D_{\rm mer}(\zeta_{\mathcal L})=D_1$ and $D(\zeta_{\mathcal L})=D$. Here, $D(\zeta_{\mathcal L})$ is the abscissa of absolute convergence of $\zeta_{\mathcal L}$, $D_{\rm mer}(\zeta_{\mathcal L})$ is the abscissa of meromorphic continuation of $\zeta_{\mathcal L}$, while $D_{\rm par}(\zeta_{\mathcal L})$ is the infimum of all positive real numbers $\alpha$ such that $\zeta_{\mathcal L}$ is holomorphic in the open right half-plane $\{{\rm Re}\, s>\alpha\}$, except for possible isolated singularities in this half-plane. Defining $\mathcal L$ as the disjoint union of a sequence of suitable generalized Cantor strings, we show that the set of accumulation points of the set $S_{\infty}$ of essential singularities of $\zeta_{\mathcal L}$, contained in the open right half-plane $\{{\rm Re}\, s>D_{\infty}\}$, coincides with the vertical line $\{{\rm Re}\, s=D_{\infty}\}$. We extend this construction to the case of distance zeta functions $\zeta_A$ of compact sets $A$ in $\mathbb{R}^N$, for any positive integer $N$.
Introduction and notation
1.1. Introduction. In the theory of bounded fractal strings, developed since the early 1990s by the first author and his collaborators in numerous papers and several research monographs (see the books [15,12], the survey article [10], and the many relevant references therein), to each fractal string L a set of complex dimensions, denoted by dim C L, is assigned, defined as the set of poles of the corresponding geometric zeta function ζ L , suitably meromorphically extended. In this paper, we provide a construction of a class of fractal strings such that the corresponding geometric zeta functions generate essential singularities accumulating along a prescribed vertical line {Re s = D ∞ } of the complex plane, with arbitrarily prescribed D ∞ ∈ [0, 1). This is a new phenomenon appearing in the theory of fractal strings. The main result is stated in Theorem 2.12.
The first example of a fractal string L, the geometric zeta function ζ L of which possesses essential singularities, has been constructed in [12, Example 3.3.7 on p. 215] (see also [17]), starting from the classical Cantor string. In this paper, we first extend this construction to a class of generalized Cantor strings depending on two real parameters.
1.2. Notation. Following [12], we introduce some basic notation that we shall need in the sequel.
A bounded fractal string L = (ℓ_j)_{j∈N} is defined as being either a nonincreasing infinite sequence of positive real numbers such that Σ_{j=1}^∞ ℓ_j < ∞, or else a finite sequence of positive real numbers. Its length is

(1.1)  |L|_1 := Σ_{j≥1} ℓ_j.

For any two bounded fractal strings L_1 = (ℓ_{1j})_{j∈N} and L_2 = (ℓ_{2k})_{k∈N}, we define their tensor product,

(1.2)  L_1 ⊗ L_2 := (ℓ_{1j} ℓ_{2k})_{j,k∈N},

as the fractal string consisting of all possible products ℓ_{1j} ℓ_{2k}, where j, k ∈ N, counting the multiplicities. It is also bounded, since |L_1 ⊗ L_2|_1 = |L_1|_1 · |L_2|_1 < ∞. We can also define their disjoint union L_1 ⊔ L_2 as the union of multisets; that is, each element of L_1 ⊔ L_2 has multiplicity equal to the sum of its multiplicities in L_1 and L_2. It is possible to define the disjoint union ⊔_{i=1}^∞ L_i of an infinite sequence L_i = (ℓ_{ij})_{j∈N} of bounded fractal strings, where i ∈ N, provided Σ_{i,j} ℓ_{ij} < ∞. For any positive real number λ and a bounded fractal string L = (ℓ_j)_{j∈N}, we can define a new fractal string λL := (λℓ_j)_{j∈N}.
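The operations just defined are easy to realise computationally; the following minimal Python sketch (toy, finite strings only; all names are illustrative) represents a fractal string as a nonincreasing list of lengths.

def disjoint_union(L1, L2):
    return sorted(L1 + L2, reverse=True)            # multiset union

def tensor(L1, L2):
    return sorted((a * b for a in L1 for b in L2), reverse=True)

def scale(c, L):
    return [c * l for l in L]                       # the string c*L

L1 = [0.5, 0.25]
L2 = [0.3, 0.1]
print(disjoint_union(L1, L2))   # [0.5, 0.3, 0.25, 0.1]
print(tensor(L1, L2))           # [0.15, 0.075, 0.05, 0.025]
print(scale(2.0, L1))           # [1.0, 0.5]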
The geometric zeta function ζ_L of a given bounded fractal string L = (ℓ_j)_{j∈N} is defined by

(1.3)  ζ_L(s) := Σ_{j=1}^∞ ℓ_j^s,

where s is a complex number with Re s > 1. Clearly, ζ_L(1) = |L|_1 < ∞.
The abscissa of absolute convergence of ζ_L is denoted by D(ζ_L), while the abscissa of meromorphic continuation of ζ_L is denoted by D_mer(ζ_L). It can be easily verified that −∞ ≤ D_mer(ζ_L) ≤ D(ζ_L) ≤ 1, where D(ζ_L) := inf{α ∈ R : Σ_{j=1}^∞ ℓ_j^α < ∞} coincides with the Minkowski dimension, dim L, of the fractal string whenever the fractal string L is infinite, i.e., whenever (ℓ_j)_{j∈N} is an infinite sequence of positive numbers tending to zero.¹ The notions of abscissa of absolute convergence and of meromorphic continuation can be extended to general Dirichlet-type integrals; see [12, esp., Appendix A] for details.
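As a simple illustration of these notions (an added example, not taken from the cited sources), fix D′ ∈ (0, 1) and consider the bounded fractal string with ℓ_j := j^{-1/D′}. Then

\[
\sum_{j=1}^{\infty} \ell_j^{\alpha} = \sum_{j=1}^{\infty} j^{-\alpha/D'} < \infty
\iff \frac{\alpha}{D'} > 1 \iff \alpha > D',
\]

so that D(ζ_L) = dim L = D′, while |L|_1 = Σ_j j^{-1/D′} < ∞ because 1/D′ > 1.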
For any given real number α, we define the corresponding vertical line {Re s = α} := {s ∈ C : Re s = α} in the complex plane, while the corresponding open right half-plane {s ∈ C : Re s > α} is denoted by {Re s > α}. For any two real numbers α and β, we define α + βiZ := {α + βij ∈ C : j ∈ Z}, which is an arithmetic set contained in the vertical line {Re s = α} of the complex plane. Here and thereafter, we let i := √−1 denote "the" complex square root of −1.
Remark 1.1 (Geometric realization of bounded fractal strings). A natural way in which bounded fractal strings arise is as follows (see [15] and the earlier references). Consider an open set Ω of R, with boundary denoted by ∂Ω and with finite length (i.e., one-dimensional Lebesgue measure) |Ω|_1.² Then, Ω = ∪_{j≥1} I_j, where the (finite or countable) family (I_j)_{j≥1} consists of bounded open intervals I_j of lengths ℓ_j. These intervals are simply the connected components of the open set Ω. Without loss of generality, and since |Ω|_1 = Σ_{j≥1} ℓ_j < ∞ (because the fractal string L := (ℓ_j)_{j∈N} is bounded), one may assume that (ℓ_j)_{j∈N} is nonincreasing and (when the sequence is infinite) ℓ_j → 0 as j → ∞. (In the sequel, we will not always assume that (ℓ_j)_{j∈N} has been written in nonincreasing order.) We note that any choice of open set Ω ⊆ R satisfying the above properties is called a geometric realization of L.

We close this remark by recalling that if L is an infinite sequence of positive numbers, then D(ζ_L) coincides with the (upper) Minkowski dimension of L (i.e., of ∂Ω, for any choice of geometric realization of L, in the above sense); see [15, Theorem 1.10].

¹ Since L is bounded, we then always have that 0 ≤ dim L ≤ 1 and hence, similarly, 0 ≤ D(ζ_L) = dim L ≤ 1.
² The boundary ∂Ω ⊆ R is always compact and, in the applications, is often a "fractal" subset of R; see, e.g., Example 1.2, where ∂Ω is the classic (ternary) Cantor set.
Conversely, given a bounded fractal string (ℓ j ) j∈N , there are many different ways to associate to it an open set Ω of finite length and such that |Ω| 1 = j≥1 ℓ j . There is, however, a canonical way to do so; see [12, pp. 88-89].
We close this remark by recalling that if L is an infinite sequence of positive numbers, then D(ζ L ) coincides with the (upper) Minkowski dimension of L (i.e., of ∂Ω, for any choice of geometric realization of L, in the above sense; see [15,Theorem 1.10] 1 Since L is bounded, we then always have that 0 ≤ dim L ≤ 1 and hence, similarly for D(ζL) = dim L. 2 The boundary ∂Ω ⊆ R is always compact and, in the applications, is often a "fractal" subset of R; see, e.g., Example 1.2, where ∂Ω is the classic (ternary) Cantor set. and consists of the "middle-thirds" (that is, of all the deleted intervals in the standard construction of the Cantor set C). Then, L = L CS := (ℓ j ) j≥1 consists of the following infinite sequence where 3 −j appears with the multiplicity 2 j−1 (for j = 1, 2, . . .).
Observe that the boundary of the Cantor string is the classic Cantor set C: ∂Ω CS = C.
Finally, a simple computation (based on (1.3) and (1.4), and followed by an application of the principle of analytic continuation) shows that ζ_L = ζ_L(s) (also denoted by ζ_CS(s)) is meromorphic in all of C and is given by

(1.5)  ζ_CS(s) = 3^{-s} / (1 − 2·3^{-s})  for all s ∈ C.
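Formula (1.5) is easy to test numerically; the following Python sketch (the truncation level is arbitrary) compares the truncated Dirichlet sum (1.3) with the closed form (1.5) at a point of the half-plane of convergence Re s > log_3 2 ≈ 0.6309.

def zeta_cs_partial(s, terms=200):
    # length 3^{-j} occurs with multiplicity 2^{j-1}
    return sum(2 ** (j - 1) * 3 ** (-j * s) for j in range(1, terms + 1))

def zeta_cs_closed(s):
    return 3 ** (-s) / (1 - 2 * 3 ** (-s))

s = 0.9 + 2.0j
print(abs(zeta_cs_partial(s) - zeta_cs_closed(s)))   # ~ 0 (tiny)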
Paramorphic functions and their paramorphic continuations
It has been noticed that there are (nontrivial) bounded fractal strings without any complex dimensions in the classical sense (viewed as poles of a meromorphic extension of the associated geometric zeta function). As an example, see the fractal string L ∞ constructed in [12, Example 3.3.7 on p. 215] or in [17]. In this case, the geometric zeta function ζ L does not have any poles but has essential singularities. Therefore, there is a natural need to extend the notion of complex dimensions, in order to include essential singularities as well. To achieve this, we need a more general definition of an extension (of a geometric zeta function) than just a meromorphic extension to an open right half-plane (or some more general domain) of the complex plane. This leads in a natural way to the notions of paramorphic extensions and paramorphic functions. An additional justification is provided by the fact that singularities which are not poles (of a fractal zeta function) also have a natural geometric meaning in our context because like the poles, they often contribute to the corresponding fractal tube formula; see [14].
Definition 2.1. Let U be a nonempty connected open subset of the complex plane, and let S := {s_k : k ∈ J} be a subset (possibly empty) of isolated points of U.³ Let f : U \ S → C be a holomorphic function. Then, we say in short that the function f is paramorphic in U.
Remark 2.2. The set S appearing in Definition 2.1 is clearly at most countable, and the set of (possible) accumulation points of S is contained in the topological boundary ∂U of U . Indeed, since we assume the function f : U \ S → C to be holomorphic, then the set U \ S must a priori be open.
In other words, the set S is closed with respect to the relative topology of U .
Obviously, all meromorphic functions are automatically paramorphic but the converse is, of course, not true.
Example 2.3. Fix z_0 ∈ C and consider the function f(z) := exp(1/(z − z_0)), which is holomorphic on C \ {z_0}. Here, z_0 ∈ C is the only singularity of f, and it is essential.

Lemma 2.4. Let f be paramorphic in U, with set S of nonremovable isolated singularities. Then S is closed in the relative topology of U.

Proof. Assume, contrary to the claim, that the set S is not closed. Then there exists s_0 ∈ (Cl S \ S) ∩ U. On the one hand, f is holomorphic at s_0, since s_0 ∈ U \ S. On the other hand, there is a sequence (s_k)_{k≥1} of nonremovable singularities of f converging to s_0 as k → ∞, which is impossible. This proves the lemma.
As we can see, by saying that a complex-valued function f is paramorphic on U , we mean that f : U → C is differentiable (i.e., holomorphic) at all points of U except on a subset S of isolated singularities of f . Each s 0 ∈ S is either a removable singularity, or a pole, or an isolated essential singularity. If we exclude removable singularities from the set S, then S is uniquely determined by f , consisting of its poles and isolated essential singularities contained in U .
For a fixed nonempty connected open subset U of the complex plane, the vector space of all functions paramorphic on U is denoted by Par(U ).
Remark 2.5. We point out that the notion of a paramorphic function is closely related to the class S of functions introduced by A. Bolsch in [3, 4] (see also the class K from [2, 7]) for studying iterations of complex maps (from the dynamical perspective) which are meromorphic except in a "small" set. Namely, a function f : C → C is said to be in the class S if there exists a closed countable set A(f) ⊆ C such that f is meromorphic in C \ A(f) but in no proper superset.⁴ The above definition is more general than the definition of a paramorphic function, since the set A(f) may also contain non-isolated singularities that arise as accumulation points of isolated singularities of f. On the other hand, a paramorphic function f : U → C cannot have any non-isolated singularities in the open domain U ⊆ C. In the general theory of complex dimensions, we conjecture that only isolated singularities of fractal zeta functions should be considered as "proper" complex dimensions of the associated fractal set. A strong indication of this is the fact that they have a direct geometric meaning, since these complex dimensions appear as co-exponents in the asymptotics of the fractal tube formula of the given set, whereas the non-isolated singularities are a kind of byproduct of the isolated ones, i.e., of the "proper" complex dimensions.
Definition 2.6. Assume that U and V are connected open subsets of the complex plane, and f ∈ Par(U ), g ∈ Par(V ). If U ⊆ V and g| U = f (except for the set of isolated singularities of f ), we say that g is a paramorphic extension of f . The following result shows that a paramorphic extension g ∈ Par(V ) of f ∈ Par(U ) in Definition 2.6 is uniquely determined by f . Theorem 2.9 (Unique paramorphic continuation principle). Let U and V be nonempty connected open subsets of the complex plane C and U ⊆ V . If g 1 , g 2 ∈ Par(V ) and g 1 | U = g 2 | U , then g 1 = g 2 . In other words, the sets of nonremovable isolated singularities of g 1 and g 2 coincide, and g 1 = g 2 on the complement of their common set of singularities in V .
Proof. Let S = S(g_1) be the set of nonremovable isolated singularities of g_1. Then, according to Definition 2.1 (and Remark 2.2 along with Lemma 2.4), V \ S is an open set and g_1 is holomorphic in all of V \ S. Therefore, since g_2 coincides with g_1 on U \ S, and since V \ S is open and connected, it follows from the principle of analytic continuation that g_2 coincides with g_1 on all of V \ S. As a result, g_2 is holomorphic in all of V \ S and hence, S(g_2) is contained in S(g_1) = S. Now, by the symmetry of the hypotheses on g_1 and g_2 (in the statement of Theorem 2.9), we can apply the same reasoning with the roles of g_1 and g_2 interchanged and conclude that S(g_1) is also contained in S(g_2). Hence, g_1 and g_2 have a common set of nonremovable singularities S, and g_1 = g_2 on V \ S.

We also provide the following result, which shows that the set Par(U) of paramorphic functions on a given connected open subset U ⊆ C is closed under multiplication; i.e., it is an algebra.

Proposition 2.10. Par(U), endowed with pointwise addition and multiplication, is a commutative, unital algebra.

Proof. The unit element in this algebra is, of course, the function 1 ∈ Par(U) defined by 1(s) = 1 for all s ∈ U. For f_1, f_2 ∈ Par(U), it is easy to see that also f_1 · f_2 ∈ Par(U). Namely, if f_j : U \ S_j → C, j = 1, 2, are two holomorphic functions, where S_j is the corresponding set of isolated singularities of f_j, for j = 1, 2, then the product f_1 · f_2 is well defined and holomorphic on U \ (S_1 ∪ S_2). (Here, some elements of S_1 ∪ S_2 may be removable singularities of f_1 · f_2, due to possible cancellations.) Hence, according to Definition 2.1, the product f_1 · f_2 is paramorphic on U.
In the following definition, we introduce the notion of the 'abscissa of paramorphic continuation' of a given paramorphic function, which is analogous to that of the 'abscissa of meromorphic continuation' of a given meromorphic function.
Definition 2.11. Let α be a real number and let {Re s > α} be the corresponding open right half-plane in C. Assume that f : {Re s > α} → C is a Dirichlet-type integral (or, in short, DTI; see, e.g., [12], esp., Appendix A), such that f is paramorphic on {Re s > α}, for some α ∈ R.⁵ The abscissa of paramorphic continuation D_par(f) of f is defined as the infimum of all real numbers β, with β ≤ α, such that f can be paramorphically extended from {Re s > α} to {Re s > β}.⁶ Equivalently, {Re s > D_par(f)} is the largest open right half-plane to which f can be paramorphically extended. (It is easy to deduce from Theorem 2.9 that this notion is well defined.)⁷ The vertical line {Re s = D_par(f)} is called the paramorphic barrier of f. Clearly,

D_par(f) ≤ D_mer(f) ≤ D(f).

If f is a DTI of the form of a geometric zeta function, i.e., f = ζ_L for some bounded fractal string L, then clearly,

D_par(ζ_L) ≤ D_mer(ζ_L) ≤ D(ζ_L) ≤ 1.

It is also clear that the notion of a paramorphic barrier, introduced in Definition 2.11 above, can be extended to a much more general setting.

⁵ Otherwise, we let D_par(f) = +∞, which means that f cannot be paramorphically extended to any (nonempty) right half-plane.
⁶ We also allow for D_par(f) = −∞, which means that f can be paramorphically extended to all of C.
⁷ Indeed, if f is paramorphic on each element of a family of right half-planes, {Re s > α_i}_{i∈I}, then (by Theorem 2.9) it is paramorphic on the union of these right half-planes, namely, on the right half-plane {Re s > α}, where α := inf_{i∈I} α_i.
We are now ready to state the main result of this paper.
Theorem 2.12. Let D_∞, D_1 and D be three prescribed real numbers belonging to the interval [0, 1] and such that D_∞ < D_1 ≤ D. Then, there exists an explicit (i.e., explicitly constructible) bounded fractal string L such that the corresponding geometric zeta function ζ_L can be paramorphically extended to the open right half-plane {Re s > D_∞} and

D_par(ζ_L) = D_∞,  D_mer(ζ_L) = D_1,  D(ζ_L) = D.

In addition to this, it can be achieved that the line {Re s = D_∞} coincides with the paramorphic barrier of ζ_L (in the sense of Definition 2.11 above), while the vertical open strip {D_∞ < Re s < D_1} contains infinitely many essential singularities of ζ_L, and such that the paramorphic barrier coincides with the set of accumulation points of the set of essential singularities of ζ_L.
We postpone the proof of Theorem 2.12 until Section 4 (more precisely, until Subsection 4.1).
Generalized Cantor strings of finite and infinite orders
and their geometric zeta functions

3.1. Generalized Cantor strings of finite order. Let r_j, with j = 1, . . . , m, be positive real numbers such that r_1 + · · · + r_m < 1. Let L(r_1, . . . , r_m) be the self-similar fractal string defined as the nonincreasing sequence of all monomial terms of the form r_1^{α_1} · · · r_m^{α_m}, where α_1, . . . , α_m range over all nonnegative integers, counted with multiplicities. It can be shown (see [15, Chapters 2 and 3]) that the corresponding geometric zeta function is given by

(3.1)  ζ_{L(r_1,...,r_m)}(s) = 1 / (1 − Σ_{j=1}^m r_j^s)

for all s ∈ C. This is established by first verifying Eq. (3.1) via a direct computation, valid for all s ∈ C with Re s sufficiently large,⁸ and then, upon meromorphic continuation, by deducing that (3.1) holds, in fact, for all s ∈ C. For example, by choosing m = 2 and r_1 = r_2 = 1/3, we obtain the Cantor string L(1/3, 1/3) = (ℓ_j)_{j∈N}. It corresponds to the nonincreasing sequence of lengths of the deleted open intervals obtained during the construction of the usual ternary Cantor set C^{(2,1/3)} scaled by the factor 3, i.e., starting with the interval [0, 3] instead of [0, 1]; see [15, ibid.] or [12, Definition 3.3.1 and Theorem 3.3.3]. Furthermore, in light of Eq. (3.1) and in keeping with the above explanations, we see that ζ_{L(1/3,1/3)}(s) = 1/(1 − 2·3^{-s}) for all s ∈ C such that Re s > log_3 2. As was explained above in the case of a general self-similar string, the geometric zeta function ζ_{L(1/3,1/3)} can then be meromorphically extended to the whole complex plane by letting ζ_{L(1/3,1/3)}(s) := 1/(1 − 2·3^{-s}) for all s ∈ C.

Let m be a positive integer such that m ≥ 2, and let a ∈ (0, 1/m). Let us define the generalized Cantor string

(3.2)  L^{(m,a)} := L(a, . . . , a)  (with m copies of a).

Here, by using Eq. (3.1), we obtain that

(3.3)  ζ_{L^{(m,a)}}(s) = 1 / (1 − m a^s)

for all s ∈ C with Re s > log_{1/a} m. This geometric zeta function can then be meromorphically extended to the whole complex plane, so that (3.3) holds for all s ∈ C.
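As a small added reasoning step (standard, and consistent with the notation α + βiZ of Subsection 1.2), the poles of the right-hand side of (3.3) can be located explicitly:

\[
m a^{s} = 1 \iff e^{-s \ln(1/a)} = \frac{1}{m} \iff s = \frac{\ln m}{\ln(1/a)} + \frac{2\pi i k}{\ln(1/a)}, \qquad k \in \mathbb{Z},
\]

so the poles of ζ_{L^{(m,a)}} form the arithmetic set D + (2π/ln(1/a))iZ on the vertical line {Re s = D}, where D := log_{1/a} m.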
For any fixed integer n ≥ 1, we introduce the generalized Cantor string of n-th order, defined by

(3.4)  L^{(m,a)}_1 := L^{(m,a)},  L^{(m,a)}_n := L^{(m,a)}_{n−1} ⊗ L^{(m,a)}  for n ≥ 2.

In other words, we iterate multiplying L^{(m,a)} by itself, using the tensor product of fractal strings; that is, for every integer n ≥ 1,

(3.5)  L^{(m,a)}_n = L^{(m,a)} ⊗ · · · ⊗ L^{(m,a)}  (n factors).

The geometric zeta function of L^{(m,a)}_n can be explicitly computed (initially, for all s ∈ C with Re s large enough) and then meromorphically extended to the whole complex plane. We first have

(3.6)  ζ_{L^{(m,a)}_2}(s) = ζ_{L^{(m,a)}}(s)² = (1 − m a^s)^{-2},

and then, by induction, for each n ≥ 1 and all s ∈ C,

(3.7)  ζ_{L^{(m,a)}_n}(s) = (1 − m a^s)^{-n}.

Here, we have used the multiplicative property of the geometric zeta function with respect to tensor products of fractal strings; see [12, Lemma 3.3.2]. The total length of the generalized Cantor string of n-th order L^{(m,a)}_n is given by

(3.8)  |L^{(m,a)}_n|_1 = ζ_{L^{(m,a)}_n}(1) = (1 − m a)^{-n}.

Note that |L^{(m,a)}_n|_1 → ∞ as n → ∞. The above construction of the fractal string L^{(m,a)}_n, as well as the computation of its geometric zeta function, are a natural extension of the ones provided in [12, Example 3.3.7 on p. 215] in the case when m = 2 and r_1 = r_2 = 1/3. For the general theory of the complex dimensions of fractal strings, see [15] and [12].
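The multiplicativity of ζ with respect to ⊗, used in deriving (3.6) and (3.7), is also easy to test numerically on toy strings; in the Python sketch below, the strings and the evaluation point are arbitrary illustrative choices.

def zeta(lengths, s):
    return sum(l ** s for l in lengths)

L1 = [0.5, 0.25, 0.125]
L2 = [0.3, 0.3, 0.09]
tensor = [a * b for a in L1 for b in L2]     # all pairwise products

s = 1.2 + 0.7j
print(abs(zeta(tensor, s) - zeta(L1, s) * zeta(L2, s)))   # ~ 0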
It is easy to explicitly compute the coefficients c^j_l, with l ≥ −n, appearing in the Laurent expansion of ζ_{L^{(m,a)}_n} around each of its poles s_j := log_{1/a} m + 2πij/ln(1/a), j ∈ Z, each of which is a pole of order n. It is interesting to note that the value of c^j_{−n} is, in fact, independent of j ∈ Z. Other coefficients of the form c_l = c_{−n+r}, with r ≥ 1, can easily be computed as well, since c_{−n+r} is obtained as a suitable limit as s → s_j.

3.2. Generalized Cantor strings of infinite order. The generalized Cantor string of infinite order, L^{(m,a)}_∞, is defined in Eq. (3.12) as a suitably scaled disjoint union of the strings L^{(m,a)}_n, n ≥ 1. Its geometric zeta function, given by Eq. (3.13), has an essential singularity at each of the points s_j, j ∈ Z, and there are no other isolated singularities. (For m = 2 and a = 1/3, this construction has been described in [12, Example 3.3.7]; see also [17] and [18].) In light of Eq. (3.13), we see that the total length of the string L^{(m,a)}_∞ is given by Eq. (3.14); in particular, L^{(m,a)}_∞ is bounded.

3.3. Power series of bounded fractal strings. Let X be the set of all bounded fractal strings. In Subsection 1.2, we introduced two binary operations, which can be viewed as the operations of addition and multiplication on X, defined as the disjoint union ⊔ of fractal strings and the tensor product ⊗, respectively. It is easy to check that (L_1 ⊔ L_2) ⊗ L_3 = (L_1 ⊗ L_3) ⊔ (L_2 ⊗ L_3) for any L_n ∈ X, n = 1, 2, 3. In this manner, we have obtained a commutative unital semiring (X, ⊔, ⊗) (without the zero element).⁹ The unit element in this semiring is E := (1). This structure is not a ring, since the elements of X do not possess additive inverses with respect to the binary operation ⊔.

⁹ If zero in X were defined as the one-element sequence (0), then X would have to contain (0) ⊗ L = (0, 0, . . .), which is an infinite sequence of zeros. This means that this string would have the real number 0 with infinite multiplicity, which we cannot permit. Otherwise, the disjoint union of a nonzero string L and 0 in X is not well defined (i.e., it cannot be ordered as a nonincreasing sequence of reals).
We also have the operation of scalar multiplication of bounded fractal strings L := (ℓ j ) j≥1 with positive real numbers c, where the resulting fractal string is cL := (cℓ j ) j≥1 . The set X, viewed with respect to ⊔ as addition and with respect to scalar multiplication, is clearly a positive convex cone, since for any positive real numbers c and d and any two fractal strings L 1 , L 2 ∈ X, we have that cL 1 ⊔ dL 2 ∈ X.
We are now ready to introduce the notion of a power series of bounded fractal strings in X, as follows. Let F(z) := Σ_{n=0}^∞ c_n z^n be a usual power series of complex numbers z, where we assume that the coefficients c_n are nonnegative real numbers for all integers n ≥ 0, that c_n > 0 for at least one n ≥ 0, and that the radius of convergence R of the series F is positive (or infinite). For any fixed fractal string L := (ℓ_j)_{j≥1} ∈ X such that |L|_1 := Σ_{j≥1} ℓ_j < R (i.e., of total length less than R), we can define the corresponding bounded fractal string F(L) by

F(L) := ⊔_{n≥0, c_n>0} c_n L^n,

where L^n is the tensor product of n copies of L for n ≥ 1, while L^0 := E. It is easy to verify that the fractal string F(L) is bounded:

|F(L)|_1 = Σ_{n≥0} c_n |L|_1^n = F(|L|_1) < ∞.

In this way, we have obtained the mapping F : {L ∈ X : |L|_1 < R} → X. In particular, if R = +∞, we have the mapping F : X → X.
As an example, if we consider the function F(z) := exp(z), then c_n = (n!)^{-1} for all n ≥ 0 and R = +∞. We see that for any bounded fractal string L ∈ X, the exponential fractal string of L, that is,

exp(L) := ⊔_{n≥0} (1/n!) L^n,

is well defined, i.e., it belongs to X. Hence, exp maps X into itself.

Proposition 3.1. Let F(z) = Σ_{n=0}^∞ c_n z^n be a power series with nonnegative coefficients, where c_n > 0 for at least one n ≥ 0 and with radius of convergence R > 0. Then, for any fractal string L ∈ X of total length less than R (i.e., |L|_1 < R), we have that

ζ_{F(L)}(s) = Σ_{n≥0, c_n>0} c_n^s ζ_L(s)^n  for all s ∈ C with Re s > D(ζ_L).

Proof. For any s in {Re s > D(ζ_L)}, we have that

(3.22)  ζ_{F(L)}(s) = Σ_{n≥0, c_n>0} ζ_{c_n L^n}(s) = Σ_{n≥0, c_n>0} c_n^s ζ_L(s)^n,

where we have used the fact that for any three fractal strings L_1, L_2 and L in X and for any positive real number c, we have that ζ_{L_1 ⊔ L_2}(s) = ζ_{L_1}(s) + ζ_{L_2}(s), ζ_{cL}(s) = c^s ζ_L(s) and ζ_{L_1 ⊗ L_2}(s) = ζ_{L_1}(s) · ζ_{L_2}(s) (in particular, by mathematical induction, we have that ζ_{L^n}(s) = ζ_L(s)^n, for any n ≥ 2). If D(ζ_L) < 1, then (3.22) holds, in particular, on all of the open right half-plane {Re s > D(ζ_L)}, which completes the proof.
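The identity (3.22) can likewise be checked numerically for F(z) = exp(z); in the Python sketch below, the string, the evaluation point, and the truncation order N are arbitrary illustrative choices.

from math import factorial

def zeta(lengths, s):
    return sum(l ** s for l in lengths)

def tensor_power(lengths, n):
    out = [1.0]                       # L^0 = E = (1)
    for _ in range(n):
        out = [a * b for a in out for b in lengths]
    return out

L = [0.4, 0.2]
s = 1.5
N = 12                                # truncation order

lhs = sum(zeta([l / factorial(n) for l in tensor_power(L, n)], s)
          for n in range(N))
rhs = sum((1.0 / factorial(n)) ** s * zeta(L, s) ** n for n in range(N))
print(abs(lhs - rhs))                 # ~ 0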
4. Geometric zeta functions with prescribed abscissa of paramorphic continuation
This section is divided into two subsections. In Subsection 4.1, we establish one of the key results of this paper (Theorem 2.12), which asserts the existence of suitable paramorphic (and complex-valued) fractal zeta functions with prescribed abscissae of paramorphic continuation, meromorphic continuation and absolute convergence, respectively. Moreover, in Subsection 4.2, based in part on this result, we construct suitable (real-valued) harmonic functions that are associated with paramorphic geometric zeta functions and have interesting sets of essential singularities.
4.1. Construction of a class of paramorphic fractal zeta functions via a sequence of generalized Cantor strings.

In this subsection, using the results of Section 3, we construct a fractal string $\mathcal{L}$ such that the corresponding geometric zeta function $\zeta_{\mathcal{L}}$ has prescribed values of the abscissae of paramorphic continuation $D_{\mathrm{par}}(\zeta_{\mathcal{L}})$ (see Definition 2.11), of meromorphic continuation $D_{\mathrm{mer}}(\zeta_{\mathcal{L}})$ and of absolute convergence $D(\zeta_{\mathcal{L}})$. The construction of $\mathcal{L}$ is based on a careful choice of a suitable sequence of generalized Cantor strings. The corresponding precise result is stated in Theorem 2.12, to which we refer the reader and which we now establish.
Proof of Theorem 2.12. Case (i): We first consider the case when $D_\infty < D_1 = D$. As we have seen, each fractal string $\mathcal{L}^{(m,a)}_\infty$ is bounded, for any integer $m \ge 2$ and any real number $a \in (0, 1/m)$; see Eq. (3.14) above. Let $(D_k)_{k \ge 2}$ be any decreasing sequence of real numbers converging to $D_\infty$ as $k \to \infty$ and such that $D_2 < D_1$.
Let $(m_k)_{k \ge 1}$ be a strictly increasing sequence of integers diverging to $+\infty$ as $k \to \infty$, such that $m_1 \ge 2$. Next, we define positive real numbers $a_k$ by the following equality:
$$a_k := m_k^{-1/D_k}, \quad \text{for all } k \ge 1.$$
We have that $m_k a_k = m_k^{1 - 1/D_k} < 1$; i.e., $a_k \in (0, 1/m_k)$, for all $k \ge 1$. Now, we introduce the following sequence of bounded fractal strings:
$$\mathcal{L}_k := \frac{2^{-k}}{L_k}\, \mathcal{L}^{(m_k, a_k)}_\infty, \quad k \ge 1,$$
where $\mathcal{L}^{(m_k, a_k)}_\infty$ is the generalized Cantor fractal string of infinite order defined by Eq. (3.12), while $L_k$ is its total length, given by (3.14). We have that $|\mathcal{L}_k|_1 = 2^{-k}$, for every $k \ge 1$.

Let us verify that the fractal string $\mathcal{L}$, given as the disjoint union of the sequence of bounded fractal strings $(\mathcal{L}_k)_{k \ge 1}$, is well defined and bounded. Indeed, we have that
$$|\mathcal{L}|_1 = \sum_{k \ge 1} |\mathcal{L}_k|_1 = \sum_{k \ge 1} \frac{2^{-k}}{L_k}\, L_k = \sum_{k \ge 1} 2^{-k} = 1 < \infty,$$
where in the next-to-last equality we have made use of Eq. (4.2). From the definition of the fractal string $\mathcal{L}$ in (4.4) (see also (4.1)), it follows that
$$\zeta_{\mathcal{L}}(s) = \sum_{k \ge 1} \zeta_{\mathcal{L}_k}(s),$$
for all $s \in \mathbb{C}$ with $\operatorname{Re} s > D_\infty$, except for the set of singularities. All the singularities of $\zeta_{\mathcal{L}}$ contained in the right half-plane $\{\operatorname{Re} s > D_\infty\}$ are essential, and the corresponding set $S_\infty$ of its essential singularities, contained in this same half-plane, coincides with the union, over all $k \in \mathbb{N}$, of the sets of essential singularities of $\zeta_{\mathcal{L}^{(m_k, a_k)}_\infty}$. This set $S_\infty$ consists of isolated singularities, which means that the geometric zeta function $\zeta_{\mathcal{L}}$ is paramorphic in the open right half-plane $\{\operatorname{Re} s > D_\infty\}$. Since the function $\zeta_{\mathcal{L}}$ is holomorphic in the open right half-plane $\{\operatorname{Re} s > D_1\}$, while $D_1$ is an essential singularity, it follows that the abscissa of meromorphic continuation $D_{\mathrm{mer}}(\zeta_{\mathcal{L}})$ of $\zeta_{\mathcal{L}}$ is equal to $D_1$. This concludes the proof of the theorem in case (i).
Case (ii): It remains to consider the case when $D_1 < D$. Let $\mathcal{L}^{(m', a')}$, where $m'$ is an integer $\ge 2$ and $a' \in (0, 1/m')$, be a generalized Cantor string such that the abscissa $D(\zeta_{\mathcal{L}^{(m', a')}})$ of (absolute) convergence of its geometric zeta function $\zeta_{\mathcal{L}^{(m', a')}}$ is equal to D. Then, the bounded fractal string $\mathcal{L} \sqcup \mathcal{L}^{(m', a')}$, where $\mathcal{L}$ is the fractal string from step (i), satisfies the desired properties. Indeed, we have that $\zeta_{\mathcal{L} \sqcup \mathcal{L}^{(m', a')}}(s) = \zeta_{\mathcal{L}}(s) + \zeta_{\mathcal{L}^{(m', a')}}(s)$, for all $s \in \mathbb{C}$ with $\operatorname{Re} s$ sufficiently large. Therefore, $\zeta_{\mathcal{L} \sqcup \mathcal{L}^{(m', a')}}$ can be paramorphically continued to the open right half-plane $\{\operatorname{Re} s > D_\infty\}$.
This completes the proof of the theorem.
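As a quick numerical illustration of the parameter choice in case (i) above (our own sketch; the concrete sequences $(D_k)$ and $(m_k)$ below are arbitrary choices satisfying the stated monotonicity assumptions), the admissibility constraint $a_k \in (0, 1/m_k)$ can be checked directly:

import numpy as np

K = 7
D = [0.9] + [0.3 + 0.4 / k for k in range(2, K + 1)]   # D_1 = 0.9 > D_2 > ..., decreasing to D_inf = 0.3
m = [k + 1 for k in range(1, K + 1)]                   # m_1 = 2, strictly increasing integers

for D_k, m_k in zip(D, m):
    a_k = m_k ** (-1.0 / D_k)                          # a_k := m_k^(-1/D_k), as in the construction
    assert np.isclose(m_k * a_k, m_k ** (1.0 - 1.0 / D_k))
    assert 0.0 < a_k < 1.0 / m_k                       # i.e., m_k * a_k < 1, since D_k < 1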
The following questions arise naturally in this context:

Q1: What do the asymptotics of the tube function of a fractal string look like as $t \to 0^+$, in the case when the associated geometric zeta function is paramorphic? For example, in the case of the fractal strings $\mathcal{L}^{(m,a)}_n$ and $\mathcal{L}^{(m,a)}_\infty$ constructed above, as well as for $\mathcal{L}_\infty$ appearing in Theorem 6.1 of the appendix below.
Q2: In the paramorphic case and under suitable polynomial-type growth hypotheses on $\zeta_{\mathcal{L}}$, is it possible to establish some kind of tube formula for a fractal string $\mathcal{L}$ if we know the complex dimensions of $\mathcal{L}$?
In light of the results of Section 4.2 below, we could ask analogous questions about fractal tube formulas for bounded subsets of $\mathbb{R}^N$ (for $N \ge 2$) and their distance zeta functions, instead of for fractal strings and their geometric zeta functions (corresponding to the case when N = 1, as in [15, Ch. 8]). For fractal tube formulas for bounded sets (and, more generally, for relative fractal drums) in $\mathbb{R}^N$, see [12, Ch. 5] and [13]. We note that the results about the general fractal tube formulas obtained in [15] and [12, 13] assume the meromorphicity of a suitable fractal zeta function in a suitable domain of $\mathbb{C}$, along with appropriate growth conditions satisfied by this zeta function. Finally, we mention that several results along the lines suggested in question Q2 are provided in [14].
4.2. Harmonic functions and their essential singularities.

We first introduce the notion of an isolated singularity of a given harmonic function defined on a connected open subset of the two-dimensional Euclidean plane.
Definition 4.1. Let U be a nonempty connected open subset of the 2-dimensional plane $\mathbb{R}^2$. Let S be a set of isolated points of U such that a function $u : U \setminus S \to \mathbb{R}$ is harmonic in $U \setminus S$. (Observe that the set $U \setminus S$ is necessarily connected as well.) Let $v : U \setminus S \to \mathbb{R}$ be a conjugate harmonic function of the given real-valued function u on the connected set $U \setminus S$, meaning that the function $f : U \setminus S \to \mathbb{C}$ (here, we identify $U \setminus S$ with the corresponding subset of $\mathbb{C}$) defined by $f(s) := u(x, y) + i\, v(x, y)$, where $s := x + iy$, is holomorphic in $U \setminus S$. We then say that a point $(x_0, y_0) \in S$ is an isolated singularity of u if the corresponding complex number $s_0 := x_0 + i y_0$ is an isolated singularity of f. In particular, we say that a point $(x_0, y_0) \in S$ is an essential singularity (respectively, a pole) of u if the corresponding complex number $s_0 := x_0 + i y_0$ is an essential singularity (respectively, a pole) of f.
For a harmonic function u appearing in this definition, we say (in short) that u is paraharmonic in U if each point of S is an isolated singularity of the corresponding holomorphic function $f : U \setminus S \to \mathbb{C}$. Or, even more succinctly, a harmonic function u is said to be paraharmonic in U if the corresponding complex-valued function f is paramorphic in the set U, viewed as a connected open subset of the complex plane.
It is easy to generate paraharmonic functions from paramorphic functions as shown in the following example.
Example 4.2. The function $u(x, y) = \operatorname{Re}\big(e^{1/(x + iy - x_0 - i y_0)}\big)$ is paraharmonic in $\mathbb{R}^2$. Here, $(x_0, y_0) \in \mathbb{R}^2$ is the only singularity of u, and it is essential. Of course, the above function is just the real part of the corresponding paramorphic function $f(z) = e^{1/(z - z_0)}$ discussed in Example 2.3.
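As a quick numerical sanity check of this example (our own sketch; the test point and tolerance are arbitrary choices), a finite-difference Laplacian of u should vanish away from the singularity:

import numpy as np

z0 = 1.0 + 1.0j
u = lambda x, y: np.real(np.exp(1.0 / (x + 1j * y - z0)))

x, y, h = 3.0, -2.0, 1e-4          # a point well away from the singularity z0
laplacian = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4.0 * u(x, y)) / h ** 2
assert abs(laplacian) < 1e-3        # u is harmonic off z0, so this is ~0 up to discretization error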
Note that the notion of an isolated singularity (and in particular, of a pole, as well as of an essential singularity) of a harmonic function u, introduced in Definition 4.1 above, is meaningful since the conjugate harmonic function v, defined on a connected open set, is uniquely determined by u, up to an additive constant. In light of this observation, adding a constant to the function f does not change the type of any of its isolated singularities.
The following corollary of Theorem 2.12 shows that there exist explicit real-valued functions u that are paraharmonic in a prescribed open right half-plane U in $\mathbb{R}^2$ and possess infinitely many essential singularities, accumulating densely along the boundary $\partial U$ (which, in this case, is a vertical line). Moreover, the open right half-plane $\{(x, y) \in \mathbb{R}^2 : x > D_\infty\}$ is the maximal right half-plane to which the function u can be paraharmonically extended.
Proof. The claim follows immediately from Theorem 2.12, by letting u := Re ζ L . The corresponding conjugate harmonic function is v := Im ζ L .
Remark 4.4. It is possible to construct a class of paraharmonic functions by using paramorphic functions of a simpler type, for example $g(s) := \exp(1/s)$. Here, g is paramorphic on $\mathbb{C}$, and $s = 0$ is the only isolated singularity of g; furthermore, it is an essential singularity of g. If $S = \{a_n : n \in \mathbb{N}\}$ is any set of isolated points contained in a given connected open set $U \subseteq \mathbb{C}$, then, by using the Weierstrass M-test, it is easy to verify that a function of the form $f(s) := \sum_{n \ge 1} c_n\, g(s - a_n)$, for a suitable choice of positive coefficients $c_n$ with $\sum_{n \ge 1} c_n < \infty$, is paramorphic on U. More precisely, f is holomorphic in $U \setminus S$; see Definition 2.1. We can ensure that the set of accumulation points of the set S of isolated singularities of f coincides with the boundary of U. However, we do not know whether there is a (bounded) fractal string $\mathcal{L}$ such that $\zeta_{\mathcal{L}}(s) = g(s - a)$, for all $s \in \mathbb{C}$ with $\operatorname{Re} s$ sufficiently large, where $a \in (0, 1)$ is fixed.
5. Essential singularities of distance zeta functions
Let A be a nonempty bounded set in $\mathbb{R}^N$, where N is a positive integer, and let $d(x, A) := \inf\{|x - a| : a \in A\}$ denote the Euclidean distance from $x \in \mathbb{R}^N$ to A. Assume that δ is an arbitrary positive real number, and let $A_\delta := \{x \in \mathbb{R}^N : d(x, A) < \delta\}$ be the open δ-neighborhood of A in $\mathbb{R}^N$. The distance zeta function $\zeta_A$ of the set A is defined by
$$\zeta_A(s) := \int_{A_\delta} d(x, A)^{s - N}\, dx, \tag{5.1}$$
for all $s \in \mathbb{C}$ such that $\operatorname{Re} s$ is sufficiently large; see [11] or [12]. It is easy to verify that the difference of the distance zeta functions corresponding to two different values of the parameter δ > 0 is always an entire function. Hence, the value of the parameter δ is unimportant, since it does not have any influence on the type of any of the isolated singularities of the distance zeta function, considered on any given connected open subset U of the complex plane. We denote by $D(\zeta_A)$ the abscissa of convergence of the Dirichlet-type integral defining $\zeta_A$ on the right-hand side of (5.1); by definition, this means that $\{\operatorname{Re} s > D(\zeta_A)\}$ is the largest right half-plane on which the Lebesgue integral defining $\zeta_A$ in (5.1) is convergent. We refer the reader to interesting examples of distance zeta functions of various well-known fractal sets, such as the Sierpiński gasket and carpet, which can be found in [12, Section 3.2], as well as in the paper [11].
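As a concrete, purely illustrative companion to (5.1), the sketch below (ours, not taken from [11] or [12]) estimates $\zeta_A(s)$ by Monte Carlo integration for a level-n endpoint approximation of the middle-third Cantor set $A \subset \mathbb{R}$ (so N = 1); the values of δ, s and the sample sizes are arbitrary choices:

import numpy as np

def cantor_points(n):
    # Endpoints of the level-n intervals of the middle-third Cantor set.
    pts = np.array([0.0, 1.0])
    for _ in range(n):
        pts = np.concatenate([pts / 3.0, pts / 3.0 + 2.0 / 3.0])
    return np.sort(pts)

A = cantor_points(12)                     # 2^13 endpoints, with spacing much smaller than delta
delta, s = 0.05, 1.2                      # any fixed delta > 0; Re s above D ~ log 2 / log 3
rng = np.random.default_rng(0)
x = rng.uniform(-delta, 1.0 + delta, 200_000)      # A_delta is contained in (-delta, 1 + delta)
idx = np.clip(np.searchsorted(A, x), 1, len(A) - 1)
d = np.minimum(np.abs(x - A[idx - 1]), np.abs(x - A[idx]))   # d(x, A)
integrand = np.where(d < delta, d ** (s - 1.0), 0.0)          # restrict to A_delta; here N = 1
zeta_est = (1.0 + 2.0 * delta) * integrand.mean()
print(f"Monte Carlo estimate of zeta_A({s}) with delta={delta}: {zeta_est:.4f}")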
The proof of the theorem rests on the following 'shift property'; for another related shift property result, see [11].

Proof. Since $\delta > \ell_1/2$, we have that $(A_{\mathcal{L}})_\delta = (-\delta, a_1 + \delta)$. The set $(A_{\mathcal{L}} \times [0, 1]^{N-1})_\delta$, contained in $\mathbb{R}^N$, is connected, and it can be obtained as the union $V_1 \cup V_2 \cup V_3$ of three pairwise disjoint subsets of $\mathbb{R}^N$.

Footnote 10: It follows from our hypotheses and the definition of $D_A$ that $0 \le D_A \le N$.
Decomposing the integral defining the distance zeta function over $V_1$, $V_2$ and $V_3$ accordingly, the last three integrals are equal to the corresponding three terms on the right-hand side of Eq. (5.5). For example, the equality corresponding to the first term holds for all $s \in \mathbb{C}$ with $\operatorname{Re} s > N - 1$. We leave to the interested reader the easy (and analogous) verification of the other two equalities.
Proof of Theorem 5.1. We consider the following cases. Case (iii): In the general case, when $N \ge 2$ and $D_\infty$, $D_1$ and D belong to $[0, N)$, we first take $N_1$ to be the smallest integer strictly larger than $D_\infty$, and let $D'_1$ be a real number belonging to $(D_\infty, N_1)$ and such that $D'_1 \le D_1$. As in case (ii), we first define the set $A_1 := A_{\mathcal{L}} \times [0, 1]^{N_1}$, with the bounded fractal string $\mathcal{L}$ chosen as in Theorem 2.12. Then $D_{\mathrm{par}}(\zeta_{A_1}) = D_\infty$. We then let $A''$ denote the copy of $A_1$ embedded in $\mathbb{R}^N$ (see footnote 11), and finally, $A := A'' \cup B \cup C$, where the sets B and C are defined so that the corresponding distance zeta functions $\zeta_B$ and $\zeta_C$ can be paramorphically continued to $\{\operatorname{Re} s > 0\}$, and so that $D_1$ and D are then the respective isolated singularities, $D_1$ being an essential singularity of $\zeta_B$ and D a pole of $\zeta_C$.
The set B can be constructed as the fractal grill $B := A' \times [0, 1]^{d_1}$, where $A'$ is the canonical geometric realization of the fractal string $\mathcal{L}^{(m_1, a_1)}_\infty$, with $d_1 := \lfloor D_1 \rfloor$, and the parameters $m_1$ and $a_1$ chosen so that $\log_{1/a_1} m_1 = D_1 - d_1$. Similarly, the set C can be constructed as a fractal grill whose distance zeta function has a (simple) pole at $s = D$.

Footnote 11: Note that the singularities of the distance and tube fractal zeta functions do not depend on the dimension of the ambient space (see [19], along with [12, Section 4.7]); thus, $D_{\mathrm{par}}(\zeta_{A_1}) = D_{\mathrm{par}}(\zeta_{A''}) = D_\infty$.

The claim now follows, since $\zeta_A(s) = \zeta_{A''}(s) + \zeta_B(s) + \zeta_C(s)$ for all $s \in \mathbb{C}$ with $\operatorname{Re} s$ sufficiently large, and hence, $\zeta_A$ can be paramorphically continued to the open right half-plane $\{\operatorname{Re} s > D_\infty\}$.
This completes the proof of the theorem.
Appendix
Here, we show that the geometric zeta function $\zeta_{\mathcal{L}}$ of the fractal string $\mathcal{L}$ constructed in Section 4.1 is well defined and paramorphic in the open right half-plane $\{\operatorname{Re} s > D_\infty\}$; this is the content of Theorem 6.1. The proof of Theorem 6.1 follows from the following lemma, as will be explained at the end of this appendix.

Lemma 6.2. Assume that U is an open disk contained in the open right half-plane $\{\operatorname{Re} s > D_\infty\}$, and is such that the Euclidean distance from U to $S_\infty$ is positive. Then, there exists $\varepsilon > 0$ such that for all $s \in U$ and for each $k \in \mathbb{N}$, we have that $|1 - m_k\, a_k^s| \ge \varepsilon$.

Proof. We consider the following three cases.

Case (a): Let U be as in the statement of the lemma and such that its closure does not intersect any of the vertical lines $\{\operatorname{Re} s = D_k\}$, where $k \in \mathbb{N}$. We assume that for some $k_0 \in \mathbb{N}$, the set U is placed between the two consecutive lines $\{\operatorname{Re} s = D_{k_0}\}$ and $\{\operatorname{Re} s = D_{k_0 + 1}\}$; i.e., in the open vertical strip $\{D_{k_0 + 1} < \operatorname{Re} s < D_{k_0}\}$. We have that
$$|1 - m_k\, a_k^s|^2 = m_k^2\, a_k^{2 \operatorname{Re} s} - 2 m_k\, a_k^{\operatorname{Re} s} \cos\big((\log a_k)(\operatorname{Im} s)\big) + 1 \ge (1 - m_k\, a_k^{\operatorname{Re} s})^2,$$
so that $|1 - m_k\, a_k^s| \ge |1 - m_k\, a_k^{\operatorname{Re} s}|$, for all $s \in U$ and $k \in \mathbb{N}$. Hence, since $a_k = m_k^{-1/D_k}$, we obtain the following inequality:
$$|1 - m_k\, a_k^s| \ge |1 - m_k^{1 - \operatorname{Re} s / D_k}|. \tag{6.2}$$

Case (a1): If $1 \le k \le k_0$, then $\operatorname{Re} s < D_{k_0} \le D_k$ for all $s \in U$, so that $m_k^{1 - \operatorname{Re} s / D_k} > 1$ and, since only finitely many indices k are involved, $m_k^{1 - \operatorname{Re} s / D_k} \ge 1 + \varepsilon_1$ for some $\varepsilon_1 > 0$. Then, in light of Eq. (6.2), we have that $|1 - m_k\, a_k^s| \ge \varepsilon_1$, for all $s \in U$ and $1 \le k \le k_0$.
Case (a2): If $k \ge k_0 + 1$, then, since $\inf_{s \in U} \operatorname{Re} s > D_k$ for all $k \ge k_0 + 1$, it follows that for any $s \in U$,
$$m_k\, a_k^{\operatorname{Re} s} = m_k^{1 - \operatorname{Re} s / D_k} \le 1 - \varepsilon_2,$$
for some $\varepsilon_2 > 0$ independent of k and s. By letting $\varepsilon := \min\{\varepsilon_1, \varepsilon_2\} > 0$, we deduce from Eq. (6.2) that $|1 - m_k\, a_k^s| \ge \varepsilon$, for all $s \in U$ and $k \in \mathbb{N}$. This completes the proof of the lemma in case (a).
Case (b): Assume that the disk U intersects the vertical line $\{\operatorname{Re} s = D_{k_0}\}$, for some $k_0 \ge 2$, and let U be a disk of sufficiently small radius, so that $D_{k_0 + 1} < \inf_{s \in U} \operatorname{Re} s \le \sup_{s \in U} \operatorname{Re} s < D_{k_0 - 1}$. Analogously as in case (a), there exists a positive real number $\varepsilon_1$ such that $|1 - m_k\, a_k^s| \ge \varepsilon_1$, for all $k \ne k_0$. When $k = k_0$, there exists a positive constant $\varepsilon_2$ such that $h(s) := |1 - m_{k_0}\, a_{k_0}^s| \ge \varepsilon_2$, for all $s \in U$. Indeed, the only zeros of the function $h : \mathbb{C} \to [0, +\infty)$ are the elements of the arithmetic sequence $S_{k_0} := D_{k_0} + \frac{2\pi i}{\log(1/a_{k_0})}\, \mathbb{Z}$. Since $\overline{U}$ and $S_{k_0}$ are disjoint (here, $\overline{U}$ denotes the closure of U in $\mathbb{C}$), we have $h(s) > 0$ for all $s \in \overline{U}$; so that the continuous function $h = h(s)$ attains a strictly positive minimum on the compact set $\overline{U}$; that is, $\varepsilon_2 := \min_{s \in \overline{U}} h(s) > 0$.
Case (c): The remaining case, when the open disk U is such that $\inf_{s \in U} \operatorname{Re} s > D_1$, is treated analogously to case (a2).
This completes the proof of the lemma.

Proof of Theorem 6.1. In light of the above, each term involving a factor of the form $(1 - m_k\, a_k^s)^{-n}\, (n!)^{-s}$ in the series defining $\zeta_{\mathcal{L}}$ is well defined.
In light of Lemma 6.2 and since the sequence $(L_k)_{k \ge 1}$ is bounded from below by a positive constant L (see Eq. (4.3)), we deduce from (6.3) that, for all $s \in U$, the series defining $\zeta_{\mathcal{L}}$ is dominated, term by term, by a convergent series of positive constants independent of s. Hence, by using the Weierstrass M-test, we conclude that the function $\zeta_{\mathcal{L}}$ is well defined and holomorphic in $\{\operatorname{Re} s > D_\infty\} \setminus S_\infty$. By Definition 2.1, this means that $\zeta_{\mathcal{L}}$ is paramorphic in the open right half-plane $\{\operatorname{Re} s > D_\infty\}$; that is, $\zeta_{\mathcal{L}} \in \mathrm{Par}(\{\operatorname{Re} s > D_\infty\})$.
Acknowledgments
We thank the four anonymous referees for their very thorough review of this paper, for their helpful and constructive criticisms and suggestions, and for pointing out interesting new references of which we were unaware.
Are Tumor Exposure and Anatomical Resection Antithetical during Surgery for Hepatocellular Carcinoma? A Critical Review
Hepatic resection is the most potentially curative local therapy for patients with hepatocellular carcinoma (HCC). However, the high rate of postoperative recurrence, 50-70% at 3 years, remains a major concern. Such recurrences usually occur in the liver owing to the high propensity of HCC to invade the portal vein branches and to the underlying liver cirrhosis, which is the ideal background for HCC development. Two pivotal surgical techniques are commonly used to reduce such recurrences: anatomical resection (AR) and the achievement of negative margins. However, controversies exist about the definition of anatomical resection and the requisite width of negative margins. Consequently, a consensus on these issues is far from being achieved in the specialized surgical community. A review of the literature, together with the authors' discernment, supports AR for HCC larger than 2 cm, and tumor exposure when the tumor is in contact with major vessels. Therefore, tumor exposure does not contradict a properly performed AR.
Introduction
Hepatocellular carcinoma (HCC) is one of the five most common malignancies worldwide, and its incidence is increasing in many countries [1]. Apart from liver transplantation, hepatic resection is the most potentially curative locoregional treatment for HCC. Improvements in perioperative care and refinements in surgical techniques have significantly increased the safety and survival rates of hepatic resection in the last few decades [2,3]. However, the high rate of postoperative recurrence, 50-70% at 3 years, remains a major concern [4,5]. Such recurrences usually occur in the liver owing to the high propensity of HCC to invade portal vein branches [6], and to the underlying liver cirrhosis, which is the ideal background for the development of HCC. Two pivotal surgical techniques are usually recommended to reduce postoperative recurrence: anatomical resection (AR) [7], and achievement of negative margins [8,9]. However, in the surgical community, some confusion still exists about these issues, particularly, about their definitions and for the rules governing their practical application. Therefore, this study aimed to review these two techniques of hepatic resection for HCC.
Anatomical Resection (AR)
AR is usually recommended because the removal of the portal vascular bed containing a tumor is theoretically expected to be effective from an oncological perspective. The rationale for this is that removal of the vascular bed will ensure the removal of any potential satellite tumors in the liver, which may have arisen because of the tumor's tendency to invade the portal veins [6]. To achieve an optimal compromise between the need for the complete removal of the area occupied by the tumor and the need to spare the liver parenchyma, Makuuchi et al. proposed systematic subsegmentectomy in 1985 [7], and successfully applied it, mainly in Japan, with excellent results [10]. Given the technical skill demanded by this procedure, which involves free-hand ultrasound-guided puncture of thin portal branches, an alternative method of segmental or subsegmental AR has recently been introduced: the ultrasound-guided finger compression technique [11]. Any other technique that does not aim to precisely identify the afferent portal pedicle and the segmental or subsegmental territory it supplies should be considered a non-anatomical resection (NAR), regardless of whether it is associated with minor or major parenchymal removal. This is a crucial point that should be considered when comparing different studies on segmental or subsegmental AR and NAR for HCC. When the surgical technique is not detailed, the results may be biased by significant conceptual and technical issues, which make the conclusions invalid.
As a partial consequence of this inadequate comparison, it remains unclear whether hepatectomy performed in clinical practice should involve AR or NAR. No prospective randomized trials have been available to date, and two meta-analyses on the topic have reported conflicting findings [12,13]. Moreover, a recent meta-regression analysis showed that even after adjusting for some important covariates, the available studies on AR and NAR could not be easily compared [14]. A review of the most relevant studies published on this topic in the last decade is presented in table 1 [10,[15][16][17][18][19][20]. Most of the studies report a trend of better 5-year overall survival in the AR group than in the NAR group; in particular, it seems that the anatomical approach is advantageous mainly for lesions measuring >2 cm and <5 cm [17,18,20]. However, because comparisons of AR and NAR remain biased owing to technical issues and differences in cirrhosis, etiology, and tumor presentation, the superiority of AR over NAR could not be definitively determined.
Surgical Margins
The effect of surgical margin status on the survival of patients with HCC has been studied, but controversies persist among surgeons. Some authors have reported that margins smaller than 1 cm or even 2 cm negatively affect long-term survival [8,9], while others have found opposite results, stating that even 0-mm margins are acceptable [21][22][23].
A review of the literature on this topic has been presented in table 2: only one randomized controlled study, published by Shi et al. [9] in 2007, is available on this issue. The authors compared HCC patients with 1-cm margins versus those with 2-cm margins and observed a lower recurrence rate in the latter group. The high rate of local recurrence (29%), which is inconsistent with other larger series, and the unclear description of AR and NAR remain major drawbacks of that study. Regarding the latter point, the authors considered certain criteria that should not have been included: tumor location in terms of depth into the liver and tumor location at the edge between two adjacent segments. These two conditions are not contraindications to AR. In these conditions, multiple punctures or compressions of subsegmental/segmental portal branches should be performed to anatomically demarcate the area to be removed. A concept that should be stressed in discussions of surgical margin status is the relationship between the width of the tumor-free margin and tumor size. The risk of satellites increases proportionally with tumor size; in HCCs larger than 2.5 cm, the risk of microsatellites located more than 5 mm away from the tumor burden becomes significant [24]. Therefore, a clear margin should be achieved in the case of tumors larger than 2.5 cm. These findings are consistent with the observation that in the case of HCCs smaller than 2 cm, similar local control can be obtained using either the ablation technique or hepatic resection [25]. However, this should not act as a confounding finding when attention is focused on 0-mm margins at the site of contact between the tumor and a major vessel, whether a Glissonian pedicle or a hepatic vein. Under these circumstances, tumor exposure on the cut surface, even when the HCC is larger than 2.5 cm, is acceptable; the possibility of microsatellites is obviously nil at this site, and with appropriate surgery under intraoperative ultrasound guidance, the risk of local recurrence is negligible [26,27]. Conversely, sacrificing the vessels could result in major parenchymal removal and increased surgical risk [28,29].
AR and Tumor Exposure
From the foregoing, it follows that the performance of AR does not depend on the achievement of negative margins. Complete microsatellite removal depends on the complete removal of the tumor-containing part, i.e., the entire vascular bed supplying the lesion. An indirect proof is the finding that AR impacts prognosis in patients with tumors larger than 2 cm [17,18,20], or in other words, when the HCC has a higher risk of being associated with microsatellites [24] and when ablation is less efficient in providing local control [25]. However, the removal of an entire hepatic segment does not ensure the prevention of tumor exposure. For instance, in the case of an HCC that is located in segment 8 and is in contact with the right and middle hepatic veins at the caval confluence, a full AR of segment 8 will expose the right and middle hepatic veins on the cut surface; correspondingly, the specimen will show an exposed tumoral surface at the level of the detached contact between the HCC and the hepatic veins. As mentioned earlier, the possibility of microsatellites at this site is nil, and the risk of local recurrence becomes negligible if an adequate technique is meticulously applied under intraoperative ultrasound guidance [26,27]. Moreover, sparing of the vessel by means of tumor-vein detachment minimizes the excision of the liver parenchyma, and it is well established that the prognosis of HCC patients depends much more on the residual liver volume than on the width of the surgical margin [29].
Thus, conceptually, any new lesion occurring in the adjacent segments during the postoperative follow-up period should not be considered as an undetected satellite not removed during surgery, but rather as a distant metastasis (fig. 1). Given the intrahepatic diffusion of HCC through the portal vein system, any metastatic lesion growing in a segment other than the one in which the primary tumor originated should be considered a distant metastatic tumor, regardless of its physical distance from the segment containing the primary HCC.
Conclusions
The success of hepatic resection for HCC relies on an accurate balance between the functional reserve of the residual liver and the best local control of the tumor. The review of the literature presented herein, together with the authors' discernment, does not support either AR or large surgical margins a priori. The better results obtained with AR than with NAR cannot be definitively attributed to the superior oncological control of AR, although this is theoretically reasonable. The role of AR is probably not very important in the case of HCCs smaller than 2 cm; moreover, the surgical approach in general is increasingly less used in such cases. For lesions larger than 2 cm, it seems reasonable that a tumor-free margin of at least 0.5 cm be obtained, unless the tumor is in contact with a major vessel. In this last circumstance, the risk of local recurrence is low, and the outcomes of different technical solutions to spare the vessel should be compared with the worse short- and long-term outcomes of vessel resection before a major hepatectomy is carried out. We believe that tumor exposure is not a contradiction to an AR that is properly carried out with the complete removal of the tumor-containing segment or subsegment. To ensure surgical safety, radical oncological resection with narrow margins, and anatomical, but limited, liver resection, a surgeon's skill in intraoperative ultrasonography is mandatory.
Lightweight Unsupervised Domain Adaptation by Convolutional Filter Reconstruction
End-to-end learning methods have achieved impressive results in many areas of computer vision. At the same time, these methods still suffer from a degradation in performance when testing on new datasets that stem from a different distribution. This is known as the domain shift effect. Recently proposed adaptation methods focus on retraining the network parameters. However, this requires access to all (labeled) source data, a large amount of (unlabeled) target data, and plenty of computational resources. In this work, we propose a lightweight alternative, that allows adapting to the target domain based on a limited number of target samples in a matter of minutes rather than hours, days or even weeks. To this end, we first analyze the output of each convolutional layer from a domain adaptation perspective. Surprisingly, we find that already at the very first layer, domain shift effects pop up. We then propose a new domain adaptation method, where first layer convolutional filters that are badly affected by the domain shift are reconstructed based on less affected ones. This improves the performance of the deep network on various benchmark datasets.
Introduction
In recent years, great advances have been realized towards image understanding in general and object recognition in particular, thanks to end-to-end learning of convolutional neural networks, seeking the optimal representation for the task at hand. Unfortunately, performance remarkably decreases when taking the trained algorithms and systems out of the lab and into the real world of practical applications. This is known in the literature as the domain shift problem: systems are typically deployed on new data that has different characteristics or has been gathered under different conditions than what was used for training. The default solution is to retrain or finetune the system using additional training data, mimicking as close as possible the conditions during testing. However, this brings an extra cost, first in terms of human effort to collect the new data and annotate it, next in terms of computational resources and expertise that need to be available to retrain the models. Moreover, it is not always feasible either, as conditions during test time may not be known well beforehand.
Overcoming this domain shift problem without additional annotated data is the main goal of unsupervised domain adaptation. It is the task of adapting a system trained on one data set (the source S) to be functional on a different data set (the target T). State-of-the-art methods for unsupervised domain adaptation of deep neural network architectures [1,2] proceed by adding new layers to the deep network or learning a joint architecture in order to come up with representations that are more general and informative across the source and target domains. We will refer to these methods, that retrain the network using the information coming from the new (unlabeled) target samples, as deep adaptation methods. In this context (as in the context of finetuning), it has become common practice to retrain only the last layers of the deep network, supposing that the first layers are generic and not susceptible to any domain shift.
However, in spite of their good results on various benchmarks, these methods seem to be of limited value in a practical application. Indeed, deep adaptation methods require a lot of computation time, a lot of resources (powerful GPU-equipped servers), and a lot of unlabeled target data. This is in contrast to the typical domain adaptation setting, where we want networks trained on big datasets such as Imagenet to be readily usable by different users and in a variety of settings. For example, imagine a smart surveillance camera equipped with a pretrained recognition system that needs to be functional in a new context in spite of difficult lighting conditions, different backgrounds, etc. The camera does not have the resources on-board to retrain a deep convolutional network -- it may not even have enough memory to store all the source data. Moreover, in many situations, we want the camera to be operational within a short time, so collecting a large set of target data samples is not an option either. The conditions to which we want to adapt may also vary over time (e.g. lighting conditions in an outdoor application), so if the adaptation process takes too long, the new model may already be outdated by the time it becomes available.
So instead, we advocate the need for light-weight domain adaptation schemes that require only a small number of target samples and can be applied quickly without heavy requirements on available resources, in an on-the-fly spirit. Using only a few samples, such a system could adapt to new conditions at regular time intervals, making sure the models are well adapted to the current conditions. The simpler subspace-based domain adaptation methods developed earlier for shallow architectures [3,4,5] seem good candidates for this setting. Unfortunately, when applied to the last fully connected layer of a standard convolutional neural network, they yield minimal improvement [6,7]. In this work, we start by analyzing the different layers of a deep network from a domain adaptation perspective (section 3). First, we show that domain shift does not only affect the last layers of the network, but can already manifest itself as early as the very first layer. Second, we show that the filters exhibit different behavior in terms of domain shift: while some filters result in a largely domain invariant representation, others lead to very different results for the source and target data. Based on this analysis, we propose a new light-weight domain adaptation method, focusing just on the first layer of the network (section 4). For a given target data sample, it aims at reconstructing the output of the filters affected by domain shift such that their new output is more similar to the response given under the training conditions (i.e., the source dataset). We evaluate the method on various benchmarks: the Office dataset [8], Mnist [9], a Photo-Art dataset and the German traffic sign dataset GTSRB [10] (section 5). The proposed method can adapt the learned network and improve the raw performance, even when only a few (unlabeled) samples from the new domain are available. It takes minimal time and does not involve parameter tuning. But first, let us describe related work (section 2).
Related Work
Shallow DA So far, domain adaptation (DA) has mostly been studied in the context of image representations based on handcrafted features. Methods (see [11] for a survey) tackle the problem in different ways, such as the feature augmentation scheme of [12] or instance reweighting [13,14], which tries to correct the shift by re-weighting the source samples based on their similarity with the target domain. Another interesting line of work is the use of a latent feature space [15,16], which has led to the development of subspace-based DA methods [3,5,17,18,19]. Especially the work of [19] is worth mentioning here, as it aims at adapting a model in an online fashion, somewhat similar in spirit to our work. Most of these methods have mainly been evaluated on the Office benchmark [8], which comes with precomputed SURF features. However, when applied to deep features (i.e., activations of the last layer of a pretrained convolutional neural network), which capture high-level object information rather than edges and gradients, they do not seem as powerful as before [6,7]. Therefore, more recent works use deep-learning-based methods to reduce the domain shift.

Deep DA Deep adaptation methods try to integrate the adaptation within the learning process: [20] learns a joint architecture between the source and target data, while in [21], a denoising auto-encoder is used to learn from unlabeled target data, after which a two-layer network with maximum mean discrepancy (MMD) as an adaptation loss is trained. The performance of these two methods is limited, however, due to the use of relatively shallow architectures. They can easily be outperformed by finetuning a deeper network on the source data. Tzeng et al. [22] proposed to add a fork to the deep network that first determines which layer to use and then decides the dimension of that layer based on a combination of the classification loss on the source data and a domain confusion loss, again using MMD. This method needs to fine-tune different networks for choosing the best dimensionality, and the adaptation process is limited to the choice of the layer in which the joint loss is minimized. A more comprehensive deep-learning-based approach [2] suggests freezing the first layers, as it supposes that these layers are generic and are not prone to the domain shift. Then, to retrain the higher layers, a multi-layer adaptation regularizer based on multi-kernel MMD is added to improve the transferability of the features. In contrast, we show that the first layers are not immune to domain shifts even though they are generic feature extractors, and that, by improving only the first layer features, we can already improve the performance of the whole network. Finally, [1] shows a significant improvement through the use of a deep network along with a domain regressor. This is done by adding a sub-network consisting of layers parallel to the classification layer. These layers use a different loss that minimizes the ability to discriminate between source and target instances, which is incorporated in the back-propagation learning scheme. Even though these latter, deep-learning based approaches give good results and go along with the current research line (deep learning), it is quite challenging to apply them directly. First, one needs to determine the number of layers that need to be added for the adaptation process, which is difficult without access to the labels.
Second, even if we suppose that the number of layers and all the other parameters are given, these methods require retraining the whole network, which may involve millions of parameters and training samples. This retraining could take a week, so it is impossible to apply the model in a reasonable time.
In conclusion, we see that most of the methods try to relax the problem, either with some assumptions about the characteristics of the used features or by having access to different parameters and a lot of time and computational resources. In contrast, we propose a simple yet efficient method that: 1) attempts to reduce the shift and improve the network performance without retraining the network or any other classifier, and 2) needs only a few (unlabeled) samples from the target domain to be functional. We do so by adapting the first layer of the network, which goes against common belief and practice. Below, we explain our findings w.r.t. the different layers and then proceed to the proposed method.
Analysis of domain shift in the context of deep learning
As mentioned above, deep adaptation methods typically assume that the first layers are generic and need no adaptation, while the last layers are more specific to the dataset used for training and thus sensitive to the shift between the two domains. Therefore, most of the new adaptation methods tend to adapt only the last layers and freeze the first layers. This assumption is based on the fact that the first layers mainly detect colors, edges, textures, etc. -features that are generic to all domains. On the other hand, in the last layers we can see high level information about the objects, which might be domain-specific. However, given that the feature extraction method is generic, does that mean the features are not conveying any domain shift between the different datasets? To answer this question, we perform a thorough analysis of the output of each layer. For this purpose, we use Alexnet [23] pretrained on Imagenet. This network shows remarkable performance and has been trained to recognize a thousand objects.
Domain shift can be caused by different factors, which can be mainly divided into: 1. a generic shift, due to having objects with different appearance or captured from different viewpoints (as one can expect when using different datasets for training and testing), and 2. a low-level shift, uniformly affecting the images in the dataset due to e.g. lighting conditions, colors, etc. To study these two types of shift, we use the Office dataset [8], which is the de facto standard benchmark for domain adaptation. First, we use the sets Amazon and Webcam (where Amazon contains images gathered from the Amazon website and Webcam contains images captured by a web camera). With this setting, we cover the generic shift (in this case mostly due to the white background in Amazon compared to the non-uniform background in the Webcam data set, as well as different objects with different appearance). In addition, to study the low-level shift, we created a gray-scale set that contains the same images as the Amazon dataset but converted to gray scale. We call it Amazon-Gray. We consider two adaptation cases: Amazon → Amazon-Gray and Amazon → Webcam. We start by fine-tuning AlexNet on the Amazon dataset (the source). Then, we consider each convolutional layer as an independent feature extraction step. Each layer is composed of a set of filter maps. We consider one filter from one layer at a time and take, for this analysis, the corresponding filter map as our feature space. Each instance (image) is a point in that space. For example, the first convolutional layer is composed of 96 filter maps, each of size [55x55]. We reshape each [55x55] filter map into a feature vector of length 3025 and consider this the feature representation of the image. Now we want to find out whether source and target samples follow the same distribution in this feature space, or not.
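In code, this per-filter representation is just a reshape; the sketch below (ours) uses random placeholder activations with AlexNet's conv1 shapes, where real activations of the fine-tuned network would be used in practice:

import numpy as np

# Placeholder conv1 activations, shaped (images, filters, H, W) as in AlexNet.
conv1 = np.random.randn(32, 96, 55, 55)
j = 0                                            # study one filter at a time
features_j = conv1[:, j, :, :].reshape(32, -1)   # 32 points in a 3025-dim feature space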
Qualitative analysis As a first step, we visualize the points in each feature space. To do so, we use the t-SNE package [24], which is well suited for visualizing high-dimensional data. We visualize the points for each domain adaptation couple. Figure 1 shows the visualizations of some example filters from different convolutional layers for Amazon→Webcam.

Fig. 1: From left to right: visualization of source points (blue) and target points (red) for three different filters from conv1, conv2, and conv3, respectively, for the Amazon→Webcam setup.

Figure 2 shows the visualization for two different filters of the first layer for Amazon→Amazon-Gray. Please refer to the supplementary materials for more visualizations. The behavior of the filters is quite distinct. In general, the higher the layer, the more overlap between the source and target features we can observe.
H-divergence
In addition to visualizing the output of the filters in each layer of the deep network, we study the H-divergence w.r.t. each layer / filter as well. The H-divergence was first introduced by Ben-David et al. [25] as a measure of the discrepancy between the source distribution $D_S$ and the target distribution $D_T$. The (empirical) H-divergence can be written as follows:
$$\hat{d}_{\mathcal{H}}(D_S, D_T) = 2 \Big( 1 - 2 \min_{h \in \mathcal{H}} \mathrm{err}(h) \Big),$$
where $h \in \mathcal{H}$ is a characteristic function that is learned to discriminate between samples generated from the source distribution and those generated from the target distribution, and err(h) is its error on this discrimination task. Clearly, the smaller this error (i.e., the easier the two domains are to tell apart), the bigger the divergence. In our case, we use an SVM classifier with a linear kernel and a fixed C value for all the different computations of the H-divergence. We use a similar feature representation as for the visualization method above. We repeat the process for all filters in all layers. In Figure 3, we show the histograms of the filters' H-divergences w.r.t. each layer for the two study cases Amazon→Webcam and Amazon→Amazon-Gray. We encode the value of the H-divergence by color, where blue indicates a low H-divergence (= "good" filters) while red indicates a high H-divergence (= "bad" filters).
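A sketch of this estimate (ours): it uses the standard empirical proxy of Ben-David et al. with a linear SVM as the domain discriminator; the exact split protocol and C value used in the paper are not specified, so treat them as assumptions.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def h_divergence(Xs, Xt, C=1.0, seed=0):
    # Label source samples 0 and target samples 1, then train a domain classifier.
    X = np.vstack([Xs, Xt])
    y = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=seed)
    err = 1.0 - LinearSVC(C=C).fit(Xtr, ytr).score(Xte, yte)
    return 2.0 * (1.0 - 2.0 * err)   # low error -> separable domains -> high divergence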
Discussion
From Figures 1 and 3, we can conclude that, in contrast to common belief, the first layers are susceptible to domain shift, even more than the later layers (i.e., the distributions of the source and target filter outputs show bigger differences in feature space, resulting in larger H-divergence scores). Indeed, the filters of the first layers are similar to HOG, SURF or SIFT (edge detectors, color detectors, texture, etc.); they are generic w.r.t. different datasets, i.e., they give representative information regardless of the dataset. However, this information also conveys the specific characteristics of the dataset and thus the dataset bias. As a result, when the rest of the network processes this output, it will be affected by this bias, causing a degradation in performance.
Especially in the first layer of the convolutional neural network, we see large differences between different filters (Figures 2 and 3). For some filters, the samples match almost perfectly, while others have very different distributions.
We do not observe such close-to-perfect matchings in the later layers, as their input is affected by the domain shift in some of the first layer filters. Based on this analysis of the domain bias over different layers, we believe that a good solution of the domain adaptation problem should start from the first layers in order to correct each shift at the right level rather than waiting till the last layer and then try to match the two feature spaces.
Our DA strategy
Based on these findings and keeping in mind our goal of having a lightweight method that compensates for the domain shift without the need to retrain the network or any other heavy computations, we suggest the following strategy: compute the divergence of the two datasets with respect to each filter as a measure for how good each filter is, and use the good filters to reconstruct the output of the bad filters. By reconstructing the bad filters' output, we mean: starting from a target image as input, for each bad filter, re-estimate its response map such that it becomes more similar to the response map of a source image for the same filter. To achieve such reconstruction, we rely on the target filter maps of the good filters, for which we know the responses from the two datasets to be similar. Figure 4 illustrates the concept behind the filter reconstruction. Suppose a system is trained to recognize facial expressions based on a set of young faces (i.e., the source) but now has to be applied on elderly people's faces (i.e., the target). Because of the wrinkles in the old faces, the recognition will be inaccurate (i.e., there is a domain shift problem). Some of the low-level filters will not be affected much by the domain shift (e.g. color filters), while others will show large domain shifts (e.g. texture filters). Now we propose to use information in common with the young faces (color) to reconstruct the filter maps of the bad filters (texture). This corresponds to removing the wrinkles, which in turn allows a better recognition. In the following, we will explain the filter reconstruction scheme and then test the proposed method on a variety of datasets. For simplicity, we just focus on the first layer from now on.
Filter Reconstruction
We start from a set of filters from a given layer. Some of these filters are more prone to domain shift than others. We want to determine the bad filters in order to reconstruct their output given the good filter responses. Here "good" and "bad" are from the perspective of the adaptation problem at hand. We aim at designing an optimization problem that simultaneously determines the bad filters and identifies a set of filters that can be used for reconstructing the bad filters' output. In order to achieve this, we consider one filter at a time. The filter under study is considered either good (and, hence, retained) or bad (in which case better filters are selected to reconstruct its output).
A feature selection analogy To explain our proposed solution, we use the analogy with a feature selection operation where we are given a set of features / predictors (all filter maps) and a desired output / response (the filter map under scrutiny). We want to select the set of features based on which we can predict the given response. Clearly, if we have the response itself as a feature in the feature set, it will be directly selected, as it is the most correlated with the output (i.e., itself). But now we add another criterion to the feature selection problem, one that indicates how good a feature is regarding the new problem in which the predicted output will be used: classifying new samples from the target dataset. The additional selection criterion is based on the resulting shift with respect to the filter at hand, i.e., the divergence shown by this filter. First, we describe our divergence measure, then use it in the selection process.
KL-divergence
We need a measure that can be computed efficiently and that gives us an indication of how good or bad a filter is in terms of the adaptation problem. For that purpose, we estimate the probability distribution of the filter response given source data as input, $P_S$, and likewise for the target data (or a subset thereof), $P_T$. We then use the KL-divergence [26,27], a measure of the difference between two probability distributions, in our case $P_S$ and $P_T$. It estimates the amount of information lost when using the source probability distribution to encode the target probability distribution. For the case of discrete probability distributions, the KL-divergence is computed as follows:
$$\Delta_{KL}(P_T \,\|\, P_S) = \sum_{i} P_T(i) \log \frac{P_T(i)}{P_S(i)}.$$
In the context of our problem, for each filter, we estimate the distribution of the source samples and the distribution of the target samples for that filter's response and then compute the KL-divergence between the two probability distributions. This gives us a KL-divergence value associated with each filter. We use this value as our additional criterion in the filter selection operation.
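A possible implementation of this per-filter criterion (our sketch; the number of bins and the smoothing constant are assumptions) discretizes the two response distributions on shared histogram bins:

import numpy as np

def kl_divergence(resp_src, resp_tgt, bins=64, eps=1e-8):
    # Shared bin range so that P_S and P_T are comparable.
    lo = min(resp_src.min(), resp_tgt.min())
    hi = max(resp_src.max(), resp_tgt.max())
    p_s, _ = np.histogram(resp_src, bins=bins, range=(lo, hi))
    p_t, _ = np.histogram(resp_tgt, bins=bins, range=(lo, hi))
    p_s = p_s / p_s.sum() + eps     # small eps avoids log(0) and division by zero
    p_t = p_t / p_t.sum() + eps
    return float(np.sum(p_t * np.log(p_t / p_s)))   # D_KL(P_T || P_S)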
Filter Selection
We want to select the set of features (filters) that are going to be used in a regression function that predicts the output of the filter at hand. Here, we do not consider the entire filter map, but rather the filter response at each point of the filter map separately, where, given the responses of the other filters at this point, we want to predict the current response. We use the source data as our training set, where our aim is to reconstruct the source-like response of a bad filter given the output of the good filters. Going back to the literature, feature selection for regression has been studied widely. Lasso [28] and the Elastic net [29] have shown good performance. The two methods differ in their regularization strategy: while Lasso introduces the L1-norm regularization that ensures the sparsity of the selected set of features, the Elastic net adds another L2-norm regularization term. By doing so, the Elastic net overcomes the case when the number of features is bigger than the number of samples and encourages grouping of features as well. In our case, we always have a set of source samples bigger than the number of filters, and we do not want to group the selected filters. Therefore, we favor Lasso, as it introduces the sparsity which is essential in our case to select as few and effective filters as possible. Having the response y and the set of predictors x, the main equation of Lasso can be written as follows:
$$\min_{\beta_0, B} \; \frac{1}{2n} \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\, \beta_j \Big)^2 + \lambda \sum_{j=1}^{p} |\beta_j|,$$
where $\beta_0$ is the intercept, $B = \{\beta_j\}$ the estimated coefficients, n the number of source samples, p the number of filters, and λ a tuning parameter to control the amount of shrinkage needed. The bigger the value of λ, the more we steer the coefficients $\beta_j$ towards zero values. What we need to do next is to insert our additional selection criterion, i.e., the KL-divergence, where for each filter $x_j$ we have computed a KL-divergence value $\Delta^{KL}_j$. We will use this divergence value to guide the selection procedure. This can be achieved by simply plugging the $\Delta^{KL}_j$ value into the L1-norm regularization as follows:
$$\min_{\beta_0, B} \; \frac{1}{2n} \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\, \beta_j \Big)^2 + \lambda \sum_{j=1}^{p} \Delta^{KL}_j\, |\beta_j|.$$
Solving this optimization problem, we obtain the weight vector $B^*$, with a weight $\beta^*_j$ for each filter, including the filter we try to reconstruct itself. If the filter at hand has a non-zero weight, it is considered a good filter and we keep its value. On the other hand, if the filter has zero weight, it is marked for reconstruction, and the filters with non-zero weights are used for this purpose. The above optimization problem can be nicely solved using coordinate descent, where we update the gradient w.r.t. one filter at a time. In the standard Lasso setup [30], regularization paths are used to select λ. In our case, we have another term, i.e., the divergence, associated with each selection. We choose the λ that gives us an optimal combination of the divergence between the source points and the reconstructed target points on the one hand, and the error of the reconstructed source on the other hand. The coordinate descent optimization is quite fast and scalable, as it does not need to keep the training data in memory.
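The KL-weighted penalty can be handled by off-the-shelf Lasso solvers through a standard rescaling argument: penalizing $\Delta^{KL}_j |\beta_j|$ is equivalent to running plain Lasso on the rescaled predictors $x_j / \Delta^{KL}_j$ and dividing the fitted coefficients back by $\Delta^{KL}_j$. The helper below is our sketch (the name weighted_lasso is ours, and the choice of λ is left to the caller):

import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(X, y, kl_weights, lam):
    w = np.asarray(kl_weights) + 1e-8        # avoid division by zero for near-zero divergences
    model = Lasso(alpha=lam).fit(X / w, y)   # standard L1 penalty on rescaled features
    beta = model.coef_ / w                   # map the coefficients back to the original scale
    return model.intercept_, beta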
Reconstruction The Lasso optimization process is very efficient at selecting the features, but the weights that it estimates are not the most accurate for prediction, as they are controlled by λ. A common practice is to use Lasso for the variable selection step and then use another regression method for the prediction. Therefore, after selecting the set of filters to be used for reconstruction, we use linear regression to predict the filter output $y_l$ given the responses of the selected filters. Linear regression is in its turn simple and efficient to compute. As a result, we obtain the final set of coefficients $B_l$ for each bad filter $y_l$. Algorithm 1 summarizes the filter selection and reconstruction procedure.
Prediction At test time, we receive a target sample $x_t$. We pass it through the first layer and obtain the response of each filter map. Then, for each bad filter $y_l$, we use the responses of its own selected set of filters to predict a source-like response, given the coefficients $B_l$. After that, we replace each bad filter value (a point in the response of the bad filter map) by the predicted response and pass the reconstructed data on to the next layers, up to the prediction.

Algorithm 1: Filter selection and reconstruction
1: for each filter l do
2:    B* <- WeightedLasso(S, l)                  // KL-weighted Lasso with filter l as response
3:    if B*(l) = 0 then
4:        BadFilters(l) <- 1                      // add it to the set of bad filters
5:        Selected(l) <- {j : B*(j) != 0}         // the selected set of filters for its reconstruction
6:        B_l <- LinearRegression(S(l), Selected(l))
7:        PredB(l) <- B_l                         // the set of coefficients for all the bad filters
8:    end if
9: end for
10: return BadFilters, Selected, PredB
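Putting the pieces together, a compact sketch of Algorithm 1 and the prediction step could look as follows (ours, reusing the weighted_lasso helper from the previous sketch; all names are illustrative, and the degenerate-case guard is our own addition):

import numpy as np
from sklearn.linear_model import LinearRegression

def select_and_fit(S, kl, lam):
    """S: (points, filters) source responses; kl: per-filter KL divergences."""
    models = {}
    for l in range(S.shape[1]):
        y = S[:, l]
        _, beta = weighted_lasso(S, y, kl, lam)   # filter l competes with all filters, itself included
        if beta[l] == 0.0:                        # own weight zeroed out: a bad filter
            sel = np.flatnonzero(beta)            # filters selected for its reconstruction
            if sel.size == 0:                     # degenerate case: nothing selected, skip
                continue
            models[l] = (sel, LinearRegression().fit(S[:, sel], y))
    return models

def adapt(target_resp, models):
    """Replace each bad filter's responses by predicted, source-like ones."""
    out = target_resp.copy()                      # (points, filters) for one target image
    for l, (sel, reg) in models.items():
        out[:, l] = reg.predict(target_resp[:, sel])
    return out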
Experiments
In this section, we design different experiments in order to obtain a complete picture of the filter reconstruction method and a clear understanding of when and where it is good practice to use it. We test the performance of the method on different datasets and compare with multiple baselines.
Setup & Datasets
Office benchmark [8]: it contains three sets of samples: i) Webcam, composed of images taken by a web camera; ii) DSLR, containing images captured by a digital SLR camera; and iii) Amazon, which contains images of office-related objects downloaded from the Amazon website. In addition, we will use the gray-scale version of the Amazon data. The main task is object recognition. For the first set of experiments, we will use AlexNet [23] pretrained on Imagenet and fine-tuned on the Amazon data. We will be dealing with three adaptation problems: Amazon → Amazon-Gray, Amazon → Webcam and Amazon → DSLR. We do not consider DSLR↔Webcam, as with deep features the shift between the two sets is minimal.

Mnist → MnistM: here, we train a network for handwritten digit recognition on the black & white Mnist dataset, while the target test set is MnistM, composed of the same digits as Mnist [9] but blended with patches from real images [31]. We followed the procedure described in [1] and verified our setup by reproducing the same results.

Synthetic traffic signs → Dark illumination: in this setting, we want to imitate the real-life condition in which we train a system on a large dataset of synthetic images that mimics the different conditions as much as possible, and then test it on a dataset affected by a domain shift. The task here is traffic sign recognition, where we train on a synthetic dataset of traffic signs [32] composed of 100,000 samples and test on a subset of the German traffic sign dataset [10] that has been captured under very dark lighting conditions (see Figure 5). We follow a single-column traffic sign network architecture similar to [33].

Photo → Art: finally, we examine a different shift, namely the case of a network trained on photos of real objects and then tested on paintings. A similar idea was introduced in [34] in the context of the cross-depiction problem, where the introduced target set was gathered from different domains, i.e., clip art, painting, cartoon and sketches. Here we fine-tune AlexNet on a set of 7 classes of animals, whose real photos we downloaded from the internet using a commercial image search engine. The categories used are: dog, elephant, zebra, lion, cow, deer and horse. In a similar way, we gathered paintings of the same categories from the internet. The training and test sets have around 40 images per category for each set and will be made available online.

We apply our domain adaptation method to the first layer of the fine-tuned network. See Figure 5 for an example of the different adaptation problems. For the adaptation problems described above, we used 10% of the target dataset as our available target samples.
Baselines: We compare with the following baselines. No adaptation (NA): testing the network fine-tuned on the source dataset directly on the target set without adaptation. DDC [22]: a method that adapts only the last layer, by selecting the feature space dimension that yields the lowest MMD between the source and target domains. Subspace Alignment (SA): unsupervised subspace alignment [3] is a simple method, yet shows good performance. To make a fair comparison, we take the activations of the last fully connected layer before the classification layer and use them as features for this baseline. We perform the subspace alignment and retrain an SVM with a linear kernel on the aligned source data, then use the learned classifier to predict the labels of the target data. We tried all the different dimensions between 20 and 60 and report the best result with respect to the target classification task. For completeness, we also show the result of the SVM classifier trained on the source features before alignment (SVM-fc7). SA - First Convolutional: we adapt the subspace alignment method so it can be applied to the first layer of the convolutional neural network. To avoid having to retrain a classifier, we adapt the target data to the source data (instead of vice versa), w.r.t. the activations of the first convolutional layer. In the original subspace alignment method, the source subspace is aligned with the target subspace using the mapping matrix M = X_S^T X_T, where X_S and X_T are the d largest principal components of the source and the target. A source sample y_S and a target sample y_T can then be compared using y_S X_S X_S^T X_T X_T^T y_T^T. Here, we therefore replace a filter response vector y_T by y_T X_T X_T^T X_S X_S^T. Again, we tried different dimensions for the subspaces, between 20 and the size of the filter maps, i.e. 96 in Alexnet.

Results and discussion

Table 1 shows the results of the experiments on the Office dataset. In spite of the method's simplicity and the fact that it is only active on the first layer, we systematically improve over the raw performance obtained without domain adaptation, especially in the case of Amazon-Gray, where the shift is low-level and the method adapts by anticipating the color information of the target dataset, i.e. reconstructing the color filters. In the case of Amazon-Webcam and Amazon-DSLR, the method tries to ignore the backgrounds of the Webcam and DSLR datasets, which differ from the white background of the Amazon dataset. Of course, in this case there is also a high-level shift that can be corrected by adapting the last-layer features. See Figures 6 and 7 for two filter reconstruction examples from the Webcam and Amazon-Gray target sets. In the case of Amazon-Gray, we have the original color image from Amazon and also each filter output, which serves as a reference. More examples can be viewed in the supplementary materials. The method also outperforms DDC [22], which is dedicated to correcting the shift at the last layer only. The same holds for SA, whose improvement on Amazon-Gray was negligible. A similar behavior can be observed for SA applied to the first layer; we attribute this to the fact that aligning all the filters of the target in one shot with the filters of the source amounts to ignoring the bad filters, whereas our method brings in new information by reconstructing the bad filters' output. Table 2 shows the prediction accuracy on the remaining three datasets. For Syn→Dark, the method succeeds in improving the performance by 5%.
This again proves that the first-layer filters are not generic and that adapting them plays an important role in the rest of the classification process. It also indicates the direct applicability of the method in real-life applications. A similar performance can be observed w.r.t. the Photo-Art and Mnist-MnistM adaptation problems. As a result, we can conclude that the method improves on a variety of datasets based on different network architectures and scales from small datasets like Office and Photo-Art up to large datasets such as the synthetic traffic signs and Mnist. The whole procedure of filter selection and reconstruction takes around 5 minutes on a desktop CPU (Core i7, 16 GB RAM).

Using few samples

As explained earlier, our method uses the unlabeled target samples only to estimate the target distribution and thus the divergence of each filter. To model the distribution of a filter, we do not need a large number of samples, especially since we do not take the spatial resolution into account and consider each point in the filter map as a different sample. So, an image contributes a number of samples equal to the size of the filter map.
To examine this claim, we use sets of different numbers of images from the target dataset, here the Amazon-Gray dataset. We run the method 3 times on 3 different subsets and report the mean performance, starting from a single available target sample (see Figure 8). The method starts improving the performance already from a single sample, and this improvement gradually increases up to 10 samples, after which the performance saturates.
Examining other layers

Here, we extend our test to further layers: rather than adapting the filters of the first layer, we reconstruct the filters of the second or third layer instead (leaving the other layers untouched). As reported in Table 3, the improvement obtained by adapting the second or third layer is smaller than for the first layer. In particular, with Amazon-Gray, adapting the third-layer features does not improve the recognition. This may be explained by the fact that the domain shift originates at the first layer (color) and thus cannot be corrected by adapting only the third layer, which has more texture-oriented filters.
Conclusion
In this work, we aim to push the limits of unsupervised domain adaptation methods to settings where we have few samples and limited resources to adapt, both in terms of memory and time. To this end, we perform an extensive analysis of the output of a deep network from a domain adaptation point of view. We deduce that even though the filters of the first layer seem relatively generic, domain shift issues already manifest themselves at this early stage. Therefore, we advocate that the adaptation process should start from the early layers rather than just adapting the last-layer features, as is often done in the literature. Guided by this analysis, we propose a new method that corrects the low-level shift without retraining the network. The proposed method is suitable when moving a system to a new environment and can be seen as a preprocessing step that requires just a few images to be functional. This opens the door towards online adaptation, where the model is updated as new instances become available. The method is lightweight, can be applied to different tasks and is not conditioned on a specific architecture. We test it on a variety of datasets, on which it systematically succeeds in improving the network performance.
A STUDY ON STRATEGIES FOR MARKETING MANAGEMENT ADOPTED TO INCREASE CUSTOMER AWARENESS BY THOLGIRI (THE ETHNIC HUB), GUWAHATI, ASSAM
A marketing strategy is a business's overall game plan for reaching people and turning them into customers of the product or service that the business provides. The marketing strategy of an organization contains the company's value proposition, key marketing messages, information on the target customer and other high-level components. The marketing strategy is the route map of the marketing plan, which is a document that lays out the types and timing of marketing activities. A company's marketing strategy is expected to have a longer lifespan than any individual marketing plan, since the strategy is where the value proposition and the key components of a company's brand image reside; these components ideally do not shift very much over time. The marketing strategies adopted by organizations may vary from industry to industry and may even vary among similar industries. Marketing strategies help a business develop products and services that can meet the needs of its target market. Good marketing strategies help customers understand the products or services in a better way. A good marketing strategy must be drawn from marketing research and focus on the right product mix in order to achieve the maximum profit potential and sustain the business. A marketing strategy is also important for developing a promotional strategy, as it helps the business identify its target market and set measurable goals. It is vital to the success of the organization to implement a marketing plan that aims for growth and positive change in the bottom line.
INTRODUCTION
Situated right in the heart of Guwahati, in the historic Latasil area, 'Tholgiri' (থলগিগি) is a wonderful concept. A conglomeration of four different housing styles prevalent in the villages of Assam, the interiors will leave you spellbound. The first two sections are for the display and sale of various items indigenous to Assam, and the other two sections serve the purpose of a cafe-cum-bookstore. The owners, a noted journalist, Manorom Gogoi, and author, Monalisha Saikia, have strived to create a space like home: the feeling of belonging, connecting and purpose. It is interesting to see how, on the one hand, Assamese sub-nationalism has hit a new low due to the acts of miscreants, while on the other hand, some people have channelized this into creative ventures. This variant is all-embracing and devoid of parochialism. Tholgiri came into existence on the 14th of December, 2018. Tholgiri, being a completely new concept, aims at serving the various cultural and ethnic products of Assam, which are also manufactured in Assam. Along with promoting the ethnic items of the state, it also provides opportunities for the people of various villages to earn by supplying the resources, which ultimately contributes to the economic growth of the state as well as the country. Local villagers from various villages of Assam have been engaged in providing the resources and also in producing the packaged products. Tholgiri has become a significant source of income for the local people of the villages by providing them employment opportunities. The products of Tholgiri are manufactured in Assam, from the raw materials to the finished products. Groups of people from various villages of Assam are engaged in the manufacturing of products like pithas (rice cakes), khar, pickle, rice, organic tea and green tea, chunga tea (phalap), fruit juices, organic oil, spices, jaggery, kumal chaul jolpan, chiri jolpan, sandoh jolpan, pitha guri jolpan, hurum jolpan, bora chaul jolpan, pithas, ladoos, pork rice (boiled veg, pitika, chatni), chicken rice (boiled veg, pitika, chatni), duck rice (boiled veg, pitika, chatni), mati maahor jol, paneer curry, mushroom curry, fish tenga, fish with mustard seeds, fish pitika and fish with vegetables, curd (doi), cream, molasses and payos (kheer), etc. The clothing items, i.e. the ethnic attires, are manufactured in Sualkuchi, which is famously known as the "Manchester of Assam" for its large number of cottage industries engaged in handloom. Assam as a state is well known for its natural resources (like silk, cotton, tea, rice, etc.), and the people of Assam have been engaged in agricultural and handloom activities for decades; Tholgiri is an idea for keeping this tradition alive. The main aim of Tholgiri is to promote the resources of Assam and to popularize the culture and food habits of Assam among the younger generation, both across Assam and in other parts of the country. People nowadays also consume a great deal of wheat-based food products, which the founders regard as unhealthy. Tholgiri has also shown concern regarding the various food habits that are leading to health problems and has been making efforts to spread awareness among customers, with the aim of replacing such food habits with organic and rice-based products. The products of Tholgiri are produced organically and are preservative-free. The main objectives of Tholgiri are as follows:
- To keep the culture of Assam alive.
- To enhance the cultural and ethnic food habits of Assam by providing various packaged items as well as serving various ethnic meals in the outlet.
- To help the local producers and vendors by providing employment opportunities.
- To promote the books written by different authors from all over Assam.
- To promote the ethnic attires of different parts of the state, thereby opening opportunities for the handloom sector of the state.
- To promote the various resources which are available in the country.

According to Kotler & Keller, "The marketing strategy lays out target markets and the value proposition that will be offered based on an analysis of the best market opportunities." [1] Marketing strategy involves mapping out the company's direction for the forthcoming planning period, whether that be 3, 5 or 10 years. It involves undertaking a 360° review of the firm and its operating environment, with a view to identifying new business opportunities that the firm could potentially leverage for competitive advantage. Strategic planning may also reveal market threats that the firm may need to consider for long-term sustainability. Strategic planning makes no assumption about the firm continuing to offer the same products to the same customers into the future. Instead, it is concerned with identifying the business opportunities that are likely to be successful and evaluating the firm's capacity to leverage such opportunities.
It seeks to identify the strategic gap, that is, the difference between where a firm is currently positioned (the strategic reality or emergent strategy) and where it ought to be positioned for sustainable, long-term growth (the strategic intent or deliberate strategy). Strategic planning seeks to address three questions, namely:

Where are we now? (Situation analysis)
What business should we be in? (Vision and mission)
How should we get there? (Strategies, plans, goals and objectives)

A fourth question may be added to the list, namely 'How do we know when we got there?' Due to the increasing need for accountability, many marketing organizations use a variety of marketing metrics to track strategic performance, allowing corrective action to be taken as required. On the surface, strategic planning seeks to address three straightforward questions; however, the research and analysis involved in strategic planning is very sophisticated and requires a good deal of skill and judgement.
LITERATURE SURVEY
Stanley F. Teele studied the marketing practices of food manufacturers and observed that the use of brand names is not directly related to high distribution costs, because it is the intensity with which brands are promoted that determines costs rather than their use alone. The costs of marketing differ very decidedly from organization to organization within the same product division of the food industry. It is of great importance to see how wide a range of marketing practices may be adopted successfully by companies in competition with each other. The wide variety of marketing practices is exemplified by the extent to which firms differed in the selection of types of customers. Personal selling costs vary significantly from one industry to another, but within each industry there is more of a tendency toward a common or typical figure. Firms of larger size tend to have higher distribution costs relative to smaller firms in the same industry. [2] Barksdale, in a cross-sectional study in the United States of a national sample of consumers' attitudes towards the policies and practices of business, found that consumers showed a high level of apprehension regarding certain policies of business and discontent over specific marketing practices. Most consumers valued the free enterprise system highly. The presence of imperfections in the marketing system was believed to be caused by the ineptness, carelessness, and apathy of consumers. Consumers also believed that their problems required a lot of attention and expressed the need for greater government regulation. [3] According to Peter F. Drucker, in today's society there is no other leadership group except managers. Despite the emphasis on marketing and its approach, marketing is still rhetoric rather than reality in many types of businesses. After many years of marketing rhetoric, consumerism has become a powerful popular movement, which proves that not much marketing has actually been practiced [4]. Williamson, in a study concerning the pattern of adoption of new drugs, surveyed 140 general practitioners; the results showed that doctors' prescribing attitudes are strongly influenced by the characteristics of the drug. He pointed out that a single marketing practice for the entire product line would be ineffective and recommended a different combination of marketing variables to influence sales revenue in each product market, taking into account the complex factors characterizing each product market and the effects of the product characteristics on doctors' prescribing attitudes. He also draws on the risk-assessment literature to examine medical practitioners prescribing new drugs, concluding that the level of risk which a doctor perceives determines the external validation he or she requires in order to prescribe the drugs. The preferred information sources vary with the doctors' perceived riskiness of the medicines: the most important source for low-risk drugs is medical representatives, who are less important for higher-risk drugs. [5] According to Jain, marketing strategy is driven by the marketing objectives; customer and competitive perspectives, and product and market momentum (i.e. extrapolation of past performance to the future), form the basis of marketing strategy.

"Marketing strategy is developed at the business unit level. Within a given environment, marketing strategy deals essentially with the interplay of three forces known as the strategic 3 Cs: the Customer, the Competition and the Corporation. A good marketing strategy should be characterized by: a) a clear market definition; b) a good match between company strengths and the needs of the market; and c) superior performance, relative to the competition, in the key success factors of the business. Marketing strategy, in terms of these key constituents, must be defined as an endeavour by a corporation to differentiate itself positively from its competitors, using its relative corporate strengths to better satisfy customer needs in a given environmental setting." Based on the interplay of the strategic 3 Cs, the formulation of marketing strategy requires the following decisions: Where to compete? How to compete? When to compete? [6]
OBJECTIVE OF THE STUDY
- To examine the influence of marketing strategies in creating brand awareness among customers.
- To examine the SWOT analysis of Tholgiri.
- To analyze the response of consumers with respect to quality, price, variety, packaging and freshness parameters.
SCOPE OF THE STUDY
The study is limited to the customers of Tholgiri, Guwahati, Assam. There were no previous data available on Tholgiri, as the whole initiative is the first of its kind.
RESEARCH METHODOLOGY
In this descriptive research design, a structured questionnaire was used as the data collection instrument with a convenience sample of customers. A sample size of 100 was taken for the purpose of the research. Moreover, interviews were conducted on a face-to-face basis. The data provided by the customers helped in studying the various measures undertaken by the organisation to increase customer awareness. [7] The methodologies applied for successful completion of the study are:

Primary sources of data collection:
- Personal interview and contact: personal contact was established with customers to obtain the necessary information.
- Questionnaires: for the purpose of collecting data and information, structured questionnaires were prepared and shared for recording responses.
- Observation: vision is its main means of data collection; different customers' attitudes, behavior and knowledge were noted here.
- Sample size: the total number of customers surveyed was 100, on a personal basis, with the help of the structured questionnaire.

Secondary sources of data collection:
- Files and documentary sources: data were collected from Tholgiri's files and documentary sources.
Fig. 6.1 Sources of Information about Tholgiri
[Chart data: Friends 57%, Newspapers 21%, Social Media 13%, Others 9%]

Analysis: It is found that a large number of the customers came to know about Tholgiri from friends, while moderate numbers came to know about it from newspapers and social media. It is understood that the organization should focus more on promoting itself over print and social media to attract more customers. 57% of the respondents came to know about Tholgiri from their friends, 21% from newspapers, 13% from social media and 9% from other sources.
Fig. 6.2 Product Preference of Tholgiri Source: Field Data
Analysis: It is found that a good number of customers prefer the packaged food items and the food served at Tholgiri, a moderate number prefer the books, and very few prefer the ethnic wear. It is understood that customers prefer the food items over the other items offered by Tholgiri. Hence, Tholgiri needs to concentrate on its other products as well to increase customer preference for them. 42% of the customers like the packaged food items, 34% like the food served at Tholgiri, 18% like the books and 6% like the ethnic wear.
Fig. 6.3 Customer Satisfaction at Tholgiri Source: Field Data
Analysis: It is found that Tholgiri is doing really well with the service being provided to the customers and has been successful in satisfying customers with its service quality, as the majority of the customers responded positively about its customer service. 94% of the customers are satisfied with the customer service and 6% are somewhat satisfied.

Analysis: It is found that the majority of the respondents are satisfied with the quality of the food being served to them. 92% of the respondents are satisfied with the quality of the food in relation to its price, 1% are not satisfied and 7% are somewhat satisfied.
Fig. 6.5 Purchases Made from the Available Packaged Food Products Source: Field Data
Analysis: It is found that, although a new entrant in the market, Tholgiri has a good number of customers who have purchased and liked its packaged food products. 78% of the customers have purchased the packaged food products offered by Tholgiri and 22% have not purchased them yet.
Fig. 6.6 Preference of Packaged Food Products Source: Field Data
Analysis: It is found that a good number of customers like the rice products and pithas, whereas fewer customers prefer the pickles and other products of Tholgiri. 41% of the customers prefer the rice products offered by Tholgiri, 33% like the pithas and ladoos, 11% like the pickles and 15% like other products such as oil, spices, tea, etc.
[Chart data: Yes 78%, Somewhat 17%, No 5%]

Analysis: It is found that a few of the respondents are not very satisfied with the packaging of the products of Tholgiri, which means Tholgiri needs to focus on its packaging in order to meet customer expectations. 78% of the customers are satisfied with the packaging of the products, 17% are somewhat satisfied and 5% are not satisfied.
Fig. 6.8 What Customers Like about Tholgiri Source: Field Data

Analysis: It is found that the majority of the customers like the food and infrastructure of Tholgiri, and a moderate number of them like the customer service. 37% of the customers like the food offered by Tholgiri, 34% like the infrastructure and 29% like the customer service.
Fig. 6.9 Customers Convenience from their Location Source: Field Data
Analysis: It is found that Tholgiri is reasonably convenient for its customers, most of whom are located within an 8 km radius. 17% of the customers find Tholgiri very convenient from their location, 21% find it convenient as they are located within 3-5 km of Tholgiri, 32% find the location less convenient as they are located within 5-8 km of Tholgiri, and 30% do not find the location of Tholgiri convenient.

Analysis: It is found that the majority of the customers are willing to recommend Tholgiri to their friends and family, which shows the positive impact of Tholgiri on its customers. 96% of the customers are willing to recommend Tholgiri to their friends and family, and 4% may or may not recommend it.
FINDINGS
Using the 7 Ps of the marketing mix, the strategies of Tholgiri regarding its products and service have been analyzed. It is seen that, out of all the Ps, Tholgiri lags behind in the promotion element of the marketing mix. Tholgiri has not yet invested in any sales promotion, as it is new to the market; it is currently engaged in direct marketing. It is also observed, in line with the 7 Ps of the marketing mix, that Tholgiri is not able to target a large number of customers because it has only one outlet in the city. Using a SWOT analysis, the major strengths, weaknesses, opportunities and threats of Tholgiri have been identified:

Strengths of Tholgiri: variety of products and food items, organic products, and a unique concept with an attractive outlet.
Weaknesses of Tholgiri: only one outlet, no promotional strategies and no home delivery.
Opportunities of Tholgiri: new in the market, unique concept, deals with ethnic food and products, and a large target market.
Threats of Tholgiri: a high level of competition, differing food habits of people, and changing customer preferences.

According to the customer survey, the majority of the customers came to know about Tholgiri from friends, which means word of mouth has played the key role in promoting Tholgiri among customers. It is also understood that Tholgiri should focus more on promoting itself over social and print media to attract more customers.
It is found that 42% of the customers like the packaged food items, 32% like the food served at Tholgiri, 18% like the books and 6% like the ethnic wear. It is understood that the customers prefer the packaged products and served food items over the books and ethnic wear. Hence, Tholgiri needs to concentrate on its other products as well to increase their sales. It is seen that the majority of the customers are satisfied with the customer service provided by Tholgiri, which shows the positive impact of Tholgiri among the customers. It is also found that the majority of the customers are satisfied with the quality of the food in relation to its price, while a few are not completely satisfied as they think a few food items are priced slightly high. The majority of the customers, i.e. 78%, have purchased the packaged food products offered by Tholgiri, and 22% have not purchased them yet. Among the packaged products, the rice products and the pithas and ladoos are the most preferred among customers. Hence, Tholgiri should focus on its other products as well to enhance customer preference for Tholgiri products over other brands. The majority of the customers are satisfied with the packaging of Tholgiri products, while a few of them are not very satisfied as they think the packaging could have been of better quality. It is found that all of the customers had positive reviews of the infrastructure of Tholgiri, as they find it very unique and attractive. It is seen that 37% of the customers like the food offered by Tholgiri, 34% like the infrastructure and 29% like the customer service. It is understood that the majority of the customers like the food and infrastructure of Tholgiri.
When asked about location, 17% of the customers find the location of Tholgiri convenient as they are located within 3-5 km of Tholgiri, 32% find the location less convenient as they are located within 5-8 km of Tholgiri, and 30% do not find the location of Tholgiri convenient as they are located more than 8 km from Tholgiri; they want Tholgiri to come up with more outlets in other locations in the city of Guwahati. The majority of the customers are likely to visit Tholgiri again, and very few of them may or may not visit again. The majority of the customers are willing to recommend Tholgiri to their friends and family, which shows the positive impact of Tholgiri on its customers.
RECOMMENDATION
On the basis of the analysis and findings explained in the previous pages, the following recommendations are made:

- Tholgiri should use marketing mix strategies, or other marketing strategy tools, to understand the market situation and take marketing measures accordingly.
- Tholgiri should invest in promotional activities to promote itself among a greater number of customers, with more focus on online promotion of its products and services.
- Tholgiri needs its own website, which will help customers reach Tholgiri more easily.
- Tholgiri should also focus on the quality of the packaging of its products to meet customer expectations.
- The most preferred products of Tholgiri are the food items; Tholgiri should focus on its other products as well to increase their sales.
- Tholgiri should also increase the stock of its ethnic wear collection to make it more attractive.
- Tholgiri should plan to open more outlets in the city to reach a larger share of the target market.
CONCLUSION
A marketing strategy is a business's overall game plan for reaching people and turning them into customers of the product or service that the business provides. The marketing strategy of an organization comprises the company's value proposition, key marketing messages, information on the target customer and other high-level components. Tholgiri is a unique concept, completely new in the market, which aims at serving the various cultural and ethnic products of Assam. The main objective of the study was not only to understand the marketing strategies but also to analyze the customer response with respect to the price, quality, packaging and freshness of the Tholgiri products. From the analysis, we can conclude that Tholgiri, being new in the market, needs to focus on its marketing strategies and promotional activities to make people aware of its existence in the market. Tholgiri should also come up with its own website and keep itself updated on various social platforms to reach larger target markets.
Repurposed FDA-approved drugs targeting genes influencing aging can extend lifespan and healthspan in rotifers
Pharmaceutical interventions can slow aging in animals, and have advantages because their dose can be tightly regulated and the timing of the intervention can be closely controlled. They also may complement environmental interventions like caloric restriction by acting additively. A fertile source for therapies slowing aging is FDA approved drugs whose safety has been investigated. Because drugs bind to several protein targets, they cause multiple effects, many of which have not been characterized. It is possible that some of the side effects of drugs prescribed for one therapy may have benefits in retarding aging. We used computationally guided drug screening for prioritizing drug targets to produce a short list of candidate compounds for in vivo testing. We applied the virtual ligand screening approach FINDSITEcomb for screening potential anti-aging protein targets against FDA approved drugs listed in DrugBank. A short list of 31 promising compounds was screened using a multi-tiered approach with rotifers as an animal model of aging. Primary and secondary survival screens and cohort life table experiments identified four drugs capable of extending rotifer lifespan by 8–42%. Exposures to 1 µM erythromycin, 5 µM carglumic acid, 3 µM capecitabine, and 1 µM ivermectin extended rotifer lifespan without significant effect on reproduction. Some drugs also extended healthspan, as estimated by mitochondrial activity and mobility (swimming speed). Our most promising result is that rotifer lifespan was extended by 7–8.9% even when treatment was started in middle age.
Introduction
Because of the infirmities associated with human aging, there continues to be great interest in interventions that can mitigate the process. Of the three approaches, genetic manipulation continues to make important contributions to the scientific understanding of the mechanisms of aging, whereas environmental and pharmacological interventions offer the most promise for practical benefits. Little is known about how pharmaceutical interventions slow aging in animals, but they have advantages because their dose can be tightly regulated and the timing of the intervention can be closely controlled. Pharmacological interventions also may complement environmental interventions like caloric restriction by acting additively.
A fertile field to search for therapies that can slow aging is the pool of FDA approved drugs whose safety has been thoroughly investigated (Armanios et al. 2015). All drugs bind to several protein targets causing multiple effects (Zhou et al. 2015), many of which have not been biologically characterized. It is possible that some of the side effects of drugs prescribed for one type of therapy may have benefits in slowing aging. The field of re-purposing approved drugs for other therapies is rapidly growing (Ashburn and Thor 2004; Pantziarka et al. 2014).
Some metabolic pathways are known to be key in aging, like insulin/IGF-1 signaling, which is involved in nutrient sensing and metabolic regulation. For a variety of reasons, drug development has targeted many proteins in this pathway as therapy for a diversity of diseases. However, there has been little systematic drug development aimed at aging therapy. Moreover, there could be other metabolic pathways not yet associated with aging that respond to pharmacological intervention with existing approved drugs.
With more than 5000 drugs in commercial use, it would be quite difficult to test all of these experimentally for aging benefits using in vivo animal models. An alternative is to use computationally guided drug screening for prioritizing drug targets (Snell et al. 2016;Calvert et al. 2016;Ziehm et al. 2017). This would produce a short list of candidate compounds that could then be subjected to the power of in vivo testing with animals. This is one of the most promising strategies for identifying pharmacological interventions that can safely slow aging and extend human lifespan and healthspan.
In this paper we have emphasized testing for effects vs biochemical mechanisms. Our priority has been to screen as many drugs as quickly as possible for major beneficial effects slowing aging. Once a small pool of such compounds has been identified, then a concerted effort can be mounted to understand the biochemical mechanisms underlying the therapeutic effects.
We apply the virtual ligand screening approach FINDSITEcomb (Zhou and Skolnick 2013) for screening potential aging-related protein targets against FDA approved drugs listed in DrugBank (Wishart et al. 2006) to identify drugs with the potential to slow aging. FINDSITEcomb has advantages over traditional ligand-based approaches in that it does not require a known set of ligands for the target. It is also advantageous compared to traditional structure-based docking methods because it does not require high-resolution target structures, screens a compound library much faster and, more importantly, has a much better enrichment factor. FINDSITEcomb generates a short list of FDA drugs that potentially bind to aging-related protein targets of an animal model. This short list of promising compounds was screened using a multi-tiered approach with rotifers as an animal model of aging. Rotifers have several advantages as experimental animal models for investigating the biology of aging (Snell 2014; Snell et al. 2014a, b). Among these are a short life cycle so that cohort life table experiments can be completed within 3 weeks, the ability to clone females via parthenogenetic reproduction, the possibility of performing experimental evolution in chemostats, and the identification in rotifer transcriptomes of many genes implicated in aging and their homology to similar genes in mammals.
In this paper, we screened a short list of drug candidates with the potential to extend lifespan using in vivo animal experiments with primary and secondary survival screens followed by cohort life tables. We also screened drugs for their ability to extend healthspan by using proxies for mitochondrial activity and swimming speed (mobility) during the aging process. We found a few drugs capable of extending both lifespan and healthspan and then tested them for additive or synergistic effects in combined exposures. In addition to the beneficial effects of life-long therapy of some drugs, we demonstrated retarded aging effects in a few cases even when drug therapy was initiated in midlife, a phenomenon that many consider the holy grail of aging therapy.
Materials and methods
Computational screening of putative aging proteins and FDA-approved drugs

Protein targets for drug binding in Brachionus manjavacas were identified in a multistep process (Table 1), beginning with identifying putative aging genes from the GenAge database (http://genomics.senescence.info/genes/stats.php). We used data from the October 8, 2015 build, which identified 2054 putative aging genes from 9 model organisms, including Saccharomyces cerevisiae, Caenorhabditis elegans, Drosophila melanogaster, and Mus musculus. The genes were ranked by their effect on maximum average lifespan increase, so we chose those in these four model organisms whose knockdown produced a > 20% increase in average lifespan. From this pool of protein candidates, we identified 94 proteins with > 40% amino acid sequence similarity to Adineta vaga genes, the only rotifer for which a whole genome analysis is currently available (Flot et al. 2013).
Subsequently, FINDSITEcomb was applied to screen the 94 proteins against all DrugBank drugs (including FDA approved and experimental drugs) plus the ZINC8 molecules clustered at a Tanimoto Coefficient (TC) of 0.8 as background (Irwin and Shoichet 2005). FINDSITEcomb takes the protein amino acid sequence as input and builds a structural model using a threading approach. The pockets of each target protein were detected in their models and subsequently compared to the ligand-binding pockets found in PDB structures (Bernstein et al. 1977) and to the pockets of the structural models of the proteins from the ChEMBL (Gaulton et al. 2012) and DrugBank (Wishart et al. 2006) libraries. Pockets with the most significant similarity to the target pockets were selected, and their corresponding binding ligands were used as template ligands. These template ligands were then utilized as seed ligands for virtual screening against the compound library with a fingerprint comparison method (Nikolova and Jaworska 2003).
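To make the fingerprint comparison step concrete, the sketch below ranks a compound library by its maximum Tanimoto similarity to a set of seed ligands using RDKit Morgan fingerprints. This is only a minimal illustration of the general idea, not the FINDSITEcomb implementation; the function names and the molecule lists in the usage comment are hypothetical.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    # 2048-bit Morgan (ECFP4-like) fingerprint for a molecule given as SMILES
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def rank_library(seed_smiles, library_smiles):
    """Score each library compound by its best Tanimoto similarity to any seed ligand."""
    seeds = [fingerprint(s) for s in seed_smiles]
    scored = []
    for smi in library_smiles:
        fp = fingerprint(smi)
        score = max(DataStructs.TanimotoSimilarity(fp, seed) for seed in seeds)
        scored.append((score, smi))
    return sorted(scored, reverse=True)   # top of the list = most seed-like

# Hypothetical usage: seeds come from template-ligand matching, the library from DrugBank:
# ranked = rank_library(["CCO", "c1ccccc1O"], drugbank_smiles_list)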
The compounds were ranked in a list by their similarity score to the seed ligands. FDA approved drugs within the top 1% of the screening list for each Adineta gene product were identified as potential binders of the protein. We then ranked each drug according to its cumulative aging effect, computed as the summed strength of the life extension effects of those proteins predicted to bind to the drug (Table 2). Although FDA approved drugs are mostly safe, some of them still have serious side effects; these can be filtered out with a killing index (KI) that flags associations with severe side effects such as heart attack, cancer and death (Zhou et al. 2015). Drugs with KI > 0 were removed, leaving 601 candidate drugs. The top 100 drugs were filtered for structural and experimental redundancy by clustering them at TC = 0.8, leaving 42 candidate drugs. Drugs costing more than $200, scheduled, or unavailable for purchase were also filtered out, leaving 31 candidate drugs to be experimentally screened using the rotifer model.
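The prioritization and filtering logic described above can be summarized in a few lines of Python; the data structures (per-protein lifespan effects, predicted target sets, KI values) are hypothetical stand-ins for the actual GenAge and DrugBank records, and the function is a sketch rather than the authors' pipeline.

def prioritize_drugs(drugs, predicted_targets, lifespan_effect, killing_index):
    """Rank drugs by cumulative aging effect, dropping those with KI > 0.

    drugs:             iterable of drug IDs
    predicted_targets: dict: drug -> set of putative aging-protein targets
    lifespan_effect:   dict: protein -> % maximum average lifespan change (GenAge)
    killing_index:     dict: drug -> KI value (severe side-effect association)
    """
    scored = []
    for drug in drugs:
        if killing_index.get(drug, 0) > 0:
            continue                      # drop drugs linked to severe side effects
        # Cumulative aging effect: summed lifespan effects of predicted targets
        effect = sum(lifespan_effect.get(t, 0) for t in predicted_targets[drug])
        scored.append((effect, drug))
    return sorted(scored, reverse=True)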
Experimental design and treatments
Experiments were performed with the rotifer species Brachionus manjavacas (Russian strain), which was cultivated at 15 ppt salinity and fed the green alga Tetraselmis suecica, as described in Snell et al. (2016). All treatments were applied by dissolving the drugs in water and adding them to the rotifer medium. When DMSO was used as a solvent to assist drug solubility, the controls also contained an identical concentration of DMSO. DMSO concentration was kept below 0.2% for all treatments, a concentration shown in many trials to have no effect on rotifer lifespan or reproduction. Full cohort life table and survival screen experiments were conducted following the methods detailed in Snell et al. (2016). Life table experiments were performed with a cohort of 120 neonate rotifers in each treatment, 5 per well in a 24-well plate. Each well contained 6 × 10^5 cells/mL T. suecica in 15 ppt artificial seawater (ASW), drug treatments, and 20 μM 5-fluoro-2′-deoxyuridine (5-FU or FDU), which prevents hatching of asexual eggs (Snell et al. 2012). Plates were incubated at 22°C and scored daily for mortality. In experiments where drug treatments were delayed, the rotifers were transferred to new plates with fresh medium containing the appropriate drug treatments on day 9.
Survival screens were conducted with a cohort of 84 rotifer neonates per treatment, 7 per well in 12 wells of a 24-well plate. These rotifers were maintained identically to those in the life table experiments. Five μL of 5-FU (1 mg/mL) was added to each well on days 2, 4, and 6 to prevent egg hatching, and T. suecica food was replenished on day 6. These plates were incubated at 28°C, and the number of live animals was counted on day 10. Survival was scored as the average percent surviving in each well on day 10. All drugs were tested in primary survival screens at both 1 and 5 μM concentrations. Drugs that enhanced survival in primary screens were tested in a secondary survival screen at three concentrations (ranging from 0.5 to 10 μM), chosen based on the results of the primary screen.
Assessment of rotifer healthspan
Drugs were tested for their ability to extend rotifer healthspan by analyzing reproductive rates, mitochondrial activity, and swimming speed using the methods described in Snell et al. (2016).
Reproductive life table experiments were conducted as described above, with the exception of using a cohort of 24 rotifer hatchlings per treatment, one per well in a 24-well plate, and no 5-FU in the medium, to allow normal egg hatching. Offspring were counted and removed each day, and the intrinsic population growth rate (r) was calculated for each treatment.
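As an aside on how r is typically obtained from such life-table data (the paper defers to Snell et al. (2016) for the exact procedure), one common approach is to solve the Euler-Lotka equation, sum over ages x of exp(-r x) l_x m_x = 1, where l_x is survivorship and m_x is fecundity at age x. Below is a minimal bisection solver; the example schedules in the comment are made up for illustration.

import math

def intrinsic_growth_rate(lx, mx, lo=-2.0, hi=2.0, tol=1e-9):
    """Solve the Euler-Lotka equation sum_x exp(-r*x)*l_x*m_x = 1 for r.

    lx: survivorship to age x (days); mx: female offspring per female at age x.
    """
    def euler_lotka(r):
        return sum(math.exp(-r * x) * l * m
                   for x, (l, m) in enumerate(zip(lx, mx))) - 1.0
    # euler_lotka is strictly decreasing in r; bisect on the bracket [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if euler_lotka(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical daily schedules for a short-lived rotifer cohort:
# r = intrinsic_growth_rate(lx=[1.0, 0.95, 0.9, 0.8], mx=[0.0, 2.0, 3.0, 1.5])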
Mitochondrial activity was estimated using MitoTracker® Red (Invitrogen). Rotifers were incubated with drug treatments, the alga T. suecica, 15 ppt ASW, and 20 μM 5-FU at 22°C for 4 days. They were then rinsed and incubated with 5 μM MitoTracker® Red in the dark for 30 min. After rinsing again, rotifers were anesthetized with club soda, fixed with formalin, and imaged on a Zeiss Imager Z1 microscope. Images were taken at ×200 magnification with an Alexa 568 nm filter, and average pixel intensity was measured using ImageJ. In experiments where drug treatments were delayed, rotifers were transferred into the drug treatments on day 9 and stained and imaged on days 10, 12, 14 and 16. MitoTracker requires active mitochondria to yield a fluorescent product; mitochondria in dead rotifers do not fluoresce (Snell et al. 2014b). However, MitoTracker should only be regarded as an imprecise measure of mitochondrial activity compared to other measures of mitochondrial metabolism (Brand and Nicholls 2011).
To measure swimming speed, rotifers were first incubated with drug treatments, T. suecica, 15 ppt ASW, and 20 μM 5-FU at 22°C for 10 days. On day 10, 15 rotifers from each treatment were transferred to a microscope slide in 12 μL ASW. Video of swimming behavior was recorded for 30 s using a PixeLink camera on a stereomicroscope at ×10 magnification. Swimming speed was then calculated for 10 rotifers from each treatment using the Tracker Video Analysis and Modeling Tool program (http://physlets.org/tracker/). In experiments where drug treatments were delayed, rotifers were transferred into the drug treatments on day 9 and videos were taken and analyzed on days 10, 12, 14, and 16.
Statistics
Survival screens and healthspan assessments (average reproduction per female, swimming speed, MitoTracker® Red fluorescence) were analyzed using an ANOVA with Dunnett's test comparing treatments to the control. Life table experiments were analyzed using the JMP Pro 12 (SAS Institute) reliability and survival analysis with Wilcoxon's test to compare survival curves.
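For readers who want to reproduce this style of analysis outside JMP, an equivalent ANOVA-plus-Dunnett comparison can be run in Python with SciPy (version 1.11 or later, which provides scipy.stats.dunnett). The arrays below are placeholder data, not the study's measurements.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder day-10 survival percentages per well (12 wells per treatment)
control  = rng.normal(50, 8, 12)
drug_1uM = rng.normal(60, 8, 12)
drug_5uM = rng.normal(55, 8, 12)

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(control, drug_1uM, drug_5uM)

# Dunnett's test: each treatment compared against the shared control
res = stats.dunnett(drug_1uM, drug_5uM, control=control)
print(f"ANOVA p = {p_anova:.3f}; Dunnett p-values = {res.pvalue}")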
Results
The flow diagram in Table 1 illustrates the rationale for selecting drugs to test for their effects on aging.
Our models identified 31 drugs with favorable binding patterns to be experimentally screened for lifespan extension. Most of these drugs had never been implicated in any effects on aging. A list of these drugs, their mechanisms of action, medical use, and therapeutic dose is shown in Table 2.
Our experimental design called for a series of screens for drug effects on rotifer survival. Because rotifers are aquatic animals, all exposures were with drugs dissolved in water. Survival after 10 days of continuous drug exposure was compared to either a control of the dilution water or a solvent control that consisted of the dilution water plus 0.2% DMSO (if drug solubility required a carrier). An example of a primary drug screen can be seen in Fig. 1a. Rotifer survival after 10 days of exposure to six drugs at 1 and 10 µM concentrations was compared to a control containing DMSO. Asterisks above the columns indicate significantly better survival than control by ANOVA and Dunnett's test. For example, survival was improved by 67% over control by exposure to 1 µM of the drugs clarithromycin and ivermectin. In contrast, exposure to 10 µM ivermectin killed all rotifers in the 10 day exposure. All 31 drugs screened were subjected to a primary screen. Drugs yielding significant lifespan extension at at least one concentration were subjected to a secondary screen (Fig. 1b). A secondary screen was a similar experiment to a primary screen, but with different drug concentrations. For example, ivermectin was tested at 1 and 3 µM. At 1 µM exposure, lifespan was once again extended 71% over control, but all rotifers died when exposed to 3 µM ivermectin. Drugs giving positive results in primary and secondary screens were then subjected to a full life table analysis, where test animals were exposed to a drug from birth to their death about 3 weeks later (Fig. 2). Once again, 1 µM ivermectin produced the best results in this experiment comparing four drugs, with an 8% longer mean lifespan, 13% longer median lifespan, and a 19% longer maximum lifespan (age of 95% mortality) than control. Primary and secondary screens as well as life table experiments were performed with the drug 5-FU in the media to prevent hatching of eggs. This facilitates the performance of these experiments by eliminating any confusion between maternal and F1 females. However, a reproductive life table also needs to be performed to check that candidate drugs do not significantly inhibit reproduction. An example of a reproductive life table experiment is shown in Fig. 3. It can be seen that, among 1 µM erythromycin, 5 µM carglumic acid, 3 µM capecitabine, and 1 µM ivermectin, none had a significant effect on the magnitude of rotifer reproduction.

Table 2 Summary of drugs screened. Number refers to a drug's DrugBank number. Cumulative aging effect is the summed strength of the life extension effects from the GenAge database. This provides an estimate of the "maximum average lifespan change" for each gene as % effect on lifespan. For example, the C. elegans let-363 gene extends worm lifespan by about 150%; a rotifer target mapped to this gene will receive an aging effect of 150. The cumulative aging effect for a drug is the sum of all of these effects for all putative rotifer protein targets. Optimal concentration is the experimentally determined drug concentration that produced significantly longer lifespan or healthspan. Purple highlighting: enhanced survival in primary screen; black highlighting: enhanced survival in secondary screen; green highlighting: enhanced survival in primary and secondary screens and life table experiments.
Fig. 1a, b 10 day primary and secondary survival screens for 6 drugs binding to aging pathway proteins. Asterisks indicate treatments where survival is significantly higher than control (P < 0.05)

However, ivermectin delayed maximum reproduction from age 4-6 days and enhanced reproduction in older age classes. Lifespan was extended by exposure to 1 µM erythromycin, producing a 37% longer mean lifespan, 42% longer median lifespan, and a 33% longer maximum lifespan. In comparison, 1 µM ivermectin treatment in this experiment produced a 21% longer mean lifespan, 33% longer median lifespan, and a 22% longer maximum lifespan than control. In addition to lifespan extension, we are also interested in drugs capable of extending rotifer healthspan. Diminished mitochondrial activity has been associated with aging, and we used the fluorochrome MitoTracker to estimate overall mitochondrial activity (Fig. 4a). Exposure to 5 µM carglumic acid, 3 µM capecitabine, 0.5 µM pravastatin, or 1 µM ivermectin for the first 6 days of life produced significantly higher mitochondrial activity than control. Only the 1 µM erythromycin treatment failed to improve mitochondrial activity at age 6 days.
Swimming speed is another endpoint that is a useful estimate of rotifer health and serves as a mobility proxy. B. manjavacas females swim continuously throughout their life, initially at an average of 0.84 mm/s as juveniles, increasing to 1.23 mm/s at age 2 days, followed by a decline back to 0.86 mm/s by age 4 days (Snell et al. 2016). Near death, rotifers stop swimming, fall to the bottom, and remain immobile until they die. Exposure to certain drugs might mitigate this decline in swimming speed with age. Among the drugs tested, only continuous treatment with 5 µM carglumic acid yielded significantly higher swimming speed at age 10 days than control (Fig. 4b).
We investigated whether combinations of the top candidate drugs might improve survival to older age classes more than single drug exposure. We exposed B. manjavacas females from birth to age 10 days to six single drugs and recorded survival. Exposure to 1 µM ivermectin, 1 µM naproxen, 1 µM erythromycin, 5 µM carglumic acid, or 0.5 µM pravastatin all improved survival 25-39% over control. We compared this result with 15 two-way combinations of drugs; in only two cases did we observe enhanced survival over control, and in four cases survival was considerably worse than control. Moreover, neither of these two cases produced better survival than the single drug treatments, demonstrating the absence of additive or synergistic drug effects.
There is substantial interest in finding drugs capable of slowing aging that do not require drug treatment to begin at birth. Ideally, drugs can be identified that produce significant aging benefits even when therapy is initiated in middle age. We investigated whether the five candidate drugs that we identified could produce aging benefits when therapy is started at age 9 days, the approximate midpoint of rotifer lifespan under our experimental conditions. A life table experiment was initiated where females were untreated from birth until age 9 days (Fig. 5). Then exposure to 1 µM ivermectin, 1 µM erythromycin, 3 µM capecitabine, or 0.5 µM pravastatin was initiated. Survival of all treatments was followed until the death of the last animals, and the survival curves were compared to control. Survival in all four drug treatments was 7-8.9% better than the control, all statistically significant at P = 0.014-0.045. Likewise, we tested the effects of drug therapy beginning in middle age on the healthspan proxies mitochondrial activity and swimming speed. When drug treatment was started on day 9 and followed through day 16, five of the six drug treatments performed better than control. Exposure to 1 µM ivermectin, 1 µM erythromycin, 3 µM capecitabine, 1 µM naproxen, or 0.5 µM pravastatin improved mitochondrial activity over control by 1.4-, 2.9-, 1.4-, 2.4- or 1.6-fold on day 16, respectively (Fig. 6). An ANOVA followed by Dunnett's test yielded P < 0.0001 for all five drugs compared to control. Drug therapy beginning in midlife had less effect on preserving swimming speed in older age classes. On day 16, only the 3 µM capecitabine treatment group swam significantly faster than control animals (t test, P = 0.018). However, on day 12, rotifer swimming speed was faster in four of the drug treatments than control (1 µM erythromycin, 3 µM capecitabine, 5 µM carglumic acid, and 0.5 µM pravastatin).
Discussion
The significance of these results is that they demonstrate that coupling computation to experimentation can quickly identify new drug candidates with the potential to slow aging. Exploring the pool of FDA approved drugs significantly shortens drug development cycles because the safety of these compounds in humans is already established (Ashburn and Thor 2004). Most drugs bind to multiple targets (Zhou et al. 2015), so there is a strong possibility that they have undiscovered binding partners beyond their licensed targets. Thus, the pool of FDA approved drugs is likely to be rich with new targets for novel aging therapies.
The power of combining computational and experimental approaches in drug discovery using model animals has been demonstrated by Snell et al. (2016). These authors identified several drug candidates by screening three rotifer proteins for binding partners from a compound library consisting of DrugBank drugs, including 1347 FDA approved, non-nutraceutical molecules. Using survival screens, cohort life tables, and analysis of swimming speed and mitochondrial activity, they found three drugs, naproxen, fludarabine, and hydralazine, that extended rotifer lifespan or healthspan or both. This work was a proof-of-principle of the computational model and the rotifer experimental system. This approach was expanded in the current paper, where we have systematically screened proteins from most aging-related genes in the GenAge database for their binding to FDA approved drugs. Identifying the top 1% of binders and removing those with high toxicity (KI > 0) using the FINDSITEcomb algorithm (about 600 drugs), we eventually experimentally tested 31 drugs using our rotifer experimental system. From these, five drugs (ivermectin, erythromycin, capecitabine, carglumic acid, and pravastatin) demonstrated the ability to extend lifespan or healthspan or both in a variety of experiments. Only erythromycin and pravastatin have been implicated previously as drugs with aging benefits. Consequently, this work has identified promising new drug candidates and their approximate therapeutic doses for testing in vertebrate models of aging.

[Table legend: Dunnett's test; LS, mean lifespan (days); P, probability of significant difference in lifespan from control by Wilcoxon's test]
Another research group (Ziehm et al. 2017) has taken a similar approach, but using a different computational model to generate a short list of aging drug candidates. They identified 15 top-ranked drugs each for Drosophila melanogaster and Caenorhabditis elegans that they predicted would modulate aging. However, none of the drugs on their list match our top five candidates. This is likely because FINDSITEcomb is a more general method that uses not only the PDB library but also the ChEMBL library, whereas the method by Ziehm et al. is specific to Drosophila melanogaster and Caenorhabditis elegans and focused only on drug-like molecules present in PDB structures. Nevertheless, FINDSITEcomb also identified the same four FDA approved drugs as top ranked in the work by Ziehm et al.: DB01254 (dasatinib), DB00619 (imatinib), DB00398 (sorafenib), and DB04868 (nilotinib). However, all these kinase inhibitors were approved as cancer drugs and have serious side effects, which make them unlikely candidates for aging therapies. By comparison, one key advantage of our approach is that we applied our side-effect assessment using the killing index and eliminated these drugs from further experimental tests. However, it should be noted that dasatinib is considered one of the first senolytic drugs and is currently being considered as a candidate for clinical trials (Kirkland and Tchkonia 2017).
(Figure caption: MitoTracker estimate of mitochondrial activity is for individual rotifers. Asterisks indicate treatments where mitochondrial activity is significantly higher than control (P < 0.05). Asterisks for swimming speed indicate that it is significantly higher than control on day 6 for 5 µM carglumic acid.)
Another important difference in our work is that we verified our computational predictions with in vivo animal experiments with rotifers. Ziehm et al. (2017) provided no direct experimental validation of their predictions. A further complication of using C. elegans for drug screening is that the worms are fed E. coli bacteria, which could metabolize the drugs before they affect C. elegans. In contrast, our experimental rotifer, B. manjavacas, is fed a diet of marine microalgae, which are not as highly adapted to metabolize drugs as human gut bacteria like E. coli.
As with all invertebrates, including Caenorhabditis elegans and Drosophila melanogaster, there are limitations to using rotifers as model animals to screen for drugs capable of slowing aging. Species-specific differences in drug metabolism may produce false positives or negatives for lifespan extension. Because rotifers are aquatic animals, drug delivery may pose special bioavailability problems. For these reasons, the drugs that we have identified as producing lifespan and healthspan extension in this study should be regarded as a working hypothesis until confirmed in a mammalian model.
An advantage of using FDA approved drugs is that their mechanisms of action are usually known. For example, ivermectin is highly efficacious against a variety of parasitic infections in animals (Laing et al. 2017). It is known to block glutamate-gated chloride channels in parasitic nematodes, inhibiting motility, feeding, and reproduction. Although this is ivermectin's licensed application, at micromolar concentrations it is known to bind to a wider range of ligand-gated channels, including GABA, glycine, histamine, and nicotinic acetylcholine receptors (Wolstenholme and Rogers 2005). In mammals, ivermectin has been shown to bind to the ligand binding domain of the farnesoid X receptor in mice, decreasing serum glucose and cholesterol levels (Jin et al. 2013). Ivermectin also inhibits proliferation and induces apoptosis in several human cancer cell lines (Melotti et al. 2014). Overdoses of ivermectin in humans cause cardiotoxicity, neurotoxicity, and adverse effects in the gastrointestinal tract (Yang 2012). Given this promiscuity in binding partners, it is perhaps not surprising that ivermectin also affects metabolic pathways modulating aging in rotifers.
Erythromycin is another drug with interesting effects on rotifer lifespan and healthspan. It is a 14-membered ring macrolide used to treat chronic inflammatory diseases. In addition, erythromycin has been shown to slow aging in yeast (Holbrook and Menninger 2002). The Saccharomyces cerevisiae strain K65-3D grown in 16 µg/mL erythromycin had a mean lifespan that was 27% longer than that of untreated yeast cells. Although this result was intriguing, there have been no follow-up studies and no demonstration of similar effects in animals until our work with rotifers, which reported a 37% increase in mean lifespan. Snell et al. (2016) reported that rotifers treated with 1 µM naproxen had a 14% longer mean lifespan than controls. Naproxen is a nonsteroidal anti-inflammatory drug (NSAID) for relieving pain, fever, swelling, and stiffness that is a nonselective COX inhibitor. These authors hypothesized that naproxen's effects were manifested through an anti-inflammatory mechanism.
The cancer drug capecitabine is used in chemotherapy to treat breast, gastric, and colorectal cancer. Capecitabine is metabolised to 5-fluoro-2′-deoxyuridine (5-FU), which in turn is a thymidylate synthetase inhibitor (Shimma et al. 2000). Inhibition of this enzyme reduces the synthesis of thymidine monophosphate, the active form of thymidine required for the de novo synthesis of DNA. The drug 5-FU has been used extensively in rotifer life table experiments to inhibit hatching of eggs, which eliminates the necessity of removing offspring from maternal females. Offspring removal considerably increases the effort required to perform rotifer life table experiments. In describing the use of 5-FU, Snell et al. (2012) reported a consistent 20% extension of mean lifespan in their experiments but were unable to provide an explanation. We did not observe such a lifespan extension for capecitabine in our life table experiments, but it yielded lifespan benefits in primary and secondary survival screens and improved mitochondrial function in older age classes.
Pravastatin acts as a lipid-lowering drug by binding to the active site and inhibiting the function of the enzyme hydroxymethylglutaryl-CoA (HMG-CoA) reductase. This helps prevent age-related cardiovascular disease. A combination of statins, including pravastatin, inhibited both farnesylation and geranylgeranylation of progerin and prelamin A in a mouse model of premature aging (Varela et al. 2008). This markedly slowed aging phenotypes such as growth retardation, weight loss, hair loss, and bone defects, resulting in substantial lifespan extension. The drug carglumic acid is used to treat hyperammonemia in patients with N-acetylglutamate synthase deficiency, but it has no previously reported effect on aging processes.
One of the most important results of this work is identifying drugs that heretofore have not been implicated as candidates for aging therapy. As exciting as this prospect is, perhaps more promising is the observation that drug therapy can be initiated in midlife and still produce aging benefits. A drug may produce highly desirable aging benefits, but if therapy needs to be initiated at birth and continued throughout the lifespan, few people are going to comply with this therapeutic regimen. However, if therapy can begin at midlife, patients are much more likely to comply. Four of the drugs that we tested, ivermectin, erythromycin, capecitabine, and pravastatin, all produced significant lifespan extension (7-8.9%) when started at age 9 days, about midway through the average rotifer lifespan. These same drugs plus naproxen also improved mitochondrial function in older age classes, 1.4-2.9-fold over control. In addition, midlife treatment with erythromycin, capecitabine, carglumic acid, and pravastatin improved swimming performance in some older age classes compared to control. Together these results are quite encouraging because they demonstrate that proper doses of particular drugs can provide lifespan and healthspan benefits to animals, even if therapy begins in midlife.
Fig. 6 Mitotracker activity in older age classes when drug treatment is initiated in middle age. Fold increase is the amount of mitochondrial activity higher than control.
Our results have identified drugs that are strong candidates for aging therapy and demonstrated their efficacy in an in vivo rotifer model. The next step in the development of these compounds for human therapy is to test them in mammals, explore a range of therapeutic doses, and determine the optimal timing of drug delivery. If these trials are successful, this should provide a strong indication of their likely success in human patients.
Leveraging Machine Learning for Enhanced Cyber Attack Detection and Defence in Big Data Management and Process Mining
— In the rapidly developing field of "Commercial Operation Divergence Analysis," this research seeks to identify and understand differences in commercial systems that exceed expected results. Approaches in this domain aim to identify the characteristics of process implementations that are associated with changes in process effectiveness. This entails identifying the features of procedural behaviours that result in unpleasant outcomes and determining which behaviours have the biggest impact on increased efficiency. As the scale and complexity of big data management and process mining continue to expand, the threat of cyber-attacks poses a critical challenge. This research leverages machine learning techniques for the detection of and defence against cyber threats within the realm of big data management and process mining. The study introduces metrics such as skewness, coefficient of variation, standard deviation, maximum, minimum, and mean for assessing the security state, utilizing variables such as SPI, SPEI, and SSI. The research addresses prior issues in cyber-attack detection by integrating machine learning into the specific context of big data and process mining. The novelty lies in the application of skewness and other statistical metrics to enhance the precision of threat detection. The results demonstrate the effectiveness of the proposed methodology, showcasing promising outcomes in identifying and mitigating cyber threats in the given dataset. The proposed approach, which makes use of Support Vector Regression (SVR), has a standard deviation of 0.9, which is consistent with the variability shown by SVM. The results also demonstrate a significant achievement, with a Mean Absolute Error (MAE) of 0.98, indicating the efficacy of the proposed approach in providing accurate and timely insights for cyber-attack detection and defence, thereby enhancing the overall security posture of data-intensive systems. The results highlight how well the proposed method extracts significant insights from complicated event data, with important ramifications for real-world application and decision-making procedures.
INTRODUCTION
Effective extraction operations depend on a deep mine's ability to maintain a healthy and secure air atmosphere [1]. A crucial step in data analysis is outlier detection. Hawkins defines an outlier as "an observation which deviates so much from other observations as to arouse suspicion that it was generated by a different mechanism" [2]. In 2008, the Global Financial Crisis (GFC) and the demise of the coal mining "super cycle" put a stop to a period of production-focused tactics during which operational costs increased faster than output [3]. Because it presents especially challenging trade-offs, the extractive and material extraction sector is a desirable test case for the study of contamination. Individual plants may provide enormous value, up to millions of dollars annually [4]. Studies have recently claimed that the application of Process Mining (PM) might address these drawbacks by enabling auditors to efficiently and largely automatically analyse entire databases employing historic and/or present-day information [5]. Nevertheless, a number of issues brought on by the extraction and use of coal assets, including sinkholes, soil erosion, landslides, and the demolition of buildings, have had a significant detrimental impact on the daily lives and assets of local populations [6].
Process mining is a field of study that tries to support process enhancement by offering fact-based observations of previous process executions. The topic sits between process modelling and evaluation on the one hand and computational intelligence and data mining on the other. Process variation assessment is described as a collection of methods that allow more than one event log, belonging to various business process versions, to be contrasted in order to identify the differences between them [7]. A prime instance of contaminated soil is the soil that makes up anthracite mine dumps. The sedimentary layers that cover a coal seam are where the initial soil was formed. The excess soil is typically excavated using various excavators, then delivered to the spoil site via lorries or belt conveyors and deposited from different heights, either with or without sorting the material [8].
Multiple research studies indicate that this last class of computations, machine learning algorithms (MLAs), can be more accurate than statistical methods such as discriminant analysis or logistic regression, particularly when the feature space to be studied is complicated (i.e., when the dimension of the input feature space is believed to be quite large and the relationship between the target and the input features is predicted to be non-linear) and the datasets being used are anticipated to include distinct characteristics [9]. On the contrary, machine learning is a branch of computing that seeks to give machines and other devices the capacity to learn without being directly controlled. It tries to provide methods and mathematical models for data-driven learning and forecasting. Once trained, machine learning techniques are used to model characteristics of the input in relation to the anticipated result, predict output attributes in relation to past information, and characterise the behaviour within the data. Machine learning techniques are a possible approach to predicting wind power using velocity data [10]. Machine learning has been immensely successful as data quantities and types have increased because of its ability to examine complex trends in observed data and generate predictive models or decisions on fresh data. A variety of machine learning methods and algorithms have been published in the literature [11].
Predicting how a business operation will behave in the years to come is an essential corporate competence. Process prediction, a form of statistical analysis used in business process management, uses information from previous process occurrences to forecast future ones [12]. Use cases include customer service representatives responding to requests about the amount of time left until an issue is settled, production managers forecasting the length of a manufacturing procedure for improved scheduling and higher utilisation, and case supervisors determining probable violations of regulations to reduce business risk [13]. One kind of process mining task, called process discovery, looks for a model that describes the behaviour of an organization's process using information about how it has previously been executed. The event log is mapped onto a process model using a process identification procedure, which guarantees that the model is a good representation of the behaviour shown in the event log [14].
Our approach prioritizes adaptability to external influences by employing dynamic updating mechanisms. We continuously monitor cyber security policies, track advancements in attack techniques, and stay abreast of technological shifts. This proactive approach allows us to incorporate new knowledge into our models promptly, ensuring their relevance and effectiveness in evolving cyber security landscapes. Additionally, we leverage techniques such as transfer learning and ensemble methods to enhance model robustness and resilience to changing external factors. The proposed model exhibits robustness to changes in feature selection and extraction methods through rigorous validation and sensitivity analysis, ensuring consistent performance across varying feature sets. Additionally, automating feature engineering enhances efficiency and scalability while reducing the risk of human error, bolstering the reliability and adaptability of our models. Regular audits and oversight mechanisms further reinforce data privacy measures, mitigating potential privacy concerns and promoting responsible data stewardship in cyber security practices. Implementing the methodology may face challenges such as organizational resistance to change, integration with existing infrastructure, and compliance with regulations.
The primary contributions of this research are as follows: The application of machine learning algorithms allows for improved accuracy and predictive power in identifying the characteristics of process behaviour that contribute to efficiency shifts.
Machine learning algorithms provide a means to uncover the relevant factors that significantly affect process efficiency. By analysing the event logs and applying the proposed Declare-based coding, the research identifies the most influential aspects of a procedure, allowing organizations to focus on these factors for process optimization.
The combination of machine learning algorithms and the proposed encoding technique constitutes an effective tool for the analysis of processes.
The research compares the performance of different machine learning algorithms, such as the Standardized Streamflow Index, Gene Expression Programming, Support Vector Regression, and the M5 Model Tree. This comparative evaluation helps in understanding the strengths and weaknesses of each algorithm and provides guidance on selecting the most suitable approach for a given context.
Section I, the introduction, provides an overview of the research topic, establishing its relevance and context. Section II, related work, explores existing literature and research in the field to highlight gaps or connections with the current study. Section III, the problem statement, clearly defines the specific issue or gap that the research aims to address. Section IV, methodology, outlines the approach and techniques employed to conduct the study. Section V presents the results and engages in a discussion, while Section VI concludes the research, summarizing key findings and suggesting potential avenues for future exploration.
II. RELATED WORKS
Richetti et al. [15] proposed Treatment Learning as a novel method in the realm of deviance mining for determining the aspects of a procedure that most affect its efficiency, together with a novel encoding method enabling vector-based representations of process occurrences. The suggested encoding method may find more expressive solutions since it is built on the fulfilment of Declare constraint templates. Using publicly accessible event logs from real-life processes, they performed a number of experiments contrasting their proposal with state-of-the-art activity encoding methods. The findings demonstrated that treatment learning, when combined with the proposed Declare-based encoding, offered actionable and more descriptive insight from event logs, making it a useful tool for process analysis.
Al-Shehari et al. [16] proposed a framework in which feature resizing and one-hot encoding strategies are used to alleviate the potential skew of detection outcomes that might emerge from an ineffective encoding procedure. The synthetic minority oversampling technique (SMOTE) is additionally employed to alleviate the dataset's imbalance problem. Renowned machine learning methods are used to discover a highly precise classifier that can identify data leakage events carried out by malevolent insiders during the crucial period when they depart an organisation. By applying the framework to the CMU-CERT Insider Threat Dataset and contrasting its results with the real world, they demonstrate the notion behind it. The results of the experiment demonstrate that the framework outperforms other methods evaluated on the same data in detecting internal information leakage events, with an AUC-ROC value of 0.99. The suggested framework offers practical approaches to deal with potential bias and class imbalance concerns in order to design a system that effectively detects insider data leaking.
Roldán et al. [17] proposed an approach that uses technologies such as augmented reality and process mining to teach workers in assembly operations. Firstly, skilled employees perform assembly according to their knowledge in a fully immersive environment. The next step is to use process mining methods to extract assembly models from the event logs. Lastly, to learn what the expert employees incorporated into the framework, learner employees use an improved immersive display with suggestions. Construction block experiments were designed as a toy example, and studies on a group of participants have been conducted. The outcomes demonstrate the suggested education system's competitiveness against more traditional options. It is based on process mining and mixed reality. In terms of mental effort, vision, learning, outcomes, and performance, user ratings are also superior.
Helm et al. [18] reviewed 38 process mining studies related to health care reported from 2016 to 2018, discussing the instruments, methods, and methodologies used as well as specifics on how the log data were found to be medically significant. Utilising the common clinical coding schemes SNOMED CT and ICD-10, the researchers then connected the diagnostic characteristics of the patient encounter setting, clinical speciality, and diagnosis of illness. The potential benefits of utilising a standardised method for categorising medical terms and event log data using common clinical codes are also highlighted.
Weinzierl et al. [19] examined predictive business process monitoring (PBPM) strategies that attempt to forecast potential process behaviours while the procedure is being executed. Methods for predicting the next event in particular hold considerable promise for enhancing practical company processes. Many of these methods use deep neural networks (DNNs) and take into account data pertaining to the environment where the operation is occurring to provide recommendations that tend to be more reliable. Nevertheless, an in-depth analysis of such methods is lacking in the PBPM literature, making it difficult for academics and industry professionals to decide which approach is appropriate for a particular event log. To address this issue, they statistically assess the prediction performance of three potential DNN architectures using five tried-and-true encoding methods and five context-rich real-world event logs. They offer four conclusions that might aid researchers and practitioners in developing fresh PBPM methods for anticipating upcoming actions.
The literature review showcases several developments in machine learning and process mining applications across a range of industries. However, there is a clear research vacuum when it comes to combining these technologies to improve cyber security, especially for insider threat defence and detection. Research on process mining, efficiency assessment, healthcare procedures, and predictive business process monitoring has been greatly aided by studies by Richetti et al. [15], Al-Shehari et al. [16], Roldán et al. [17], Helm et al. [18], and Weinzierl et al. [19]. However, none of these studies specifically address the crucial problem of using machine learning for cyber security in the context of big data management and process mining. Novel encoding strategies, predictive modelling, and anomaly detection approaches specifically designed for cyber security in massively distributed data settings are not well explored in the literature. This gap highlights the necessity for a thorough investigation that carefully incorporates machine learning techniques into cyber security frameworks, with an emphasis on the particular difficulties presented by big data and process mining scenarios.
III. PROBLEM STATEMENT
The problem addressed by this work is the limitations of existing techniques for business process deviance mining. These techniques are based on the extraction of patterns from event logs but have limited expressiveness, particularly in capturing complex relationships in highly flexible processes. Previous research applied Treatment Learning, a novel approach in the context of machine learning, to identify the characteristics of a process that have the most significant impact on its performance, comparing the proposed encoding technique with current process encoding techniques through a series of experiments using publicly available event logs from real-life processes [15]. By incorporating machine learning approaches to strengthen cyber-attack detection and defence mechanisms, particularly within the fields of big data management and process mining, this research seeks to expand the breadth of cyber security while boosting the effectiveness and resilience of digital systems.
IV. PROCESS DISCOVERY AND DECLARATIVE PROCESS MODELLING
Most process mining methods produce conventional imperative process models. These methods work effectively for structured processes, since there are not many additional ways an operation may be carried out. Although many of these approaches are capable of handling event logs from flexible or unstructured processes, declarative process modelling has been suggested as a way to achieve a better equilibrium between flexibility and guiding support for these types of models. Because expressive modelling is relevant to logs from dynamic or unstructured processes, the potential of mining declarative models has also emerged. Potential bottlenecks may arise in resource-intensive tasks such as model training and feature extraction, requiring adequate computational resources and optimization strategies. To mitigate these challenges, we implement techniques such as data partitioning, caching, and resource allocation optimization to ensure efficient utilization of computational resources and maintain scalability as data volumes increase.
Declare continues to be the most commonly employed language in studies on declarative modelling and mining, although it has seen very little application in business. This is because it is versatile and particularly suited for use in extremely volatile procedures, which are characterised by extreme complexity and variety. The extension enables associations among actions taken in knowledge-intensive processes (KiPs) to be described using domain constraints as opposed to sequential ordering. Additionally, it enables occurrences in a KiP to signal chronological ties, behavioural consistency restrictions, or choice-of-action relationships in its instances by using these extra notions. Compliance with laws and regulations controlling the application of machine learning for cyber security in various sectors and regions is given top priority in our approach. In addition, we keep clear records of all procedures, guaranteeing responsibility and traceability for our compliance initiatives. On the other hand, difficulties could emerge because regulations are dynamic and have different meanings in different places. The proposed models are designed to complement human-driven cyber security processes by providing automated support in threat detection and response. We take a collaborative approach in which the proposed models serve as decision-support tools, aiding human analysts in identifying and prioritizing threats more efficiently. The models leverage time-series analysis methods to identify and respond to cyclical or recurring patterns in threat behaviour. Through this approach, the models demonstrate the ability to adapt to changes in threat behaviour over different time intervals, ensuring robust and effective threat detection capabilities in dynamic cyber security environments.
Deviance mining with machine learning and Declare-based encoding of event logs faces challenges in rapidly evolving environments like cyber security, where threats change constantly, because machine learning models assume stationary data distributions. This means they struggle to adapt to new patterns and trends. To overcome this, techniques like online learning algorithms and anomaly detection are crucial. Online learning allows models to update in real time, while anomaly detection helps identify unusual behaviour. By employing these adaptive methods, machine learning models can better keep pace with evolving threats and enhance cyber security defences. Treatments are the smallest collections of rules that may be used in treatment learning to discriminate between circumstances that include numerous highly weighted classes and scenarios with few strongly weighted categories. Treatment learning, in contrast to association-rule mining, specifies a preferred class value that serves as a benchmark for weighing the various class values, allowing it to highlight treatments associated with strong or poor performance as determined by a particular class attribute in a dataset. Diverse datasets play a pivotal role in enhancing the generalizability of machine learning models. Models trained on diverse datasets are inherently more adaptable to variations across industries, company sizes, and geographic locations. This adaptability broadens the applicability of the models, ensuring they can effectively perform across a variety of contexts. By exposing the model to a wide range of scenarios and data distributions, diverse datasets enable the model to learn robust representations and patterns that transcend specific instances, thereby enhancing its ability to generalize and make accurate predictions in real-world scenarios. Fig. 1 shows the steps to perform dataset encoding and machine learning analysis.
In the next section, a unique rules-based technique for analysing business process footprints is introduced. By using a treatment learner to find the interesting rules that have the greatest impact on the outcomes of business process cases, the idea builds on previous methodologies centred around association rule mining and contrast set mining. Usually, performance indicators may be used to track process outputs. As a result, trace-level performance indicators can be seen as trace-level characteristics that may be utilised as class variables in machine learning applications. It is crucial to bear in mind that process performance indicators (PPIs) can be present at additional levels of abstraction in relation to business procedures, such as at the activity level, at which a particular task might be tracked by a PPI without consideration of the outcome of every other task carried out in the same procedure. At the leadership level, it can be more important to keep track of a business's overall efficacy, which is often accomplished by aggregating the findings of a trace-level PPI. For instance, management interested in monitoring a service level agreement that requires at least 95% of problems to be resolved within 24 hours would measure the number of incidents resolved in less than 24 hours. This PPI classifies every process trace according to its completion time; it is an aggregation of a trace-level PPI. Trace-level PPIs are of interest for the purposes of this work.
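As an illustration of the distinction drawn above, the following minimal Python sketch computes the trace-level SLA indicator and its management-level aggregation; the log structure, case identifiers, and timestamps are hypothetical.

from datetime import datetime, timedelta

# Each trace is (case_id, start, end); field names are illustrative only.
traces = [
    ("c1", datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 20)),
    ("c2", datetime(2024, 1, 2, 9), datetime(2024, 1, 4, 9)),
    ("c3", datetime(2024, 1, 3, 9), datetime(2024, 1, 3, 15)),
]

sla = timedelta(hours=24)
# Trace-level PPI: 1 if the case finished within the SLA, else 0.
labels = {case: int(end - start <= sla) for case, start, end in traces}
# Management-level PPI: aggregate of the trace-level indicator.
share_on_time = sum(labels.values()) / len(labels)
print(labels, f"on-time share = {share_on_time:.0%}, target >= 95%")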
The idea behind treatment learning is this: given a realisable choice, demonstrating the disparities among possibilities can prove more informative than presenting every single event. As opposed to just listing the details of the present-day scenario, a treatment learner quickly determines the critical aspects that most affect that circumstance. A company's event log can be transformed into a dataset for the purposes of machine learning. Following that, an encoding strategy is described that mines an archive of process traces for features using declarative process mining. Declare is expressive enough to record both simple counts of each task and complex time-related interactions between pairs of activities. The idea is to use a single, condensed syntax to record both basic and complicated patterns that may be present in an event log. To the best of our knowledge, research has not yet investigated a declarative-language oriented encoding strategy to build vector representations of process occurrences.
Table I is a non-exhaustive illustration of features that may be retrieved from the occurrences of events within the multiple activity traces that together make up an event log P′. The illustration uses known encoding methods including bag-of-activities (boa), bigram, maximal repeat sequence (mrs), and maximal repeat alphabet (mra). Such encoding techniques track the frequency with which each encoding pattern is present in the process traces. The option of directly extracting trace-level characteristics from the attribute set attr_h of a process trace and adding them to the collection of instance properties attr_j is also taken into account by our methodology. It is feasible to convert the event log P′ to a dataset after extracting properties from a set of activity traces H′ by mapping each h in H′ to a dataset instance q, so that each q = h may then be encoded using an encoding approach, such as bag-of-activities. Using the boa technique, the distinct activities of the aforementioned traces, a, b, and c, were identified as features. It is therefore feasible to create a dataset that includes the collection of all event-level and trace-level characteristics by also taking into account the existing trace-level characteristics, et and pc. In this manner, the procedure's control-flow and data properties may be examined together. In order to identify the element that serves as the foundation for verifying deviant behaviour, the name of a class attribute (name_c) has to be identified in the dataset.
term " name c " must be used to identify a trace-level performance marker that is relevant for examination.Falsevalued (unsuccessful) footprints are regarded as aberrant instances in our scenario since the effective completion characteristic is specified as a class variable, name c = pc.
A. Mining Declare Constraints as Trace-level Attributes
Compared to the currently used sequence encoding methods, a fresh method is also suggested that employs declarative process mining to extract features from event traces. The Declare language and its constraint templates, which offer the primary relation and existence constraint forms, are taken into account. The semantics of Declare constraints are considered using the standard patterns included in both the Unconstrained Miner and MINERful++ declarative mining algorithms, with the goal of executing the discovery of constraints at the trace level. Vacuously fulfilled constraints must be avoided, since genuine pattern fulfilments are of interest. To eliminate vacuously met constraints, a labelled collection of support automata for vacuity detection is suggested. In the search process, the corresponding regular expressions used by the vacuity-detection support automata have been taken into account. Declarative mining methods now in use seek to identify a collection of constraint patterns that describe the behaviour of a whole event log as one process model. To determine whether a constraint template is valid and meaningful, these techniques may take into account several threshold characteristics at the event log level, such as support, confidence, and interest factor. By examining the fulfilment of a set of Declare constraints for every trace, Declare constraints are employed here as trace-level features. Similar to the previously discussed encoding methodologies, these Declare-based attributes for each process trace may be used to create a collection of examples.
Declarative process mining approaches are used to identify whether a Declare constraint is satisfied in every process trace h in H, given an event log P. A number of Declare constraint templates have to be established before mining can be done correctly. A Declare constraint template collection may represent all of them or a portion of them in this case. It then needs to be paired with the collection of unique events recorded in the event log. This event set provides the parameters that the Declare constraint templates require in order to function, and this combination produces the collection of characteristics yielded by this encoding technique. By creating new regular expressions, the list of default constraint templates may be expanded to include additional constraints as appropriate. The label of the constraint template, which symbolises an abstraction of a constraint (originally stated in LTL or via a regular expression), plus a group of parameters are combined to form a Declare constraint d, where d = name({args}). The total number of parameters differs based on the pattern; for instance, an init constraint template only requires a single argument, since it applies to the event that initiates the trace's execution, but the coexistence constraint pattern requires two inputs, because it applies whenever two events occur in the same process trace.
Considering the event log instance P′ from earlier, which includes a collection of three separate events (a, b, and c), three Declare constraints init(a), init(b), and init(c) can be produced from a Declare constraint template of class init. Each constraint, represented by ''1'' for a fulfilment or ''0'' otherwise, makes up a trace-level attribute-value pairing for the sake of encoding, taking a value matching the Declare constraint's fulfilment. The exactly_n template, which counts an exact number n of instances of an event within the entire trace, is the lone alternative. Activity traces containing Declare-based attribute-value pairings can then be converted into dataset objects in a manner similar to that shown in the table for boa encoding. A typical dataset is shown in Table III and is made up of objects with characteristics that correspond to instances of Declare constraints obtained from the event log P′. Declare-based characteristics may represent timing connections among actions in a manner that sequence-based and set-based encoding methods cannot, in contrast with other current encoding methods. For instance, the boa, bigram, mra, and mrs methods do not have an equivalent for the response(b,c) constraint. Customised constraints for event sequencing representations of features may nevertheless be defined. Declare also offers a number of predefined templates that can handle a variety of timing connections between process events, which is a further advantage. Concerning methodology, each of the four rules may be represented with the existing Declare constraint components.
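A minimal Python sketch of this Declare-based encoding is given below; the constraint checkers cover only three templates (init, exactly_n, and response), the traces are illustrative, and, unlike the approach described above, the sketch does not filter out vacuous fulfilments.

def init(trace, a):
    return int(bool(trace) and trace[0] == a)

def exactly_n(trace, a, n):
    return int(trace.count(a) == n)

def response(trace, a, b):
    # Every occurrence of a must eventually be followed by b.
    # Note: if a never occurs, this is vacuously satisfied; the paper's
    # approach uses vacuity-detection automata to exclude such cases.
    return int(all(b in trace[i + 1:] for i, e in enumerate(trace) if e == a))

traces = {"h1": ["a", "b", "c"], "h2": ["b", "a", "a"]}
templates = {
    "init(a)": lambda t: init(t, "a"),
    "exactly_1(b)": lambda t: exactly_n(t, "b", 1),
    "response(b,c)": lambda t: response(t, "b", "c"),
}
for case, trace in traces.items():
    print(case, {name: f(trace) for name, f in templates.items()})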
B. Machine Learning Evaluation
1) Standardized Streamflow Index (SSI): As with meteorological drought indicators, the majority of investigations have used standardised criteria for assessing hydrological drought. Two significant standardised indices are the standardised streamflow index and the standardised runoff index, both of which have an analogous theoretical foundation. The sole difference in SSI computations is that surface runoff data are utilised in place of precipitation data; the index likewise assumes the data follow a suitable probability distribution. As a result, the flow values for each month are fitted separately before the SSI is computed.
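A minimal sketch of this computation, under the assumption of a gamma distribution fitted per calendar month to hypothetical flow data (the paper does not specify its fitting procedure), might look as follows in Python with SciPy.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years, months = 30, 12
flow = rng.gamma(shape=2.0, scale=5.0, size=(years, months))  # synthetic flows

ssi = np.empty_like(flow)
for m in range(months):
    x = flow[:, m]
    a, loc, scale = stats.gamma.fit(x, floc=0)        # per-month fit
    cdf = stats.gamma.cdf(x, a, loc=loc, scale=scale)
    # Map the fitted CDF to standard-normal quantiles to obtain the index.
    ssi[:, m] = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))
print(ssi[:3].round(2))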
2) Gene Expression Programming (GEP): In the Gene Expression Programming (GEP) algorithm, chromosomes are evolved using genetic operators over populations of individuals, which are selected according to fitness. The GEP method's initial step is to create an initial collection of solutions. This step can be completed by a random procedure or by using some knowledge about the problem. The chromosome structures are then visualised as an expression tree and evaluated using a fitness function. In general, processing a number of target cases, also known as fitness cases, allows for the evaluation of the fitness function. The search process ends and the most effective solution is returned once an answer reaches an acceptable standard or a certain number of iterations have passed. If the most favourable scenario cannot be found, the most suitable response from the latest generation is retained, and the remaining options are left to be selected from. The fittest individuals are more likely to have offspring, based on this selection. Every step is repeated for many generations, and it is anticipated that the population's quality will generally increase as new generations are born. GEP chooses candidates using the well-known roulette wheel approach. In contrast to genetic algorithms and genetic programming, GEP uses a number of genetic operators to reproduce modified individuals. Replication is a procedure designed to preserve a few of the fittest members of one generation into the following one. A mutation operator's objective is to insert arbitrary changes into an individual chromosome; to avoid producing structurally invalid individuals, this operator performs some corrective procedures. Like a genetic algorithm, GEP employs one-point and two-point recombination. The two-point variant is a little more interesting because it can largely switch on and off the chromosomal regions that are not encoded. Additionally, GEP performs a different kind of recombination known as gene recombination, in which entire genes are exchanged. To create two new children, this operator randomly chooses genes on both parent chromosomes that are located in the same position.
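The following toy Python sketch illustrates two of the operators described above, roulette-wheel selection and one-point recombination, on fixed-length gene strings; the gene alphabet and fitness function are placeholders, not those of an actual GEP implementation.

import random

random.seed(0)
population = ["".join(random.choice("+-*ab") for _ in range(8)) for _ in range(6)]

def fitness(gene):
    return gene.count("a") + 1          # toy fitness, for illustration only

def roulette(pop):
    # Fitness-proportional (roulette-wheel) selection of one parent.
    weights = [fitness(g) for g in pop]
    return random.choices(pop, weights=weights, k=1)[0]

def one_point(p1, p2):
    # One-point recombination: swap tails after a random cut point.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

parent1, parent2 = roulette(population), roulette(population)
child1, child2 = one_point(parent1, parent2)
print(parent1, parent2, "->", child1, child2)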
C. Support Vector Regression
Over the following decades, the Support Vector Machine (SVM) evolved into a linear classification algorithm using the optimal hyperplane concept. This approach utilises statistical learning theory. Additionally, kernel methods were used to create nonlinear classifiers. SVM's classification algorithm serves to categorise data into multiple classes, while its regression variant is applied to solve prediction problems. Regression on the fitted data produces a hyperplane. A given point's deviation from the hyperplane reveals the error at that point. The most commonly advised technique for regression analysis is the least squares approach. However, using a least-squares estimator on data containing outliers may not be entirely rational, which would lead to the analysis performing poorly. In order to avoid poor performance that is overly sensitive to minute modifications of the model, a robust estimator should be used. As mentioned, SVM is built upon the principle of risk minimisation, a framework generated by statistical learning theory. To employ SVM in regression problems, an ε-insensitive error function is used that ignores errors within a distance ε of the real values. This function, defined in Eq. (1) and Eq. (2), is L_ε(e) = 0 if |e| ≤ ε, and L_ε(e) = |e| − ε otherwise; within the ε threshold, this error function does not take errors into account.
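A minimal sketch of ε-insensitive support vector regression on synthetic data, using scikit-learn's SVR rather than the authors' own implementation, might look as follows.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

# epsilon controls the width of the error-free tube around the fitted function.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))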
D. M5 Model Tree
This technique is an amalgam of machine learning and data mining techniques. Data mining techniques identify several suitable frameworks before extracting information from a pool of values. Data mining techniques differ from statistical approaches in that they were established for huge datasets with many variables, whereas statistical approaches were created for smaller datasets with fewer variables. Among the most popular data mining approaches, decision-tree-based methods use input data to forecast or categorise target qualities as an output in the form of an equation with a tree structure. The M5 model tree is a decision structure that may be utilised for forecasting continuous quantitative attributes. Its branches are representations of regression functions, and it has lately sparked substantial development in classification and prediction. When contrasted with other methods, the tree algorithm's output has higher precision and is simpler to replicate and comprehend. A decision tree is composed of four components: the root, the branches, the nodes, and the leaves. A rectangular shape denotes each node, while the connections between them are shown as branches. The decision tree usually goes from left to right or from top to bottom, with the root (first node) at the very top to make it easier to create. The leaf denotes the conclusion of a series of events. For the purpose of minimising the sum of the squared deviations from the mean for each node, splitting is carried out on one of the predictor variables. Utilising the splitting criterion is the first step in creating a tree model. The M5 algorithm's splitting criterion relies on the standard deviation of the values reaching each node, corresponding to each class or subcategory. After checking every attribute at that node, the splitting criterion determines the expected error for that attribute and selects the one with the smallest expected error. In most circumstances, the predictive error is determined by assessing how well the desired outcomes for unseen cases are predicted. The standard deviation reduction (SDR) is given in Eq. (3).
In Eq. (3), SDR = sd(H) − Σi (|Hi| / |H|) × sd(Hi), where H denotes the set of examples reaching a node, Hi is the subset of examples corresponding to the ith outcome of a candidate test, and sd stands for standard deviation. The process of division is repeated at every node until reaching the final cluster (the leaf). When it reaches the leaves, the sum of the squared deviations from the mean is virtually zero. The consequence is the growth of a huge tree. With numerous limbs and nodes, it is difficult to operate with such a large tree; as a result, undesirable branches must be removed to create an optimal and effective tree. There are two ways to prune: (1) while the tree is growing, and (2) trimming after the tree has fully grown. The second strategy begins by forming the largest possible tree before beginning the pruning manipulation, unlike the first method, which prevents the tree from growing further branches. Choosing the best branch depends on reducing the prediction error.
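A minimal Python sketch of evaluating the SDR criterion of Eq. (3) for a single candidate split is shown below; the target values, feature, and threshold are illustrative.

import numpy as np

def sdr(parent, children):
    # Standard deviation reduction: sd(H) - sum(|H_i|/|H| * sd(H_i)).
    n = len(parent)
    return np.std(parent) - sum(len(c) / n * np.std(c) for c in children)

targets = np.array([3.1, 2.9, 3.0, 7.8, 8.1, 8.0])
feature = np.array([0.2, 0.3, 0.1, 0.9, 1.1, 1.0])

threshold = 0.5                      # candidate test: feature <= 0.5
left, right = targets[feature <= threshold], targets[feature > threshold]
print("SDR for this split:", round(sdr(targets, [left, right]), 3))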
E. Evaluation Parameters
The root mean square error (RMSE) in Eq. (4), mean absolute error (MAE) in Eq. (5), correlation coefficient (CC) in Eq. (6), and relative absolute error (RAE) in Eq. (7) were used to analyse the error between the predicted and observed data: RMSE = sqrt((1/n) Σ (xi − yi)²), MAE = (1/n) Σ |xi − yi|, CC is the Pearson correlation between x and y, and RAE = Σ |xi − yi| / Σ |yi − ȳ|, where n represents the total number of assessments and xi, yi are the predicted and observed values of the SSI. CC measures the correlation between measured and predicted values; positive values indicate a direct correlation, and negative values indicate an inverse relationship. Additionally, the RMSE and MAE values are errors, so smaller values suggest smaller modelling mistakes.
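For reference, these four metrics can be computed from paired predicted and observed values as in the following Python sketch (the values shown are hypothetical).

import numpy as np

x = np.array([0.1, -0.4, 0.8, 1.2, -0.9])   # predicted SSI (hypothetical)
y = np.array([0.0, -0.5, 0.9, 1.0, -1.1])   # observed SSI (hypothetical)

rmse = np.sqrt(np.mean((x - y) ** 2))                  # Eq. (4)
mae = np.mean(np.abs(x - y))                           # Eq. (5)
cc = np.corrcoef(x, y)[0, 1]                           # Eq. (6), Pearson r
rae = np.sum(np.abs(x - y)) / np.sum(np.abs(y - y.mean()))  # Eq. (7)
print(f"RMSE={rmse:.3f} MAE={mae:.3f} CC={cc:.3f} RAE={rae:.3f}")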
V. RESULTS AND DISCUSSION
The effectiveness of the three models, SVM, GEP, and M5, in predicting the Standardized Streamflow Index using the SPI and SPEI indices at Navrood station across six time lags (one month to six months) is examined in the current work. A 48-month scale was chosen for investigation in this study out of the several scales for predicting SSI, since it had a stronger correlation and was predicted well by the presented models. The statistical characteristics of the drought indices used in the research region are shown in Table IV. Additionally, based on Pearson's correlation and cross-correlation, it was determined that the hydrological drought index was predicted with a smaller error, even though the drought index depends more on climatic circumstances. A comparison of measures, namely standard deviations, for the various approaches is shown in Table V. The Support Vector Machine (SVM) shows performance variability with a standard deviation of 0.9. Two instances of the Multilayer Perceptron (MLP) technique are presented; in both cases, the standard deviation is 0.6, indicating higher consistency in performance as compared to SVM. Interestingly, the suggested technique, which makes use of Support Vector Regression (SVR), has a standard deviation of 0.9, which is consistent with the variability shown by SVM. These measures provide insights into the stability and reliability of each method, with lower standard deviations generally reflecting more consistent performance.
VI. CONCLUSION
The research contributes significantly to the domain of commercial operation divergence analysis, offering valuable insights into the identification of variations in commercial systems beyond anticipated outcomes. By delving into the characteristics of procedure executions, the study illuminates behaviours impacting process efficiency, encompassing both detrimental and optimal aspects. Success in this context is gauged through domain-specific efficiency metrics, encompassing cost-effectiveness, time optimization, and resource utilization. Users may have concerns regarding the dependability and efficacy of machine learning models in detecting and mitigating threats in applications like cyber security or automated threat detection. To address this, it is critical to fully assess the models' performance using real-world data and stringent testing protocols. Moreover, adding human supervision to the machine learning procedure can offer another level of security. Clearly defined procedures for human evaluation and intervention, particularly in crucial decision-making roles, can guarantee accountability and reduce the hazards connected with automated systems. The paper introduces an innovative encoding strategy that utilizes Declare constraint templates, enabling more expressive treatments through vector-based representations of procedure scenarios. Additionally, the research pioneers the application of machine learning, incorporating algorithms like the Standardized Streamflow Index, Gene Expression Programming, Support Vector Regression, and the M5 Model Tree, within the realm of deviance mining. This approach effectively identifies the aspects of a procedure that significantly influence its efficiency, surpassing traditional trend mining methods in handling intricate linkages within highly variable systems. The experimental outcomes underscore the efficacy of machine learning when integrated with the proposed Declare-based coding. Analysing event logs through this approach yields pertinent and insightful conclusions, offering a comprehensive understanding of process behaviour and performance. While acknowledging these contributions, it is crucial to recognize the limitations of the current study, such as the specific contextual constraints and the need for further validation across diverse industry scenarios. In future work, by iteratively validating the model's performance across various data partitions, cross-validation can provide a more robust estimate of its generalization ability, helping to identify and address overfitting issues before deployment in real-world scenarios. This research sets the stage for practical and effective tools in process analysis, empowering organizations to make informed, data-driven decisions for optimizing efficiency, reducing costs, and enhancing overall performance.
Fig. 1. Steps to perform dataset encoding and machine learning analysis.
Fig. 2 shows the Root Mean Square Error (RMSE) values for three machine learning algorithms: GP, M5, and SVR. Each row represents a different evaluation scenario or experiment. The values indicate the accuracy of the algorithms, with lower RMSE values indicating better accuracy. Based on the figure, GP consistently has the lowest RMSE values across different scenarios, suggesting it performs better than M5 and SVR in terms of accuracy.
Fig. 4. RAE model. Fig. 4 shows the Relative Absolute Error (RAE) values for three machine learning algorithms: GP, M5, and SVR. Each row represents a different evaluation scenario. RAE is a metric used to measure the relative difference between the predicted and actual values, indicating the performance of the algorithms in relation to the magnitude of the target variable. Lower RAE values indicate better accuracy. Based on the figure, GP generally has lower RAE values across different scenarios, suggesting it performs better in terms of accuracy compared to M5 and SVR in relation to the magnitude of the target variable.
TABLE III. EVENT LOG USING DECLARE ENCODING
TABLE IV. STATISTICAL CHARACTERISTICS OF THE UTILIZED DATA
Error-Correction Code Proof-of-Work on Ethereum
The error-correction code proof-of-work (ECCPoW) algorithm is based on a low-density parity-check (LDPC) code. ECCPoW can impede the advent of mining application-specific integrated circuits (ASICs) with its time-varying puzzle generation capability. Previous research studies on the ECCPoW algorithm have presented its theory and implementation on Bitcoin. In this study, we have not only designed ECCPoW for Ethereum, called ETH-ECC, but have also implemented, simulated, and validated it. In the implementation, we explain how the ECCPoW algorithm has been integrated into Ethereum 1.0 as a new consensus algorithm. Furthermore, we have devised and implemented a new method for controlling the difficulty level in ETH-ECC. In the simulation, we have tested the performance of ETH-ECC using a large number of node tests and demonstrated that ECCPoW Ethereum works well with automatic difficulty-level change capability in real-world experimental settings. In addition, we discuss how stable the block generation time (BGT) of ETH-ECC is. Specifically, one key issue we investigate is the finiteness of the mean of ETH-ECC BGT. Owing to the time-varying cryptographic puzzle generation system in the ECCPoW algorithm, BGT in the algorithm may exhibit a long-tailed distribution. Thus, simulation tests have been performed to determine whether the BGT distribution is heavy-tailed or instead has a finite mean. If the distribution is heavy-tailed, stable transaction confirmation cannot be guaranteed. In the validation, we present statistical analysis results based on the two-sample Anderson–Darling test and discuss how the BGT distribution follows an exponential distribution, which has a finite mean. Our implementation is available for download at https://github.com/cryptoecc/ETH-ECC.
I. INTRODUCTION
Blockchain is a peer-to-peer (P2P) network that consists of trustless nodes. In a reliable P2P network, no peers would intentionally send wrong information to others. In contrast, in an unreliable P2P network (e.g., a group of trustless nodes), the possibility that some peers may send false information to others should be considered. For example, a node may spread wrong or forged information to others. To address these issues in an unreliable network, Nakamoto proposed the ideas of blocks and the chaining of blocks with a novel consensus algorithm [1].
In a blockchain, one of the peers propagates a new block containing transactions to the other peers. Peers validate the received block and link it to the previous block when there is no problem with it. A consensus algorithm accomplishes this process. If one of the peers sends false information to the others, such information is detected by the consensus algorithm as long as there is no collusion among the peers. A generated block contains information about the previous blocks; thus, if someone wants to change one block in a chain, all blocks that follow the changed block must also change. Therefore, unless the network is centralized within a particular group, sending forged information about previous blocks to new peers is impossible, and to prevent collusion, the unreliable network should avoid centralization.
Nakamoto proposed a proof-of-work (PoW) system as a consensus algorithm. In the PoW system, peers repeatedly work to solve a cryptographic puzzle using a hash function (e.g., SHA256 [1], Keccak [2]). When a peer successfully solves the puzzle, the peer generates a block. Additionally, the node receives an incentive as a reward for the work done. In an ideal PoW system, anyone can join the work and earn incentives in proportion to the work they complete. However, as the price of the reward has increased, attempts have been made to centralize the network to monopolize incentives. A minimal sketch of this hash-puzzle search is given below.
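The following Python sketch illustrates the generic target-threshold puzzle loop described above. The header bytes, nonce width, and toy difficulty are illustrative assumptions, not details of any specific client.

```python
import hashlib

def mine(header: bytes, difficulty: int) -> int:
    """Brute-force a nonce until the hash falls below 2**256 // difficulty."""
    target = 2**256 // difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # this nonce solves the puzzle
        nonce += 1

print(mine(b"example-header", difficulty=100_000))
```

The expected number of attempts grows linearly with the difficulty, which is what ties difficulty to block generation time.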
Centralization is a phenomenon occurring in PoW-based blockchain networks. In blockchains utilizing PoW as a consensus algorithm, an oligarchy of miners who possess an overwhelming portion of the computation resources can monopolize the chance to generate blocks. Such centralization negatively impacts the credibility of a blockchain. For example, in a centralized network, a group of dominant nodes can selectively filter out transactions belonging to others for their own benefit. As far as new nodes are concerned, it will be difficult for them to trust and join the network for fear of possible unfair treatment [3], [4].
The emergence of application-specific integrated circuits (ASICs) has accelerated the centralization of PoW. As more nodes use ASICs to generate blocks, block generation requires more computation. Thus, it has become hard to generate blocks using general-purpose units such as a central processing unit (CPU) or a graphics processing unit (GPU). As a result, a few groups equipped with powerful ASICs have surfaced and centralized the blockchain networks. To avoid centralization, researchers have proposed ASIC-resistant PoW (e.g., Ethash [2], X11 [12], RandomX [24]) and alternative consensus algorithms (e.g., proof-of-stake, delegated proof-of-stake, or Byzantine fault tolerance [25]). Networks based on the alternative algorithms have shown weaker decentralization than networks based on ASIC-resistant PoW [25]. Specifically, in the alternative algorithms, only a limited set of participants can generate blocks, whereas ASIC-resistant PoW places no limit on participants. Thus, ASIC-resistant PoW yields a better-decentralized network than the alternative algorithms.
For an ASIC-resistant PoW, an error-correction code based proof-of-work (ECCPoW) algorithm was proposed [6], [7]. In ECCPoW algorithms, the hash value of the previous block generates a varying parity-check matrix (PCM) for error correction. This varying PCM works as the cryptographic puzzle in ECCPoW. These time-varying cryptographic puzzles make ECCPoW ASIC-resistant. It is possible to implement an ASIC for one specific cryptographic puzzle; in ECCPoW, however, every newly created puzzle differs from all previously created puzzles. As a result, an ASIC for ECCPoW would have to cover a wide range of cryptographic puzzle generation systems, and such a system would incur huge chip space and cost [10], [11].
In [7], the authors reported that the time-varying puzzle system may generate large block generation times (BGTs), i.e., outliers, for ECCPoW implemented on Bitcoin. If the outliers occur frequently enough, which is what we examine in this paper, the distribution of BGT may be heavy-tailed with a non-finite mean [15], [26]. As a result, the assumption in [6] that BGT has a finite mean needs to be re-examined. Previous works on ECCPoW [6], [7] did not include real-world experiments extensive enough to conclude that BGT has a finite mean. If BGT does not have a finite mean, ECCPoW cannot be used as an Ethereum consensus algorithm. Therefore, in this paper, we aim to study the distribution of BGT of the ECCPoW implemented on Ethereum (ETH-ECC). Our experimental results show that the BGT distribution is not heavy-tailed and has a finite mean.
The contributions of our work are as follows:
• We show how ECCPoW is implemented on Ethereum.
• We present a method to control the difficulty in ETH-ECC and report the results of automatic difficulty change with real-world experiments of ETH-ECC.
• We present a goodness-of-fit result using the Anderson-Darling (AD) test for distribution validation and discuss the necessary condition that the BGT distribution of ETH-ECC follows the exponential distribution.
The remainder of this paper is organized as follows. Section II provides a background on the requirements of an ASIC-resistant PoW. Section III demonstrates the implementation of ETH-ECC. Section IV discusses the formulation of the problem. Section V provides the experimental results of the implementation of ETH-ECC. Finally, Section VI summarizes our work and concludes the paper.
II. Background
We introduce three approaches that can be used to avoid centralization problems in PoW. One is an intentional bottleneck between an arithmetic logic unit (ALU) and memory, which is used by Ethash of Ethereum [2], [5]; it is also termed a memory-hard technique. Another is high complexity of ASIC design, utilized by Dash [12], Raven [13], and our method, ECCPoW. The third is a hybrid of the two; RandomX of Monero utilizes the hybrid approach [24].
A. INTENTIONAL BOTTLENECK
The best-known PoW with an intentional bottleneck is Ethash of Ethereum [2], [5]. This method exploits the difference between the throughput of the ALU and the bandwidth of the memory. If there is a bottleneck between the ALU and memory, it is impossible to fully utilize the throughput of the ALU. Specifically, if a miner must fetch data from memory to generate a block, the number of block generation attempts depends on the memory bandwidth. Ethash uses a directed acyclic graph (DAG), a set of randomly generated data, to create the bottleneck. The DAG is a huge dataset that cannot be stored in cache memory; therefore, it is stored in main memory. To generate a block using Ethash, a miner must mix a part of the DAG stored in memory. Owing to this procedure, the miner cannot avoid the bottleneck imposed by the memory bandwidth. This method was ASIC-resistant for a long time; however, Bitmain released an ASIC for Ethash in 2018.
B. HIGH COMPLEXITY OF ASIC DESIGN
The high complexity of ASIC design forces an ASIC to be less efficient. For example, if an ASIC is less efficient than a general-purpose unit such as a CPU or GPU, there is no reason to design the ASIC. X11 of Dash [12] and X16R of Raven [13] utilize this method. Unlike the PoW of Bitcoin, which uses only one hash function (SHA-256), X11 uses 11 hash functions consecutively: BLAKE, BMW, Grøstl, JH, Keccak, Skein, Luffa, CubeHash, SHAvite-3, SIMD, and ECHO. BLAKE, the first hash function of X11, takes a block header with a nonce as input, and its output becomes the input of the next hash function. Similarly, each subsequent hash function uses the output of the previous one. This procedure is repeated until a result is obtained from the last hash function, ECHO. Using the result of the last hash function, miners determine whether they have found a valid nonce. A sketch of this chained construction is given below.
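The following Python sketch illustrates X11-style chaining. Standard-library hash functions are used as stand-ins, since the actual X11 primitives (BLAKE, BMW, Grøstl, and so on) are not available in hashlib; the chain order and nonce width here are illustrative assumptions.

```python
import hashlib

# Stand-in chain; real X11 uses BLAKE, BMW, Groestl, JH, Keccak, Skein,
# Luffa, CubeHash, SHAvite-3, SIMD, and ECHO in a fixed order.
CHAIN = ["sha256", "sha512", "sha3_256", "sha3_512", "blake2b",
         "blake2s", "sha224", "sha384", "sha3_224", "sha3_384", "sha1"]

def chained_hash(header: bytes, nonce: int) -> bytes:
    data = header + nonce.to_bytes(8, "big")
    for name in CHAIN:              # each stage feeds the next
        data = hashlib.new(name, data).digest()
    return data

print(chained_hash(b"example-header", 0).hex())
```

An ASIC for such a scheme must implement every stage in the chain, which is what drives up the design cost.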
Designing an ASIC for X11 was expensive; therefore, X11 was ASIC-resistant. However, Bitmain released an ASIC for X11 in 2016. A few PoW algorithms extend X11 (e.g., X13, X14, and X15); however, ASICs for these have also been released. X16R of Raven is an extended version of X11 of Dash. In X16R, unlike the previous extensions of X11, the sequence of 16 hash functions is randomly changed. Therefore, it is costly to design an ASIC for X16R. However, T. Black, who designed X16R, mentioned that there is some evidence that ASICs for X16R exist [23]. Our ECCPoW also uses the high-complexity ASIC design method for ASIC resistance. However, unlike the previous algorithms, ECCPoW can render an ASIC powerless even after its release. We explain this in detail in Section III.
C. HYBRID METHODS
RandomX of Monero combines the above two methods. RandomX uses memory-hard techniques for the bottleneck together with random code execution; RandomX is optimized for CPU mining [24]. In [24], the authors mention that it is possible to mine using a field-programmable gate array (FPGA); however, it would be much less efficient than CPU mining. This implies that efficient mining hardware can be built when the cost of developing chip sets is low relative to the mining reward. With the proposed ECCPoW, attempts to develop efficient mining hardware may also be made when the reward-to-cost ratio gets high. However, such attempts can be evaded easily since the parameters of ECCPoW can be easily changed, for example by increasing the length of the code and the code rate. The next section describes the ASIC-resistance characteristic of ECCPoW in more detail.
III. ECCPoW Implemented on Ethereum
In this section, we briefly introduce ECCPoW and present how ECCPoW has been implemented on Ethereum using Fig. 1. In addition, we present how the difficulty level of ETH-ECC can be controlled automatically.
A. OVERVIEW OF THE ECCPoW
In a blockchain employing the PoW consensus algorithm, a node solves cryptographic puzzles to publish a block. For a given puzzle, the node that solves the puzzle first gets the authority to publish a block. For example, in the PoW of Bitcoin, the first node that finds a specific output of the secure hash algorithm (SHA) gets the authority to publish a block. The PoW of Ethereum uses Keccak instead of SHA. The ECCPoW algorithm proposed in [6] is a PoW consensus algorithm that utilizes an error-correction code, built on the low-density parity-check (LDPC) code [8], as a cryptographic puzzle. The ECCPoW algorithm consists of a pseudo-random puzzle generator (PRPG) and an ECC puzzle solver. Fig. 1 presents the flow chart of the ECCPoW algorithm. For every block, the PRPG generates a new pseudo-random LDPC matrix, distinct from all previously generated matrices. Such a pseudo-random LDPC matrix takes the role of issuing an independently announced cryptographic puzzle. The ECC puzzle solver uses the LDPC decoder to solve the announced puzzle. Specifically, to publish a block, a node must iterate over the input header until the LDPC decoder hits a satisfying result, namely an output that is an LDPC codeword (with a certain Hamming weight). In the next subsection, we discuss the ECCPoW implementation on Ethereum with the flow chart presented in Fig. 1.
B. ECCPoW ON ETHEREUM
In this subsection, we present how the error-correction process is applied to ETH-ECC using Fig. 1.
When a parity-check matrix (PCM) H is given, a code c that satisfies

Hc = 0 (1)

is referred to as an LDPC code. The goal of the ECCPoW algorithm is to find an LDPC code c using the PCM H, which is derived by the PRPG, and a hash vector r, which is obtained by the ECC puzzle solver. For the PRPG, we employ the previous hash value; the previous hash value, known as the parent hash in the Ethereum block header, randomly generates a PCM. Specifically, we use Gallager's method to make the random PCM [9], with the previous hash value as the seed of randomness. Thus, the PCM changes every block; because all nodes share the same seed, every node uses the same PCM until a block is generated [6].
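The sketch below illustrates this seeding idea in Python. The matrix size, the column weight, and the use of SHA-256 to condense the parent hash into an integer seed are illustrative assumptions; the actual ETH-ECC construction follows Gallager's method in the Go implementation.

```python
import hashlib
import numpy as np

def generate_pcm(parent_hash: bytes, m: int = 16, n: int = 32, wc: int = 3) -> np.ndarray:
    """Return an m x n binary PCM with wc ones per column, seeded by the parent hash."""
    seed = int.from_bytes(hashlib.sha256(parent_hash).digest()[:8], "big")
    rng = np.random.default_rng(seed)   # same parent hash -> same PCM on every node
    H = np.zeros((m, n), dtype=np.uint8)
    for col in range(n):
        H[rng.choice(m, size=wc, replace=False), col] = 1
    return H
```

Because the seed is deterministic in the parent hash, all honest nodes derive the same puzzle without any extra coordination.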
1) ECC puzzle solver on ECCPoW Ethereum
Here, we introduce the process of the ECC puzzle solver in ETH-ECC. Our definitions are based on [6]. The equations below follow the right-hand side of Fig. 1.

Definition 1. A hash vector r of size n is obtained as follows:

r = Keccak(header, nonce) (2)

where Keccak denotes the hash function applied in Ethash of Ethereum [5]. We generate the nonce in the same way as Ethereum. Furthermore, for a longer hash vector, we concatenate Keccak outputs:

r = [Keccak_1 || Keccak_2 || ... || Keccak_l] (3)

where l = ⌈n/256⌉. For example, when n is less than 256, r gets the same length as n; when n is not less than 256, r concatenates the results of Keccak. This flexible-length hash vector is utilized for ASIC resistance.
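A Python sketch of Definition 1 follows. hashlib.sha3_256 is used as a stand-in for Ethereum's Keccak-256 (the two differ only in padding), and the way successive outputs are indexed is an assumption made for illustration.

```python
import hashlib

def hash_vector(header: bytes, nonce: int, n: int) -> list[int]:
    """Build an n-bit hash vector by concatenating ceil(n/256) hash outputs."""
    bits: list[int] = []
    block = 0
    while len(bits) < n:
        data = header + nonce.to_bytes(8, "big") + block.to_bytes(4, "big")
        digest = hashlib.sha3_256(data).digest()   # stand-in for Keccak-256
        bits.extend((byte >> k) & 1 for byte in digest for k in range(8))
        block += 1
    return bits[:n]   # truncate to exactly n bits
```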
2) Proof-of-Work of the LDPC decoder
The goal of the LDPC decoder is to find a codeword c that satisfies Hc = 0. The definition below describes the decoding presented in Fig. 1.
Definition 2. When a PCM H of size m × n and a hash vector r of size n are given, the LDPC decoder uses H and r as inputs and obtains the output

c = D(H, r) (4)

where D denotes the message-passing decoding algorithm [6], [14]. When c satisfies (1), c is an LDPC code, and the miner has completed LDPC decoding.
A PCM H is randomly generated, but all miners use the same previous hash value, which is derived from the previous block. Therefore, it is impossible to predict the next PCM and mine a block in advance. In the PoW of Ethereum, miners change the nonce when they get a wrong output. We follow the same procedure as Ethereum to obtain a hash value from Keccak with a nonce, but ETH-ECC uses one more step (3) to generate a hash vector for decoding. When the code derived by (4) does not satisfy (1), the miner generates a new nonce and repeats all the steps. A sketch of this loop is given below.
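The following Python sketch ties the pieces together, reusing the generate_pcm and hash_vector sketches above. A plain bit-flipping decoder stands in for the message-passing decoder of [6], [14], and all sizes are toy values chosen for illustration.

```python
import numpy as np

def bit_flip_decode(H: np.ndarray, r: list[int], max_iter: int = 20) -> np.ndarray:
    """Toy stand-in for the message-passing LDPC decoder."""
    c = np.array(r, dtype=np.uint8)
    for _ in range(max_iter):
        syndrome = H.dot(c) % 2
        if not syndrome.any():
            break                                  # Hc = 0: c is a codeword
        unsat = H[syndrome == 1].sum(axis=0)       # unsatisfied checks per bit
        c[np.argmax(unsat)] ^= 1                   # flip the most-implicated bit
    return c

def mine_block(header: bytes, parent_hash: bytes, n: int = 32) -> int:
    H = generate_pcm(parent_hash, m=n // 2, n=n)   # same PCM for all miners
    nonce = 0
    while True:
        c = bit_flip_decode(H, hash_vector(header, nonce, n))
        if not (H.dot(c) % 2).any():
            return nonce                           # decoding succeeded: publish block
        nonce += 1                                 # otherwise, try a new nonce
```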
Our method is based on the high complexity of ASIC design discussed in Section II for an ASIC-resistant PoW. However, unlike the methods mentioned in Section II, ECCPoW generates varying cryptographic puzzles to obtain this complexity. Specifically, ECCPoW utilizes two factors for high complexity: a flexible-length LDPC code c and a randomly generated PCM H. An ASIC could be released for a code of length n; however, extending the length of the code (e.g., to n + 1) makes such an ASIC powerless. Furthermore, in [10], [11], it has been proven that implementing an ASIC that can handle a variable PCM is expensive and occupies a lot of chip space. If developing an ASIC costs more than buying a CPU or GPU, there is no incentive to make the ASIC. In other words, the ECCPoW algorithm is ASIC-resistant because implementing an ASIC that can handle various lengths of changing codes and randomly generated PCMs is very inefficient.
C. DIFFICULTY CONTROL OF ETH-ECC
In this subsection, we demonstrate the implementation of the difficulty control of ETH-ECC. Bitcoin [1] and Ethereum [2] each have their own difficulty control. Ethereum utilizes the number of attempts to generate a block per second, termed the hash rate, and a probability of block generation. Similarly, ETH-ECC utilizes the hash rate, but ETH-ECC considers a probability of decoding success. In [5], the difficulty of Ethereum is defined by the probability of block generation. The difficulty condition is:

n ≤ 2^256 / Diff (6)

where n denotes the result of PoW and Diff denotes the difficulty of Ethereum. Thus, (6) means that when the difficulty increases, the number of values n that satisfy (6) decreases. Furthermore, we can consider the reciprocal of the difficulty to be a probability of block generation. Ethereum utilizes this probability and the hash rate to control the block generation time.
For example, when the probability of block generation is 1/150 and the hash rate is 10 hashes per second, brute force takes 15 seconds on average. If the hash rate increases to, say, 20 hashes per second, Ethereum's method adjusts the probability of block generation to 1/300. Thus, brute force still takes 15 seconds on average even though the hash rate has increased.
For ECCPoW, if we can calculate the probability of decoding success, it is possible to control the difficulty similarly to Ethereum. Thus, it is important to know the probability of successful LDPC decoding for given LDPC parameters. To test the difficulty change using the BGT, we use the pseudo-probability of successful LDPC decoding according to the parameters [7]. Namely, ETH-ECC utilizes the probability of decoding success and the hash rate to control the difficulty. For example, when the probability of decoding success is 1/150 and the hash rate is 10 hashes per second, block generation takes 15 seconds on average, as in the Ethereum example above. However, unlike Ethereum, when the hash rate increases, ETH-ECC tunes the parameters of the LDPC code to adjust the probability of decoding success. By tuning parameters, ECCPoW achieves both difficulty control and ASIC resistance. The parameters can be found at https://github.com/cryptoecc/ETH-ECC/blob/master/consensus/eccpow/LDPCDifficulty_utils.go#L65. In Fig. 2, the difficulty of ETH-ECC is 32.49 KH, indicating that the probability of block generation is 1 in 32,490 hashes.
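The arithmetic behind these examples is the mean of a geometric search: expected BGT = 1 / (p × hash rate). A quick check in Python, where the 2,166 H/s network rate in the last line is a hypothetical value chosen to reproduce a 15 s target at the Fig. 2 difficulty:

```python
def expected_bgt(p_success: float, hash_rate: float) -> float:
    """Mean block generation time for success probability p per attempt."""
    return 1.0 / (p_success * hash_rate)

print(expected_bgt(1 / 150, 10))          # 15.0 s, the example above
print(expected_bgt(1 / 300, 20))          # 15.0 s after retargeting
print(expected_bgt(1 / 32_490, 2_166))    # 15.0 s at the 32.49 KH difficulty of Fig. 2
```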
IV. Problem Formulation
In PoW, nodes may generate blocks at the same time. Bitcoin allows only one block; Ethereum allows up to three blocks to be generated at the same time. However, in Ethereum, only one block can be canonical. A block that cannot become canonical is called an uncle block. In Ethereum, nodes roll back the transactions of uncle blocks [5]. Therefore, the participants in a transaction must wait for block confirmation to prevent a rollback. That is to say, in blockchains utilizing PoW, the BGT must have a finite mean for the block confirmation time to be bounded. For example, if the BGT has a non-finite mean, we cannot determine how long we must wait for the confirmation of transactions. Therefore, to apply the ECCPoW algorithm in a real network, the BGT must have a finite mean.
In [6], the authors define the block generation of the ECCPoW algorithm using a hash rate with a geometric distribution. Namely, they assumed that nodes generate a block within a specific number of block generation attempts. However, if the BGT has a non-finite mean, there is no guarantee that nodes generate a block within a specific number of attempts. In [7], the authors present a practical experiment using the ECCPoW algorithm. However, they only mention that the BGT of ECCPoW is "unstable"; namely, they note that the BGT of ECCPoW has outliers, but they do not analyze the BGT distribution. Thus, in this paper, we present a discussion of the BGT. Specifically, our experimental results present evidence that an exponential distribution describes the distribution of the BGT of ECCPoW.
V. EXPERIMENT ON ETH-ECC
In this section, we conduct experiments using ETH-ECC. First, we simulate the difficulty change using a multinode network. Second, we conduct a goodness-of-fit experiment using the Anderson-Darling (AD) test [16], [17], [18] to discuss the distribution of the BGT with fixed difficulty.
A. SIMULATION OF THE DIFFICULTY CHANGE
We simulate the difficulty change on Amazon Web Services (AWS) using 12 nodes. Two nodes are bootnodes that help connect the nodes, and the other 10 nodes are sealnodes that participate in block generation. In the charts presented in Fig. 2, BLOCK TIME presents the BGT of the last 40 blocks, and DIFFICULTY shows the difficulty of the last 40 generated blocks. BLOCK TIME and DIFFICULTY show that, because of the large standard deviation, some blocks are generated slowly despite the low level of difficulty, as already mentioned in [7]; we discuss the BGT in the next subsection. In the charts presented in Fig. 2, LAST BLOCK shows the BGT of the previous block, and AVG BLOCK TIME shows the average of the BGT. Moreover, AVG NETWORK HASHRATE shows the average hash rate of all miners. BLOCK PROPAGATION shows the block propagation time from a miner who generated a block to the other miners. We used two different regions, Seoul and US East, for the sealnodes. Specifically, 3 of the 10 sealnodes are in the US East region, whereas the rest are in the Seoul region. BLOCK PROPAGATION also shows the percentage of blocks propagated within the corresponding times and indicates that almost all blocks take less than 2 seconds to propagate between the Seoul and US East regions. Block propagation follows the same method as that of Ethereum.
B. STABILITY OF THE BLOCK GENERATION TIME
Fig. 2 demonstrates the need to check whether varying puzzles might produce outliers. Namely, in BLOCK TIME and DIFFICULTY of Fig. 2, slow block generations are observed despite the low level of difficulty. In other words, the observed BGT shows outliers. If the outliers are not controllable, they make the distribution of BGT have a non-finite mean, similar to a heavy-tailed distribution. A non-finite mean cannot guarantee the confirmation of transactions. Thus, to achieve a stable BGT that can guarantee the confirmation of transactions, the BGT must have a finite mean.
We obtain the BGT of ECCPoW Ethereum with a fixed difficulty to observe what kind of finite-mean distribution the BGT follows. Specifically, if the BGT follows an exponential distribution, it has a finite mean; if the BGT follows a heavy-tailed distribution, it has a non-finite mean [15]. Thus, through a goodness-of-fit test, we aim to discuss what type of distribution the BGT follows. For the goodness-of-fit, we set a null hypothesis H0 and an alternative hypothesis HA:

H0: the BGT has the exponential distribution.
HA: the BGT does not have the exponential distribution.

For the goodness-of-fit, we use the AD test [16], [17], [18]. Several tests are available for goodness-of-fit, such as the chi-squared test [19], the Kolmogorov-Smirnov test [20], and the AD test [16]. The chi-squared test has the restrictive assumption that all expected frequencies should be five or more [21], and there is no guarantee that our samples satisfy this assumption. If we collected more samples, the chi-squared test could possibly be used; however, the p-values used to validate the hypotheses are affected by the number of samples, and as the number of samples increases in the chi-squared test, the p-values tend to decrease. Therefore, the chi-squared test is not appropriate for verifying our distributions. The Kolmogorov-Smirnov test does not have an issue with sample-size adequacy, but it is more sensitive to the center of the distribution than to the tail [22]. To cover all possibilities, we must also verify the tail of the distribution. Therefore, we have chosen the AD test [16], which gives more weight to the tail than the Kolmogorov-Smirnov test does. The two-sample AD statistic is

A²_MN = (MN / (M + N)) ∫ [F_M(x) − G_N(x)]² / (H_{M+N}(x)(1 − H_{M+N}(x))) dH_{M+N}(x)

where F_M and G_N are the empirical distribution functions of the two samples and H_{M+N} is that of the pooled sample. A²_MN is standardized to remove the dependence on the number of samples, and this standardized form is utilized to calculate the p-value [17], [18]. The p-value provides evidence for the hypothesis test.
The two-sample AD test is suitable for verifying the hypothesis that two sample sets come from the same population. For the two-sample AD test, as the null hypothesis H0, we set that F_M(x) has the same population as G_N(x), and we set G_N(x) to be an exponential distribution. The p-value is the false-positive probability under the assumption that the null hypothesis is true. A low p-value indicates that a test result provides evidence against the null hypothesis; a large p-value does not. Namely, a large p-value indicates that the observed data are consistent with the null hypothesis. The p-value is determined from the observed sample data. Thus, before observing the data, we first set a threshold significance level (TSL), TSL ∈ [0, 1]. The TSL can be used to determine the critical value: given a TSL and the number of samples used in the AD test, the TSL table in [18] is used to read off a value corresponding to the TSL and the number of samples, called the critical value. If the standardized A²_MN is smaller than the critical value, the p-value is larger than the predefined TSL. In the TSL table of [18], the maximum TSL is 0.25. Thus, when the standardized A²_MN is lower than the critical value corresponding to TSL = 0.25, the p-value is capped at 0.25.
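SciPy ships a k-sample AD test whose reported p-value is likewise capped at 0.25, which makes it a convenient stand-in for the procedure above. The sketch below compares BGT samples with an exponential reference of the same mean; the data here are synthetic placeholders, not the experimental measurements.

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(0)
observed_bgt = rng.exponential(scale=15.0, size=300)   # placeholder for measured BGTs
reference = rng.exponential(scale=observed_bgt.mean(), size=300)

result = anderson_ksamp([observed_bgt, reference])
print(result.statistic, result.significance_level)     # a large p-value supports H0
```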
3) Verification of the AD Test
In this subsection, we aim to verify the two-sample AD testing method. Verification is done under the assumption that the input distributions are a priori known. For example, when the minimum BGT is 10 and the maximum BGT is 20, there are ten intervals, i.e., [10,11], [11,12], ..., [19,20]. Using these intervals, we count the observed frequency of the BGT data. We set F_M(x) using the observed frequency and set G_N(x) using the mean of the BGT data. For the expected frequency of G_N(x) in Table 2, the mean in Fig. 4 is utilized; namely, the mean in Fig. 4 is used as 1/λ for the CDF of the exponential distribution G_N(x) = 1 − e^(−λx). The expected frequency in Table 1 is calculated using the integral of G_N(x) over the corresponding time interval. Because F_M(x) follows G_N(x), we may consider F_M(x) to be the exponential distribution.
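A Python sketch of this frequency construction, assuming unit-width bins and using the sample mean as 1/λ:

```python
import numpy as np

def observed_vs_expected(bgt: np.ndarray):
    """Bin BGTs into unit intervals and compute exponential expected counts."""
    edges = np.arange(np.floor(bgt.min()), np.ceil(bgt.max()) + 1.0)
    observed, _ = np.histogram(bgt, bins=edges)
    lam = 1.0 / bgt.mean()                       # sample mean used as 1/lambda
    cdf = 1.0 - np.exp(-lam * edges)             # G(x) = 1 - exp(-lambda * x)
    expected = len(bgt) * np.diff(cdf)           # integral of the density per bin
    return observed, expected
```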
E. Discussion on AD Test Results
In Fig. 3, we present plots of the observed frequency and the expected frequency. These frequencies are calculated in the manner described in the subsection on the application of the AD test to the BGT distribution. Fig. 3 shows that the observed frequency tends to follow the expected frequency. Also, in Table 3, the observed mean and standard deviation tend to converge as the number of blocks increases. Furthermore, in Table 3, we present the results of the AD test to discuss the hypotheses H0 and HA. These results are similar to result (c) in Table 3, in which we drew samples from the same true distribution and obtained the largest possible p-value. All the p-values in Table 3 are larger than or equal to 0.25 regardless of the number of blocks. In other words, if the null hypothesis were rejected, this decision would be in error with a probability greater than 0.25. Namely, the decision that the BGT distribution F_M(x) does not follow the exponential distribution could only be made with a high decision error.
VI. CONCLUSION
In this paper, we presented the implementation, simulation, and validation of ETH-ECC. In the implementation, we showed how Ethereum applies ECCPoW as a consensus algorithm with a real implementation. In the simulation, we conducted a multinode experiment using AWS EC2. The results revealed that the ECCPoW algorithm with varying difficulty is successfully implemented in the real world. In the validation, we showed statistical results that satisfy the necessary condition that the distribution of the ECCPoW block generation time is exponential.
FIGURE 1. Flow chart of ECCPoW Ethereum. Every miner who generates blocks can make a parity-check matrix using the previous hash value. A generated nonce becomes an input of a hash function. A hash vector used for decoding can be generated using the output of the hash function. If decoding is successful, the block is generated; otherwise, the miner generates a new nonce to make a new hash vector for decoding.
FIGURE 2. This figure shows the results of the simulation of ECCPoW Ethereum on Amazon Web Services (AWS). Twelve nodes are used in the simulation. Two nodes are bootnodes that help connect the nodes, and the other 10 nodes are sealnodes that participate in block generation. We use the m5.xlarge of AWS EC2 for the simulation. In the charts, BLOCK TIME shows the block generation times for the last 40 blocks, and DIFFICULTY shows the difficulty levels of the last 40 blocks. BLOCK PROPAGATION shows the percentage of the block propagation time corresponding to time.
FIGURE 3. The numbers of blocks are 100, 200, 300, and 400. The expected frequency is calculated using the exponential distribution derived from the mean of the observed block generation time.
FIGURE 4. Plot of 300 BGTs when n is 32. The legend at the top right shows the mean, variance, and standard deviation of the BGT.
Analysis of the trends of climate variability over two different eco-regions of Ethiopia
Abstract This study analyzed the trends of precipitation and temperature in two eco-regions, which together represent the whole of Ethiopia in terms of climate variations. The Mann-Kendall test, Sen's slope estimator test and the innovative trend analysis method were used to detect precipitation and temperature trends. Observed historical meteorological data from 1980 to 2016 were used to analyze the trends, and MATLAB software was used for the analysis. The findings of this study showed that the trends of precipitation were statistically significant, with a positive trend in Gondar (β = 1.84) and Bahir Dar (β = 1.80) of the highland eco-regions, and a significant increasing trend was also observed in Negele (β = 23.40) and Gewane (β = 0.10) of the lowland eco-regions. However, the Sekoru (β = 0.01) and Degahabur (β = 4.13) stations showed a significant decreasing trend. As far as trends of temperature are concerned, a statistically significant increasing trend of temperature was observed in Gondar (β = 0.04) and Bahir Dar (β = 0.08), and a sharp significant decreasing trend was observed in Sekoru (β = 0.01) of the highland eco-regions. The lowland eco-regions (Gewane (β = 0.10), Degahabur (β = 0.03) and Negele (β = 0.07)) showed a statistically significant increasing trend. The consistency in precipitation and temperature trends over the two eco-regions of Ethiopia confirms the robustness of the detected changes. Further study should be done with more stations and datasets to conclude whether climate change has occurred. Nevertheless, the findings of this study could provide insights for policy- and decision-makers to take proactive measures for climate change mitigation.
The global seasonal temperature has been showing an increasing trend across the globe. Global land and ocean surface temperature increased by 0.85°C during 1880-2012. Climate change is expected to exacerbate variability in rainfall and temperature in Ethiopia, potentially increasing farmers' exposure to climate-related risks. Ethiopia is classified into two eco-regions, i.e., highland and lowland. Strong topographic contrasts lead to high spatial variability of climatic conditions in these eco-regions. Thus, this study analyzed the trends of precipitation and temperature in the two eco-regions using the Mann-Kendall (MK) test, the innovative trend analysis method (ITAM) and Sen's slope estimator test. It was confirmed that precipitation is mainly caused by cold summers, and thus correlates to a large extent with temperature in the study area. The findings of this study could provide insights for policy- and decision-makers to take proactive measures for climate change mitigation.
Introduction
The global seasonal temperature has been showing an increasing trend across the globe (Jemal et al., 2022). The historical changes in global mean rainfall have been more uncertain, varying by season and region (Asfaw et al., 2017; Jemal et al., 2022). Global land and ocean surface temperature increased by 0.85°C during 1880-2012 (IPCC, 2013). The average global temperature for the years 2015-2019 is the warmest on record (Gemeda et al., 2021), estimated to be 1.1°C (±0.1°C) above pre-industrial levels.
Extreme changes in rainfall and temperature have been observed in different parts of the world (Worku et al., 2018). Climate change is expected to exacerbate variability in rainfall and temperature in Ethiopia, potentially increasing farmers' exposure to climate-related risks (Dereje et al., 2020). Such changes in precipitation and rising temperature are undeniably clear, with the impacts affecting ecosystems and biodiversity (Gong et al., 2018; Wen et al., 2017). Ethiopia is characterized by erratic and unreliable rainfall (Seleshi & Zanke, 2004; Wen et al., 2017). The variation is very high when comparing the highland parts of the country with its lowland regions (Gedefaw et al., 2018). This study investigated the trends of climate variations in these two eco-regions. Strong topographic contrasts lead to high spatial variability of climatic conditions in the highland eco-regions (Dereje et al., 2020). The lowland eco-regions are particularly affected by drought and extreme heat; most deserts are found in the lowland parts of the country. Thus, it is essential to examine climate variability and trends in these eco-regions. Climate change and variability have negative impacts on water resources, agriculture and livelihoods, which leads to food insecurity (Gemeda et al., 2021). Scientific evidence indicates that rainfed agriculture is severely affected by the change in climate (Alemayehu & Bewket, 2016; Seaman et al., 2014). Climate change that leads to extreme events, such as flooding, drought and excessive heat, contributes to increases in global food prices (Tabari et al., 2015; Ureta et al., 2020; Wu et al., 2016). The magnitude of the climate change effect depends on the extent to which the community relies on rainfed agriculture, the level of technology and the institutional capacity to adapt to such situations (Naab et al., 2019).
Concrete information is needed to clearly understand climate variability and trends at a spatiotemporal scale. Such analyses have been used to inform adaptation options for the agriculture and water resources sectors (Bewket, 2014). The trends of rainfall and temperature were analyzed using MK, Sen's slope estimator and ITAM. These methods are widely used to detect trends in climate datasets. Numerous studies have been conducted in Ethiopia to examine climate variability and trends using these methods (Asfaw et al., 2017; Behailu et al., 2014; Bewket, 2007; Gedefaw et al., 2018; Girma et al., 2016; Mekasha et al., 2014; Seleshi & Zanke, 2004; Tekleab et al., 2013; Yenehun et al., 2017).
However, no study has yet examined the trends and variability of climate in these two eco-regions of Ethiopia. Thus, this study aimed to investigate the spatiotemporal characteristics of climate variability and trends over these two eco-regions. The findings of this study will provide insights for water resource managers for future sustainable water resources management.
Description of the study area
Ethiopia lies between 3°-15° north latitude and 33°-48° east longitude (Figure 1). It covers a total area of about 1.12 million km² and consists of 12 river basins (Seleshi & Zanke, 2004). Its total population is estimated at 120 million. It is divided into two eco-regions, namely highlands and lowlands (Wondie et al., 2011). The country is characterized by seasonal and annual variability of rainfall (Seleshi & Zanke, 2004). The mean annual precipitation is 834.97 mm, with 509.93 and 1015.90 mm as the minimum and maximum precipitation, respectively. The mean annual temperature is 29.16°C, and the minimum and maximum temperatures are 27.92°C and 30.35°C, respectively.
Data sources
Daily precipitation and temperature data were collected from 1980 to 2016 from the National Meteorological Services Agency of Ethiopia (Table 1).
Methods
Long-term trends in the observed and adjusted time series data were detected using trend detection methods (Figure 2). Significance levels of 10%, 5% and 1% were used to assess the precipitation and temperature trends.
Mann-Kendall trend detection
The MK test (Kendall, 1975; Mann, 1945) is a non-parametric test used to detect trends in precipitation and temperature time series data using the following equations. MK is insensitive to outliers and does not require the data to be normally distributed. The trend test is applied to data values X_i (i = 1, 2, ..., n−1) and X_j (j = i+1, ..., n). The data values of each X_i are used as a reference point for comparison with the data values of X_j, which gives:

S = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} sgn(X_j − X_i)

sgn(X_j − X_i) = +1 if X_j − X_i > 0; 0 if X_j − X_i = 0; −1 if X_j − X_i < 0

where X_i and X_j are the values in periods i and j. When the number of data points is greater than or equal to 10 (n ≥ 10), the MK statistic is approximately normally distributed with mean E(S) = 0, and the variance Var(S) is given as follows (Ma et al., 2014):

Var(S) = [n(n−1)(2n+5) − Σ_{k=1}^{m} t_k(t_k − 1)(2t_k + 5)] / 18

where m is the number of tied groups in the time series, and t_k is the number of data points in the kth tied group. The test statistic Z is as follows:

Z = (S − 1)/√Var(S) if S > 0; Z = 0 if S = 0; Z = (S + 1)/√Var(S) if S < 0.

In the sequential form of the test, the statistics are defined independently for the progressive series (UF_k) and, after arranging the time sequence in reverse order, for the retrograde series (UB_k). Given the confidence level α, if |UF_k| > U_{α/2}, the sequence has a significant trend.
The UB_k and UF_k statistics are drawn as the UB and UF curves. If there is an intersection between the two curves, the intersection marks the beginning of an abrupt change (Zhang et al., 2012).
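A minimal Python sketch of the MK test above, omitting the tie-correction term in Var(S) for brevity:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x: np.ndarray):
    """Return the MK statistic S, the Z score, and the two-sided p-value."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # no tie correction here
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p
```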
Sen's slope estimator test
The slope (Q_i) between two data points is computed as follows (Sen, 1968):

Q_i = (X_j − X_k) / (j − k)

where X_j and X_k are the data points at times j and k (j > k), respectively. Sen's slope is the median of the Q_i over all such pairs.
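A direct Python sketch of this estimator:

```python
import numpy as np

def sens_slope(x: np.ndarray) -> float:
    """Median of pairwise slopes (X_j - X_k) / (j - k) over all j > k."""
    n = len(x)
    slopes = [(x[j] - x[k]) / (j - k)
              for k in range(n - 1) for j in range(k + 1, n)]
    return float(np.median(slopes))
```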
Innovative Trend Analysis Method (ITAM)
The trend indicator of ITAM is multiplied by 10 to make its scale similar to that of the other two tests. The trend indicator is calculated as follows (Sen, 2014):

ϕ = (1/n) Σ_{i=1}^{n} 10 (X_j − X_i) / μ

where ϕ is the trend indicator, n is the number of observations in each subseries, X_i is the data series in the first-half subseries class, X_j is the data series in the second-half subseries class and μ is the mean of the data series in the first-half subseries class.
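A Python sketch of this indicator, assuming the usual innovative-trend-analysis convention that each half-series is sorted before the pairwise comparison:

```python
import numpy as np

def itam_indicator(x: np.ndarray) -> float:
    """ITAM trend indicator: scaled mean difference between sorted halves."""
    half = len(x) // 2
    first = np.sort(x[:half])           # X_i: first-half subseries
    second = np.sort(x[half:2 * half])  # X_j: second-half subseries
    mu = first.mean()                   # mean of the first-half subseries
    return float(np.mean(10.0 * (second - first) / mu))
```

A positive ϕ indicates an increasing trend, a negative ϕ a decreasing trend.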
Precipitation concentration index (PCI)
The PCI was adopted to quantify the distribution of rainfall and its heterogeneity pattern across the stations (Ademe et al., 2020; Guo et al., 2020). PCI values for monthly rainfall distributions were categorized as uniform (<10), moderate (11-15), irregular (16-20) and strongly irregular (>20) (De Luis et al., 2011). The PCI of the six stations is depicted in Table 2.
The annual PCI was computed as follows:

PCI = 100 × (Σ_{i=1}^{12} P_i²) / (Σ_{i=1}^{12} P_i)²

where P_i is the amount of rainfall of the ith month.
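A direct Python implementation of this formula; a perfectly uniform year gives the minimum value of 100/12 ≈ 8.3, which falls in the uniform class:

```python
import numpy as np

def pci(monthly_rainfall: np.ndarray) -> float:
    """Annual precipitation concentration index over 12 monthly totals."""
    total = monthly_rainfall.sum()
    return float(100.0 * np.sum(monthly_rainfall ** 2) / total ** 2)

print(pci(np.full(12, 70.0)))   # uniform rainfall -> ~8.33
```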
Data preparation and quality control
Daily data were averaged to monthly and annual values for each station to simplify the calculations. Missing data were also checked. The data were arranged in an Excel data sheet as required for each analysis, and were checked for significant differences in each dataset as well as across stations.
Analysis of mean annual precipitation and temperature
The results revealed that the mean annual rainfall was 834.97 mm with a coefficient of variation of 15% (CV = 15%). The maximum rainfall was 1015.90 mm and the minimum rainfall was 509.93 mm. The findings showed that the highland eco-regions (Gondar, Bahir Dar and Sekoru) received a high amount of rainfall (>650 mm), whereas the lowland eco-regions (Gewane, Degahabur and Negele) received less rainfall, accounting for about 20.30% of the total. The mean annual temperature was found to be 29.16°C. The maximum and minimum temperatures were 30.35°C and 27.92°C, respectively. Figure 3 shows some selected stations with seasonal variability of precipitation in Ethiopia. However, this study only focused on two eco-regions, i.e., the highland eco-regions (Gondar, Bahir Dar and Sekoru) and the lowland eco-regions (Gewane, Degahabur and Negele).
The results indicate that all stations in the study area showed a positive trend in mean maximum temperature over the study period, which is in line with the increasing global mean temperature and the increasing number of warm days and warm nights in Ethiopia. More concentrated spatial storm events could be expected with higher temperatures as the global temperature increases (Wasko et al., 2016). The mean annual maximum temperature change over the country is about 0.01°C/year over the last 50 years (NMSA, 2001). Land use, land cover change and over-exploitation of natural resources are the main driving forces of the current trends of maximum temperature over the study area. The present study indicated that the mean maximum temperature across the study area showed statistically significant positive trends at five out of six stations. This is consistent with the results of Suryabhagavan (2017), who reported that most of the weather stations in Ethiopia experienced a significant increasing trend in temperature. This calls for policy interventions and public awareness to decrease the adverse impacts of climate change.
Analysis of mean annual precipitation trends
The findings of this study showed that the trends of precipitation were statistically significant, with an increasing trend in Gondar (β = 1.84) and Bahir Dar (β = 1.80) of the highland eco-regions (Table 3). A significant increasing trend was observed in Negele (β = 23.40) and Gewane (β = 0.10) of the lowland eco-regions. However, the Sekoru (β = 0.01) and Degahabur (β = 4.13) stations showed a significant decreasing trend (Figure 3). Four out of six stations showed a positive trend of precipitation, whereas Sekoru and Degahabur showed negative trends. Table 3 shows the statistical trend results of precipitation using the MK, ITAM and Sen's slope estimator tests. Similar results were reported by Gemeda et al. (2021), where a statistically significant decreasing trend of rainfall was observed at Sekoru station. The inter-annual variability of rainfall is a common phenomenon across the different parts of the country, including the above eco-regions (Suryabhagavan, 2017). Mekasha et al. found contrasting results for Negele station, reporting a negative trend of precipitation there, which reflects site-specific and region-wide differences in climate. Overall, the findings of the present study are consistent with other studies (Asfaw et al., 2017; Gedefaw et al., 2018; Gemeda et al., 2021; Suryabhagavan, 2017). The changes in these climatic elements across the stations during the study period could be associated with human activities and climate change.
Analysis of mean annual maximum temperature trends
The trend detection result showed that a statistically significant increasing trend of temperature was observed in Gondar (β = 0.04) and Bahir Dar (β = 0.08), and a sharp significant decreasing trend in Sekoru (β = 0.01) of the highland eco-regions (Table 4). The lowland eco-regions (Gewane (β = 0.10), Degahabur (β = 0.03) and Negele (β = 0.07)) showed a statistically significant increasing trend (Figures 4 and 5). Table 4 shows the statistical trend results of maximum temperature using the MK, ITAM and Sen's slope estimator tests. Five out of six stations showed increasing trends of temperature during the study period; only Sekoru station showed a sharp decreasing trend. Gemeda et al. (2021) also found a statistically significant decreasing trend at Sekoru station (−2.22). The results of the present study are generally consistent with previous studies on climate change trends in the same study area, which reported significant increasing trends in the mean annual maximum temperature (Asfaw et al., 2018; Gedefaw et al., 2018; Jemal et al., 2020). Inter-annual variability of temperature is also more persistent in the study area than rainfall variability (Figure 4). The observed increasing temperature in the area has a significant impact on agricultural activities, and higher evapotranspiration leads to more water loss from the catchment. These changes may be due to land use, land cover changes and human activities.
Correlation analysis
The correlation analysis between precipitation and temperature showed a coherent pattern of relationship (Figure 6). It was confirmed that precipitation is mainly caused by cold summers, and thus correlates to a large extent with temperature in the lowland eco-regions. The two climatic variables fluctuate over the different stations across the eco-regions. The findings of this study are consistent with Behailu et al. (2014), Gedefaw et al. (2018), Girma et al. (2016), Tekleab et al. (2013) and Yenehun et al. (2017). The cause of these changes could be associated with anthropogenic actions.
Some stations, such as Sekoru and Degahabur, showed decreasing trends of precipitation (Figure 6(c,e)). These trends could affect the sustainability of water resource recharge (Karmeshu, 2015). Increasing transpiration due to rising temperature could increase the chance of rainfall, but may also interfere with groundwater recharge through the reduction of summer-season rainfall.
Conclusions
The present study investigated the trends of mean annual precipitation and maximum temperature in two eco-regions of Ethiopia. The findings showed that the highland eco-regions received a high amount of rainfall (≥650 mm), while the lowland regions received less rainfall. The mean annual rainfall was 834.97 mm; the maximum was 1015.90 mm and the minimum 509.93 mm. The mean annual temperature was 29.16°C, with maximum and minimum temperatures of 30.35°C and 27.92°C, respectively. Precipitation showed a statistically significant increasing trend at the Gondar, Bahir Dar, Negele and Gewane stations. The declining trend in mean annual precipitation in the latest decade has changed the overall rainfall scenario in the study area, and this variability may impact future climatic conditions and agricultural practices.
As far as trends of temperature are concerned, a statistically significant increasing trend of temperature was observed in the Gondar and Bahir Dar highland eco-regions as well as in all lowland eco-regions. This could imply that climate change is occurring. The major contributor to the decline in annual precipitation is the occurrence of frequent drought, and the lowland eco-regions are very sensitive to seasonal rainfall changes. Further study should be conducted to confirm the change in climate in the studied eco-regions by taking the latest datasets and including more stations. Nevertheless, the findings of this study could help to understand the trends in precipitation and temperature in the two eco-regions of Ethiopia. Thus, policy-makers should plan to minimize the adverse effects of climate change by improving the accessibility of weather forecasting and strengthening the climate-resilient green economy. Policy interventions should be directed towards climate change adaptation and mitigation strategies in these two eco-regions.
Figure 1. Location map of the study area.
Figure 4. Seasonal variability of rainfall across the country.
Figure 5. Trend analysis of precipitation.
Recent Advances in Hepatocellular Carcinoma Treatment with Radionuclides
As the third leading cause of cancer death worldwide, hepatocellular carcinoma (HCC) is characterized by late detection, difficult diagnosis and treatment, rapid progression, and poor prognosis. Current treatments for liver cancer include surgical resection, radiofrequency ablation, liver transplantation, chemotherapy, external radiation therapy, and internal radionuclide therapy. Radionuclide therapy uses the high-energy radiation emitted by radionuclides to eradicate tumor cells, thus achieving the therapeutic effect. Recently, with the continuous development of biomedical technology, the application of radionuclides in the treatment of HCC has progressed steadily. This review focuses on three types of radionuclide-based treatment regimens, including transarterial radioembolization (TARE), radioactive seed implantation, and radioimmunotherapy. Their research progress and clinical applications are summarized, and the advantages, limitations, and clinical potential of radionuclide treatment of HCC are discussed.
Introduction
Hepatocellular carcinoma (HCC) is the third leading cause of cancer death worldwide [1]. HCC is the main histologic type of primary liver cancer, accounting for 70-90% of liver cancers. Cirrhosis is the strongest risk factor for HCC, and the main causes of cirrhosis are chronic hepatitis B (HBV) or hepatitis C (HCV) virus infection, excessive alcohol consumption, and excessive dietary intake of aflatoxins [2,3]. Aflatoxin, a food contaminant produced by Aspergillus molds, has been shown to play an important role in the pathogenesis of HCC, and increased aflatoxin intake is associated with the risk of HCC [4].
Clinical treatments for liver cancer mainly include surgical resection, liver transplantation, radiofrequency ablation (RFA), external radiation therapy, transcatheter arterial chemoembolization (TACE), and targeted drugs such as sorafenib. Sorafenib is a multiple-target tyrosine kinase inhibitor, which can inhibit RAF-1, B-Raf, and kinase activities in the Ras/Raf/MEK/ERK signaling pathway to inhibit tumor cell proliferation, and it prolongs the overall median survival of patients with advanced HCC [5]. According to disease progression, the Barcelona Clinic Liver Cancer (BCLC) classification defines liver cancer in four stages: early (BCLC 0/A), middle (BCLC B), late (BCLC C), and terminal (BCLC D) [6]. For patients with early liver cancer or cirrhosis (BCLC grade 0 or A), surgical resection, liver transplantation, and RFA are the main treatments. These treatments are effective and significantly prolong the survival of patients. However, liver cancer is usually asymptomatic or minimally symptomatic in the early stages. Therefore, most patients are in the middle or late stage at diagnosis and are not suitable for the above treatment protocols.
Unlike most cancers, HCC can be diagnosed by imaging without tissue sampling. MRI and CT are the clinical methods used to diagnose HCC, with excellent performance. Dynamic MRI has slightly better diagnostic performance than CT imaging, while CT has the advantages of lower cost, higher availability, and faster scanning time [7]. For unresectable HCC (BCLC B), TACE may be used to deliver the drug to the tumor site via the hepatic artery. Considering that liver cancer cells are mainly supplied by the hepatic artery, this treatment can effectively reduce the damage to normal liver tissue caused by the drug. Patients with TACE failure or BCLC grade C can be treated with systemic therapy agents such as sorafenib [8][9][10][11].
In addition to the above modalities, therapeutic methods related to radionuclides represent an important research direction in the field of HCC treatment. The main radionuclide-related therapies for HCC include transarterial radioembolization (TARE), intratumoral implantation of radioactive particles, and radioimmunotherapy. The radionuclides commonly used in these treatments are iodine-131 (131 I), yttrium-90 (90 Y), rhenium-188 (188 Re), holmium-166 (166 Ho), and iodine-125 (125 I); the related studies and data are shown in Table 1 [12][13][14]. This review article introduces the above three therapeutic methods; summarizes the clinical application status and research progress of related radiopharmaceuticals; and discusses the advantages, limitations, and prospects of radionuclides in the treatment of HCC.
Transarterial Radioembolization
TARE is a new modality of radionuclide therapy of HCC [15]. Between 70% and 80% of the blood supply of liver tumors comes from the hepatic artery while normal liver tissue mainly relies on the portal vein for blood supply, with only 20%-30% coming from the hepatic artery [16,17]. According to the differences in the blood supply source between tumor tissue and normal liver tissue [18,19], the injection of radioactive agent into patients through the hepatic artery can deliver more radiation to the tumor site, thus reducing drug-induced hepatotoxicity [20,21].
TARE-Related Radiation Agents
The main radioactive agents used in TARE are 131 I-lipiodol, 90 Y-microspheres, 188 Re-lipiodol, and 166 Ho-microspheres. The properties of these radionuclides are shown in Table 1. The beta rays emitted by these radionuclides break the double strands of DNA and kill surrounding cells. TARE allows the drug to be delivered to the tumor to kill more tumor cells and cause less damage to normal tissue [14,[22][23][24]. Under normal circumstances, the radiation dose of external radiotherapy is 35 Gy, and the therapeutic effect is limited, with approximately 5% of patients going on to develop radiation-induced liver disease. Internal radiotherapy embolization can increase the radiation dose to 120 Gy or even higher, which not only effectively improves the therapeutic effect but also greatly reduces other side-effects caused by radiation [25,26]. 131 I has a long physical half-life of 8.02 days and emits both beta and gamma rays. Patients require hospitalization for radiation protection after 131 I injection. Although 188 Re also emits beta and gamma rays, it has a 16.9 h half-life and emits fewer low-energy rays, which means that hospitalization after treatment is unnecessary. 90 Y is a pure beta emitter with a physical half-life of 64.1 h. Patients can be discharged quickly after injection without radiological protection [27].
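To put these half-lives in perspective, the remaining activity fraction follows A(t)/A0 = exp(−ln 2 · t / t_half). A quick illustrative calculation in Python, where the 48 h time point is an arbitrary choice:

```python
import math

def remaining_fraction(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the initial activity left after t_hours of decay."""
    return math.exp(-math.log(2) * t_hours / half_life_hours)

for name, t_half in [("131I", 8.02 * 24), ("90Y", 64.1), ("188Re", 16.9)]:
    print(name, round(remaining_fraction(48.0, t_half), 3))
# 131I ~0.84, 90Y ~0.60, 188Re ~0.14 of the initial activity after 48 h
```

The rapid decay of 188 Re is consistent with the observation above that hospitalization after treatment is unnecessary for this nuclide.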
131 I-Lipiodol
131 I was the first radionuclide used for transcatheter arterial radioembolization. Lipiodol is a mixture of ethyl iodide of fatty acids from poppy seed oil, which typically contains 37% iodine. 131 I-lipiodol is formed by replacing the iodine in lipiodol with radioactive 131 I through an atom-atom exchange reaction [28]. 131 I-lipiodol was first applied to humans in 1986 [29]. Intrahepatic arterial injection of 131 I-lipiodol is selective, and the agent remains in tumors for a long time. Lipiodol is often used as a carrier of anticancer agents and as a contrast agent for radiography [30].
Studies have shown that 131 I-lipiodol treatment is well tolerated. It has few toxic side effects and relieves patients' pain to a certain extent. In recent studies, 131 I-lipiodol has been used either as a single treatment or as an adjuvant treatment along with other regimens. 131 I-lipiodol as a treatment alone can effectively increase the survival rate of patients. In the study by Lintia-Gaultier et al., 50 patients with advanced liver cancer received 131 I-lipiodol and 36 patients received only medical support. The 6-month, 1-year, and 2-year survival rates of patients in the 131 I-lipiodol group were 65%, 35%, and 22%, respectively, while those in the control group were 28%, 8%, and 0%, respectively. The results indicate that 131 I-lipiodol treatment significantly prolongs the survival time of patients with advanced HCC [28].
The combination of 131 I-lipiodol therapy with other therapies also significantly prolongs the survival time of patients. In the study by Raoul et al., 34 patients were treated with 131 I-lipiodol before liver surgery, among whom 25 showed an objective tumor response or histological necrosis of the major lesion site [31]. Boucher et al. conducted a retrospective study of patients treated with 131 I-lipiodol after liver resection, and they found that treatment with 131 I-lipiodol after surgery prolonged the disease-free and overall survival (Figure 1) [32].
Most patients with HCC tolerate 131 I-lipiodol therapy well, although interstitial pneumonia is a serious complication that may occur. According to the statistics of Jouneau et al., 15 of 1000 patients treated during 1994-2009 developed interstitial pneumonia after treatment, and 12 of them died [34]. The above 131 I-lipiodol-related studies and data are shown in Table 2.
Overall, when 131 I-lipiodol is used as a radiopharmaceutical in patients with unresectable HCC for whom TACE or sorafenib is not appropriate, it can prolong disease-free survival, although its effect on overall survival is limited. Patients treated with 131 I-lipiodol had a longer time from clinically confirmed complete remission to lesion recurrence, which greatly reduces the risk of tumor recurrence. Moreover, for patients awaiting liver transplantation, treatment with 131 I-lipiodol during the waiting period can slow tumor growth and metastasis and reduce the risk of removal from the waiting list. At present, the application of 131 I-lipiodol for the treatment of HCC still needs the support of more robust clinical data.
188 Re-Lipiodol
Among the radionuclides applied in the medical field, 188 Re is one of the most suitable for therapy. It has a half-life of 16.9 h and emits both β and γ rays. Compared to 131 I, 188 Re has the advantages of a low price and no need for hospitalization and isolation after treatment, making it particularly suitable for Asian and African countries [6]. Various methods of labeling lipiodol with 188 Re have been proposed, and so far three different 188 Re-labeled lipiodol complexes have been tested in humans: 188 Re-HDD lipiodol, 188 Re-SSS lipiodol, and 188 Re-DEDC lipiodol. 188 Re-HDD lipiodol is the most widely studied compound, but its in vivo stability is not optimal; compared with 188 Re-HDD lipiodol, 188 Re-SSS lipiodol has superior in vivo stability. 188 Re-DEDC lipiodol has been tested in animals and humans and showed prolonged retention in tumors with no significant release of the complex after in vivo administration [35].
The 188 Re-HDD lipiodol Phase I and II clinical studies sponsored by the International Atomic Energy Agency (IAEA) evaluated the safety and efficacy of transarterial 188 Re-HDD lipiodol for inoperable HCC. In the Phase I trial, 70 patients received at least one 188 Re-HDD lipiodol treatment, with a median survival of 9.5 months [36]. The Phase II results, published in 2007, showed that among the 185 patients from 8 countries who received 188 Re-HDD lipiodol treatment, the 1-year and 2-year survival rates were 46% and 23%, respectively, with good tolerance observed [37].
Kostas Delaunay et al. conducted a Phase I study of 188 Re-SSS lipiodol for the treatment of HCC. The results show that 188 Re-SSS lipiodol has a good biodistribution in radioembolization and, of the radiolabeled lipiodols reported to date, is the most stable in the body [38]. Clinical studies of 188 Re-DEDC lipiodol, however, only show that it is safe and effective for treating inoperable HCC [39]. Further studies and clinical trial data are required to support the use of 188 Re-labeled lipiodol in HCC. The above 188 Re-lipiodol-related studies and data are shown in Table 3.
90 Y-Microspheres
90 Y-microspheres were first used for tumor treatment in the 1960s [40], and 90 Y was the first radionuclide used for the treatment of HCC with portal vein thrombosis [41]. Clinical studies of 90 Y-microspheres have focused on bridging and downstaging in intermediate and advanced HCC and before liver transplantation [42,43]. Currently, 90 Y-microspheres for the treatment of HCC are mainly made of glass or resin. 90 Y-glass microspheres were approved by the Food and Drug Administration (FDA) in 1999 for the adjuvant therapy of unresectable HCC and bridging to liver transplantation, and were later approved for the treatment of HCC with portal vein thrombosis. 90 Y-resin microspheres were approved by the FDA in 2002 for use along with floxuridine in treating colorectal cancer metastatic to the liver [44,45]. 90 Y-glass microspheres range from 20 to 30 microns, whereas 90 Y-resin microspheres are usually 20 to 60 microns. The activity per sphere of the 90 Y-glass microspheres generally used is about 2500 Bq, while that of 90 Y-resin microspheres is only about 50 Bq [46,47].
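The difference in per-sphere activity directly determines the number of spheres, and hence the embolic load, delivered for a given prescribed activity. The sketch below illustrates the arithmetic using the per-sphere activities quoted above; the 3 GBq prescription is a purely hypothetical figure chosen for illustration.

```python
# Per-sphere activities quoted above (Bq); the 3 GBq prescription is a
# hypothetical value chosen only to illustrate the arithmetic.
PER_SPHERE_BQ = {"90Y-glass": 2500.0, "90Y-resin": 50.0}
PRESCRIBED_BQ = 3.0e9  # 3 GBq (illustrative)

for kind, activity in PER_SPHERE_BQ.items():
    n_spheres = PRESCRIBED_BQ / activity
    print(f"{kind}: ~{n_spheres:.1e} spheres for a 3 GBq prescription")
```

On these assumptions, resin microspheres require roughly fifty times as many spheres as glass for the same activity, which is one reason the two products differ in embolic effect.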
In 2018, the American Association for the Study of Liver Diseases recommended 90 Y-microspheres TARE as a first-line treatment for HCC [15]. One institution determined the overall survival of patients with HCC who received 90 Y-microsphere radioembolization between 2003 and 2017 according to BCLC staging: overall survival for BCLC A, B, and C was 47.3 months (39.5-80.3 months), 25.0 months (17.3-30.5 months), and 15.0 months (13.8-17.7 months), respectively. The efficacy of 90 Y-microspheres TARE for HCC has been confirmed by several studies. Hilgard et al. analyzed data from 108 patients with advanced liver cancer and cirrhosis who received 90 Y-microspheres TARE. According to the European Association for the Study of the Liver (EASL) criteria, complete response, partial response, and stable disease were seen in 3%, 37%, and 53% of patients, respectively, and 6% of patients showed primary progression. The median time to progression (TTP) was 10 months and the median survival was 16.4 months. Analysis of the TTP and survival data in this study suggested that the efficacy of 90 Y-microspheres TARE was comparable to that of systemic therapy for patients with advanced HCC (Figure 2) [48].
D'Avola et al. demonstrated that 90 Y-microspheres TARE extends median survival compared to conventional care alone. This study compared the overall survival of 35 patients with unresectable HCC who received 90 Y-microsphere treatment with that of 43 patients who received routine care only; the median survival was 16 months in the embolization group versus only 8 months in the control group [49].
Additionally, 90 Y-microspheres TARE is used as an adjunctive therapy for preoperative bridging and downstaging in patients awaiting liver transplantation. Gabr et al. studied 90 Y-microspheres in liver transplantation patients from 2004 to 2018: 169 of 207 patients were treated with 90 Y-microspheres TARE before liver transplantation, and another 38 patients received liver transplantation after downstaging by 90 Y-microspheres TARE. On histopathology, 94 patients (45% of the total) had complete tumor necrosis, 60 had major necrosis of tumor tissue, and 53 (23%) had only local necrosis. The 3-, 5-, and 10-year survival rates were 84%, 77%, and 60% for patients with complete, major, and partial tumor necrosis, respectively. These results suggest that 90 Y-microspheres TARE is a highly effective emerging adjunctive therapy for bridging or downstaging before liver transplantation [50]. Levi Sandri et al. published similar data following a review of 20 global studies on 90 Y-microspheres TARE for bridging and downstaging before liver transplantation, in which a total of 178 patients were treated with 90 Y-microspheres TARE before transplantation; their analysis indicated that 90 Y-microspheres TARE was more effective than TACE in patients with advanced HCC (BCLC C) [51].
90 Y-microspheres TARE is also used to treat patients with HCC complicated by iatrogenic acute liver failure or portal vein thrombosis (PVT). 90 Y-microspheres were the first radiopharmaceutical used for the treatment of HCC with PVT.
According to the statistics of Ozkan et al., among 29 patients with HCC treated with 90 Y-microspheres TARE between 2009 and 2014, PVT was present in 12 patients, whose median survival was 17 ± 2.5 months.
The analysis showed that PVT formation is not an important prognostic factor and that 90 Y-microspheres TARE did not affect the median survival of patients with PVT, in whom TACE is contraindicated [52]. Similar results were found in a retrospective analysis published in 2010 by Inarrairaegui et al. of 25 patients with PVT-associated HCC treated with 90 Y-microspheres TARE: treatment was well tolerated, no hepatotoxicity was observed 1-2 months after treatment, and the median survival was a favorable 10 months, although these results lack further validation [53].
Whether 90 Y-microspheres TARE combined with other methods is better than single-modality therapy for HCC remains to be determined. Sorafenib and Micro-therapy Guided by Primovist Enhanced MRI in Patients with Inoperable Liver Cancer (SORAMIC) was a multicenter, randomized controlled trial combining 90 Y-microspheres TARE with sorafenib, in which 424 patients with advanced HCC were randomized to 90 Y-resin microspheres plus sorafenib or sorafenib alone. The median survival was 12.1 months in the combination group and 11.4 months in the sorafenib-alone group, suggesting that the combination therapy brought no significant improvement in patient survival [54].
Researchers have also tried combining this treatment with PD-1 inhibitors in clinical studies. PD-1 is an important immunosuppressive molecule, and PD-1 inhibitors help immune cells in the body recognize and kill tumors. Nivolumab is a PD-1 inhibitor approved by the FDA for patients with advanced HCC previously treated with sorafenib. In 2018, Wehrenberg-Klee reported a case in which a patient was successfully bridged to partial hepatectomy using 90 Y-microspheres TARE combined with PD-1 inhibitor therapy. The combined use of 90 Y-microspheres with nivolumab or other immunotherapies may improve the efficiency and depth of response to HCC therapy, enhance the ability to deliver radiation doses to tumors, and mediate other possible pro-inflammatory effects of embolization. Therefore, 90 Y-microspheres TARE combined with immunotherapy may have an impact on advanced HCC [55].
Compared to TACE, 90 Y-microspheres TARE does not significantly extend overall survival, but it is clearly superior to TACE in prolonging the time to progression. In a Phase II clinical trial by Salem et al. conducted between 2009 and 2015, 179 patients with BCLC A or B HCC were randomized to conventional TACE or 90 Y-microspheres TARE; the median time to progression was longer than 26 months in the 90 Y-microspheres TARE group versus only 6.8 months in the TACE group [56]. Salem et al. also retrospectively analyzed data from 245 patients with HCC, 122 of whom received TACE and 123 of whom received 90 Y-microspheres TARE. The median time to progression was 13.3 months in the 90 Y-microspheres TARE group and 8.4 months in the TACE group, while the median survival was 20.5 and 17.4 months, respectively [57]. These studies show that 90 Y-microspheres TARE significantly prolongs the median time to progression in patients with HCC.
Although 90 Y-microspheres TARE does not significantly improve survival compared to the standard drug sorafenib, it significantly increases tumor response, reduces the occurrence of adverse events, and improves patients' quality of life. This conclusion is supported by two large randomized controlled clinical trials. Chow et al. reported a Phase III trial in which 360 patients with HCC from 11 countries in the Asia-Pacific region were randomly assigned to 90 Y-microspheres TARE or sorafenib; the median survival was 8.8 months with TARE versus 10 months with sorafenib, indicating no significant difference in extending median survival in patients with locally advanced HCC [58]. Moreover, a European Phase III clinical trial of TARE in advanced HCC randomized 467 patients to 90 Y-resin microspheres or sorafenib; the median survival was 8 months with 90 Y-resin microspheres TARE versus 9.9 months with sorafenib, again demonstrating no significant difference between the two treatments [59].
Patients with HCC treated with 90 Y-microspheres TARE may experience mild adverse effects, including fatigue, nausea, vomiting, anorexia, fever, and abdominal discomfort; these symptoms are uncommon and generally do not require hospitalization. More serious complications include hepatic dysfunction, biliary toxicity, fibrosis, radiation pneumonitis, gastrointestinal complications, and vascular injury [44]. However, the probability of these serious side effects is extremely low: radiation-induced liver disease occurs in fewer than 4% of cases; according to Salem et al., less than 2% of patients require interventional therapy for radioembolization-induced biliary toxicity; and the incidence of radioembolization-induced radiation pneumonitis is less than 1% [60-63]. Kallini et al. performed a retrospective analysis of whether there is a safety difference between 90 Y-glass and 90 Y-resin microspheres, covering 1579 patients treated with glass microspheres in 24 studies and 720 patients treated with resin microspheres in 9 studies. Compared to 90 Y-resin microspheres, 90 Y-glass microspheres had a lower incidence of gastrointestinal and pulmonary adverse events in the treatment of HCC [64]. The 90 Y-microspheres-TARE-related studies and data are detailed in Table 4.
Overall, 90 Y-microspheres TARE is not significantly different from TACE or sorafenib in extending overall survival. Like TACE, it is used for bridging or downstaging before liver transplantation, reducing the risk that patients are disqualified from transplantation because of tumor progression while on the waiting list. For patients with HCC and PVT, replacing TACE with 90 Y-microspheres TARE does not compromise median survival. Patients with advanced HCC who do not respond to TACE or sorafenib may also be considered for treatment with 90 Y-microspheres TARE. The place of 90 Y-microspheres TARE in the standardized treatment of HCC is not yet clear, and uncertainties remain about its prognostic effect in different HCC patients.
166 Ho-Microspheres
At present, there are three types of commercial radioactive microspheres: 90 Y-resin microspheres, 90 Y-glass microspheres, and 166 Ho-poly-L-lactic acid microspheres. 166 Ho emits 81 keV gamma photons when it decays, which allows imaging by single-photon emission computed tomography (SPECT), and as a paramagnetic lanthanide it can also be visualized by magnetic resonance imaging (MRI) [65].
The Holmium Embolization Particles for Arterial Radiotherapy (HEPAR) trial, a Phase I clinical trial of 166 Ho-microspheres, ultimately determined the maximum tolerated radiation dose of 166 Ho-microspheres to be 60 Gy [66]. Among the 37 patients in Phase II of the HEPAR trial, 73% showed complete remission, partial remission, or stable disease after 3 months of treatment, and the adverse event rate was comparable to that of established 90 Y-microspheres TARE therapy [67]. More Phase II trials of 166 Ho-microspheres are underway.
Radioactive Seed Implantation
Radioactive seed implantation relies on stereotactic imaging equipment to implant radioactive seeds into the tumor, which is then eradicated by the emitted radiation. Research on 125 I seed implantation for the treatment of HCC has increased in recent years.
125 I seeds are prepared by sealing a silver rod carrying 125 I within a titanium alloy capsule. The technique relies on B-scan ultrasonography, computed tomography (CT), MRI, or other imaging equipment to guide the 125 I seeds into the tumor tissue, where they continuously emit low-dose γ rays to treat the tumor. 125 I has a long half-life of 60.1 days, which allows it to act continuously within the tumor tissue, and its radiation range is only 1.7 cm, which limits damage to normal tissue [68,69].
Recent studies of 125 I seed implantation for HCC have focused on combinations with other therapies, particularly 125 I seed implantation combined with TACE, with RFA or surgery, and for the treatment of PVT-associated HCC. 125 I seed implantation combined with TACE has received considerable attention, with some studies showing that the combination is safe and effective for HCC and significantly prolongs overall and progression-free survival. Zhang et al. collected clinical data from 110 patients with advanced primary liver cancer treated from 2014 to 2016, of whom 55 received 125 I seed implantation plus TACE and 3D conformal radiotherapy while the other 55 received TACE plus 3D conformal radiotherapy alone. The objective response rate was 84% and the disease control rate 96% in the 125 I seed implantation group, versus 64% and 84%, respectively, in the conventional treatment group, showing that 125 I seed implantation combined with conventional treatment can significantly prolong overall and progression-free survival [70]. In the work of Fang et al., 76 patients with HCC and PVT received either TACE plus RFA or 125 I seed implantation plus TACE and RFA; the median survival was 30 and 42 months and the median progression-free interval 11 and 18 months, respectively. These results further support the safety and efficacy of the triple combination [71].
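Because implanted 125 I seeds decay in place, the cumulative dose builds up over months rather than being delivered in a single session. A short sketch of this arithmetic, assuming only the 60.1-day half-life quoted above:

```python
import math

T_HALF_DAYS = 60.1  # physical half-life of 125I quoted above

def dose_fraction_delivered(t_days: float) -> float:
    """Fraction of the total (infinite-time) decays, and hence of the total
    dose from a permanent implant, that has occurred by t_days."""
    decay_constant = math.log(2) / T_HALF_DAYS
    return 1.0 - math.exp(-decay_constant * t_days)

for t in (30, 60, 120, 240):
    print(f"after {t:>3} days: {dose_fraction_delivered(t):.0%} of the total dose delivered")
```

About half the dose is delivered in the first two months and roughly 94% by eight months, consistent with the description of the seeds acting continuously within the tumor.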
Some patients with HCC can be treated with 125 I seed implantation after RFA; however, the effect of this combination on survival has varied across studies. In a randomized trial by Chen et al., 136 patients with HCC were randomly divided into two groups: one received 125 I seed implantation after RFA while the other was treated with RFA only. The survival rate of the RFA plus 125 I seed implantation group was clearly better than that of the RFA-only group [72]. However, a randomized controlled trial by Wu et al. showed that although progression-free survival was 18 months in the combined treatment group, 7 months longer than in the RFA group, there was no significant difference in overall survival between the two groups [73]. In another clinical trial of 125 I seed implantation by Chen et al., 68 patients with HCC undergoing surgery were randomly assigned to receive 125 I seed implantation or medical support. The time to relapse was 60 and 36.7 months, respectively, and the 1-, 3-, and 5-year survival rates were 94%, 74%, and 56% versus 88%, 53%, and 29%, respectively, showing that 125 I seed implantation after surgery can significantly prolong disease-free and overall survival in patients with HCC [74]. At present, studies of 125 I seed implantation combined with RFA or surgery remain inconclusive, and more clinical data and statistical analysis are needed to reach a clear conclusion about the effects on survival.
Additionally, studies have examined the use of 125 I seed implantation to treat PVT-associated HCC. The available data show only that it is safe in these patients, not whether it is effective. In a pooled analysis by Zhang et al. of six related studies, 406 patients with HCC and PVT received 125 I seed implantation. The side effects of radiation included leukopenia, while the adverse reactions associated with implantation included fever, abdominal pain, bleeding, and anorexia; no stent or seed migration was reported. These results indicate that 125 I seed implantation is safe in patients with HCC [75], but its efficacy needs to be determined in more clinical trials. The relevant 125 I seed implantation studies and data are detailed in Table 5.
125 I seed implantation has several advantages, including less trauma, a uniform distribution within the tumor, less damage to normal tissue, reduced treatment time, fewer treatment sessions, and no need for isolation after treatment. The approach can be used to treat inoperable HCC or PVT-associated HCC that does not respond to TACE or sorafenib. However, based on current studies, more clinical data are needed to support the safety and efficacy of 125 I seed implantation.
Radioimmunotherapy
Radioimmunotherapy treats tumors with radionuclide-labeled antibodies. HCC-targeted antibodies labeled with 131 I have been intensively studied for the treatment of HCC; the most common include the mouse anti-human monoclonal antibody fragment HAb18 F(ab')2 (metuximab), the chimeric human-mouse antibody chTNT, the hepama-1 monoclonal antibody against the HCC cell membrane, a CD133 monoclonal antibody, anti-hepatitis B virus antibodies, an anti-machine protein monoclonal antibody, and an anti-human HCC transferrin monoclonal antibody. Radioimmunotherapy agents for HCC that have reached clinical trials include 131 I-metuximab, 131 I-chTNT, and the 131 I-hepama-1 monoclonal antibody [76,77].
131 I-Metuximab
Metuximab is a mouse anti-human monoclonal antibody fragment, HAb18 F(ab')2, whose antigen HAb18G/CD147 is highly expressed in liver cancer, colon cancer, and cervical cancer, among others. HAb18G/CD147 is a highly glycosylated cell-surface transmembrane protein belonging to the immunoglobulin superfamily. High expression of CD147 has been reported to be closely related to tumor invasion, metastasis, and growth, and to be a significant independent prognostic predictor. Blocking HAb18G/CD147 with 131 I-metuximab has been reported to effectively inhibit HCC growth and metastasis in vivo [78].
Studies on the safety and efficacy of 131 I-metuximab for the treatment of HCC have shown no life-threatening toxicity. In a published Phase I clinical trial, the safe dose of 131 I-metuximab was determined to be 27.75 MBq/kg. In the subsequent multicenter Phase II trial of 73 tracked patients, 6 showed partial remission (8%), 14 showed mild remission (19%), and 43 had stable disease (59%), with a 21-month survival rate of 45% [79].
Studies have shown that combined treatment with 131 I-metuximab and TACE improves survival and delays recurrence in patients with unresectable HCC. Ma et al. conducted a Phase IV clinical trial of 131 I-metuximab plus TACE for inoperable HCC. In this multicenter, open-label trial, 341 patients with stage III/IV HCC were non-randomly assigned to the trial group (n = 167) or the control group (n = 174) to receive 131 I-metuximab plus TACE or TACE alone. 131 I-metuximab combined with TACE improved the 1-year survival rate (79.47% versus 65.59% in the control group) and prolonged the time to tumor progression (6.82 ± 1.28 months, approximately 2 months longer than the control group) [80]. Similar results were found by He et al., in whose study 185 patients with unresectable HCC were treated with 131 I-metuximab plus TACE (n = 95) or TACE alone (n = 90). The 1-month response rate was 71% in the trial group and 39% in the control group, and the 6-, 9-, and 12-month survival rates were 86%, 74%, and 60% in the combined treatment group versus 60%, 42%, and 34% in the control group. These results show that 131 I-metuximab plus TACE significantly increased short-term efficacy and prolonged the survival of patients with HCC compared with TACE alone [81].
Delaying recurrence is key to the treatment of HCC, and treatment with 131 I-metuximab after liver transplantation or RFA helps reduce recurrence. In the study by Xu et al., 60 patients with HCC undergoing liver transplantation were randomly divided into two groups: the treatment group received 131 I-metuximab at 15.4 MBq/kg 3 weeks after transplantation, and the control group received an intravenous placebo. At the 1-year follow-up, the recurrence rate was 30% lower and the survival rate 21% higher in the treatment group than in the control group, showing that 131 I-metuximab is effective in reducing tumor recurrence and improving survival in patients with HCC after transplantation [82]. Moreover, Bian et al. evaluated the efficacy of 131 I-metuximab combined with RFA: 127 patients with BCLC stage 0-B HCC were randomly divided into two groups, one receiving RFA followed by 131 I-metuximab and the other RFA alone. The 1- and 2-year recurrence rates were 32% and 59% in the combined group versus 56% and 71% in the RFA group, and the median time to recurrence was 17 and 10 months in the two groups, respectively. These results suggest that the use of 131 I-metuximab after RFA may help prevent postoperative recurrence [83].
131 I-chTNT
ChTNT is a chimeric human-mouse antibody. When labeled with 131 I, the 131 I-chTNT antibody binds to intracellular antigens in the necrotic part of the tumor. The intracellular antigen is a complex of double-stranded DNA and histone H1 that is present in scattered areas of degenerated or necrotic cells within a tumor. The antibodies commonly used in targeted therapies bind primarily to antigens on the surface of tumor cells, but TNT antibodies bind to intracellular antigens at sites of tumor necrosis. The 131 I irradiates the surrounding tumor cells, causing new necrosis, and the chTNT monoclonal antibody then extends into the newly necrotic area, progressively enlarging the treated region to achieve the therapeutic goal. At present, 131 I-chTNT is considered to have a therapeutic effect on lung cancer, brain cancer, and liver cancer, among others [76,80].
Data from patients with HCC treated with 131 I-chTNT were retrospectively analyzed by Tu et al. Among 38 patients with HCC, 22 were treated with RFA only while the other 16 were treated with RFA plus 131 I-chTNT. The median survival of the two groups was 37 and 43 months, respectively, and the 1-, 2-, and 3-year overall survival rates were 100%, 88%, and 75% (RFA plus 131 I-chTNT) versus 82%, 58%, and 52% (RFA alone). This retrospective analysis suggests that RFA combined with 131 I-chTNT prolongs short-term disease-free survival better than RFA alone, although a randomized controlled trial with a larger sample is needed to confirm the efficacy of the treatment [84].
131 I-Hepama-1 mAb
Hepama-1 is a monoclonal antibody against the HCC cell membrane [85]. Several studies in the late 1990s investigated the value of 131 I-hepama-1 monoclonal antibodies in treating HCC. A Phase I trial conducted by Chen et al. used 131 I-hepama-1 mAb to treat 45 patients with HCC who could not be treated surgically. The results demonstrate that intravenous 131 I-hepama-1 mAb is safe, with a recommended dose of 1480-2960 MBq/10 mg [86]. The accompanying radioimmunotherapy-related studies and data are detailed in Table 6.
An extensive literature review indicates that the efficacy of radioimmunotherapy for solid tumors still needs improvement, although several in vivo studies have demonstrated its safety. For patients with HCC who are not amenable to surgical resection or monotherapy, combination radioimmunotherapy may be considered. The efficacy of treatment is affected by several factors, including the targeting ability of the monoclonal antibody, the in vivo stability of the radioimmunoconjugate, and the mode of administration. The development of more specific monoclonal antibodies for HCC, improvement of the radiochemical stability of radiolabeled mAbs, and identification of suitable administration routes are future directions that require further investigation.
Summary and Future Prospects
TARE is well tolerated and has few side effects in the treatment of advanced HCC. Although it has shown no clear survival benefit over TACE or sorafenib in clinical trials, TARE can prolong disease-free survival and improve patients' quality of life. Moreover, TARE may be considered in cases of PVT formation, failure of TACE/sorafenib therapy, bridging to liver transplantation, or downstaging before liver transplantation. 90 Y-microspheres TARE is one of the most promising approaches for translating radionuclide therapy for HCC into routine treatment practice; however, its proper place in the standardized treatment of HCC has not been clarified, and uncertainties remain about its prognostic effect in different HCC patients. 125 I seed implantation and 131 I-metuximab radioimmunotherapy for HCC have gained increasing attention, but their efficacy requires validation in further randomized controlled trials. To date, clinical trials of radionuclide treatment for liver cancer have mainly involved TARE and radioactive seed implantation. There are 21 ongoing clinical trials of TARE for liver cancer, mainly of 90 Y TARE, which has been shown to prolong time to progression with less toxicity than TACE. In addition, clinical trials of 166 Ho radioembolization and of sorafenib combined with 90 Y radioembolization are underway, and there are five clinical trials of radioactive seed implantation, mainly with 125 I. The above information on clinical trials comes from ClinicalTrials.gov. The use of radionuclides carries a certain risk to medical staff, and how to regulate procedures during treatment to mitigate this risk deserves attention.
Difficulties of access to health services among non-institutionalized older adults: prevalence and associated factors
Objective: To estimate the prevalence and factors associated with the difficulties of access to health services among non-institutionalized older adults in the town of Montes Claros, Minas Gerais, Brazil. Method: A cross-sectional study nested in a population-based cohort of community-dwelling older adults was carried out in Montes Claros, Minas Gerais, Brazil. Data collection was performed in the homes of the older adults between November 2016 and February 2017. Demographic, socioeconomic, and health-related variables and access to and use of health services were evaluated. Bivariate analyzes (Pearson’s chi-squared test) were conducted, adopting a level of significance lower than 0.20 for inclusion of the independent variables in the multiple model. The final model was generated by Poisson regression analysis, with robust variance, and the variables maintained were associated with difficulty in using the health services up to a level of significance of 0.05 (p<0.05). Results: 394 older adults participated in this study, 33% of whom reported difficulties with access. In multiple analysis, greater difficulty of access was registered among older adults without a partner; who could not read; were frail and had a negative self-perception of health. Older adults face greater difficulties with access when seeking public services. Conclusion: A high perception of difficulty with access was identified, determined by social and physical aspects inherent to aging, and which may be worsened by the characteristics of public services. There is a need for investments in the health care of older adults, in order to guarantee care that promotes healthy aging.
INTRODUCTION
Changes in demographic patterns and increased longevity are trends that have redesigned the age structure of the population, both in Brazil and globally 1,2. This scenario requires changes in the structure and provision of fundamental health services, establishing standards of quality and ensuring that older adults are able to live not only longer, but also actively and healthily 2,3.
From this paradigm of healthy aging emerges the need to adapt the health system to ensure quality in the access to and use of health services. These adjustments call for the reformulation of health policies to include new forms of care based on improving quality of life, maintaining functional ability and preventing chronic health conditions 2. In other words, they call for models of care that respect the characteristics of older adults and envisage integrated care throughout the care pathway 4.
Access to health services is an important factor that underlies the quality and effective performance of such services 5 . Access is a set of dimensions that determine the relationship between the demand for and entry into the service 6 . The use of health services comprises all direct contact with points of care and is evidence that access has been achieved 6 .
The relationship between aging and access is a worrying one. Characteristics inherent to aging result in less physical readiness on the part of older adults to seek health services and to move between different levels of care 7. Other factors associated with the morbidity profile, such as geographical and socioeconomic variation, individual needs, quality of life, and level of health knowledge, determine whether and how often health services are used, and may therefore create difficulties of access to health services for the older population 8.
The difficulties of access to health services go far beyond geographical aspects and are mainly related to the insufficient supply of services 9. In addition, organizational aspects should be considered, namely economic, social, cultural, religious and epidemiological factors, and communication with health teams 9,10.
In a general sense, there are still gaps in knowledge about access to and use of health services. Most studies are based on the needs of those who are already present in such services, demographic characteristics and the most prevalent health problems 11 . Studies conducted with users of health services exclude those not seeking care, and hamper knowledge at a population level 11 . Therefore, population-based studies are warranted. In addition, it is clear that there is little uniformity in the process of analyzing the difficulty of access to health services, which represents an obstacle for comparative investigation of the literature and highlights the need for further studies in the area 12 .
Estimating the prevalence and identifying factors associated with poor access to health services emphasizes the real situation regarding access for older adults and contributes to raising the awareness, through reliable data, of managers and health professionals about the need for adaptations, interventions, knowledge and planning of public policies in order to promote the expansion of access, reception and care that is decisive if aging with quality is to be achieved 12 .
In terms of health professionals, the present study can stimulate a need for training and changes in the organization of work processes in order to provide older adults with access to quality health services. Frailty, morbidity and other determinants are barriers to access to health and recognizing them is important for professionals working in health services, family members, and those involved in the intake and integrated care of older adults 8 .
It is also important to highlight that the north of the state of Minas Gerais, where the present study is located, is one of the most deprived regions in Brazil, with human development indexes among the lowest in the state; it therefore requires research related to health care for older adults, including the assessment of possible difficulties in access and their determinants 13. In this context, the study aimed to estimate the prevalence of difficulties in access to health services among non-institutionalized older adults in the city of Montes Claros, Minas Gerais, Brazil, and to identify factors associated with them.
METHOD
This is a cross-sectional study nested in a population-based cohort conducted in the municipality of Montes Claros, in the north of the state of Minas Gerais, Brazil, which has a population of approximately 404,000 inhabitants and is the main regional urban center 14.
The sample size at baseline was calculated to estimate the prevalence of each health outcome investigated in the epidemiological survey, considering an estimated population of 30,790 older adults (13,127 men and 17,663 women) living in the urban area, according to 2010 census data from the Brazilian Institute of Geography and Statistics (IBGE); a 95% confidence level; a conservative prevalence of 50% for unknown outcomes; and a sampling error of 5%. As cluster sampling was used, the number obtained was multiplied by a correction factor for the design effect (deff) of 1.5, plus 15% for anticipated losses. The minimum number of older persons defined by the sample calculation was 360 (baseline).
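For readers who want to reproduce this type of calculation, the sketch below implements the standard prevalence-survey formula n = z²p(1−p)/e², with an optional finite-population correction, a design effect, and an allowance for losses. It is a generic illustration only; the exact intermediate corrections and rounding used by the authors are not fully specified, so this recipe need not reproduce the reported figure of 360.

```python
import math
from typing import Optional

def survey_sample_size(p: float, e: float, z: float = 1.96,
                       deff: float = 1.5, loss: float = 0.15,
                       population: Optional[int] = None) -> int:
    """Standard sample size for estimating a prevalence p with absolute error e,
    optionally corrected for a finite population, then inflated by a design
    effect (deff) and an expected proportion of losses."""
    n = (z ** 2) * p * (1 - p) / (e ** 2)
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n * deff * (1 + loss))

# Parameters reported in the text: p = 50%, e = 5%, N = 30,790 older adults.
print(survey_sample_size(p=0.50, e=0.05, population=30_790))
```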
The baseline sampling process was probabilistic, by cluster and in two stages. In the first stage, the census tract was used as the sampling unit: forty-two census tracts were randomly selected from the 362 urban tracts in the municipality, according to IBGE data 14. In the second stage, the number of households was defined according to the population density of individuals aged 60 years or older; more households were allocated from the tracts with the largest numbers of older adults, in order to produce a more representative sample of the population. After the households were drawn, checks were carried out to see whether the selected house contained older residents; if not, the researchers checked whether the household to the left or right contained such individuals.
Data collection was performed between November 2016 and February 2017. The inclusion criterion was 60 years of age or older. Older people who were not available to participate following at least three visits on different days and at different times, even with prior appointment, were considered losses, as well as older adults whose caregivers/family members refused to participate in the study.
The data collection instrument used was based on similar population-based studies 15,16 . Specifically, the dimension of access was adapted from the Ministry of Health's Vigitel 2010 survey 17 , and was previously tested in this research project through a pilot study in a specially selected census tract, the data of which were not included in the final survey. The process of form completion, verifying data consistency and quality control, as well as storing the information was coordinated by the principal investigator.
The interviewers (undergraduate students in Nursing and Medicine) were previously trained and calibrated, with a Kappa agreement measure of 0.8. For data collection, the census tracts were traversed from a previously defined point in each tract for the carrying out of the interviews. For older adults who were unable to respond, the questionnaire was answered with the help of family members or caregivers, following the guidelines of the data collection instruments.
The demographic, social and economic characteristics of the group were evaluated, as well as variables related to health care and access to and use of health services. The perception of difficulty in using the most sought-after health service was assessed through the question "Do you have any difficulty in using your main health service when you need it?"; the answer to this question was taken as the dependent variable and dichotomized as yes or no.
The independent variables studied were: demographic: sex (male and female), age group (dichotomized as up to 79 years old and equal to or above 80 years old, due to a worsening of frailty in this age group). Social: marital status (with or without partner), condition of living alone or with others, education (up to 4 years of schooling or more than 4 years), reading (knowing how to read or not). Economic: own income, monthly family income (up to 1 minimum wage or more than 1 minimum wage). Medical: presence of chronic comorbidities (hypertension, diabetes mellitus, acute myocardial infarction, osteoarticular diseases, neoplasia, stroke). Self-perceived health, presence of caregiver, falls in the last 12 months, hospitalization in the last 12 months, frailty. Relating to access: transportation difficulties, financial difficulties, absence of company, poor health services, geographical and architectural barriers, as well as the time needed to reach the health service. Having a health plan, the main type of service sought (public or private), types of service that the individual found most difficult to access: private emergency care, unified national health service (or SUS), specialty center and basic unit of the Family Health Strategy (FHS).
Frailty was assessed using the Edmonton Frail Scale (EFS) 18 , an instrument that assesses nine domains: cognition; health condition; functional independence; social support; use of medication; nutrition; mood; urinary continence and functional performance. These domains are divided into 11 items, with a score from 0 to 17. For statistical analysis, the scale results were divided into two levels: not frail (final score ≤6) and frail (score >6).
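The dichotomization used here is simple to make precise. A minimal sketch of the cutoff described in the text follows (the function name is hypothetical):

```python
def efs_category(total_score: int) -> str:
    """Classify an Edmonton Frail Scale total (0-17) using the study's cutoff:
    'not frail' for final scores <= 6, 'frail' for scores > 6."""
    if not 0 <= total_score <= 17:
        raise ValueError("EFS totals range from 0 to 17")
    return "not frail" if total_score <= 6 else "frail"

assert efs_category(6) == "not frail"
assert efs_category(7) == "frail"
```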
The analysis of the results involved the construction of a spreadsheet in Excel® for the organization and double entry of the data, with checking and comparison of the entries. The information was coded and transferred to a database in the Statistical Package for the Social Sciences (SPSS), version 18.0 (SPSS for Windows, Chicago, USA), in order to evaluate possible associations between the variables.
Bivariate analyses were performed to identify factors associated with the response variable using the chi-squared test. The magnitude of the associations was estimated using the prevalence ratio (PR). Poisson regression with robust variance was used to calculate adjusted PRs, jointly considering the independent variables most strongly associated with difficulty of access in the bivariate analysis, up to a 20% significance level (p<0.20). For the analysis of the final model, a significance level of 0.05 (p<0.05) was considered.
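As an illustration of the modeling step, the sketch below shows how adjusted prevalence ratios are commonly obtained from Poisson regression with a robust (sandwich) variance, here in Python with statsmodels on simulated toy data; all variable names are hypothetical and the data are not those of the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated toy data standing in for the study variables (hypothetical names).
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "difficulty": rng.integers(0, 2, size=394),   # binary outcome
    "frail": rng.integers(0, 2, size=394),
    "cannot_read": rng.integers(0, 2, size=394),
})

# A Poisson GLM on a binary outcome with a heteroskedasticity-robust
# covariance yields prevalence ratios (exponentiated coefficients) directly.
fit = smf.glm("difficulty ~ frail + cannot_read", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))   # adjusted prevalence ratios
print(fit.pvalues)          # retain terms with p < 0.05, as in the text
```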
RESULTS
A total of 394 older community members participated in this study. The evaluation of the sample characteristics showed a predominance of women, 263 (66.8%). The most prevalent age group was between 60 and 79 years old, 302 (76.6%), with a mean age of 73.9 (sd ±7.9) years. A total of 199 (50.6%) older adults lived without a partner; 295 (74.9%) had up to four years of schooling. In terms of the social variables, 348 (88.3%) older adults did not have a caregiver. Of the medical variables, 281 (71.3%) were hypertensive; 189 (48.0%) reported osteoarticular diseases.
The most sought after health services were Family Health Strategies, 259 (65.7%), followed by Emergency Room, 188 (47.7%). Private or health insurance services (plans) were sought by 132 (33.5%) older adults. A total of 122 (17.8%) older adults were hospitalized in the 12 months prior to the survey.
Regarding access issues, the principal difficulties in accessing the main health service were: transportation difficulties, 39 (30%), lack of financial resources, 32 (24.6%), lack of company, 30 (23.1%), the perception that the service was poor, 58 (44.6%), architectural barriers, 24 (18.5%), geographic barriers, 28 (21.5%). The average time taken to reach the main service was 16.4 minutes. Table 1 shows the bivariate analysis of the difficulty of access to health services according to demographic, socioeconomic and health variables and access to health services data.
DISCUSSION
In the present study, it was found that 33% of older adults reported difficulty in accessing health services. This prevalence is high, which is an important finding, as older adults constitute a significant section of the demand for care within health services, due to their characteristics of comorbidities, frailty and their health conditions, which make them vulnerable 8 . Possibly, access to health services for older adults, as determined by the National Older Adult Health Policy 19 , is not being carried out in practice.
In a study conducted in João Pessoa, in the northeast of Brazil, difficulty of access to services caused by transport and geographical barriers was observed 20. In the present study, by contrast, 67% of older adults reported no difficulty accessing the health services they sought. Health policies aimed at older adults are on a progressive path, but they must be improved and access must be expanded to achieve full, equal and universal reach.
The significant predominance of women illustrates the phenomenon of the growing feminization of older adults population. This trend mainly occurs due to the difference in mortality by sex, which affects the growth rate of the male and female populations and which prevails in the Brazilian population, resulting in the greater survival of women 2 . One of the challenges of the feminization process of aging is to create social spaces within health services in order to motivate older women to have a social life, and to ensure their access to health services when required. This would prevent isolation and strengthen female self-esteem and autonomy 2,21 . In the present study it was found, in multiple analysis, that there was greater difficulty with access among older adults without a partner; those who could not read; who had negative self-perceptions of their own health and who were frail. Regarding the health services sought, it was found that older adults faced greater difficulties when attempting to access public services.
The greater difficulty of access among older adults without partners can be explained by the absence of a companion to bring them to the services 22. Research has shown that widowed, divorced or separated older people often have difficulty walking, and that this, together with the lack of companionship in health care, contributes to reduced demand for health services.
The relationship between poor reading and inferior health indicators, including greater difficulties in accessing health services, has already been well described 21,23. In a study conducted in Ceará, it was likewise found that low levels of education may worsen health, owing to unhealthy habits caused by lack of knowledge, greater exclusion, and less information about seeking health services as early as possible 23.
The chances of seeking health services increase as individuals grow older and have lower levels of education 24 , with greater demand expected to lead to greater difficulties in access 11 . Studies conducted in Germany, France and the UK 25 also revealed that users with low levels of education face the greatest obstacles in using the health services they seek. The continued encouragement of literacy among older adults is needed, providing them with learning opportunities that will result in improved self-care and accountability for their health and the timely seeking out of health services 26 .
In the present study, older people with a negative self-perception of health and who were frail reported greater difficulties in accessing health services. Similar results were observed in studies conducted in São Paulo 27 and Minas Gerais 28 . A negative perception of health may be related to the presence of morbidity, frailty and other conditions that determine a greater need for medical services. Under these conditions, the more frequent seeking out of such services also implies greater difficulties in access and use 28 .
The significant association between access and frailty, a syndrome that involves biological, psychological and social aspects and can negatively impact the social and personal life of older adults, can be understood through the greater care needs observed. Although frailty is a progressive condition, adequate care delivered through effective access to health services can alleviate and prevent symptoms. With increasing frailty, older adults have difficulty getting around and require help; the need for a caregiver and such disorders are barriers for older adults when seeking and using health services 29.
The present study found greater difficulties among older adults who sought the public health service. These difficulties were mainly related to a lack of transportation to the health service, a lack of financial resources, the absence of company, a perception of inefficient services, and geographical and architectural barriers that prevented or hindered access. Similarly, a study conducted in Paraná identified a negative perception of public services among the population, who saw them as offering poor care, with older adults reporting obstacles to obtaining treatment when seeking such services 4. Such services presented problems related to the non-continuity of the programs carried out, mainly due to changes in government and consequent changes in public health policies 4. In a survey conducted in Maranhão 30, access to public services was also considered poor, due to the opening hours of basic health units, which operate during business hours, the lack of a telephone number for scheduling appointments, and poor organization.
In the present study, the lower difficulty in accessing private services among older adults can be explained by the significant portion of the participating population with health plans (37.8%). In addition, about 70% of older adults paid for their own plans. Health insurance coverage among older adults in Brazil has grown rapidly and includes approximately five million people aged 60 and over, representing 29.4% of the total number of older adults in the country 31.
Although not significant in the final model, the most sought-after service in this study was the FHS, and probable problems in this service may explain the greater difficulties in access to public services. Although FHS coverage is increasing across Brazil, inequity of access still persists. Providing quality care is one of the primary goals of health systems, but this intention alone is not always enough; balancing demand with care capacity still seems to be a serious problem in access to primary health care 32. Despite these difficulties, the FHS has been able to minimize longstanding inequalities in access. It is believed that more time will be needed for the FHS to become fully established before a positive impact on access is perceived by the population 33.
Longevity is paradoxical, as the benefits of living longer are offset by the possibility of chronic illness, physical and psychological decline, isolation, depression, and a reduction in social and economic status. With the increase of older people living in the community, there is a need for more qualified health care and a dependency for care that falls on both the health team and family members, as well as an increased demand for health services 34.
Given this, there is much to be done if the Unified Health System is to provide an effective and efficient response to the health needs and demands of the older adult population. Access to the various health services needs to be expanded, and all health professionals, especially those working in the primary care network, the gateway to health services, must undergo continuous training and skill building to meet the needs of the older population. The greater the access to the goods and services of society, the greater the quality of life during the aging process. In this context, health services play a fundamental role in health care if the older adult population is to enjoy life with all that they have built. This requires investments that prioritize disease prevention, the control of chronic conditions, and increased access to health services that enable older adults to live with well-being 2.
The data of the present study should be interpreted in the light of certain limitations, such as the significant loss of older people between the beginning of the study (baseline) and the first wave. There are also the limitations of the older adults themselves, such as loss of functionality and cognitive ability, which may have hampered the answering of certain questions, since the questionnaire used was broad and the physical and mental tiredness of older adults may have been an impediment. It is suggested that, in future studies, data collection be performed at more than one time, in stages.
Although data from a larger longitudinal study were used in the present investigation, the information on access comes from a cross-sectional perspective, and cross-sectional studies have limitations regarding the temporal identification of the factors studied. There is a need for longitudinal studies on this theme that develop and validate tools for assessing access to, and the quality of, health services specific to older adults, in view of the particularities of this segment of the population and the lack of standardization in assessing the access to and use of health services.
The results show that the conditions related to difficulties in access are amenable to intervention, which is fundamental for health promotion and disease prevention among older adults, in order to avoid adverse outcomes, especially regarding the difficulties of using health services. Knowledge of the factors associated with difficulties in access among older adults allows health actions aimed at this group to be developed in order to minimize such difficulties 15.
CONCLUSION
Difficulty in accessing the health services sought was reported by a significant proportion of the older adult participants of the study. The main conditions associated with such difficulty were not having a partner; not knowing how to read; having a negative self-perception of one's own health; and being classified as frail. In addition, greater difficulties were reported in seeking care from public services.
The present study demonstrates the need for investments aimed at the health of older adults, in order to ensure the care of this growing population. Older adults and health services are closely linked and the relationship between the two may reflect inequities that negatively impact the quality of life of this population, which depends on integrated and effective public policies.
Chitosan-clay nanocomposite as a drug delivery system of ibuprofen
Chitosan/montmorillonite nanocomposite films were prepared by the solvent evaporation method to immobilize the drug ibuprofen (IBU) and delay its release in a medium that simulates the environment of the gastrointestinal tract. The effects of montmorillonite, at different mass proportions (10, 20, and 50%), on the morphological and physical properties of the films were studied. The samples were characterized by X-ray diffraction (XRD), infrared spectroscopy (FTIR), scanning electron microscopy (SEM), degree of swelling, drug encapsulation efficiency, and drug release. XRD showed that the incorporation of montmorillonite into chitosan led to the formation of nanocomposites of ordered morphology. The infrared spectra confirmed the good interaction between montmorillonite and chitosan in the nanocomposites; this interaction favored entrapment of the IBU and reduced the diffusion coefficient in the studied systems. The micrographs confirmed the formation of dense and uniform films. The controlled release profile, especially for the nanocomposite with 10% clay mass, showed a slow drug release rate. The incorporation of montmorillonite at different proportions produced different morphologies, with good encapsulation efficiency and an adequate profile for the controlled release of the drug.
Introduction
Clay-polymer nanocomposites have been the subject of study in multiple areas of knowledge. In particular, these hybrids show interesting biomedical properties when produced with biopolymers, such as chitosan, associated with layered silicates (clay minerals) (Barbosa et al., 2018). Clay-biopolymer nanocomposites are a novel class of versatile materials with an expanding range of possible applications involving drug delivery systems and tissue engineering (Mukhopadhyay et al., 2020).
Due to their cation exchange capacity and adsorptive potential, clay minerals can interact with drug molecules and modulate their release. Sustained release of drugs, controlled by desorption from clay mineral excipients, has been found favorable in the case of antibiotics, amphetamines, and anti-inflammatory drugs (Dziadkowiec et al., 2017). Montmorillonite (MMT) exhibits mucoadhesion and the capability to cross the gastrointestinal (GI) barrier, and adsorbs bacterial and metabolic toxins such as steroidal metabolites. In particular, ibuprofen (IBU) can be accommodated in the interlayer regions of MMT, which helps to achieve adequate sustained release (Manzoor et al., 2018).
Among the biopolymers, chitosan, a cationic polysaccharide consisting of D-glucosamine and N-acetyl glucosamine units, has been extensively applied in drug delivery systems because it is biodegradable, biocompatible, nontoxic, nonimmunogenic, noncarcinogenic, antibacterial, and mucoadhesive. In addition, this polymer not only protects drug molecules from degradation by proteolytic enzymes and prolongs the half-life of the drug, but also improves drug bioavailability in vivo (Vukajlovic et al., 2019). The ability of chitosan to form films may permit its extensive use in the formulation of film dosage forms or as drug delivery systems. Biopolymers become adsorbed and intercalated in the interlayer space of smectite clay minerals, driven by electrostatic interactions. As a result, clay minerals may undergo delamination, leading eventually to exfoliation. In such materials, clay mineral layers act as a nanometer-sized phase domain associated with a polymeric matrix. Such a unique structure may induce modulated release of the drug due to interaction with both the polymer and the clay mineral (Vieira et al., 2017).
In therapeutic applications, the efficiency of a drug lies in targeting specific body parts and maintaining the desired concentration level for a longer period. IBU, a non-steroidal anti-inflammatory drug with analgesic and antipyretic properties, has its use limited by side effects, often a consequence of the high plasma levels that follow administration of conventional formulations, and by its gastrointestinal toxicity 8. Additionally, IBU has an adverse systemic effect on gastric mucosal protective agents. Another problematic issue is the fast absorption of the drug (maximum blood concentration levels of IBU are reached within 1-2 h) and its rapid elimination from the plasma (~2 h). Because of these disadvantages, modified drug delivery systems for IBU are desired. Such systems should exhibit sustained release of the drug to decrease dosing frequency, prevent toxic drug concentrations in the body, hinder unsteady release, and minimize the occurrence of side effects. Several authors report modified IBU delivery systems based on different matrices (dos Santos et al., 2018).
By designing controlled delivery systems, the desired drug concentration can be maintained without reaching a toxic level or dropping below the minimum effective level 5. For this purpose, the interactions between the drug, the biopolymer, and the lamellar host must be considered. In the present study, we investigate the effect of chitosan/montmorillonite nanohybrids on physical properties, such as microstructure and swelling behavior, and on the release of the drug IBU.
Methodology
Chitosan (CS) with a deacetylation degree of 92% was obtained from Polymar (Fortaleza, Brazil). Natural montmorillonite (MMT) was provided by Southern Clay Products Inc. (Gonzales, TX, USA) with a cation exchange capacity (CEC), given by the supplier, of 92.6 meq/100 g. This compound was used without any further purification. Ibuprofen (IBU), with a molecular formula C13H18O2 and a molecular weight of 206.28 g/mol (purity ≥ 98%), was acquired from Sigma Aldrich (São Paulo, Brazil) and used as received. Sodium hydroxide, glacial acetic acid, hydrochloric acid and phosphate buffer solution (PBS) were supplied by Sigma Aldrich (São Paulo, Brazil). Ethyl alcohol (purity 99.8%) was obtained from Neon Commercial (São Paulo, Brazil). All aqueous solutions were prepared using distilled water.
Films of the biomaterials developed in this work were produced in order to study the mechanism of drug delivery through the skin (as dressings) and the gastric route (ingested in capsule form). The solution casting method was used to prepare the CS films according to Darder et al. (2003), with some modifications. In brief, chitosan (1% w/v) was dissolved in 1% (v/v) acetic acid and stirred magnetically at 45 °C for about 2 h until complete dissolution. After complete dissolution, the solution was filtered through Whatman filter paper (pore diameter of 14 μm) to remove any undissolved material. A fixed amount of the resulting solution (30 ml) was poured onto Teflon Petri plates (10 cm diameter) to obtain a uniform polymer film. The plates were stored at room temperature (~23 °C) for 5 days until complete evaporation of the solvent. After complete drying, the chitosan films were neutralized with 1 M NaOH for 30 min. The neutralized films were then washed several times with distilled water to remove the alkali solution. Finally, the chitosan films were dried at room temperature for about 3 days under pressing to make them wrinkle-free.
Films of CS/MMT were prepared following the same methodology. The MMT suspension was prepared by dispersing an appropriate amount of MMT in 100 ml of distilled water. The chitosan solution, prepared as described above, had its pH adjusted to 4.9 with NaOH solution (0.1 M). Afterwards, the chitosan solution was slowly added to the MMT suspension at 50 ± 2 °C under continuous stirring (500 rpm), in amounts chosen to reach final MMT concentrations of 10, 20 and 50 wt% (based on chitosan weight). The CS/MMT mixtures were further stirred at 1200 rpm for another 4 h at 50 ± 2 °C.
Subsequently, the mixtures were filtered through Whatman filter paper (pore diameter of 14 μm) and then cast onto Teflon Petri dishes (10 cm diameter). The castings were dried and neutralized as described above. The films prepared with 10, 20 and 50 wt% of MMT were labeled CS/10MMTf, CS/20MMTf and CS/50MMTf, respectively.
The procedure used to form the CSf and CS/MMTf films loaded with the IBU model drug was similar to the protocol described above. IBU powder (10% w/w based on the chitosan) was dissolved in ethyl alcohol (10 mg/ml). Next, the prepared solution was added drop by drop to the CS solution and CS/MMT mixtures, homogenized under magnetic stirring for 24 hours at room temperature, filtered to remove the excess of unloaded drug, and poured into Teflon Petri dishes (10 cm diameter). Finally, all mixtures formed films after drying at room temperature for ~5 days. The formed films were neutralized, washed and dried using the procedure described above. The resulting materials were referred to as CS/IBUf, CS/10MMT/IBUf, CS/20MMT/IBUf and CS/50MMT/IBUf. Structural characterization of the prepared samples was performed on an FTIR spectrometer (Bruker Vertex 70) operated in ATR (attenuated total reflection) mode. All spectra were collected in the wavenumber range from 4000 to 400 cm−1 over 64 scans, at 4 cm−1 resolution.
The surface morphology of prepared films was analyzed using scanning electron microscopy (SEM, Tescan-model Vega 3). Samples were mounted on aluminum stubs with double-sided carbon adhesive dots, sputter coated with gold, and observed by SEM. Images were taken by applying an electron beam accelerating voltage of 15 kV.
The swelling degree of the CSf and CS/MMTf films was investigated in phosphate buffered saline (PBS) at various time intervals. The films (1 cm × 1 cm), dried at 50 °C for 24 h, were weighed initially (Mdry) and immersed in PBS (pH 7.4) at 37 °C. At predetermined intervals (15, 30, 60, 120 and 180 minutes), the swollen samples were taken out, blotted carefully between tissue papers (without pressing hard) to remove surface-adhered liquid droplets, and then weighed (Mwet). Five replicates were performed for each sample. The swelling degree (SD) was calculated using Equation 2:

SD (%) = [(Mwet − Mdry) / Mdry] × 100    (2)

where Mdry is the dry weight and Mwet the wet weight of the sample. To determine the significance of differences among samples, analysis of variance (ANOVA) was used. Minitab® 19 statistical software was employed to perform the analysis, and Tukey's test was utilized for multiple comparisons. Significance of differences was defined at p < 0.05.
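As an illustration, a minimal sketch of how Equation 2 is applied to raw weighing data; the weights used below are hypothetical examples, not measurements from this study:

def swelling_degree(m_dry, m_wet):
    # Equation 2: SD (%) = (Mwet - Mdry) / Mdry * 100
    return (m_wet - m_dry) / m_dry * 100.0

m_dry = 12.0  # mg, weight of a dried 1 cm x 1 cm film (hypothetical)
wet_weights = {15: 18.5, 30: 21.2, 60: 23.0, 120: 23.8, 180: 24.1}  # min -> mg

for t, m_wet in wet_weights.items():
    print(f"t = {t:3d} min  SD = {swelling_degree(m_dry, m_wet):5.1f} %")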
Samples of the drug-loaded films were cut into four parts, accurately weighed, and put into glass vessels containing 10 ml of PBS (pH 1.2), as illustrated in Figure 1. The samples were maintained at 37 °C until complete dissolution (48 min). Subsequently, the solution (10 ml) was transferred to a dialysis bag (cellulose dialysis membrane, MWCO 12,000-14,000). Before being filled with the solution, the dialysis bag was immersed in distilled water for 2 h to remove any preservatives, rinsed thoroughly with water, and immersed in PBS pH 1.2 to equilibrate for 1 h, according to the protocol described by Dziadkowiec et al. (2017). The bag, sealed with plastic clips, was shaken and immediately immersed in 150 ml of the release medium (PBS pH 1.2), which was sonicated using a constant pulse program for 48 minutes. Aliquots of 3 ml were withdrawn at fixed time intervals (from 48 min up to 100 h); the concentration of IBU was measured in triplicate by UV-Vis spectroscopy, using a spectrophotometer (Perkin Elmer, Lambda 35) and a quartz cuvette at λmax = 222 nm, after which the aliquot was returned to the release medium. The error was expressed as a standard deviation. The drug percentage loading and the encapsulation efficiency were estimated using Equations 3 and 4 (Ambrogi et al., 2018):

Drug loading (%) = (mass of drug in the film / total mass of the film) × 100    (3)

Encapsulation efficiency (%) = (actual mass of drug in the film / theoretical mass of drug added) × 100    (4)

Figure 1. Sample of drug-loaded film (a); cutting of the films (b); and drug-loaded film parts immersed in PBS (c).
Source: Authors.
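A minimal sketch of Equations 3 and 4 applied to hypothetical masses (the values below are illustrative assumptions, not data from this study):

def drug_loading(m_drug_in_film, m_film):
    # Equation 3: drug loading (%) = mass of drug in film / total film mass * 100
    return m_drug_in_film / m_film * 100.0

def encapsulation_efficiency(m_drug_in_film, m_drug_added):
    # Equation 4: EE (%) = actual drug in film / theoretical drug added * 100
    return m_drug_in_film / m_drug_added * 100.0

m_film = 250.0         # mg, total film mass (hypothetical)
m_drug_added = 25.0    # mg, IBU added (10% w/w based on chitosan)
m_drug_in_film = 18.7  # mg, drug retained after washing, e.g. from a UV-Vis assay

print(f"Drug loading: {drug_loading(m_drug_in_film, m_film):.1f} %")
print(f"Encapsulation efficiency: {encapsulation_efficiency(m_drug_in_film, m_drug_added):.1f} %")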
In vitro ibuprofen release measurements were conducted by UV-Vis spectroscopy (Perkin Elmer, Lambda 35) using the dialysis technique. More specifically, drug-loaded CS and CS/MMT films were cut into four parts, accurately weighed, and put into a pre-hydrated cellulose membrane containing 10 ml of PBS at different pHs (1.2 and 7.4). The dialysis bags were then sealed with plastic clips and placed at the bottom of dissolution vessels containing 150 ml of PBS (pH 1.2 or pH 7.4), in order to simulate gastric fluid and blood pH, respectively. The systems were sonicated at the physiological temperature of 37 ± 0.5 °C for 48 minutes and afterwards maintained at the same temperature under continuous shaking (150 rpm) in an incubator shaker (IKA KS4000i). Finally, aliquots of 3 ml were withdrawn at predetermined time intervals up to 100 h, and the concentration of IBU was measured by UV-Vis spectroscopy at λmax = 222 nm. After each measurement, the withdrawn aliquot was returned to the system. Prior to quantifying drug release, calibration curves of IBU in PBS at the two pHs were established. Blank CSf and CS/MMTf films without drug were used as controls. The experiments were performed in triplicate to minimize error variation, and average values were used for further data treatment and plotting. The drug concentration was calculated from the standard curve, and the cumulative release was obtained by the following equation:

Cumulative release (%) = (V0 Σ Ci / w) × 100

where V0 is the volume of the sample (3 ml), Ci is the concentration (mg/l) of released drug collected at time i, and w is the mass of the drug-containing sample (mg).

Results and Discussion

The montmorillonite diffractogram presented a (001) reflection at approximately 6.0°, corresponding to a basal interplanar distance (d001) of 1.47 nm, characteristic of sodium montmorillonite (Paiva, Morales & Díaz, 2008). In the diffractogram of chitosan (CSf), a broad band of low intensity between 8 and 12°, typical of semi-crystalline material, was observed, corroborating the diffractograms presented in other studies (Luo et al., 2017; Baskar & Kumar, 2009). As described by Ogawa et al. (1992), chitosan shows a characteristic crystalline peak around 2θ = 10°, corresponding to the crystalline form of the chitosan structure, with a unit cell of a = 7.76 Å, b = 10.91 Å, c = 10.30 Å and β = 90°, related to the (100) diffraction plane. Pure ibuprofen exhibits a typical reflection at 2θ = 6.06°, the same value found by Zheng et al. (2007). In the diffractogram of the CS/IBUf sample, the characteristic reflections of the biopolymer and the drug were not observed, suggesting dispersion of the IBU at the molecular level in the chitosan matrix (Hua et al., 2010).
This interaction was possibly favored by the electrostatic interaction of the protonated amine groups (-NH3+) with the acetate ions of the chitosan solution, which allowed access to the anion exchange sites (Choi et al., 2016; Tan et al., 2008).
For the CS/50MMTf, CS/20MMTf and CS/10MMTf systems containing IBU (CS/50MMT/IBUf, CS/20MMT/IBUf and CS/10MMT/IBUf), the basal reflection disappeared in the CS/10MMT/IBUf system, suggesting an exfoliated morphology (Barbosa et al., 2009). In the CS/20MMT/IBUf system, the clay peak shifted to a lower 2θ value (2θ = 2.65°), corresponding to an increased interlamellar distance (d001 = 3.31 nm) and suggesting a disordered intercalated morphology tending toward exfoliation. In the CS/50MMT/IBUf system there was a peak around 2θ = 2.76° (d001 = 3.20 nm), a value lower than that of the system without the drug, indicating the formation of a nanocomposite with ordered intercalated morphology.
In general, all systems presented intercalation of chitosan and IBU in the clay lamellae, and the amount of clay mixed with the chitosan affected the morphology of the nanocomposites obtained. For the nanocomposites formed, the reflection around 6.06° characteristic of IBU crystals disappeared. Hua et al. (2010) attributed this behavior to dispersion of the drug at the molecular level in the chitosan matrix, or to its intercalation between the clay layers.
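The interplanar distances quoted above follow from Bragg's law, d = λ / (2 sin θ). A short numerical check, assuming Cu Kα radiation (λ ≈ 0.154 nm; the wavelength is our assumption, since the diffractometer details are not restated here), reproduces the reported d001 values to within rounding:

import math

WAVELENGTH_NM = 0.154  # Cu K-alpha wavelength, assumed

def d_spacing(two_theta_deg):
    # Bragg's law: d = lambda / (2 sin(theta)), with theta = (2-theta) / 2
    theta = math.radians(two_theta_deg / 2.0)
    return WAVELENGTH_NM / (2.0 * math.sin(theta))

for label, two_theta in [("MMT (001)", 6.00),
                         ("CS/20MMT/IBUf", 2.65),
                         ("CS/50MMT/IBUf", 2.76)]:
    print(f"{label}: 2-theta = {two_theta:.2f} deg -> d001 = {d_spacing(two_theta):.2f} nm")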
FTIR spectroscopy was performed to identify specific chemical groups and possible interactions between CS, MMT and IBU in the films, and to detect any structural changes in the films after drug loading.
In Figure 3a, all samples presented a band in the region of 3460-3280 cm−1, attributed to O-H and N-H stretching vibrations. Discrete bands observed around 2914 and 2817 cm−1 were attributed to the asymmetric and symmetric axial stretching of the C-H bonds of the -CH2 and -CH3 groups (Martino et al., 2017). Peaks observed around 1630 cm−1 were assigned to the C=O stretching of the secondary amide (Marchessault, Ravenelle & Zhu, 2006). Bands in the region of 1420-1270 cm−1 were attributed to the symmetric angular deformation of the -CH3 group, indicating the presence of acetamide groups, since the chitosan is not totally deacetylated (Kolhe & Kannan, 2003). Bands in the region of 1135-1026 cm−1 were attributed to C-O in the C-OH, C-O-C and CH2OH groups of the ring (Mincheva et al., 2004). From Figure 3a, the CS/50MMTf, CS/20MMTf and CS/10MMTf films showed bands around 1580 cm−1 corresponding to the vibrational deformation of the protonated amine group (-NH3+), indicating a possible interaction between chitosan and clay (Silva et al., 2013). In the infrared spectra of Figure 3b, all samples showed a strong carbonyl absorption band at 1621 cm−1, and the bands between 3500 and 3100 cm−1 are related to the aromatic ring of ibuprofen. The bands between 3000 and 2800 cm−1 correspond to the alkyl stretching vibrations of ibuprofen.

Source: Authors.

Figure 4 shows the scanning electron micrographs of the films before and after drug loading, and also after in vitro IBU release, at different magnifications. The micrograph of the chitosan (CSf) film presented, in some areas, a uniform, smooth and flat surface, characterizing the film as dense (Marreco et al., 2004). The micrographs of the chitosan/montmorillonite films at the different montmorillonite mass concentrations (CS/50MMTf, CS/20MMTf and CS/10MMTf) showed the presence of small but well-distributed clusters. According to a study carried out by Wang et al. (2005), the formation of agglomerates in chitosan/montmorillonite systems is a result of edge-edge interactions of the hydroxyl groups present in the octahedral layers of montmorillonite. In general, the films loaded with ibuprofen showed good interaction between the components, forming compact films with good surface dispersion, although some agglomerates could still be observed in some systems. In the films examined after the in vitro test, the presence of voids/pores can be attributed to the removal of ibuprofen crystals, consistent with the purpose of the research: the release of the drug when subjected to conditions that mimic absorption processes in the human body.
Source: Authors.

According to the statistical swelling data, at the initial time of the swelling test (t = 15 min) there was no significant influence of the variables (clay and drug). At the second time point analyzed (t = 30 min), CS/20MMT/IBUf showed the greatest absorption of PBS. At the fourth time point studied (t = 2 hours), CS/20MMTf, CS/20MMT/IBUf and CS/10MMTf exhibited the highest swelling values. We suggest the CS/20MMT/IBUf system as a strong candidate for swelling-controlled drug release, since its hydrophilic matrix absorbs fluids while releasing the drug from the polymeric surface, forming a gelatinous polymer layer (malleable state). As the phosphate buffer solution (PBS) hydrates the dry core, the gelled outer layer can be eroded by partial or total solubilization of the polymer (physical erosion). The penetration of the fluid causes the polymer chains to move apart, promoting diffusion of the drug (Lopes et al., 2005).

In vitro drug release was studied in PBS (pH 1.2 and 7.4), and the released drug was quantified from UV-Vis absorbance values. Figure 5 shows the release (%) of ibuprofen versus immersion time for the films. The release profiles at pH 7.4 were different from those at pH 1.2 for all films studied: release was slower (lower slope of the release curve) in the first hours, in contrast to the sudden release that occurred at pH 1.2. This behavior may be related to the dissolution of chitosan in an acidic environment. It is also suggested that, when the compound is homogeneously dispersed in the polymeric chitosan matrix, drug release may involve water penetration into the matrix, promoting swelling, diffusion of the compound through the pores/voids and, finally, erosion of the polymer. The maximum mass (mg), concentration (µg/mL) and IBU released (%) after 100 hours at pH 1.2 and 7.4 are listed in Table 3.

The release kinetics of IBU from the films were evaluated using the model of Korsmeyer et al. (1983). The exponent values found for the IBU release profiles at pH 1.2 and 7.4 are shown in Table 4. The values of n were influenced by the montmorillonite content used in the systems and indicate that, in PBS at pH 1.2 and 7.4, the release of IBU from the CS/IBUf, CS/50MMT/IBUf, CS/20MMT/IBUf and CS/10MMT/IBUf films occurred by a Fickian diffusion mechanism (n < 0.5), corroborating the results presented by Tang et al. (2014), who concluded that the rapid swelling and erosion of the chitosan films had little effect on drug release. It is worth mentioning that the value of b is negative for all release profiles in PBS at pH 1.2 and 7.4 in the mathematical model proposed by Korsmeyer-Peppas. Theoretically, however, the value of b, which represents the rapid initial release of drug ('burst effect'), should be positive. For IBU release in PBS pH 1.2, these negative values may have been caused by the use of dialysis, which restricted rapid diffusion of drug molecules from the internal to the external environment, in line with the result presented by Tan et al. (2014). This effect may also be attributed to release of the drug from the surface of the system matrix, or to changes in the system structure with consequent immediate drug release followed by slower release.
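A minimal sketch of the data treatment described above: the sampled concentrations are corrected for the 3 ml aliquots withdrawn and replaced, converted to cumulative release, and fitted with the Korsmeyer-Peppas model including a burst term (Mt/M∞ = k·t^n + b). The time and concentration values are synthetic placeholders, and the exact correction used by the authors may differ:

import numpy as np
from scipy.optimize import curve_fit

V_MEDIUM = 150.0  # ml, release medium
V_SAMPLE = 3.0    # ml, withdrawn aliquot at each time point
M_DRUG = 10.0     # mg, drug contained in the film piece (hypothetical)

t = np.array([0.8, 2, 5, 10, 24, 48, 72, 100])               # h
c = np.array([2.1, 4.0, 7.2, 10.5, 14.8, 17.9, 19.6, 20.4])  # mg/l measured

# Add back the drug removed in all earlier aliquots, then convert to percent.
released_mg = (c * V_MEDIUM + np.cumsum(np.r_[0.0, c[:-1]]) * V_SAMPLE) / 1000.0
release_pct = released_mg / M_DRUG * 100.0

def korsmeyer_peppas(t, k, n, b):
    # Mt/Minf = k * t^n + b; for films, n < 0.5 indicates Fickian diffusion
    return k * t**n + b

(k, n, b), _ = curve_fit(korsmeyer_peppas, t, release_pct, p0=(10.0, 0.4, 0.0))
print(f"k = {k:.2f}, n = {n:.3f}, b = {b:.2f}")
print("Fickian diffusion" if n < 0.5 else "anomalous transport")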
Conclusion
The preparation of chitosan/montmorillonite nanocomposite films, at mass proportions of 10, 20, and 50% clay, by the solvent evaporation method gave reproducible results for immobilizing the drug ibuprofen (IBU) and delaying its release in an environment that simulates the gastrointestinal tract. During preparation of the membranes, some ibuprofen was lost in the washing step, reducing the IBU encapsulation efficiency and, consequently, the drug loading capacity of the systems.
The X-ray diffraction results showed that the incorporation of montmorillonite into chitosan led to the formation of nanocomposites of ordered morphology, tending to exfoliation in CS/50MMT/IBUf, CS/20MMT/IBUf, and CS/10MMT/IBUf. The photomicrographs obtained by scanning electron microscopy verified the formation of dense films and the probable formation of voids in their structure due to release of the drug from the system. The infrared spectra confirmed the good interaction between montmorillonite and chitosan in the nanocomposites; this interaction favored entrapment of the IBU and reduced the diffusion coefficient in the studied systems. The mathematical model for release from the films, derived from Fick's diffusion laws, represented the data well in PBS at pH 1.2 and 7.4, with typical Fickian behavior. The nanocomposites showed a controlled release profile, especially the chitosan/montmorillonite/IBU nanocomposite with 10% clay mass (CS/10MMT/IBUf), which gave the slowest drug release rate.
Transcriptional rewiring over evolutionary timescales changes quantitative and qualitative properties of gene expression
Evolutionary changes in transcription networks are an important source of diversity across species, yet the quantitative consequences of network evolution have rarely been studied. Here we consider the transcriptional ‘rewiring’ of the three GAL genes that encode the enzymes needed for cells to convert galactose to glucose. In Saccharomyces cerevisiae, the transcriptional regulator Gal4 binds and activates these genes. In the human pathogen Candida albicans (which last shared a common ancestor with S. cerevisiae some 300 million years ago), we show that different regulators, Rtg1 and Rtg3, activate the three GAL genes. Using single-cell dynamics and RNA-sequencing, we demonstrate that although the overall logic of regulation is the same in both species—the GAL genes are induced by galactose—there are major differences in both the quantitative response of these genes to galactose and in the position of these genes in the overall transcription network structure of the two species. DOI: http://dx.doi.org/10.7554/eLife.18981.001
Introduction
Gene regulatory networks undergo significant divergence over evolutionary time (Carroll, 2005;Davidson, 2006;Doebley and Lukens, 1998;Tuch et al., 2008;Wohlbach et al., 2009;Wray, 2007). Except for the simplest cases of changes in the regulation of a single gene (Chan et al., 2010;Gompel et al., 2005;Tishkoff et al., 2007), we do not fully understand how these evolutionary changes occur or how they impact modern species. In particular, little attention has been paid to the quantitative consequences of transcriptional rewiring. Here we describe an evolutionary analysis of a transcriptional circuit controlling the production of enzymes that convert galactose to glucose-1-phosphate via the Leloir pathway (Kew and Douglas, 1976); these enzymes (a kinase, epimerase and transferase) are conserved across all kingdoms of life (Holden et al., 2003).
In most organisms, expression of the three Leloir enzymes increases when galactose is present in the growth medium (Holden et al., 2004); this regulation has been extensively studied in the budding yeast S. cerevisiae (Campbell et al., 2008; Conrad et al., 2014; Giniger et al., 1985; Johnston, 1987; Lohr et al., 1995; Ptashne and Gann, 2003; Sellick and Reece, 2006; Traven et al., 2006). Here the zinc cluster transcriptional regulator Gal4 binds to short sequence motifs in the upstream region of the genes encoding these enzymes (GAL1, GAL7 and GAL10) and induces their transcription when galactose is present and glucose is absent. The expression of GAL1 has been estimated to be <1 mRNA molecule/10 cells in glucose and ~35 mRNA molecules/cell in galactose (Iyer and Struhl, 1996). Due in part to this high induction ratio (>350 fold), Gal4 has been adapted as a tool to artificially turn genes on and off in many animal and plant species (Brand and Perrimon, 1993; Fischer et al., 1988; Hartley et al., 2002; Kakidani and Ptashne, 1988; Waki et al., 2013; Webster et al., 1988).
The Candida albicans genome contains an unmistakable ortholog of each Leloir pathway enzyme (Fitzpatrick et al., 2010; Martchenko et al., 2007a), arranged in a cluster as they are in S. cerevisiae (Fitzpatrick et al., 2010; Slot and Rokas, 2010). It also contains a clear ortholog of the transcriptional regulator Gal4 (Martchenko et al., 2007b; Sellam et al., 2010). However, in C. albicans, which last shared a common ancestor with S. cerevisiae at least 300 million years ago (Taylor and Berbee, 2006), Gal4 does not play a role in expressing the three GAL enzymes (Martchenko et al., 2007a); instead, it has been implicated as having a subsidiary role in regulating glucose utilization (Askew et al., 2009). Despite being uncoupled from Gal4, the three GAL genes in C. albicans are transcriptionally activated when galactose is present and glucose is absent in the growth medium. While a few fungal species have lost the GAL1, GAL7, and GAL10 gene cluster entirely, most have retained it, including several pathogenic species closely related to C. albicans such as C. dubliniensis, C. tropicalis and C. parapsilosis (Fitzpatrick et al., 2010; Hittinger et al., 2004; Slot and Rokas, 2010). Since the only known environmental niche for C. albicans is in or on warm-blooded animals (Odds, 1988), it seems very likely that the three GAL genes and their regulation are important for the ability of C. albicans to survive in its host.
In this paper, we use a variety of experimental and bioinformatics approaches to establish the mechanisms through which the GAL1, GAL7, and GAL10 genes are transcriptionally activated in C. albicans. By considering outgroup species, we have also inferred the order of several key events that led to the difference in the circuitry between S. cerevisiae and C. albicans. Finally, we compare the quantitative output of the different regulatory schemes used in C. albicans and S. cerevisiae and document several striking differences.
Identification of galactose metabolism circuit components in C. albicans
We experimentally verified that the closest matches to the GAL1, GAL7 and GAL10 genes in C. albicans did indeed code for the enzymes necessary for galactose metabolism. Each of these genes was deleted individually and the resulting mutants were tested for growth on media that included galactose as the sole sugar (C. albicans is diploid, so two rounds of disruption were needed per gene [Hernday et al., 2010]). To force the cells to ferment galactose in order to grow, we included the respiration inhibitor Antimycin A (Askew et al., 2009). We found that the C. albicans parent strain grows normally under these conditions but none of the three knockout strains could grow in the presence of Antimycin A and galactose as the sole sugar (Figure 1B, Figure 1-figure supplement 1), a behavior similar to that of S. cerevisiae GAL mutants (Dudley et al., 2005). Growth in glucose was unaffected by the deletions. From these results, we conclude that the C. albicans GAL1, GAL7 and GAL10 genes are indeed the functional orthologs of the S. cerevisiae GAL genes.
We next considered whether GAL1, GAL7 and GAL10 are required for C. albicans to proliferate in different animal models of infection. Previous experiments in mice have shown that Gal10, but not Gal1, is required for C. albicans to proliferate in a commensal (gut colonization) model of infection, while neither protein is required for C. albicans to disseminate in a systemic (tail-vein injection) model of infection (Pérez et al., 2013). Here we tested whether the GAL1, GAL7, and GAL10 genes are required for colonization in a rat catheter model. In this infection model, a catheter is placed in the jugular vein of a rat and C. albicans strains are introduced to measure their ability to colonize the catheter by forming a biofilm; this model was designed to recapitulate catheter infections in humans and has been extensively validated (Nobile et al., 2012). We found that the knockout strains of GAL1, GAL7 and GAL10 all showed severe defects (compared to a matched parent strain) in this infection model (Figure 2, Figure 2-figure supplement 1). These were some of the most pronounced defects observed for any previously studied gene knockout in this C. albicans biofilm model, and the results clearly show that the GAL1, GAL7, and GAL10 genes are required for this well-characterized colonization model.

We next turn to the regulators of the GAL enzymes. As a result of a whole genome 'duplication,' now known to be a hybridization between two closely related species (Marcet-Houben and Gabaldón, 2015), S. cerevisiae has a paralog of GAL1, called GAL3, which plays a signaling role in activating the GAL genes in S. cerevisiae (Bhat and Murthy, 2001; Hittinger and Carroll, 2007; Sellick and Reece, 2006, Figure 1A). However, C. albicans branched before this duplication and has only a single GAL1. To test whether the C. albicans Gal1 serves as both an enzyme (like Gal1) and an upstream signaling component (like Gal3), we measured the expression of the GAL1 promoter using a GFP reporter in a strain deleted for GAL1. In this reporter (pGAL1-GFP), we replaced the GAL1 open reading frame with GFP (Cormack et al., 1997). Expression of this reporter in the GAL1 deletion strain was significantly lower in the presence of galactose (Figure 1C), indicating that Gal1 has an activating upstream signaling role in C. albicans, in addition to its enzymatic role. This situation is similar to that in K. lactis, another 'pre whole-genome hybridization' species (Anders and Breunig, 2011; Hittinger and Carroll, 2007; Meyer et al., 1991; Rubio-Texeira, 2005).

[Figure 1 (caption, in part): (B) Serial dilutions of a C. albicans parent strain and isogenic strains deleted for GAL1, GAL7, and GAL10 spotted onto plates containing 2% glucose or 2% galactose plus Antimycin A, imaged after 6 days at 30°C; independently constructed knockouts behaved identically (Figure 1-figure supplement 1). (C) GFP expression from a pGAL1-GFP reporter in the parent strain and in strains deleted for GAL1 or orf19.6899 (a potential ortholog of S. cerevisiae GAL80), measured by flow cytometry after 6 hr of growth; mean expression levels in arbitrary units, with standard errors from three independent measurements.]
The C. albicans transcriptional regulator Gal4 is clearly orthologous to S. cerevisiae Gal4, a conclusion supported by extensive phylogenetic analysis (Martchenko et al., 2007b) as well as by the fact that the Gal4 protein from both species has the same DNA-binding specificity (Askew et al., 2009). As discussed above, Gal4 is not required for regulation of the GAL enzymes in C. albicans (Martchenko et al., 2007a). We note that C. albicans also contains a gene (orf19.6899) with ~40% amino acid identity to S. cerevisiae Gal80 (Wapinski et al., 2007), the regulatory protein that prevents Gal4 from activating the GAL genes in the absence of galactose (Torchia et al., 1984) (Figure 1A). To test whether this gene has a role in galactose regulation in C. albicans, we tagged one copy of the GAL1 promoter with GFP (Cormack et al., 1997) and measured gene expression in parent and Δ/Δorf19.6899 mutant cells. We found that the expression of GAL1 in parent and knockout cells was similar in response to galactose and glucose (Figure 1C), demonstrating that, like Gal4, the Gal80 ortholog does not play a significant role in regulating the GAL1, GAL7, and GAL10 genes in C. albicans. Taken together, the experiments described above, in combination with observations from the literature, establish that (1) the C. albicans GAL1, GAL7 and GAL10 orthologs encode enzymes needed to convert galactose to glucose-1-phosphate, (2) neither Gal4 nor Gal80, the key GAL regulators in S. cerevisiae, control the GAL genes in C. albicans and (3) like the pre-hybridization species K. lactis, the C. albicans GAL1 gene doubles as both an enzyme and a signaling component.
Identification of transcriptional regulator(s) controlling galactose metabolism in C. albicans
Given that Gal4 is not needed to activate the three C. albicans GAL genes in response to galactose (Askew et al., 2009; Martchenko et al., 2007a), some other transcriptional regulator(s) must carry out this function. To identify this protein, we utilized a collection of transcription factor knockout strains in C. albicans (Fox et al., 2015; Homann et al., 2009) and assayed them for growth defects on media containing Antimycin A (to prevent respiration) and galactose as the only sugar. We screened 212 knockout strains and found only two, Δ/Δrtg1 and Δ/Δrtg3, that grew several orders of magnitude slower than the parent strain under these conditions (Figure 3A-B, Figure 1-figure supplement 1). This response is specific to galactose, as these cells grow at normal levels on glucose, even in the presence of Antimycin A (Figure 3A). Adding back the RTG1 and RTG3 genes to their respective knockout strains restores growth on galactose + Antimycin A to levels comparable to the parent strain (Figure 3A, Figure 1-figure supplement 1). We note that strains knocked out for CPH1 and RGT1, transcriptional regulators previously implicated in galactose metabolism in C. albicans (Brown et al., 2009; Martchenko et al., 2007a), were represented in our library but did not display a significant galactose-specific growth defect (Figure 3A, Figure 3B-source data 1).
To test whether Rtg1 and Rtg3 regulate expression of the three GAL genes in C. albicans, we measured GAL1-GFP expression (using a strain where the GFP coding region precisely replaced the GAL1 coding region in one of the two GAL1 copies) in each of the single Δ/Δrtg1 and Δ/Δrtg3 knockout strains as well as in a Δ/Δrtg1 Δ/Δrtg3 double mutant we constructed. GAL1 expression in the presence of galactose was significantly reduced in the Δ/Δrtg1 strain and in the double mutant (Figure 3C). Although the RTG3 deletion strain had a profound growth defect on galactose (Figure 3A, Figure 1-figure supplement 1B), the deletion showed no significant effect on the steady-state levels of GAL1 expression (Figure 3C).
In S. cerevisiae, Rtg1 and Rtg3 are basic helix-loop-helix transcriptional regulators that form a heterodimer to regulate metabolic signaling in response to mitochondrial dysfunction (Jia et al., 1997). The two proteins are similar to each other in amino acid sequence, with 34% identity. Knockouts of RTG1 in S. cerevisiae strongly reduce this regulation, while RTG3 knockouts show a milder effect (Hashim et al., 2014; Kemmeren et al., 2014). We note that the rtg1 and rtg3 knockout strains were independently identified 'blindly' in our C. albicans screen, providing further evidence that the two proteins work together as they do in S. cerevisiae.

Do Rtg1 and Rtg3 directly regulate the GAL genes in C. albicans?
We scanned the upstream regions (up to 800 base pairs) of the GAL1, GAL7 and GAL10 genes in C. albicans and the orthologous GAL genes from five closely related members of the CTG clade (Figure 3-figure supplement 1) to search for conserved cis-regulatory elements (Martyanov and Gross, 2011). The CTG clade, so named because the CTG codon is translated as serine instead of the conventional leucine, represents approximately 170 million years of divergence (McManus and Coleman, 2014). We compared these results to similar (in essence, control) scans in S. cerevisiae and five other members of the Saccharomycotina and Kluyveromyces clades. As expected, the top-scoring motif from the GAL genes in the Saccharomycotina and Kluyveromyces clades was the well-documented Gal4 motif (Guarente et al., 1982), with two half-sites spaced 11 nucleotides apart (Figure 3-figure supplement 1). This motif was not detected in the three GAL genes of C. albicans or any member of the CTG clade. Instead, the top motif in these 6 species is a longer palindromic motif, similar to the Rtg1-Rtg3 motif that was identified in C. albicans (Pérez et al., 2013) and S. cerevisiae (Jia et al., 1997).

[Figure 3 (caption): Rtg1-Rtg3 regulate galactose-mediated activation of the GAL genes in C. albicans. (A) Serial dilutions of a parent strain, strains deleted for GAL1, GAL4, CPH1, RTG1 and RTG3, and the corresponding addback strains, spotted onto 2% glucose or 2% galactose plates with Antimycin A and imaged after 6 days at 30°C. (B) Histogram of growth for 212 transcription factor knockout strains on 2% galactose + Antimycin A; only Δ/Δrtg1 and Δ/Δrtg3 showed severe growth defects (Figure 3-source data 1), and neither showed a defect on glucose. (C) GFP expression driven by the GAL1 upstream region in the parent strain and strains deleted for RTG1, RTG3 or both, measured by flow cytometry after 6 hr in 2% galactose; means with standard errors from three independent measurements. (D) A GFP reporter carrying Rtg1-Rtg3 consensus motifs (found in all three C. albicans GAL regulatory regions) ligated into a promoter lacking upstream regulatory sequences, integrated into the parent strain and a strain deleted for both RTG1 and RTG3.]

Two experiments confirm that this Rtg1-Rtg3 motif is responsible for regulating the three GAL genes in C. albicans. First, previous whole-genome ChIP experiments from our lab (carried out for entirely different purposes) showed Rtg1 and Rtg3 binding peaks over the Rtg1-Rtg3 motifs in the upstream regions of all three GAL genes (Pérez et al., 2013, Figure 3-figure supplement 1). Second, we tested whether the Rtg1-Rtg3 motif was sufficient to bring about galactose-induced transcription in C. albicans.
We inserted two copies of a consensus Rtg1-Rtg3 motif (derived from the upstream regions of the three C. albicans GAL genes, Figure 3-figure supplement 1) into a truncated CYC1 promoter driving GFP and measured gene expression in cells grown in glucose and galactose. The construct was induced by galactose, compared to no sugar (Figure 3D); this activation was absent when the motif was omitted from the construct or when RTG1 and RTG3 were deleted from the genome (Figure 3D). We note that the Rtg1-Rtg3 motifs, although sufficient for induction by galactose, do not completely recapitulate the behavior of the natural pGAL1 promoter; in particular, the Rtg1-Rtg3 motif construct was not repressed in the presence of glucose. This observation indicates that additional cis-regulatory sequences present in the natural GAL1 promoter are needed for glucose repression. Taken together, the results of the genetic screen, the similarity of the C. albicans GAL gene cis-regulatory element to the S. cerevisiae Rtg1-Rtg3 binding site, the ChIP experiment in C. albicans, and the reporter expression experiments show that Rtg1 and Rtg3 are responsible for the galactose-inducible expression of the three C. albicans GAL genes by binding directly to the cis-regulatory sequences located upstream of the genes.
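As an illustration of the kind of motif scan described above, a minimal sketch that matches an IUPAC consensus against both strands of an upstream region; the consensus string and the promoter sequence below are placeholders, not the actual Rtg1-Rtg3 motif or a real C. albicans sequence:

import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "W": "[AT]", "S": "[CG]", "K": "[GT]", "M": "[AC]", "N": "[ACGT]"}

def consensus_to_regex(consensus):
    # Expand an IUPAC consensus (e.g. "GGNCAC") into a regular expression.
    return re.compile("".join(IUPAC[ch] for ch in consensus))

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan(promoter, consensus):
    # Report motif hits on both strands, in promoter coordinates.
    pat = consensus_to_regex(consensus)
    hits = [(m.start(), "+", m.group()) for m in pat.finditer(promoter)]
    hits += [(len(promoter) - m.end(), "-", m.group())
             for m in pat.finditer(revcomp(promoter))]
    return sorted(hits)

promoter = "TTGGCACATTTTAAGGGCCACATGTGCCAAAT"     # placeholder sequence
for pos, strand, match in scan(promoter, "GGCCAC"):  # placeholder consensus
    print(f"position {pos:3d}  strand {strand}  {match}")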
As mentioned above, we do not believe that RTG1 and RTG3 are the only regulators of the GAL genes in C. albicans. Their binding motif alone produces a 5-fold induction by galactose, which is almost completely dependent on Rtg1-Rtg3; however, glucose repression of the GAL genes is not recapitulated by this motif and likely lies in control sequences outside of it. In addition, the intact GAL1 regulatory region exhibits a 12-fold induction, some of which still remains in the Δ/Δrtg1 Δ/Δrtg3 double mutant. For example, it is possible that Rgt1 [a transcriptional regulator implicated in the study of Brown et al. (2009)] and/or Cph1 [implicated by Martchenko et al. (2007a)] contributes to this residual induction, even though neither deletion strain shows a growth defect on galactose (Figure 3-source data 1).
Measuring circuit output of GAL gene expression in S. cerevisiae and C. albicans
To determine whether the transcriptional rewiring of the GAL genes between S. cerevisiae and C. albicans has an impact on the quantitative output of the circuit in each species, we compared the expression dynamics of GAL1 between the W303 strain of S. cerevisiae and the SC5314 strain of C. albicans. In each species, the GAL1 promoter was fused to GFP; to optically distinguish between the species, C. albicans was engineered to also express a constitutive fluorescent protein, Rpl26b-mCherry (Figure 4A). These two species were grown individually in media lacking a sugar source; they were then combined in equal proportions in 96 different wells, each containing a different combination of galactose and glucose (Figure 4D). Expression of the reporters in single cells was continuously monitored in these populations every 20 min for ~10 hr using an automated flow cytometry-based fermentation system that allows continuous sampling (Zuleta et al., 2014).
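A minimal sketch of the per-cell normalization implied by this setup (GFP fluorescence divided by side scatter, then summarized per 20-min time point); the event table and column names below are our own synthetic stand-ins, not output of the actual instrument:

import pandas as pd

# Synthetic flow-cytometry events: one row per cell (hypothetical values).
events = pd.DataFrame({
    "time_min": [0, 0, 0, 20, 20, 20, 40, 40, 40],
    "gfp":      [110.0, 95.0, 130.0, 400.0, 350.0, 420.0, 900.0, 870.0, 950.0],
    "ssc":      [100.0, 90.0, 120.0, 105.0, 95.0, 110.0, 100.0, 98.0, 104.0],
})

# Normalize per-cell GFP by side scatter, then take the population median
# at each sampling time to build an induction trajectory.
events["gfp_norm"] = events["gfp"] / events["ssc"]
trajectory = events.groupby("time_min")["gfp_norm"].median()
print(trajectory)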
These experiments confirm that, despite the evolutionary wiring change, the basic logic of the galactose circuit is preserved between C. albicans and S. cerevisiae: GAL1 expression increases in galactose and decreases in glucose (Figure 4C). However, there are several obvious differences in the quantitative behaviors of the two circuits. (1) The dynamic range of activation (ON vs OFF) is much larger in S. cerevisiae (~900 fold) than in C. albicans (~12 fold, Figure 4B) [...] observations that C. albicans is able to respond to very low concentrations of sugar (Rodaki et al., 2009). (4) The opposite behavior is observed in the sensitivity of each species to glucose. In the presence of saturating concentrations of galactose, half-maximal repression by glucose occurs at lower levels in S. cerevisiae than in C. albicans (Figure 4-figure supplement 1). (5) A key property of the S. cerevisiae network is bimodal expression of pGAL1; that is, at intermediate levels of galactose and glucose, the population consists of mixtures of fully ON and fully OFF cells (Biggar and Crabtree, 2001; Venturelli et al., 2015). No bimodality is evident in C. albicans across a large range of glucose and galactose ratios (Figure 4).

In comparing the physiologic response of the GAL genes (or any other genes for that matter) between different species, it is important to have some knowledge of the variation in response among individuals of the same species. Several groups have shown that the expression dynamics of GAL1, specifically its potent induction and bimodality, is conserved across many isolates of S. cerevisiae (Nogi, 1986; Wang et al., 2015; Warringer et al., 2011), including W303, the strain used in our analysis (Ralser et al., 2012). To determine whether different C. albicans isolates vary in their response to galactose, we measured GAL1 expression dynamics using 11 different patient isolates of C. albicans, all from different clades (Blignaut et al., 2002; Lockhart et al., 1996; Odds et al., 2007; Pujol et al., 2002), and including strains isolated from different anatomical sites of infection and different parts of the world (Angebault et al., 2013; Hirakawa et al., 2015; Odds et al., 2007; Shin et al., 2011; Wu et al., 2007, Supplementary file 1). Each isolate was engineered so that the GAL1 promoter was fused to GFP; the isolates were then analyzed across a wide range of galactose concentrations with a mCherry-marked SC5314 strain of C. albicans as a control (Figure 4-figure supplement 2). We observed that, despite some small differences (Figure 4-figure supplement 2), the qualitative and quantitative aspects of GAL1 induction were similar across all 12 isolates, including SC5314, the lab strain used for most of our experiments (Figure 4-figure supplement 2). None of the five differences documented between S. cerevisiae and C. albicans were observed between any two C. albicans isolates (Figure 4-figure supplement 2). We conclude from these experiments that the transcriptional responses of S. cerevisiae and C. albicans to galactose, although similar in overall logic, differ in almost all of their quantitative output features. Moreover, these differences are characteristic of each species as a whole and not of a particular isolate.

[Figure 4 (caption, in part): pGAL1-GFP expression in S. cerevisiae (top) and C. albicans (bottom) plotted as heatmaps of cell density, where cell density (the number of cells normalized by the maximum number of cells) represents the fraction of the population at a given expression level; the x-axis is galactose concentration in the absence of glucose and the y-axis is fluorescent expression, re-plotted from row 1 of Figure 4D. (D) Behavior of GAL1 across a ~2000-fold range of galactose and a ~128-fold range of glucose concentrations, sampled every 20 min for ~10 hr; red dots indicate C. albicans GAL1 expression and green dots S. cerevisiae GAL1 expression; the y-axis is fluorescence normalized by side scatter, spanning three orders of magnitude; the panel shown in B is indicated by the black square in the top row.]
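Half-maximal induction and repression points like those discussed above are typically extracted by fitting a dose-response curve; a minimal sketch assuming a Hill-type response, with synthetic data points and starting guesses of our own choosing:

import numpy as np
from scipy.optimize import curve_fit

gal = np.array([0.002, 0.008, 0.03, 0.125, 0.5, 2.0])  # % galactose
gfp = np.array([1.2, 1.6, 3.5, 8.0, 11.0, 11.8])       # a.u., population mean

def hill(x, basal, vmax, ec50, h):
    # Hill function: basal expression plus a saturable induced component.
    return basal + vmax * x**h / (ec50**h + x**h)

(basal, vmax, ec50, h), _ = curve_fit(hill, gal, gfp, p0=(1.0, 10.0, 0.05, 1.0))
print(f"EC50 = {ec50:.3f}% galactose, Hill coefficient = {h:.2f}")
print(f"Dynamic range = {(basal + vmax) / basal:.1f}-fold")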
Comparing the genes induced by galactose in C. albicans to those induced in S. cerevisiae

We next tested whether the rewiring of the GAL genes had consequences for the regulons that these genes are part of. We compared, by RNA-sequencing, the genes induced in S. cerevisiae and C. albicans by galactose and glucose, relative to a medium lacking a sugar (Figure 5, Figure 5-figure supplements 1 and 2). In S. cerevisiae, consistent with previous work, we observed very high (>275-fold) galactose-induced expression of the three genes encoding the Leloir enzymes (GAL1, GAL7, GAL10) as well as that encoding the galactose permease GAL2 (Figure 5A). In C. albicans, which lacks a GAL2 ortholog, GAL1, GAL7 and GAL10 were all induced by galactose, but to a much lesser extent than in S. cerevisiae (Figure 5B). As discussed above, this lower induction ratio is due to both higher basal expression (expression in media lacking a sugar) and lower induced levels (in galactose). These observations are all consistent with the single-cell measurements described above; they also show that the pGAL1 reporter is a good proxy, in both species, for the GAL genes in general.
Next, we used this RNA-seq data to determine whether the complete set of genes induced by galactose differs between S. cerevisiae and C. albicans. In S. cerevisiae, nearly all (28/30) of the genes induced by galactose besides GAL1, GAL7, GAL10 and GAL2 were induced to a much lesser extent (at least two-fold but less than 30-fold in two independent experiments, with p-values < 0.01). Most of these 30 genes are involved in some aspect of carbohydrate metabolism (Cherry et al., 2012, Supplementary file 2). In C. albicans, 33 genes met the same criteria for being galactose-induced, yet none of them showed the higher levels of induction characteristic of the S. cerevisiae GAL genes; the majority were induced between two-fold and 13-fold. These genes are annotated as being involved in pathogenesis, biofilm formation, and filamentous growth, as well as carbohydrate metabolism (Inglis et al., 2012, Supplementary file 3). Except for the GAL1, GAL7 and GAL10 genes, there was little overlap (only 2 additional genes, RHR2 and PFK27, annotated in S. cerevisiae as enzymes involved in sugar metabolism) between the S. cerevisiae and C. albicans galactose-induced genes (Figure 5-figure supplement 2, Supplementary file 3). Indeed, many of the galactose-induced genes in C. albicans did not have an identifiable ortholog in S. cerevisiae and vice versa (Byrne and Wolfe, 2005; Fitzpatrick et al., 2010; Maguire et al., 2013, Figure 5-figure supplement 2). These observations indicate that the wiring change between S. cerevisiae and C. albicans maintained the galactose induction of GAL1, GAL7, and GAL10 but changed the remaining genes in the galactose-induced regulon.
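The induction criterion quoted above (at least two-fold but less than 30-fold in two independent experiments, p < 0.01) maps onto a simple filter; a minimal sketch with pandas, where the column names and example rows are hypothetical, not the study's actual tables:

import pandas as pd

df = pd.DataFrame({
    "gene":    ["GAL1", "GAL7", "RHR2", "PFK27", "ACT1"],
    "fc_rep1": [310.0, 280.0, 4.2, 2.8, 1.1],  # fold change, galactose vs no sugar
    "fc_rep2": [295.0, 305.0, 3.9, 2.5, 1.0],
    "pval":    [1e-9, 1e-9, 2e-4, 6e-3, 0.4],
})

# Modestly induced: >= 2-fold and < 30-fold in both replicates, p < 0.01.
modest = (
    df["fc_rep1"].between(2, 30, inclusive="left")
    & df["fc_rep2"].between(2, 30, inclusive="left")
    & (df["pval"] < 0.01)
)
print("Modestly galactose-induced:", df.loc[modest, "gene"].tolist())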
Comparing the signals used to induce GAL genes in C. albicans to those used to induce the GAL genes in S. cerevisiae

The inclusion of GAL1, GAL7 and GAL10 in a larger regulon controlled by Rtg1 and Rtg3 suggests that signals besides galactose may induce the regulon in C. albicans. Consistent with this idea, the C. albicans GAL genes were observed, in a genome-wide study, to be induced in response to N-acetylglucosamine (GlcNAc), a sugar derivative also known to induce genes involved in pathogenesis, biofilm formation and filamentous growth (Gunasekera et al., 2010; Kamthan et al., 2013). To further examine this observation, we measured the expression of the GAL1 promoter fusion in both C. albicans and S. cerevisiae in response to GlcNAc and observed that GlcNAc induced expression of the GAL genes in C. albicans, but not S. cerevisiae (Figure 6A). Moreover, this induction is also observed with our test construct, which contained two Rtg1-Rtg3 binding sites driving expression of a heterologous promoter (Figure 6B). The observed GlcNAc induction of both the artificial constructs and the natural GAL1 promoter requires Rtg1-Rtg3 (Figure 6A-B). While the GAL genes are induced in response to GlcNAc, they are not required for growth on GlcNAc as a sole sugar source (Figure 6-figure supplement 1).
From these experiments, we conclude that, as a direct result of transcriptional rewiring, the regulon to which the GAL1, GAL7, and GAL10 genes belong differs substantially between S. cerevisiae and C. albicans. Moreover, the regulation of the three Leloir enzymes has undergone an important qualitative change; in C. albicans, they can be induced by sugars other than galactose.

Inferring the evolutionary transition of GAL regulation

So far, we have documented a wiring difference in GAL1, GAL7, and GAL10 gene regulation between the Saccharomyces and Candida clades; here we address when the change occurred. Specifically, we asked whether Gal4 or Rtg1-Rtg3 was the regulator of these genes in the ancestor of C. albicans and S. cerevisiae. To determine this, we examined Yarrowia lipolytica, an outgroup species, that is, a species that branched before the Candida and Saccharomyces clades diverged (Figure 7A). The Y. lipolytica genome contains clear orthologs for Gal1, Gal7 and Gal10. Moreover, the ability of Y. lipolytica to metabolize galactose increases when these genes are overexpressed (Lazar et al., 2015), indicating that they carry out the same role as the orthologous genes in S. cerevisiae and C. albicans. Y. lipolytica lacks a clear ortholog for Gal4, the closest relatives being other zinc cluster proteins (Zn(II)2Cys6) lacking the surrounding amino acid sequences characteristic of the Gal4 orthologs of many other fungal species. The Y. lipolytica genome does contain a single RTG ortholog (YALI0F11979); it is more similar to Rtg1 than Rtg3, and we will refer to the Y. lipolytica gene as RTG1.
We knocked out the RTG1 gene in Y. lipolytica and measured the mRNA expression of GAL1 in cells grown in galactose. We found that in a Y. lipolytica parent strain, galactose induces GAL1 expression at least three-fold. This induction is not observed in two independently constructed rtg1 knockout strains (Figure 7B), indicating that this transcriptional regulator plays an important role in activating GAL1 in Y. lipolytica. We note that knocking out this regulator did not result in a growth defect of Y. lipolytica on 2% galactose (Figure 7-figure supplement 1). However, the GAL genes in this species appear to be expressed at a relatively high basal level.
Taken together, all the phylogenetic results indicate that the Gal4 mode of GAL gene regulation likely arose along the S. cerevisiae lineage after S. cerevisiae and C. albicans diverged ( Figure 7C). We also know that it occurred before S. cerevisiae and K. lactis diverged as K. lactis also uses the Gal4 mode of regulation (Rubio-Texeira, 2005). Finally, we know from recent work (Roop et al., 2016) that the tight repression of GAL1, GAL7, and GAL10 by glucose-a defining characteristic of S. cerevisiae-occurred much later than the Rtg1-Rtg3 to Gal4 rewiring event, specifically after S. cerevisiae branched from S. paradoxus.
Discussion
The regulation of central carbon metabolism is crucial for all species (Sandai et al., 2012). Here, we documented an evolutionary shift in the molecular mechanisms through which the enzymes that metabolize galactose (Gal1, Gal7, Gal10) are specifically induced by galactose. These enzymes and their functions are conserved across all kingdoms of life; direct evidence shows this is the case for many fungal species including S. cerevisiae (Dudley et al., 2005), K. lactis (Rubio-Texeira, 2005), C. albicans (Martchenko et al., 2007a, this paper) and Y. lipolytica (Lazar et al., 2015). These enzymes have a deeply conserved function (conversion of galactose to glucose-1 phosphate) and a deeply conserved pattern of regulation: their expression increases when galactose is added to the growth medium. However, a shift in the mechanism of galactose induction occurred along the evolutionary pathway to S. cerevisiae resulting in Gal4 'replacing' Rtg1-Rtg3 as an inducer of the GAL genes in response to galactose. This ancestral mode, the regulation of the GAL genes by Rtg1-Rtg3, was retained in the C. albicans clade. S. cerevisiae and C. albicans both have clear orthologs of Gal4, Rtg1, and Rtg3 (Fitzpatrick et al., 2010;Wapinski et al., 2007), and these transcriptional regulators have retained their DNA-binding specificities since the two clades branched, at least 300 million years ago (Askew et al., 2009;Pérez et al., 2013;Taylor and Berbee, 2006, this paper). The rewiring therefore occurred, at least in large part, through changes in the cis-regulatory sequences of the control regions of the GAL genes. In C. albicans, we show that the cis-regulatory sequences for Rtg1-Rtg3, located next to the GAL genes, are sufficient for galactose induction; in the S. cerevisiae clade, these were 'replaced' by cis-regulatory sequences for Gal4.
To understand the consequences of this wiring change, we directly compared, using fluorescent reporters combined with a robotically controlled flow cytometer, the quantitative responses of C. albicans and S. cerevisiae to wide ranges of galactose and glucose concentrations. We documented five important differences between the C. albicans and S. cerevisiae galactose responses. (1) The basal level of GAL gene expression is higher in C. albicans and the maximal induced level is lower resulting in a significantly lower induction ratio (~12-fold) in C. albicans compared with S. cerevisiae (~900-fold).
(2) Induction occurs 20-40 min faster in C. albicans than in S. cerevisiae, with the exact difference dependent on the medium composition. (3) The concentration of galactose required for half-maximal induction of GAL1 is 15-fold lower in C. albicans (0.002%) than in S. cerevisiae (0.03%) (see the sketch below for how such half-maximal concentrations can be estimated). (4) The concentration of glucose required for half-maximal repression is at least two-fold lower in S. cerevisiae than in C. albicans. (5) The well-documented bimodality observed at certain glucose:galactose ratios in S. cerevisiae appears absent in C. albicans over a wide range of glucose and galactose concentrations; thus, instead of an all-or-none response, C. albicans exhibits a much more graded expression. We showed that these characteristics held for 11 different clinical isolates of C. albicans, demonstrating that these behaviors are characteristic of the species as a whole rather than of a particular isolate. Full-genome analysis revealed three additional, qualitative differences between the regulation of the GAL genes in C. albicans and S. cerevisiae. (6) In C. albicans, the galactose regulon (33 genes induced by galactose) includes, in addition to the GAL genes, genes implicated in several aspects of pathogenesis. In contrast, the galactose regulon of S. cerevisiae functions almost exclusively in galactose regulation, metabolism and transport. Other than GAL1, GAL7 and GAL10, there is virtually no overlap (only two genes) between the galactose-induced regulons of S. cerevisiae and C. albicans. (7) The galactose regulon of S. cerevisiae consists of two major tiers of regulation: four GAL genes that are induced nearly 1000-fold in response to galactose and 28 genes that are induced to a much lesser extent (2- to 30-fold, Supplementary file 3). In C. albicans, the 33 genes induced by galactose all show modest induction (31 are induced 2- to 13-fold, while the other two are induced 20- and 50-fold, respectively). (8) In C. albicans, the GAL genes (and the rest of the regulon) can be induced by signals other than galactose.
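As an illustration of how a half-maximal induction concentration such as those quoted in point (3) can be estimated, a Hill-type function can be fit to dose-response data; this is a generic sketch with made-up values, not the authors' fitting procedure.

import numpy as np
from scipy.optimize import curve_fit

def hill(gal, basal, vmax, k, n):
    # Hill-type dose response: basal + vmax * gal^n / (k^n + gal^n)
    return basal + vmax * gal**n / (k**n + gal**n)

# Hypothetical dose-response data: galactose (%) vs mean fluorescence (a.u.)
gal = np.array([0.0005, 0.001, 0.002, 0.004, 0.008, 0.03, 0.1, 0.5, 2.0])
fluo = np.array([1.0, 1.3, 2.1, 3.0, 3.6, 3.9, 4.0, 4.1, 4.1])

popt, _ = curve_fit(hill, gal, fluo, p0=[1.0, 3.0, 0.002, 1.0], maxfev=10000)
basal, vmax, k, n = popt
print(f"half-maximal induction at ~{k:.4f}% galactose (Hill coefficient {n:.2f})")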
Although we cannot pinpoint the extent to which the Rtg1-Rtg3 to Gal4 rewiring contributes to each of these eight differences, we can say that at least two of them (the induction ratios of the GAL genes and the response of the GAL genes to non-galactose signals) critically depend on the rewiring. The moderate induction by galactose of the GAL genes in C. albicans is recapitulated when Rtg1-Rtg3 cis-regulatory sequences are added to a test promoter. Likewise, the induction by GlcNAc in C. albicans is recapitulated in a test construct containing Rtg1-Rtg3 sites and is destroyed when RTG1 and RTG3 are deleted in C. albicans.
Although there is no direct evidence that this rewiring was adaptive, it is tempting to speculate that the characteristics of galactose induction in S. cerevisiae-high induction of GAL genes, low sensitivity to galactose concentration, high sensitivity to glucose concentration, and specificity for galactose-are advantageous for a species that ferments different sugars rapidly (Johnston, 1999). In contrast, the features of C. albicans (high basal levels and lower induction of GAL gene expression, higher sensitivity to galactose, and the inclusion of the GAL genes in a regulon that includes genes needed for pathogenesis) may be important for C. albicans to thrive in niches of warm-blooded animals, where galactose is probably present at very low concentrations (Gibson et al., 1996). For example, in many mammals, galactose is synthesized and incorporated into glycolipids and glycoproteins; it is also ingested through the diet, for example, through the hydrolysis of lactose into glucose and galactose. Consistent with these ideas, we know that the Rtg1-Rtg3 circuit is central to the ability of C. albicans to proliferate in a mammalian host, as C. albicans mutants deleted for RTG1 and RTG3 are deficient in several mouse models of infection (Pérez et al., 2013). Although we found that the Rtg1-Rtg3 mode of GAL gene regulation is ancestral, this does not mean that the ancestral fungal species resembled C. albicans in its ability to thrive in animal hosts. The Rtg1-Rtg3 circuit continued to evolve in the C. albicans clade and it seems likely that additional changes in this circuitry were needed to adapt specifically for life in a mammalian host.
In C. albicans, why does galactose induce genes involved in pathogenesis? We suggest two possibilities for this observation. First, galactose may act as a proxy for a type of environment frequently encountered by C. albicans in the host. According to this idea, the presence of low levels of galactose is a signal for C. albicans to mount a general response that involves both metabolizing galactose and inducing virulence properties. This 'proxy' idea might also explain why other sugars also induce this regulon in C. albicans. A second possibility is that galactose is utilized differently in C. albicans than in S. cerevisiae; rather than simply converting galactose to glucose-1 phosphate, C. albicans may use endogenous galactose (or metabolites derived from galactose) in ways that require the induction of other genes.
Irrespective of the possible evolutionary advantages of the Gal4 mode of regulation in S. cerevisiae versus the Rtg1-Rtg3 mode of regulation in C. albicans, the work presented here shows that the regulatory schemes produce different circuit characteristics. Although the overall logic of GAL gene regulation remains the same (the GAL genes are induced by galactose in both species), almost all other features of the circuit differ significantly, ranging from quantitative output (kinetics, induction ratio, bimodality) to the structure of the regulons that include the GAL genes in C. albicans. This study illustrates how transcriptional rewiring over evolutionary timescales can preserve a basic circuit output yet alter almost all of its quantitative and qualitative features.
Materials and methods

Orthology analysis
The presence of orthologs for Gal1, Gal7 and Gal10 had previously been established in both C. albicans and Y. lipolytica (Slot and Rokas, 2010). The orthology of other genes (Gal80 in C. albicans and Rtg1 in Y. lipolytica) was determined from Fungal Orthogroups (Wapinski et al., 2007). BLASTP searches (Altschul et al., 1990) conducted with protein sequences from Candida albicans did not find strong sequence similarity to Gal4 in Y. lipolytica; the best 'hits' consist of short amino acid segments. None of these 'hits' contain the cysteine residues characteristic of the Zn(II)2Cys6 class of proteins to which Gal4 belongs. As such, we do not consider them orthologs.
Plasmid construction
The GFP tagging plasmid template pMBL179, a gift from Dr. Matthew B. Lohse, was constructed as follows. The C. albicans-optimized GFP sequence (Cormack et al., 1997) was inserted into pUC19 between the HindIII and PstI sites. The SAT1 selectable marker from pNIM1 (Park and Morschhäuser, 2005) was amplified and then inserted between the PstI and BamHI sites. The 308 nucleotides just before the GAL1 gene were amplified from C. albicans SC5314 genomic DNA and inserted in the HindIII site of pMBL179. Then the 350 nucleotides just after the GAL1 start codon were amplified from C. albicans genomic DNA and inserted into the EcoRI and BamHI sites of the resulting plasmid. The final plasmid, pCKD004, was linearized with BspHI and EcoRI and transformed into C. albicans strains to make a GAL1-GFP promoter fusion construct.
The synthetic reporter construct, pCKD017, was constructed by amplifying the 600 nucleotides just before the translational start site of CYC1 from C. albicans SC5314 genomic DNA and inserting it just upstream of GFP in pMBL179. Oligonucleotides containing two putative Rtg1-Rtg3 binding sites were annealed and inserted into the XhoI and SphI sites of the resulting plasmid, pLN2, a gift from Liron Noiman. The oligonucleotide sequences are as follows:
TCGAGGACGTCTGTACAAAAATGTAACGTTACATTAAGATTAAAATGTAACGTTACAAATTCCATCTTTATACCATGGGCATG
CCTGCAGACATGTTTTTACATTGCAATGTAATTCTAATTTTACATTGCAATGTTTAAGGTAGAAATATGGTACCC
The plasmid used to tag Rpl26b with mCherry, pMBL186, was constructed as follows. 500 base pairs upstream and downstream of the Rpl26b stop codon were amplified from C. albicans SC5314 genomic DNA. The mCherry DNA sequence (Shaner et al., 2004) was modified to account for C. albicans codon usage and alternative coding of CTG (Lloyd and Sharp, 1992) and synthesized by DNA 2.0 (Lohse and Johnson, 2016). The 500 base pairs upstream of the Rpl26b stop codon, mCherry and the 500 base pairs downstream of the Rpl26b stop codon were fused together by PCR and inserted into the SphI and AatII sites of pUC19 to create pMBL182. pSFS2a (Reuss et al., 2004) was digested with XhoI and NotI and the sequences encoding the recyclable SAT1 marker were subcloned into pMBL182, creating pMBL186. pMBL186 was linearized with SphI and AatII and transformed into AHY135 (Lohse et al., 2013).
pCKD016, the plasmid used to knock out YALI0F11979, the RTG1 ortholog in Y. lipolytica, was constructed by serially cloning the flanking sequences of RTG1 into pFA6a-hbh-hphmx4 (Tagwerker et al., 2006). The 700 base pairs upstream of the coding region of RTG1 were amplified from Y. lipolytica genomic DNA (P01 strain, a gift from Claude Gaillardin) and inserted into the BamHI and EcoRI sites of pFA6a-hbh-hphmx4. The 700 base pairs downstream of the coding region of RTG1 were similarly amplified and inserted into the SpeI and ClaI sites of the resulting plasmid. The final plasmid, pCKD016, was linearized with SpeI and HindIII, purified with a PCR purification kit (Qiagen, Valencia, CA) and transformed into Y. lipolytica to make an RTG1 knockout strain.
Strain construction

C. albicans knockout strains
All knockout strains were derived from SN152 (Noble and Johnson, 2005) and constructed by fusion PCR using the His and Leu cassettes as previously described (Hernday et al., 2010; Homann et al., 2009; Noble and Johnson, 2005). OH13 (Homann et al., 2009) was used as a parent strain. The knockout strains used in the in vivo rat catheter model were made Arg+ by transforming individual knockouts with PmeI-digested pSN105. In this case, SN425 was used as a parent strain.
Y. lipolytica knockouts
The construction of knockout strains in Y. lipolytica was adapted from Davidow et al. (Davidow et al., 1987). 10 ml YPD (20 g Bacto-Peptone, 10 g Yeast Extract, 50 ml 40% glucose in 950 ml of water) cultures of the P01 strain of Y. lipolytica (CD329), a gift from Professor Claude Gaillardin (Barth and Gaillardin, 1996), were pelleted and resuspended in 2 ml of 10 mM Tris, 1 mM EDTA. They were then repelleted and resuspended in ~400 µl of 10 mM Tris, 1 mM EDTA, 0.1 M lithium acetate and mixed gently at 28˚C for 1 hr. 100 µl of cells were added to 1 µg linearized pCKD016 and 50 µg boiled salmon sperm DNA. The mixture was incubated at 28˚C for 30 min. 700 µl of 50% PEG 3350, 10 mM Tris, 1 mM EDTA, 0.1 M lithium acetate was added to the transformation tube. After a 1-hr incubation at 28˚C, a 5-min heat shock (37˚C) was applied to the transformation tube. Cells were washed, pelleted, resuspended in water, plated onto YPD plates (20 g Bacto-Agar, 20 g Bacto-Peptone, 10 g Yeast Extract, 50 ml 40% glucose in 950 ml of water) and incubated at 25˚C. After one day, the YPD plates were replica plated onto YPD + 200 mg/L hygromycin (20 g Bacto-Agar, 20 g Bacto-Peptone, 10 g Yeast Extract, 50 ml 40% glucose, 4 ml 50 mg/ml hygromycin B in 950 ml of water). Colonies were verified by diagnostic PCR by attempting to amplify a small internal fragment of the coding region of RTG1 in Y. lipolytica (for a successful deletion, this intra-coding-region PCR yielded no product while a wild-type control yielded a clear product) to create strains CD331 and CD333.

S. cerevisiae reporter strain

The S. cerevisiae GAL1 reporter strain was constructed using standard techniques (Longtine et al., 1998) with DNA sequence containing the intergenic region of GAL1 fused to GFP and the URA3 marker. This sequence was flanked with homology to LEU2. Colonies were selected on media lacking uracil (2% Dextrose, 6.7% Yeast Nitrogen Base with ammonium sulfate and amino acids) and verified by amplifying the flanks of the LEU2 locus and the GFP itself (strain CD015).
Plating assays
C. albicans strains were grown overnight at 30˚C in S-Raffinose (2% raffinose, 6.7% Yeast Nitrogen Base with ammonium sulfate and a full complement of amino acids). Strains were diluted ~100-1000 fold in the morning and grown for 4-6 hr. Strains were serially diluted 10-fold into S-Raffinose five times and ~5 µl of cells from each dilution were spotted onto SD (2% Dextrose, 6.7% Yeast Nitrogen Base with ammonium sulfate and amino acids) plates, S-Galactose (2% Galactose, 6.7% Yeast Nitrogen Base with ammonium sulfate and amino acids) plates ± 3 µg/ml Antimycin A (Millipore Sigma, St. Louis, MO), or S-GlcNAc (2% GlcNAc, 6.7% Yeast Nitrogen Base with ammonium sulfate and amino acids) plates. Strain growth was monitored visually; strains that grew well exhibited growth across all dilutions. We note that Y. lipolytica is an obligate respirator; hence, Antimycin A was not included in any Y. lipolytica plating assays.
Flow cytometry
Cells were grown at 30˚C overnight in rich medium lacking a sugar source. This medium, YEP, contained 10 g/L yeast extract and 20 g/L Bacto-peptone. This culture was diluted to an optical density (OD600) of approximately 0.05-0.2 and grown for an additional 6 hr in YEP medium ± the indicated sugar (glucose, galactose or GlcNAc, all purchased from Millipore Sigma, St. Louis, MO).
Single-cell fluorescence was measured on an LSRII analyzer (BD Biosciences). A blue (488 nm) laser was used to excite GFP and emission was detected using a 530/30 nm bandpass filter. A yellow-green laser (561 nm) was used to excite mCherry and emission was detected using a 610/20 nm bandpass filter. For each sample, 5,000-30,000 cells were measured and the mean fluorescence level was calculated. Each sample was measured three times; the mean of all three independent measurements is reported in Figures 1, 3 and 6. The errors in these figures refer to the standard error of the mean of these three measurements.
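A small sketch of the summary statistics reported here (the mean of three independent measurements, with error bars given by the standard error of that mean); the fluorescence values are illustrative.

import numpy as np

# Hypothetical mean fluorescence of one sample, measured three independent times
measurements = np.array([812.4, 795.1, 820.7])

mean = measurements.mean()
sem = measurements.std(ddof=1) / np.sqrt(len(measurements))  # standard error of the mean
print(f"reported value: {mean:.1f} +/- {sem:.1f} (SEM, n = 3)")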
Automated flow cytometry
Cells were grown at 30˚C overnight in YEP, diluted and grown for an additional 6 hr in YEP medium. This single well-mixed culture was then diluted with YEP medium into a 96-well microtiter plate to a final OD600 of less than 0.05 (500 µl volume). Cells were grown in the 96-well plate and diluted every 20 min by a liquid-handling robot with YEP medium for ~2 hr prior to induction with glucose and galactose, as described previously (Venturelli et al., 2015; Zuleta et al., 2014). A 30 µl sample was removed from the culture for measurement on the cytometer at each time point, and 30 µl of fresh YEP medium containing the appropriate 1X concentration of glucose and galactose was added to maintain a constant culture volume. Single-cell measurements were taken on the LSRII analyzer, as described above. For additional information on the hardware, software and data processing of the automated flow cytometry system, see Zuleta et al. (Zuleta et al., 2014).
Motif analysis
Genome sequences of S. cerevisiae, N. castellii, T. phaffii, Z. rouxii, L. thermotolerans and K. lactis were concatenated into a single file. Similarly, genome sequences of C. albicans, C. dubliniensis, C. tropicalis, C. parapsilosis, C. lusitaniae, and C. guilliermondii were concatenated into a single file. Motifs in the intergenic regions of all orthologs of GAL1, GAL7, and GAL10 [as identified by Byrne and Wolfe (2005); Fitzpatrick et al. (2010); Maguire et al. (2013)] in each group were identified using SCOPE (Chakravarty et al., 2007). The top motifs from each group of species are shown in Figure 3-figure supplement 1.
In vivo rat catheter biofilm model
The well-established rat central-venous catheter infection model (Andes et al., 2004) was used for in vivo biofilm modeling to mimic human catheter infections, as described previously (Andes et al., 2004; Nobile et al., 2006). For this model, specific-pathogen-free female Sprague-Dawley rats weighing 400 grams (Harlan Sprague-Dawley, RRID:RGD_5508397) were used. A heparinized (100 Units/ml) polyethylene catheter with 0.76 mm inner and 1.52 mm outer diameters was inserted into the external jugular vein and advanced to a site above the right atrium. The catheter was secured to the vein with the proximal end tunneled subcutaneously to the midscapular space and externalized through the skin. The catheters were inserted 24 hr prior to infection to permit a conditioning period for deposition of host protein on the catheter surface. Infection was achieved by intraluminal instillation of 500 µl of C. albicans cells (10^6 cells/ml). After a 4 hr dwelling period, the catheter volume was withdrawn and the catheter flushed with heparinized 0.15 M NaCl. Catheters were removed after 24 hr of C. albicans infection to assay biofilm development on the intraluminal surface by scanning electron microscopy (SEM). Catheter segments were washed with 0.1 M phosphate buffer, pH 7.2, fixed in 1% glutaraldehyde/4% formaldehyde, washed again with phosphate buffer for 5 min, and placed in 1% osmium tetroxide for 30 min. The samples were dehydrated in a series of 10 min ethanol washes (30%, 50%, 70%, 85%, 95%, and 100%), followed by critical point drying. Specimens were mounted on aluminum stubs, sputter coated with gold, and imaged using a Hitachi S-5700 or JEOL JSM-6100 scanning electron microscope in high-vacuum mode at 10 kV. Images were assembled using Adobe Photoshop Version 7.0.1 software. All procedures were approved by the Institutional Animal Care and Use Committee (IACUC) at the University of Wisconsin (protocol MV1947) according to the guidelines of the Animal Welfare Act, The Institute of Laboratory Animal Resources Guide for the Care and Use of Laboratory Animals, and Public Health Service Policy.
RNA-sequencing and analysis
Two independent sets of samples were grown, processed, and sequenced for both S. cerevisiae and C. albicans. Samples were harvested as follows: cells were grown at 30˚C overnight in YEP, diluted to OD600 = 0.067 and grown for an additional 6 hr in YEP, YEP + 2% glucose, or YEP + 2% galactose. Cells were pelleted and RNA was extracted from these pellets using the Ambion RiboPure™ Yeast Kit with the DNase I treatment step. RNA concentration and integrity were assessed with a Bioanalyzer. RNA samples were sent to the Columbia Sulzberger Genome Center, where libraries for sequencing were prepared using the TruSeq RNA Library Preparation kit v2 (Illumina, San Diego, CA). Samples were pooled and sequenced (100 bp, single-end reads) on a HiSeq 2500 (Illumina, San Diego, CA). Each sample yielded ~30 million reads. The data have been submitted to the Sequence Read Archive (SRA) under accession numbers SRP083773 (S. cerevisiae) and SRP083777 (C. albicans).
Sequences were pseudo-aligned to the S. cerevisiae and C. albicans genomes and tabulated using kallisto. Raw data (in transcripts per million reads) are plotted in Figure 5 and Figure 5-figure supplement 1 and are available as Figure 5-source data 1 and Figure 5-source data 2. The p-values across conditions (galactose vs YEP) were calculated using sleuth, incorporating variance across independent sets of samples as well as across 100 bootstraps of the kallisto output.
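For readers unfamiliar with the transcripts-per-million unit plotted in Figure 5, a minimal sketch of the computation from estimated counts and effective transcript lengths (the quantities kallisto reports) follows; the arrays are illustrative, not from this dataset.

import numpy as np

# Hypothetical kallisto-style output: estimated counts and effective lengths
est_counts = np.array([1500.0, 30.0, 800.0, 0.0])
eff_length = np.array([1200.0, 600.0, 2000.0, 900.0])

# TPM: normalize counts by effective length, then scale so the values sum to 1e6
rate = est_counts / eff_length
tpm = rate / rate.sum() * 1e6
print(tpm)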
qPCR
Overnight 25˚C cultures of Y. lipolytica strains were diluted back to OD600 = 0.2 in the morning and allowed to regrow for 6 hr in glucose or galactose. The equivalent of 10 ml at OD600 = 1 was harvested for each culture when cells were pelleted.
RNA was extracted from pellets using the Ambion RiboPure Yeast Kit with the DNase I treatment step. Superscript II RT was used for cDNA synthesis. We used a Power SYBR Green mix (Thermo Fisher, Waltham, MA) for all qPCR reactions. The qPCR cycle consisted of a 10 min holding step at 95˚C, followed by 40 cycles of 95˚C for 15 s and 60˚C for 1 min. This was followed by a dissociation curve analysis. Three +RT and one -RT reaction were run for each strain; 5 µl of a 1:100 dilution were used in each well. A 1:10 dilution of an equal-volume mixture of all +RT reactions was used as the starting point for the standard curve, which consisted of six 1:4 dilutions from the starting sample. GAL1 was amplified with the following primers: Forward: GATTTTGCTCCAACCCTCAAG and Reverse: ACCCGCAGATTGTAGTTTCG. SGA1 was used as a control (Teste et al., 2009). SGA1 was amplified with the following primers: Forward: ACAATGGAGATATGTCGGAGC and Reverse: TCCCTTTGATAACTTCCTGGC.
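A sketch of relative quantification against the dilution-series standard curve described above; the Ct values are invented for illustration and the calculation is the generic standard-curve method, not the study's exact analysis.

import numpy as np

# Standard curve: six 1:4 serial dilutions starting from a 1:10 dilution of
# pooled +RT cDNA, with measured Ct values (hypothetical numbers).
rel_amount = 0.1 * (0.25 ** np.arange(6))
ct_std = np.array([18.2, 20.3, 22.4, 24.3, 26.5, 28.6])

# Log-linear fit: Ct = slope * log10(relative amount) + intercept
slope, intercept = np.polyfit(np.log10(rel_amount), ct_std, 1)

def quantify(ct):
    # Convert a Ct value to a relative template amount via the standard curve
    return 10 ** ((ct - intercept) / slope)

# Hypothetical sample Cts: GAL1 (target) and SGA1 (control) in one strain
print("GAL1 normalized to SGA1:", quantify(23.1) / quantify(21.8))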
Major datasets

The following datasets were generated: RNA-sequencing data for S. cerevisiae (Sequence Read Archive accession SRP083773) and for C. albicans (Sequence Read Archive accession SRP083777).
Efficiency of the damage control orthopaedics strategy in the management of military ballistic limb trauma
In conflict areas, orthopaedic surgeons have adopted the concept of damage control orthopaedics (DCO) to manage limb fractures due to ballistic trauma because of the severity of the injuries, the limitation of equipment, and the precarious conditions of asepsis. They use external fixation as an initial treatment at the nearest health centre and delay the definitive treatment so that it can be performed in better conditions. Our study aims to assess the outcome of the damage control orthopaedics (DCO) strategy in military ballistic limb trauma according to the experience of the Military Hospital of Tunisia. These measures shorten the delay of conversion from external fixation into internal osteosynthesis, which constitutes a key parameter in the management of limb fractures due to ballistic trauma.
Introduction:
Damage control (DC) is not originally a medical concept; the Navy (1) used it in the Second World War to describe all the temporary measures used in the fight to prevent a ship from sinking while pursuing its mission. Damage control surgery was developed to face abdominal traumas with massive bleeding, to shorten the operating time and avoid the lethal triad: hypothermia, coagulopathy and acidosis (2). Thus, it is opposed to early total care, defined as an ideal and complete coverage of all the injuries from the first passage in the operating room. DC was then applied to the management of polytrauma patients with fractures of the long bones and pelvis, and was defined as damage control orthopaedics (DCO). It is a relatively new concept (3) using external fixation because of the severity of the injuries, the limitation of equipment, and the precarious conditions of asepsis in conflict areas. The definitive synthesis is made secondarily, as soon as possible, after regression of the inflammatory reaction and oedema.
Our study aims to assess the outcome of the DCO strategy in military ballistic limb trauma according to the experience of the Military Hospital of Tunisia.
Materials and Methods:
This study is a retrospective, single-centre, descriptive study of patients who were hospitalized for gunshot limb trauma at the Orthopaedic and Traumatology Department of the Main Military Hospital of Instruction in Tunis over seven years, from 2011 to 2017. It included military patients with ballistic fractures requiring emergency primary external fixation followed secondarily by conversion to internal synthesis, with a minimum follow-up of 12 months. Patients requiring external fixation as definitive treatment and patients lost to follow-up were excluded from this study.
All patients had urgent surgical management at the nearest health centre, consisting of soft tissue debridement, lavage and bone stabilization by external fixation (Fig. 1A, B, C), together with the elaboration of a lesion assessment. We used the Gustilo classification to describe the wound opening, the Winquist and Hansen classification to define the fracture comminution, and a grading system for bone loss to evaluate the bone loss.
A first-line antibiotic therapy based on amoxicillin-clavulanic acid and an aminoglycoside, with tetanus prophylaxis, was administered to all our patients.
Evacuation of the victims to the military hospital was carried out within 24 hours by land transport, except for three victims presenting vascular lesions, who were transferred urgently by air transport.
On post-trauma day 2, all patients had second-look surgery with debridement, except for the three patients with vascular lesions, who had required immediate revision surgery after radiological exploration.
All the wounds were left open. Wound debridement was repeated once every two days until the wound was closed. Vacuum-Assisted Closure (VAC®) therapy was used to accelerate healing; the dressing was changed every two days following debridement. Hyperbaric oxygen therapy was performed in patients with Gustilo III wound opening.
Bacteriological samples from deep soft tissues were taken at each debridement, and the antibiotic therapy was adapted to the results of the bacteriological examination. Victims underwent biological monitoring with an assessment including complete blood count (CBC), C-reactive protein (CRP), serum protein, and prothrombin time (PT) every three days.
The conversion to internal osteosynthesis took place at different times in our patients. The criteria for determining the time to conversion were the absence of local sepsis, a haemoglobin level > 10 g/dl, a serum protein level > 50 g/l and negative or falling CRP kinetics.
Our study protocol is summarized in the diagram (Fig. 2).
We noted all the reports of the examinations carried out at the time of the accident, throughout the hospitalization and during postoperative follow-up. We studied the delay of conversion from external fixation to internal osteosynthesis, the bone healing time and complications.
The data were collected using a standardized analytical sheet, then entered using SPSS software. We calculated frequencies and relative frequencies (percentages) for the qualitative variables. We evaluated means and determined the range (extreme values = minimum and maximum) for the quantitative variables. The parametric Pearson correlation coefficient was used to study the relation between two quantitative variables. The link between two qualitative variables was evaluated by the chi-squared test, or by Fisher's exact test when the chi-squared test was invalid. The association between a binary qualitative variable and a quantitative variable was evaluated by Student's t-test. The significance level was set at p ≤ 0.05 for all tests performed.
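As an illustrative sketch of the tests named above, with made-up data rather than the study's dataset:

import numpy as np
from scipy import stats

# Hypothetical 2 x 2 table: smoking (rows) vs sepsis on osteosynthesis material
table = np.array([[8, 11],
                  [1, 12]])

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
# Fall back to Fisher's exact test when expected cell counts invalidate chi-squared
if (expected < 5).any():
    _, p_assoc = stats.fisher_exact(table)
else:
    p_assoc = p_chi2

# Student's t-test: conversion delay (days) with vs without sepsis (hypothetical)
delay_sepsis = [12, 14, 10, 15, 13]
delay_no_sepsis = [6, 8, 7, 5, 9, 7, 6]
_, p_t = stats.ttest_ind(delay_sepsis, delay_no_sepsis)

print(f"association p = {p_assoc:.4f}, t-test p = {p_t:.4f}")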
Results:
Our study included 32 patients. All of our patients were male. The mean age was 31 years (21-51 years).
Nineteen patients had a smoking history.
Ballistic trauma mainly concerned the lower limb (88% of cases), especially the femur, which represented 50% of the lower-limb lesions.
The distribution of lesions according to the Gustilo, the Winquist and Hansen, and the bone-loss grading classifications is presented in Table 1.

Table 1. Distribution of lesions according to the Gustilo, Winquist and Hansen, and bone-loss grading classifications.

Classification                  Type I   Type II   Type III                    Type IV
Gustilo                         20       0         IIIa: 9, IIIb: 0, IIIc: 3   -
Winquist and Hansen             0        19        11                          2
Grading system for bone loss    12       6         12                          2

Three patients had vascular damage treated by an emergency revascularization procedure, and one patient had nerve damage (neurapraxia).
VAC therapy was used in all our patients. Hyperbaric oxygen therapy was performed in the twelve patients with Gustilo III wound opening. Bacteriological analysis isolated Acinetobacter baumannii in twelve cases and Staphylococcus in seven cases; in the remaining cases, the result was multi-bacterial.
The conversion from external fixation to internal osteosynthesis took place after an average delay of 7.8 days (1-15 days). Intramedullary nailing was the most used osteosynthesis material (72% of cases); in the remaining 28%, an osteosynthesis plate was used (Fig. 3A, B, C). A pin-site infection was observed in 3 cases at the conversion surgery, requiring systematic curettage of the pin sites before any material placement. Finally, the average hospital stay was 13.4 days (10-18 days).
With an average follow-up of 33.2 months (12-72 months), bone union was achieved in 26 cases, with an average delay of 4.23 months (3-8 months) (Fig. 4A, B, C).
The most frequently observed general complications were anaemia, pulmonary embolism and rhabdomyolysis. Anaemia (Hb < 10 g/dl) was observed in sixteen patients and treated by transfusion. Pulmonary embolism was detected in three patients and progressed well under anticoagulant treatment, without recourse to intensive care. Rhabdomyolysis was noticed in three patients, without an impact on renal function, and responded well to medical treatment. No fat embolism, acute respiratory distress syndrome, or multiple organ dysfunction syndrome was observed in our series.
Local complications essentially comprised sepsis on osteosynthesis material and pseudarthrosis. Five patients suffered from sepsis on osteosynthesis material, and six patients evolved to non-union, treated by cancellous bone graft with internal synthesis.
The analysis of these complications showed a significant association between sepsis on osteosynthesis material and four parameters: smoking (p = 0.005), pin-site infection (p = 0.009), a Gustilo type III skin opening (p = 0.03) and the delay of conversion to internal osteosynthesis, as shown in Table 2. The non-union rate was associated with smoking (p = 0.001), types III and IV of the Winquist fracture comminution classification (p = 0.013), types II and III of the grading system for bone loss (p = 0.016), and the delay of conversion to internal osteosynthesis, as shown in Table 3.

Discussion:

Our study showed that the DCO strategy in the management of fractures due to ballistic trauma is efficient, with a consolidation rate > 80% and a relatively low percentage of general complications. These complications were of minor severity and easy to manage.
Local complications, mainly sepsis on osteosynthesis material and pseudarthrosis, had relatively low rates. Our analyses showed that they were associated with several parameters. These parameters can be divided into non-modifiable risk factors, such as a history of smoking, the wound opening, the fracture comminution, and bone loss, and a modifiable risk factor, mainly the delay of conversion from external fixation to internal osteosynthesis. The local complications rate was lower when this delay of conversion was shorter.
Our study is distinguished by strict inclusion and exclusion criteria. It was carried out in an institution that has good experience in the field of ballistics. Its protocol addressed not only the results of the DCO strategy but also the parameters that could affect this strategy. However, our study also has some limitations, mainly a selection bias due to its retrospective nature, and the relatively small number of patients.
DCO is a relatively new concept (3), adopted in the management of ballistic limb trauma, which appeared in order to overcome the defects of an older approach consisting of early stabilization of skeletal lesions, called early total care.
Early total care was the principal strategy for the management of polytrauma and war wounded in the 1980s and 1990s. However, recent studies have shown that adoption of this strategy in groups of patients with haemodynamic instability is more frequently associated with significant complications such as pulmonary embolism, acute respiratory distress syndrome, and multiple organ dysfunction syndrome; this was particularly observed with intramedullary femoral nailing (4). Studies have associated the occurrence of these complications with changes in pro-inflammatory markers (5). Indeed, the initial accident causes inflammatory and immunological reactions proportional to the severity of the trauma, called the 'first hit' (6). This reaction is characterized by local and systemic proliferation of various pro-inflammatory mediators such as cytokines, complement, coagulation proteins and others. Besides, the prolonged duration of the surgery and the bleeding lead to other significant inflammatory and immunological reactions called the 'second hit'. This second hit potentiates the effect of the first hit, and this may lead to severe consequences in patients.
From these observations, a new strategy, based on minimizing the impact of the second hit by shortening the initial operating time and delaying the definitive treatment, was adopted to manage limb ballistic trauma. This strategy was called DCO. Therefore, the treatment of long bone fractures of soldiers wounded on the battlefield is based on temporary external fixation, whose objectives are controlling haemorrhage, restoring perfusion of the limb, debriding necrotic soft tissue and ensuring bone stability, without disturbing resuscitation care measures (7).
However, after haemodynamic stabilization, control of the inflammatory phenomena and improvement of the local wound condition, this external fixation is better converted into internal osteosynthesis.
This is because, even though external fixation does not always allow an anatomical reduction, it is often associated with a high rate of pin-site infection and a low-quality bone callus. Respet et al. attempted to determine the time between the placement of external fixation and the onset of pin-site infection; they found that pin bacteriological cultures were positive in 50% of cases at 2 weeks and 67% at 4 weeks (8). These results were later confirmed by Clasper et al. (9). Also, Sigurdsen et al. (10,11) experimented on rats to study the quality of the bone callus after an osteotomy initially treated with an external fixator and then followed by conversion to internal synthesis, with different delays. This conversion delay was seven days in group A, 14 days in group B and 30 days in group C, while the control group D had no conversion. All the groups had better consolidation than the control group, but only group A showed a significant difference. Biomechanically, the rigidity and quality of the callus in group A were better than in the other groups.
These studies not only show the interest of the conversion from external fixation to internal osteosynthesis, but also prove the importance of the delay of this conversion: the shorter it is, the better the quality of the bone callus and the lower the risk of pin-site infection and sepsis. This result is consistent with our study, which showed that the delay of conversion is a risk factor for both septic and non-union complications.
In our study, this conversion time was relatively short compared with other papers in the literature (12)(13)(14)(15). It was made possible by care aimed at accelerating the different phases of the wound healing process and fighting infection. Indeed, VAC therapy allows a permanent elimination of exudates from the wound bed; associated with repetitive debridement, it accelerated the inflammatory phase of the wound healing process. Hyperbaric oxygen therapy improves tissue oxygenation. It also enhances fibroblast and collagen synthesis, neovascularization, and the closure of arterio-venous shunts (16), which shortened the time to granulation formation, especially in Gustilo III open wound fractures (17). Likewise, several studies have proven the effect of VAC therapy in accelerating granulation tissue formation (18)(19)(20). With normal haemoglobin and serum protein levels, these measures hasten the proliferative phase of wound healing. Furthermore, in addition to adapted antibiotic therapy, hyperbaric oxygen therapy and VAC therapy have a confirmed role against infection.
Conclusion:
DCO is a management strategy that is not limited to shortening the initial management and delaying the definitive treatment in order to reduce severe or even fatal complications; it is a global strategy involving all the measures participating in the acceleration of wound healing and the fight against infection. These measures shorten the delay of conversion from external fixation into internal osteosynthesis, which constitutes a key parameter in the management of limb fractures due to ballistic trauma.

Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare that they have no competing interests.
LEOD-Net: Learning Line-Encoded Bounding Boxes for Real-Time Object Detection
This paper proposes a learnable line encoding technique for the bounding boxes commonly used in the object detection task. A bounding box is simply encoded using two main points: the top-left corner and the bottom-right corner of the bounding box; then, a lightweight convolutional neural network (CNN) is employed to learn the lines and propose high-resolution line masks for each category of classes using a pixel-shuffle operation. Post-processing is applied to the predicted line masks to filter them and estimate clean lines based on a progressive probabilistic Hough transform. The proposed method was trained and evaluated on two common object detection benchmarks: Pascal VOC2007 and MS-COCO2017. The proposed model attains high mean average precision (mAP) values (78.8% for VOC2007 and 48.1% for COCO2017) while processing each frame in tens of milliseconds (37 ms for PASCAL VOC and 47 ms for COCO). The strength of the proposed method lies in its simplicity and ease of implementation, unlike the recent state-of-the-art methods in object detection, which include complex processing pipelines.
Introduction
Object detection is one of the most important tasks in computer vision, as it is central to understanding and analyzing a scene in images, and it becomes more useful when performed in real time for live-video processing. Object detection is usually performed using bounding box regression by predicting the x and y values of the top-left corner in addition to the width and the height of the box. Recent object detection methods are mainly classified into two categories: two-stage methods and single-stage methods. A two-stage method is usually complex, as it consists of a stage for object proposals and another stage for object classification and bounding box regression; this concept is applied in many recent methods such as RCNN [1], Fast RCNN [2], Faster RCNN [3], and Mask RCNN [4]. The two-stage methods attain high mean average precision (mAP); however, they are extremely slow (0.2-10 frames per second (FPS)), as such pipelines are computationally expensive and include complex processing techniques. On the other hand, single-stage object detection methods employ fully convolutional neural network architectures and perform the object detection task at high speed, such as Yolo V1 [5], V2 [6], V3 [7], V4 [8], Single Shot Detection (SSD) [9], and RetinaNet [10]; however, the mAP values for such methods are lower than those of the two-stage methods, as they mainly depend on small-scale grids that introduce accuracy loss in learning the bounding box coordinates. A good object detection method should offer a trade-off between high accuracy and high processing speed, which is the goal of this paper: achieving a relatively high speed and accuracy. An overview of the proposed method is shown in Figure 1. We propose a fast object detection method by training a CNN model to predict line-encoded bounding box masks for each class of objects. The CNN predicts high-resolution line masks using a lightweight pixel-shuffle operation [11] inspired by a technique employed in the image super-resolution task. An important post-processing stage is employed to filter the predicted lines and to estimate fine bounding boxes from the lines by exploiting the progressive probabilistic Hough transform (PPHT) [12] to find clean lines based on a proposed iterative technique under constraints. The contributions of the presented work are as follows:
• We propose new bounding box encoding and learning techniques. The bounding box encoding technique is based on encoding the top-left and bottom-right corners of the bounding box in a single line learnable by segmentation map prediction.
• We propose a robust post-processing technique to solve the problem of multiple detections of the same object and the problem of many detections of a deformed line of a single object.
• The proposed method successfully achieves a good trade-off between speed and accuracy. It realizes real-time processing (27 FPS) while keeping a high mAP in object detection.
The rest of the paper is organized as follows: Related Work details the recent methods in object detection; the Proposed Method contains the details of our implementation; Benchmarks for Training and Validation describes the datasets employed to train and test the proposed method; Evaluation Metrics of Object Detection and the Ablation Study contain two main studies on the scale of the line mask and the up-sampling techniques employed in our method; these are followed by the Complexity Analysis of the Proposed Model, Experimental Results, Limitations and Future Work, and finally the conclusion of the paper.
Related Work
Recent deep-learning-based object detection methods have shown the superior ability of CNN models to learn and perform object detection accurately and rapidly. As mentioned in Section 1, there are two main CNN-based approaches to object detection: two-stage methods and single-stage methods. Sermanet et al. [13] proposed OverFeat, one of the early deep-learning-based two-stage object detection methods, in which they trained a CNN image classifier (AlexNet [14]) and then applied the trained classifier to every patch of the image using a sliding window with different window scales; however, this method was very slow due to the high number of computations required to classify each image patch. The authors of [1] proposed RCNN, a two-stage CNN model for object detection that employed a selective search method [15] to propose a limited number of regions (typically 2000) for classification by a CNN image classifier (VGG16 [16]) instead of classifying the whole image with windows of different scales; this method still yields an extremely low frame rate (0.2 FPS). Later, in 2015, Girshick et al. [2] proposed Fast R-CNN, which reduced the complexity of RCNN by feeding the image to a CNN (VGG16) and then applying the selective search method on the feature maps obtained from the CNN instead of on the whole image; the authors also proposed ROI pooling to reshape all the proposed features into squares before feeding them to a class classification + bounding-box regression CNN. Fast R-CNN attained a relatively low speed of 2 FPS, although much better (10x faster) than RCNN. Ren et al. [3] proposed Faster R-CNN, which solved the drawbacks of both R-CNN and Fast R-CNN by eliminating the need for the computationally expensive selective search method; they feed the input image to a CNN (VGG16) to propose a few regions (typically 300) for classification, and then another CNN (VGG16 or ResNet [17]) classifies the regions and regresses the bounding boxes. Faster R-CNN attained a high mAP at a speed of 10 FPS, which is still relatively low.
The recent single-stage object detection methods show average accuracy but attain a high speed in frame processing. The first single-stage object detection method was proposed by [5] under the name 'You Only Look Once', or YOLO; they proposed a grid-based detection method using a convolutional architecture (specifically Darknet) in which each cell in the grid predicts the class category in that cell in addition to the x, y, w, and h coordinates of the bounding box, where x and y are the coordinates of the top-left corner of the cell and w and h are the width and the height of the bounding box of the object existing in that cell; however, although YOLO is fast enough for real-time processing (it can work at a speed of 45 FPS), it has a major problem, which is the failure to detect small objects, as the grid was too small (7 × 7). YOLOV2, or YOLO9000 [6], was proposed by the first and the last authors of YOLO to improve the speed and the accuracy of YOLO; they added batch normalization layers after the convolutional layers in the YOLO architecture, which improved the mAP by 2%; they also used a bigger image size, typically 448 × 448, instead of the small image size (224 × 224) used in the initial YOLO version; this modification also increased the mAP by 4%. They also reduced the original Darknet architecture from 26 layers to 19 layers (Darknet-19) to speed up the process (they achieved a frame rate of 67 FPS at a 448 × 448 image size). They also proposed anchor boxes to limit the shapes of the predicted bounding boxes to specific object-based shapes instead of the arbitrary boxes predicted by YOLO. YOLOV3 [7] was proposed by the authors of YOLOV2 to improve the detection of small objects; they employed Darknet-53, a deeper CNN than those of YOLOV1 and V2, and also employed multiple-scale detection using an architecture similar to the feature pyramid network (FPN) [18]. The detection in YOLOV3 is achieved at three different scales (small, medium, and large), and non-maximum suppression is applied to obtain the detections with the highest scores. YOLOV3 attained a higher mAP than YOLOV1 and V2, but the frame rate was reduced to 35 FPS at a 416 × 416 image size. YOLOV4 was proposed by [8], improving the mAP by 10% over YOLOV3 by presenting a new backbone (CSPDarknet53) employing cross-spatial partial connections. They proposed three main parts: a backbone, a neck (path aggregation network [19] with spatial pyramid pooling [20]), and a head (dense prediction block); YOLOV4 attains a speed of 62 FPS at its best mAP value with an image size of 416 × 416. Duan et al. [21] proposed CenterNet, a keypoint-based method that detects objects using three points (top-left, center, and bottom-right points) and achieved high accuracy in detection. Tan et al. [22] proposed EfficientDet, a fast and accurate object detection method based on the successful architecture of EfficientNet [23], originally proposed for classification. The authors also proposed the bi-directional feature pyramid network (BiFPN), which allows the fusion of multiscale features.
In the proposed method, we employ a CNN backbone (specifically Xception [24]) to extract the image features; the obtained low-scale features are then upscaled using the pixel-shuffle algorithm, inspired by the efficient sub-pixel CNN [11] originally presented for the real-time image super-resolution task. This algorithm can upscale many low-resolution maps of shape W × H × r²C (where r² is the scaling factor) into a high-resolution map of shape rW × rH × C through pixel shuffling from the depth channel. This algorithm is fast and efficient in the construction of higher-resolution images and especially segmentation masks, as explored in detail in our previous research [25,26]. The progressive probabilistic Hough transform (PPHT) [12] is a popular method for straight-line detection from a small set of edge points instead of all the edge points used in the standard Hough transform (SHT) [27]; thus, PPHT is much faster than SHT. As PPHT is an iterative method, a random edge point is selected at each iteration for voting, then the condition of the line is tested. If a specific line has a large number of votes from the randomly selected points, the stopping rule is satisfied and the line is approved as a detection. PPHT can be tuned using the algorithm parameters to control the estimated line/lines, such as controlling whether or not to combine multiple sparse points based on their alignment. The line estimation using PPHT is efficient and rapid enough to be performed as a post-processing step on the detected lines in our proposed method, which targets the real-time object detection task.
Proposed Method
The proposed method consists of three main parts. Firstly, the backbone used for feature extraction (Xception-16) is a modified version of Xception with two output branches. Secondly, the pixel-shuffle operation is used to upscale the final features based on the depth channel. Finally, the post-processing stage combines the progressive probabilistic Hough transform and the per-class object count to decode the lines and obtain the bounding boxes.
Xception-16 Architecture
Xception [24] is an efficient feature-extractor network initially presented for ImageNet ILSVRC [28] image classification, attaining a top-5 accuracy of 0.945, which is relatively high compared with the current state-of-the-art (SOTA) methods. Chollet [24] proposed the depth-wise separable convolution (DW-Conv) as the building block of the Xception architecture. DW-Conv consists of two convolution operations: firstly, the depth-wise convolution performs convolution on each channel separately; secondly, the point-wise convolution applies a 1 × 1 convolution on the input. DW-Conv is much faster than the standard convolution as it learns fewer parameters, so it is key to the fast processing in our proposed method. Xception has also proved to be a good feature extractor in recent research on multiple computer vision tasks, and it is light enough for real-time applications because of its relatively low FLOP count and number of parameters [29,30]; it also proved to be compatible with the pixel-shuffle [11] operation (also employed in our proposed method and introduced in Section 3.2), as Xception with the pixel-shuffle showed high accuracy in performing the semantic segmentation task in DTS-Net [25]. As our method performs semantic segmentation as a secondary task to predict the encoded lines, we adopted a modified version of Xception for its robustness and high accuracy. We propose Xception-16, which cuts the original Xception architecture at the layer 'block13_sepconv2_act', whose feature scale is the input image scale divided by 16 (i.e., an input image of size 448 × 448 produces features of scale 28 × 28 with the proposed Xception-16); then we add two branches, the first with a convolution2D layer followed by the pixel-shuffle operation to construct the line mask at the required scale, and the second with global average pooling (GAP) followed by a fully connected layer (FC) to predict the per-class object count. The name Xception-16 comes from the final feature scale or down-scaling obtained from the network, which is 1/16 of the input image size. The Xception-16 architecture is shown in Figure 2a (Figure 2c shows Xception block-2, which consists of three sequential ReLU + 3 × 3 × N separable convolution2D layers).
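A hedged Keras sketch of the two-branch Xception-16 described above follows; the cut layer name is taken from the text, while the class count, upscaling factor, and activation choices are assumptions for illustration (tf.nn.depth_to_space performs the pixel-shuffle rearrangement).

import tensorflow as tf

base = tf.keras.applications.Xception(include_top=False, input_shape=(448, 448, 3))
trunk = base.get_layer("block13_sepconv2_act").output  # 28 x 28 feature maps

n_classes, r = 20, 4  # assumed values: e.g., 20 VOC categories, 4x upscaling

# Branch 1: 1 x 1 conv to N*r^2 channels, then pixel-shuffle to per-class line masks
x = tf.keras.layers.Conv2D(n_classes * r * r, 1, activation="sigmoid")(trunk)
line_masks = tf.nn.depth_to_space(x, r)  # 28 x 28 -> 112 x 112

# Branch 2: global average pooling + fully connected layer for per-class counts
g = tf.keras.layers.GlobalAveragePooling2D()(trunk)
counts = tf.keras.layers.Dense(n_classes, activation="relu")(g)

model = tf.keras.Model(base.input, [line_masks, counts])
model.summary()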
Pixel-Shuffle as a Feature Map Upscaling Algorithm
The pixel-shuffle algorithm was proposed by [11] for real-time image super-resolution; it is fast and efficient at constructing large-scale images from many small-scale images through pixel reordering, forming super-pixels in the large-scale image from the depth channel of the small-scale one, as shown in Figure 3. The pixel-shuffle algorithm upscales a small-scale image of shape W × H × Cr² into a large image of shape rW × rH × C through a rearranging operation that maps each pixel according to its location, as in (1):

$$L(x, y, c) = S\left(\lfloor x/r \rfloor,\; \lfloor y/r \rfloor,\; C \cdot r \cdot \mathrm{mod}(y, r) + C \cdot \mathrm{mod}(x, r) + c\right) \tag{1}$$

where L and S are the large-scale and small-scale images, x and y are the horizontal and vertical locations of a pixel, c is the channel index, C is the number of channels, r is the square root of the upscaling factor r², and mod() is the modulus operation. In our proposed method, we add a 1 × 1 × Nr² convolutional2D layer after Xception-16's last layer, which produces a 728-channel feature map at 1/16 of the input image size, to adjust the depth channels so that after the pixel-shuffle it produces line maps equal in number to the class categories at the required scale. In the experiment section we try different upscaling factors to obtain 1/1, 1/2, 1/4, and 1/8 of the input image size, revealing the effect of the line-map scale on the mean average precision. The objective function used for line segmentation is a pixel-wise multilabel classification, allowing multiple lines to exist at the same location but in different masks; the employed function is binary cross-entropy, as shown in Equation (2):

$$\mathcal{L}_{seg} = -\frac{1}{C P} \sum_{c=1}^{C} \sum_{p=1}^{P} \left[ y_{c,p} \log \hat{y}_{c,p} + \left(1 - y_{c,p}\right) \log\left(1 - \hat{y}_{c,p}\right) \right] \tag{2}$$

where C is the number of classes, P is the number of pixels in the line mask, y is the ground truth image label, and ŷ is the predicted image label.
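The rearrangement in Equation (1) can be sketched in a few lines of NumPy; the reshape/transpose ordering below follows the common depth-to-space convention (equivalent to TensorFlow's tf.nn.depth_to_space) and is meant only to illustrate the shape transformation.

```python
# NumPy sketch of pixel-shuffle: (H, W, C*r^2) -> (rH, rW, C).
import numpy as np

def pixel_shuffle(small, r):
    h, w, cr2 = small.shape
    c = cr2 // (r * r)
    x = small.reshape(h, w, r, r, c)      # split depth into r x r sub-pixels
    x = x.transpose(0, 2, 1, 3, 4)        # interleave: (h, r, w, r, c)
    return x.reshape(h * r, w * r, c)

small = np.random.rand(28, 28, 16 * 20)   # 1/16-scale features, r=4, C=20
print(pixel_shuffle(small, 4).shape)      # (112, 112, 20)
```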
Per-Class Object Count Regression
The second output branch of the proposed CNN is used to predict the object count of each class. The object count is used to ensure that the number of detected objects equals the predicted number of objects per class; in the case of a mismatch, a correction technique is applied using pre-defined PPHT parameter cases. The per-class object count is predicted by applying the GAP layer to the output of the Xception-16 backbone to obtain a 1D feature vector, after which an FC layer produces dense per-class predictions. This task is performed through regression with a mean squared error loss, as shown in Equation (3):

$$\mathcal{L}_{count} = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2 \tag{3}$$

where y and ŷ are the ground truth and the predicted object counts, respectively, and N is the number of classes. The overall loss is the sum of the two losses in (2) and (3) with equal weights, as shown in (4):

$$\mathcal{L}_{total} = \mathcal{L}_{seg} + \mathcal{L}_{count} \tag{4}$$
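Continuing the earlier model sketch, the equal-weight sum of Equation (4) can be expressed in Keras by attaching one loss per output head; the head names are the ones assumed in that sketch.

```python
# Equal-weight sum of the line-mask BCE (Eq. 2) and the count MSE (Eq. 3),
# attached to the two-output 'model' from the Xception-16 sketch above.
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss={"line_mask": tf.keras.losses.BinaryCrossentropy(),
          "count": tf.keras.losses.MeanSquaredError()},
    loss_weights={"line_mask": 1.0, "count": 1.0},
)
```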
Bounding Box Encoding and Line Decoding Algorithms
The PPHT [12] algorithm has five main parameters that must be carefully tuned to achieve the best line detection results. Three parameters relate to the edge-point accumulator, or line detector: ρ, the distance resolution of the accumulator in pixels; θ, the angular resolution of the accumulator in radians; and t, the votes threshold of the accumulator required to confirm a line detection. The other two parameters, the minimum line length (MLL) and the maximum line gap (MLG), define the shortest line considered a detection and the maximum gap between two points for them to be merged into one line, respectively. A visual illustration of the PPHT parameters is shown in Figure 4. In the proposed method, the Xception-16 network is trained to predict line masks using a binary segmentation approach. The ground truth lines are generated from the bounding box annotations provided with each dataset used. Each line is produced per object, beginning at the top-left corner and ending at the bottom-right corner of the bounding box; formally, all bounding boxes are encoded in a negative-slope line format. Since the objects are encoded as one-pixel-thick lines, each line is unique and easy to separate from other lines of the same class: there is a line mask per class category, and the lines have different slopes according to the alignment of the objects, which differs between instances of the same class. We apply PPHT to the predicted lines of each class category obtained from Xception-16 + pixel-shuffle, but the PPHT algorithm can produce a different number of lines depending on the parameter selection. We therefore use the per-class object count prediction as a reference for the number of lines that should be produced from each class's line mask. If the algorithm fails to match the exact per-class object count, it performs distance measurements between the lines produced by PPHT in each case and the count vector, to predict lines as close as possible to the true number in the count vector. This line detection algorithm depends mainly on the scale of the line masks; we therefore apply PPHT three times (with three parameter sets empirically selected to detect small, medium, and large objects) on the masks that contain line segments, each time trying a different pre-selected θ resolution, ρ resolution, and threshold t.
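A sketch of this count-matched decoding step is shown below; the three parameter sets and the helper structure are illustrative assumptions, standing in for the empirically selected values of Table 1.

```python
# Decode one class's line mask into boxes: try three PPHT parameter sets
# (small/medium/large objects) and keep the result whose line count is
# closest to the predicted per-class count. Parameter values are assumed.
import cv2
import numpy as np

PARAM_SETS = [          # (rho, theta, threshold, MLL, MLG) -- illustrative
    (1, np.pi / 180, 20, 10, 4),
    (1, np.pi / 90, 40, 25, 8),
    (2, np.pi / 60, 60, 50, 12),
]

def decode_boxes(mask, predicted_count):
    best, best_gap = [], np.inf
    for rho, theta, thr, mll, mlg in PARAM_SETS:
        lines = cv2.HoughLinesP(mask, rho, theta, thr,
                                minLineLength=mll, maxLineGap=mlg)
        segs = [] if lines is None else lines.reshape(-1, 4)
        gap = abs(len(segs) - predicted_count)
        if gap < best_gap:
            best, best_gap = segs, gap
        if gap == 0:
            break
    # each negative-slope line encodes (top-left) -> (bottom-right) corners
    return [(min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
            for x1, y1, x2, y2 in best]
```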
Benchmarks for Training and Validation
For training and validation of the proposed method, we employ three common object detection datasets: PASCAL VOC2007 [31], VOC2012 [32], and MS-COCO2017 [33]. PASCAL VOC2007 is a popular dataset of common objects in scenes: it consists of 20 object classes, with 5,011 images for training and 4,952 images for validation. PASCAL VOC2012 has the same classes as PASCAL VOC2007 but different training and validation images: it consists of 5,716 training images and 5,823 validation images. For better model training, we trained the proposed model on both the PASCAL VOC2007 and VOC2012 training sets and tested it on the PASCAL VOC2007 test set. For training and testing on the PASCAL VOC datasets, we used an image size of 448 × 448. The third dataset used for validation was MS-COCO, a larger dataset of common objects in scenes containing 80 class categories; it consists of 118,287 training images and 5,000 validation images. We used an image size of 560 × 560 for training and testing on MS-COCO. Bounding box annotations are provided for all three datasets.
Evaluation Metrics of Object Detection
To measure the performance of the proposed method, we evaluated it on PASCAL VOC2007, VOC2012, and COCO minival (the validation set of MS-COCO2017), measuring the mean average precision (mAP) at an intersection over union (IOU) greater than a threshold; 0.5 is the threshold used in the evaluation of most object detection methods. The average precision (AP) metric is the average of the precision over recall values from 0 to 1. Precision and recall are defined as in Equation (5):

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN} \tag{5}$$

where TP, FP, and FN are the true positives, false positives, and false negatives of the predictions, respectively. Precision measures how accurate the predictions are, and recall measures whether the model can find the positives. The AP is the area under the precision-recall curve, and the mAP is the mean of the AP over all classes; it is usually measured at an IOU of 0.5, but in the MS-COCO evaluation several IOU values are used (from 0.5 to 0.95 with a step of 0.05) and their average is calculated to obtain AP_box. Further, the AP for small, medium, and large objects is calculated according to the annotations of the objects in the image. IOU is defined as in Equation (6):

$$IOU = \frac{\left| Box_{pred} \cap Box_{gt} \right|}{\left| Box_{pred} \cup Box_{gt} \right|} \tag{6}$$

where Box_pred and Box_gt are the predicted and ground truth bounding boxes, respectively.
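For reference, the IOU of Equation (6) for axis-aligned boxes reduces to a few lines of code:

```python
# IOU for axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_pred, box_gt):
    ax1, ay1, ax2, ay2 = box_pred
    bx1, by1, bx2, by2 = box_gt
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))  # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))  # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))        # 25 / 175 ~= 0.143
```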
Training and Test Setup
The proposed method was trained on a desktop computer with an Nvidia RTX3090 GPU, an Intel Core i7-8700 CPU @ 3.20 GHz, and 64 gigabytes of RAM. Training was performed in the TensorFlow Keras environment; the models were trained with the Adam optimizer at an initial learning rate of 0.001 for approximately 250 epochs. Translation and horizontal flipping operations were adopted during training as augmentation, to prevent overfitting and improve generalization. The original Xception model is initialized with ImageNet classification weights, then the network is cut down to the modified version, Xception-16, to speed up the training process. Inference was performed on an Nvidia Titan XP GPU with the other configuration mentioned above.
Ablation Study
We performed two main studies: one on the scale of the line mask used for box decoding and the second on the up-sampling techniques used for forming the line mask.
Study on the Scale of the Line Mask
We performed four separate training experiments for the proposed model using four different up-sampling scales of the pixel-shuffle module. We experimented with 1/8, 1/4, 1/2, and full scales to determine which scale performs best in terms of mAP on PASCAL VOC2007. The number of channels in the final Conv2D layer before the pixel-shuffle is changed so that the line masks are formed at the desired scale, and the value of r of the pixel-shuffle is changed accordingly (r = 2 for 1/8 scale, r = 4 for 1/4 scale, r = 8 for 1/2 scale, and r = 16 for full scale), as shown in Figure 5a. Figure 6 shows the line mask obtained on a sample test image, with the corresponding decoded bounding box in each case. The smallest scale (1/8) gives a solid, continuous line in the line mask, but due to the small scale the box decoded using PPHT is too wide and does not exactly fit the object. At the 1/4 scale, the detected line has a few small gaps, but PPHT is still able to detect the line and merge the segments. At the 1/2 scale, PPHT detects multiple segments, cannot easily merge them, and fails in many cases, resulting in a large loss in mAP; the full-scale case likewise generates many tiny line segments and sparse points, which thoroughly confuse the PPHT algorithm and make it impossible to detect the objects properly. The poor line masks at the larger scales stem from the fact that the density of line pixels is very low relative to the scale of the mask, as shown in Table 1. As a result of these experiments, we selected the 1/4 scale, which offers a good trade-off between the tightness of the bounding box and a relatively high density of line pixels, sufficient for PPHT to detect the line without much effort in tuning the algorithm parameters. The best PPHT parameters (shown in Table 1) were tuned manually by trial and error to obtain the best possible mAP. We also compared the speed of the model at the different scales and, as expected, the lower the scale, the higher the frame rate; we nevertheless selected the 1/4 scale as the best based on mAP, sacrificing the better frame rates achievable at the 1/8 scale. Table 1. Comparison between the different scales of the line mask in terms of mAP and speed on PASCAL VOC2007 with the best selection of ρ1,2,3, θ1,2,3, and t1,2,3. The best value is shown in bold.
Study on the Up-Sampling Technique for the Line Mask
We experimented with three different up-sampling techniques to form the line masks. Each technique up-samples the final features extracted by Xception-16 by a factor of four, after which a Conv2D layer reshapes the number of filters to equal the number of classes, as shown in Figure 5b. We trained one model per up-sampling technique and then compared the pixel-shuffle against bilinear and nearest-neighbor up-sampling at a line mask scale of 1/4 of the input image.
Bilinear and nearest-neighbor up-sampling showed poor performance in producing solid lines; they generated many gaps and thick line segments, which resulted in low mAP values (below 10), so we could not produce notable results to compare with the pixel-shuffle. In spite of our efforts to tune the PPHT parameters, the performance of the bilinear- and nearest-neighbor-based up-sampling remained far below that of the pixel-shuffle, which produces a thin, solid line with few gaps, as indicated in the sample results shown in Figure 7.
Complexity Analysis of the Proposed Model
We analyze the proposed model, including the Xception-16 feature architecture plus the two branches (the pixel-shuffle for line segmentation at 1/4 of the input image scale, and the fully connected layer for the per-class count regression). The analysis was performed using the TensorFlow profiler [34], specifically the tf.profiler.ProfileOptionBuilder.float_operation() function, to calculate the number of floating point operations (FLOPs) of the different layers (convolution2D, depth-wise separable convolution2D, and max pooling2D) and of the other operations in the model (multiplications and additions). Table 2 shows a detailed analysis of the proposed CNN models with the two image sizes used for the PASCAL VOC and MS-COCO datasets. It is evident in Table 2 that the convolution2D operations account for most of the computation, while the depth-wise separable convolutions require far fewer operations, since they are much less complex than conventional convolution2D.
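A sketch of this FLOPs measurement, using the profiler function cited above through the TF2 compatibility interface, is given below; the model and input size are assumed to be those of the earlier sketch.

```python
# Count FLOPs with tf.profiler.ProfileOptionBuilder.float_operation(),
# applied to the concrete graph of a forward pass (TF2 compat.v1 API).
import tensorflow as tf

@tf.function
def forward(x):
    return model(x)            # 'model' from the Xception-16 sketch

concrete = forward.get_concrete_function(
    tf.TensorSpec([1, 448, 448, 3], tf.float32))
opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
info = tf.compat.v1.profiler.profile(concrete.graph, options=opts)
print("total FLOPs:", info.total_float_ops)
```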
Experimental Results
The proposed method was trained to predict line masks at 1/4 of the input image scale. For PASCAL VOC2007 and VOC2012, the input RGB images are resized to 448 × 448 and the ground truth line masks (which are binary masks) are made at 112 × 112; for MS-COCO, we used a larger image size of 560 × 560, with line masks of size 140 × 140.
Evaluation of the Per-Class Count Regression
We evaluated the per-class object count branch separately, to verify the model's ability to predict the number of objects of each class in an image. The values in the predicted count vector are floating-point numbers; we first round them to the nearest integers and then measure the accuracy of the predicted integer count for each class. We attained a counting accuracy of 97% on the PASCAL VOC2007 test set and 92% on the MS-COCO minival (MS-COCO val2017). These accuracies underpin the success of our method, as our algorithm forces PPHT to predict a number of lines equal to the per-class object count.
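The counting-accuracy measurement itself is straightforward; a toy sketch with made-up count vectors:

```python
# Round predicted per-class counts to the nearest integer and compare
# element-wise against the ground truth counts (toy values).
import numpy as np

pred = np.array([[1.1, 0.0, 2.7], [0.9, 1.2, 0.1]])
true = np.array([[1, 0, 3], [1, 1, 0]])
acc = np.mean(np.rint(pred) == true)
print(f"counting accuracy: {acc:.2%}")   # 100.00% on this toy example
```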
Evaluation Results on Pascal Voc2007
When training on the PASCAL VOC datasets, we combined the training and validation sets of PASCAL VOC2007 and PASCAL VOC2012 to increase the training data as much as possible. For testing and performance evaluation, we used the VOC2007 test set. The proposed method attained an mAP of 78.8 at an IOU threshold of 0.5 on PASCAL VOC2007. This high mAP is obtained by tuning the PPHT parameters so that the model can detect both small and large objects. Tuning the PPHT parameters is very sensitive and requires choosing the best combination of ρ, θ, and t, as each parameter has a great impact on the final detection results; the parameters selected for each dataset, according to Algorithm 1, are shown in Table 1. While tuning the PPHT parameters, we noticed that increasing t provides more points for line detection but also has a negative effect when many small boxes are detected. θ is the angular resolution of the accumulator; when it increases, PPHT can combine very close lines. ρ, the distance resolution of the accumulator, controls the length of the detected line segments. All the parameters must be tuned together to obtain the best detection results. Figure 8 shows sample results of the proposed method on the PASCAL VOC2007 test set, demonstrating its ability to detect both small and large objects accurately. The per-class mAP on the two datasets is reported in Figure 9a. The model processes frames at 27 FPS, which is sufficient for real-time applications.
Evaluation Results on Ms-COCO Minival
On MS-COCO minival, the proposed method attained a box average precision (AP_box) of 48.1, which is relatively high for a hard dataset such as MS-COCO. The model produces accurate detections, as shown in Figure 10; however, it struggles with very crowded scenes, such as sports matches with large numbers of people, which are common in MS-COCO images. We again tuned the PPHT parameters manually to achieve the best detection results; the parameters are reported in Table 3. Figure 10 shows sample detections from the model trained on MS-COCO, and the per-class AP_box is shown in Figure 9b. The model attained a frame rate of 21.3 FPS, which is still acceptable for real-time processing.
Performance Comparison with SOTA Methods
We compared the proposed method (LEOD-Net) with state-of-the-art (SOTA) object detection methods. As there are hundreds of object detection methods, we selected the most relevant ones, at least in terms of complexity and input image size; we also included a few popular two-stage methods in the comparison to highlight the high accuracy of our method. The model trained on PASCAL VOC2007 outperforms the other SOTA methods, including the two-stage methods, in terms of AP_box, except for YOLOV4, a recent method that applies multiple techniques to increase performance. Its speed is average: not as high as YOLOV2, but not as slow as the two-stage methods, as reported in Table 4. The model trained on MS-COCO 2017 ranked as the second-best method (AP_box = 48.11) after YOLOV4 (AP_box = 50.51), as reported in Table 5. The evaluation on COCO minival was performed following a recent study [35]. We did not include a speed comparison on MS-COCO, as each method was tested in a different environment and on different hardware. Overall, the proposed method (LEOD-Net) attains a notable mAP while running in real time, which is the best trade-off for any object detection method.
Limitations and Future Work
Although our model attains good performance, it has a weakness in the optimization of the PPHT parameters. To address this, we aim in future work to employ automatic parameter selection using a search method instead of manual tuning. We believe further tuning of PPHT may produce better object detection results, so automatic tuning can be very helpful, since the parameters are interrelated and also depend on other factors such as image size and the quality of the features estimated before the pixel-shuffle operation. In addition, we plan to employ vision transformers (ViT) [36] in future work to exploit the general context learning achievable with ViT, which can be used to generate richer line features. Since the obtained object detection results are promising, we also aim to extend the method to perform instance segmentation, one of the most difficult high-level computer vision tasks, in combination with our previous semantic segmentation method [25]. That method uses the original Xception architecture for semantic segmentation, so the same architecture can be trained to perform object detection and semantic segmentation simultaneously, yielding instance segmentation. Such a future method could also attain real-time processing, since both our proposed method and the suggested segmentation method run at high speed.
Conclusions
We propose an object detection method using line-encoded bounding boxes (LEOD-Net), which our experiments show to be efficient enough for high-speed object detection (27 FPS on PASCAL VOC and 21.3 FPS on MS-COCO). The proposed method exploits the progressive probabilistic Hough transform to refine the initial pseudo-line masks predicted by the proposed CNN model and form the bounding boxes. The PPHT parameters strongly affect the output detections and should be tuned carefully to obtain the best line decoding results. The proposed method outperforms many SOTA methods in terms of accuracy, and some in terms of frame processing speed. The qualitative results show the high performance of the proposed method in producing accurate bounding boxes that match object boundaries. Finally, the mAP values are good enough for accurate object detection tasks.
Validation of a smartphone-based EEG among people with epilepsy: A prospective study
Our objective was to assess the ability of a smartphone-based electroencephalography (EEG) application, the Smartphone Brain Scanner-2 (SBS2), to detect epileptiform abnormalities compared to standard clinical EEG. The SBS2 system consists of an Android tablet wirelessly connected to a 14-electrode EasyCap headset (cost ~ 300 USD). SBS2 and standard EEG were performed in people with suspected epilepsy in Bhutan (2014–2015), and recordings were interpreted by neurologists. Among 205 participants (54% female, median age 24 years), epileptiform discharges were detected on 14% of SBS2 and 25% of standard EEGs. The SBS2 had 39.2% sensitivity (95% confidence interval (CI) 25.8%, 53.9%) and 94.8% specificity (95% CI 90.0%, 97.7%) for epileptiform discharges with positive and negative predictive values of 0.71 (95% CI 0.51, 0.87) and 0.82 (95% CI 0.76, 0.89) respectively. 31% of focal and 82% of generalized abnormalities were identified on SBS2 recordings. Cohen’s kappa (κ) for the SBS2 EEG and standard EEG for the epileptiform versus non-epileptiform outcome was κ = 0.40 (95% CI 0.25, 0.55). No safety or tolerability concerns were reported. Despite limitations in sensitivity, the SBS2 may become a viable supportive test for the capture of epileptiform abnormalities, and extend EEG access to new, especially resource-limited, populations at a reduced cost.
Smartphone-based EEG could increase the availability of EEG services in remote populations of LMICs. Here, we assess the capacity of a new software application using a 14-lead headset connected wirelessly to an Android device, comparing the detection of electrophysiological abnormalities as recorded by the smartphone-based versus standard EEG among PWE in a lower middle-income country.
Study Site and Participant Enrolment. We recruited participants in the Himalayan Kingdom of Bhutan (total population 764,000)8. Bhutan was selected as a representative lower middle-income country with a predominantly rural and remote population, a dearth of practicing neurologists, lack of access to routine standard EEG, and endemic neurocysticercosis as a cause of epilepsy9. Participant enrolment and data collection took place at the Jigme Dorji Wangchuck National Referral Hospital in Thimphu, the capital city (July 2014–April 2015). PWE or people with suspected seizures of any age were recruited based on physician or health care worker referral from the National Referral Hospital Departments of Psychiatry and Pediatrics, the Institute of Traditional Medicine Services, and an existing epilepsy patient registry in Bhutan. Referrals also came through word of mouth, including advertisement in the local English-language newspaper and on Bhutanese radio stations. Participants were reimbursed the equivalent of nine United States dollars (USD) for their travel expenses.
Equipment. The Smartphone Brain Scanner-2 (SBS2) is a software and hardware application for EEG that operates on mobile devices10. The software is available under the Massachusetts Institute of Technology License and the hardware platform is available under the CERN Open Hardware License (https://github.com/SmartphoneBrainScanner). The software framework supports data processing tasks such as data acquisition, filtering, recording, and real-time artifact removal. The code is written in Qt C++ and runs on desktop operating systems, including Windows, OSX, and Linux, as well as the most popular mobile operating system, Android. The software can be extended to work with different hardware configurations. In the present study, a combined EPOC+ neuroheadset (www.emotiv.com, Emotiv Systems, Australia) and EasyCap recording cap (http://easycap.brainproducts.com, EasyCap, Germany) hardware was used.
An Android tablet (Nexus 7 2013 model, Google, California) was used for data collection. Head-circumference-matched EasyCap headsets were used, with ring electrodes aligned to the International 10-20 system and positioned at F3, C3, P3, O1, F4, C4, P4, O2, Fz, Cz, Pz, Fpz, A1, and A2 (Fig. 1). FCz and AFz served as the reference and ground electrodes, respectively. Five sizes of EasyCap headsets were available (range 40 cm to 56 cm), with head circumferences matched to within ± 2 cm of the chosen cap. These caps are "off-the-shelf" models, originally configured for videogaming rather than medical use. Impedances of all electrodes were below 5 kΩ at the start of each recording. Raw EEG data were obtained at a sampling rate of 128 Hz and wirelessly transmitted to a receiver module connected to the Android tablet.
A stationary Xltek EEG system (Xltek, Natus Incorporated, California) was used to record standard EEGs, using 21 scalp leads placed according to the 10-20 system, as well as bilateral electrooculogram and electrocardiogram leads.

Data Acquisition. Each participant completed a structured questionnaire for clinician-investigators to obtain the clinical epilepsy history, and underwent consecutive SBS2 EEG and standard EEG recording. Recordings were completed sequentially at the same appointment, with minimal interruption between tests, and whenever possible in the same study area. The order of testing was convenience-based. Each participant was supine on a hospital bed and instructed to minimize movements and close his or her eyes for the duration of the recording. Each recording captured wakefulness and, when possible, sleep, and was planned to last a minimum of 20 minutes and no longer than 30 minutes. Recordings were completed during the daytime in a dedicated study room in the Department of Psychiatry at the National Referral Hospital.
Staff without formal EEG training, including Bhutanese research assistants as well as medical students and neurology residents, performed the SBS2 EEG recordings. Training in SBS2 EEG lasted less than one hour and involved observing smartphone EEG being administered and/or watching an instructional video online. Standard EEG recordings were completed on Xltek machines by board-certified EEG technologists or supervised research staff and carried out in accordance with the standards of the American Clinical Neurophysiology Society11. EEG files were coded by participant number and stored on encrypted, password-protected computers and external hard drives. Files were securely transferred to readers using the web-based file sharing application Syncplicity (https://my.syncplicity.com). SBS2 data were analyzed offline using the open-source EDFbrowser software (Teunis van Beelen, available at http://www.teuniz.net/edfbrowser/) (ECWL), Profusion 3.0 software from Compumedics (Compumedics, Abbotsford, Australia) (ASPL), and Persyst software (Persyst, Prescott, USA) (ADL, RZ). The software used for SBS2 data interpretation was based on neurologist preference. Standard EEG data were analyzed offline using Natus Neuroworks software (Natus Medical Incorporated, Pleasanton, USA).

EEG Interpretation. Board-certified pediatric (ECWL, RT) and adult neurologists (AJC, SSC, ASPL, ADL, LL, RZ, EB) interpreted the SBS2 and standard EEG recordings. EEGs were distributed to readers in no particular order. Readers were masked to clinical data other than age and comments added to standard recordings by the registered EEG technologists (JMC, JM). Readers were instructed to categorize the EEGs and record their interpretation on a standardized spreadsheet. Recordings were classified as normal or abnormal overall, and abnormalities were classified as epileptiform and/or non-epileptiform. Readers were able to enter notes clarifying their interpretation. Each SBS2 recording was read once. Each standard EEG was independently assessed by ≥ 2 neurologists. A third neurologist or a group of neurologists resolved discrepancies in standard EEG interpretation.
The individual interpreting a given participant's SBS2 EEG was blinded to the findings from their standard EEG and vice versa. Due to the different electrode arrays, EEG interpreters could not be blinded to the type of study being performed. All EEG interpretation occurred on desktop computers, and viewing montages were selected at the discretion of the interpreting neurologist.
Participants were excluded from the analysis if: (1) they did not undergo both standard and smartphone EEG, or if either recording was unavailable (n = 52), or (2) the smartphone EEG recording was < 50% of the targeted recording time of 20 minutes (n = 11).
Patient Follow up.
Participants were provided the results of their standard EEG, as well as the results of ancillary tests performed as part of the broader study, including brain MRI and blood tests for Taenia solium, at follow-up visits. Neurologists made treatment recommendations as necessary based on clinical judgment and test findings.

Data Analysis. The final interpretation of the standard EEG was used as the gold standard in the assessment of the SBS2 EEG. The sensitivity, specificity, and positive and negative predictive values of the SBS2 EEG versus the standard EEG were calculated for the detection of all types of electrophysiological abnormalities, epileptiform abnormalities, and non-epileptiform abnormalities. Inter-rater reliability measured by Cohen's kappa (κ) was used to compare agreement between the readings of two independent readers of standard EEG and across EEG types. Cohen's kappa takes into account possible randomly occurring agreement between readers, hence providing a better measure of agreement than a simple percentage. Post hoc analyses included a comparison of focal versus generalized discharge capture and a description of the SBS2 EEGs excluded for limited recording duration. A p-value of < 0.05 with two-tailed probability was considered statistically significant. All analyses were performed using the programming language R (Vienna, Austria). All research activities were performed in accordance with relevant guidelines and regulations outlined in the approved protocol. All participants provided informed consent or, when appropriate, assent with proxy consent from a family member.
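Although the study's analyses were performed in R, the headline metrics can be recomputed from the 2 × 2 table; the cell counts below are reconstructed from the reported rates (51 epileptiform and 154 non-epileptiform standard EEGs) and are shown only for illustration.

```python
# Recompute sensitivity/specificity/PPV/NPV from reconstructed 2x2 counts.
tp, fn = 20, 31     # SBS2+ / SBS2- among 51 epileptiform standard EEGs
tn, fp = 146, 8     # SBS2- / SBS2+ among 154 non-epileptiform standard EEGs

sens = tp / (tp + fn)   # 20/51   -> 39.2%
spec = tn / (tn + fp)   # 146/154 -> 94.8%
ppv = tp / (tp + fp)    # 20/28   -> 0.71
npv = tn / (tn + fn)    # 146/177 -> 0.82
print(f"sens={sens:.1%} spec={spec:.1%} PPV={ppv:.2f} NPV={npv:.2f}")
```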
Results
205 participants (54% female, median age 24 years) completed both SBS2 and standard EEG (Table 1). There were no safety or tolerability concerns reported. One participant had a tonic seizure during standard EEG recording, while all other recordings were interictal. The mean length of SBS2 EEG recording was 21.3 minutes (median 21.0 minutes, standard deviation 3.2 minutes), and the mean length of the standard EEG recording was 26.0 minutes (median 25.2 minutes, standard deviation 8.1 minutes). A comparison between the outputs of a standard versus SBS2 EEG is provided in Fig. 2. Representative EEG tracings from the SBS2 are provided in Fig. 3.
In post hoc analyses, we assessed whether the epileptiform abnormalities captured were focal or generalized. Among the 51 standard EEGs with epileptiform abnormalities, 42 had focal and 9 had generalized abnormalities. On the SBS2 recordings, 31% (13/42) of the focal and 82% (7/9) of the generalized abnormalities were identified. When only pediatric participants (n = 49) were considered, the sensitivity of the SBS2 EEG for the detection of epileptiform abnormalities was 0.56 (95% CI 0.31, 0.79) and the specificity was 0.87 (95% CI 0.61, 0.92).
Agreement among EEG readers. The interpretation of the SBS2 EEGs was divided among four neurologists, who each read 7-59% of the recordings. The standard EEG interpretations were divided among nine neurologists, who each read 1-26% of the recordings. All standard EEGs were interpreted by two separate neurologists. Cohen's kappa (κ) for the two independent interpretations classifying each standard EEG as normal or abnormal was 0.60 (95% CI 0.49, 0.71). Cohen's kappa (κ) for the two independent interpretations classifying each standard EEG as epileptiform or non-epileptiform was 0.60 (95% CI 0.47, 0.72).
Discussion
In this clinical validation study of the SBS2, we found a low to moderate sensitivity but high specificity for the detection of epileptiform abnormalities, compared to standard EEG, in our study population of PWE. For the combined endpoint of epileptiform and non-epileptiform abnormalities, the SBS2 had low sensitivity but high specificity for detection. Generalized epileptiform abnormalities were more often captured than focal abnormalities on SBS2 when compared to standard EEG. Our results support the SBS2 as a pragmatic option for some PWE in LMICs to receive a confirmation of clinically suspected epilepsy; however, the SBS2 is not presently an ideal screening test for epilepsy. With modifications, the SBS2 may be particularly useful in both high and low-income settings where standard EEG is unavailable, and may have applications for home-based assessment for suspected seizure disorders. Strengths of this study include a relatively large sample size, the inclusion of a community-dwelling sample, the inclusion of all age groups, and the contributions of a large team of academic neurologist EEG readers. The pragmatic design of the study demonstrated the feasibility of acquiring SBS2 EEG using a portable tablet and EasyCap system in a LMIC population. The detection of epileptiform discharges is a practical and clinically relevant endpoint that is valuable to the diagnosis of PWE.
However, there are several limitations to the study design as well. These include the sequential rather than simultaneous recording of EEGs and the assessment of the complete SBS2 system, including software and hardware components, as a whole. Given the sequential nature of the recordings, intermittent events occurring during the first recording may not be captured on the second and vice versa, making direct comparison of sequential recordings at times imperfect. Both the duration of the abnormal discharges and the duration of the recordings are additional variables that may have impacted our results beyond the device characteristics themselves. Our study therefore also highlights the technical challenges of testing new mobile devices in the environments where they could ultimately be most beneficial clinically.
We used Xltek, an EEG system similar to one a patient would encounter at a high-income country's medical center with conventional EEG services, as our "gold standard." Alternatives, including dense-array EEG and intracranial electrode monitoring, may be optimal gold-standard devices in other situations, but are not the standard of care for clinical practice. The interpretation of standard clinical EEG is also very subjective, and the inter-rater reliability of experienced neurologists interpreting EEG is often poor12,13. In one single-center study, in which six board-certified neurophysiologists classified 300 EEGs into seven diagnostic categories, the aggregated Fleiss kappa for inter-rater agreement was 0.44 12. Our study had higher inter-rater agreement for standard EEG, likely due to the fewer categories of classification (normal, epileptiform abnormalities, or non-epileptiform abnormalities) in our analyses. One proposed solution is automated interpretation of EEG recordings. Although automated detection of discharges and non-reader-dependent quantitative assessment of EEG output is an area of ongoing investigation14,15, it is not yet accepted for clinical assessment of PWE. We believe the largest limitation to the use of the SBS2 as an alternative to standard EEG equipment is its low to moderate sensitivity for the capture of EEG abnormalities. Technical limitations, including the 14-lead headset without temporal or complete frontopolar coverage and the inability to monitor headset connectivity in real time, may have contributed to the reduced sensitivity of the SBS2. Some SBS2 recordings were of shorter duration than targeted, and those less than 50% of the targeted 20-minute duration were removed from the analysis. Post hoc analyses found that inclusion of these shorter recordings modestly reduced sensitivity, and most of the short recordings were interpreted as normal. Other possibilities for the reduced sensitivity include that the software may have less capability to process cortical signals, and that neurologists interpreting the SBS2 recordings were less likely to report an epileptiform discharge on an SBS2 than on a more conventional system. EEGs were distributed based on convenience, but given the relatively small number of SBS2 readers, we cannot rule out reader-dependent differences in the interpretation of the EEGs.
The SBS2 provides distinct advantages, several of particular importance to resource-limited settings. The most notable is cost savings compared to standard EEG: the SBS2 equipment costs approximately 300 USD per device, making the technology more than twenty-fold less costly than standard EEG equipment. Since medical equipment for the care of patients with brain disorders is often not prioritized in LMICs16,17, low-cost technologies at the "point of care" hold great potential for uptake18. More than 95% of physicians training to become neurologists in sub-Saharan Africa in 2015 reported owning a personal smartphone19. Moreover, the ability to record without electricity for up to twelve hours (i.e., the battery life of the phone or tablet) allows the SBS2 EEG to be used easily in remote settings, since internet connectivity is required only to transfer files. Operators with only basic training can use the SBS2, as in this study, making the device's use by community healthcare workers possible. Future uptake of such a device could thus be rapid if distribution is properly designed.
Several mobile EEG systems are currently available or in the pipeline, including Mobita (Twente Medical Systems international), TREA (Natus Neurology), Trackit (Lifelines Neurodiagnostic Systems Inc.) and Safiro (Compumedics). An important advantage of the SBS2 EEG is that the open-source software can be downloaded to any Android device; however, the EasyCap and Emotiv components would still need to be purchased. By comparison, most market EEG systems use proprietary software for data capture and viewing. Our study compares a mobile phone-based EEG system to a standard EEG system in a lower income country among people with epilepsy who have not had prior access to EEG. Nonetheless, each of these mobile systems is ripe for future iterative development for PWE, as limitations in implementation and performance are carefully addressed in clinically relevant studies.
As is well recognized, the feasibility of standard EEG in LMICs and remote settings is limited by cost, reliance on unpredictable electricity, increased preparation time for recording, the production of large data files that cannot be transferred via email, and lack of technical support in the case of equipment failure. These examples have been presented from around the world but may be most relevant to populations in LMICs. In Nigeria, a retrospective study20 found that EEG resources were under-utilized by 75% due to a lack of recording paper, electrode gel, the increasing cost of EEG, and strikes by hospital staff. A case series from a tertiary hospital in Harare, Zimbabwe found that only 4.2% of EEG referrals were from district hospitals, indicating that rural dwellers were least able to access EEG services when they existed only in the capital city21. Among five southern Caribbean countries, EEG services were available in only two, and the percentage of patients requiring an EEG who received one ranged from 10-68% across the five countries22.
Future directions for research on the SBS2 EEG include the assessment of new headset devices with customized lead configurations, the assessment of new mobile computing devices including iOS™-based operating systems, user-friendly updates to the application, and the development of automated and "crowd-sourced" EEG interpretation. Further investigation is needed to identify which populations of PWE might benefit most from access to the SBS2, and which epilepsy syndromes and categories are most amenable to evaluation by the SBS2. Based on our findings, we believe a generalized epilepsy syndrome such as childhood absence seizures (i.e., so-called staring spells) may be an obvious patient group to benefit, given the usually high frequency of events, the generalized epileptiform pattern, and the ability to intervene early to prevent school dropout and delays in learning. Advances in technology must parallel the development of a healthcare workforce skilled in the implementation and assessment of new health technologies, including transferable technologies with health relevance. On the horizon of bringing EEG acquisition to the smartphone of any interested person, we provide a "proof of concept" that smartphone-based EEG systems are feasible and may soon allow PWE worldwide access to EEG, including in locations where such services have not previously existed.
Real Option Analysis versus DCF Valuation-An Application to a Tunisian Oilfield
The most widely used method of choosing investments is undoubtedly the NPV. This method is often criticized because it does not take into account certain main characteristics of the investment decision, notably irreversibility, uncertainty, and the possibility of delaying the investment. The real options approach (ROA), by contrast, is proposed to capture the flexibility associated with an investment project. This article examines whether the value of an undeveloped oil field varies according to whether the ROA or NPV assessment is used. To value the option to defer, we develop a continuous-time model derived from previous work by Brennan and Schwartz (1985), McDonald and Siegel (1986), and Paddock, Siegel, and Smith (1988). The originality of the proposed model is that it gives rise to a simple and uncomplicated method for determining the value of the option. Findings indicate that the two evaluation methods lead to the same decision: the project is economically profitable. In the oil investment project studied, despite the positive value of the option, the magnitude of the projected cash flows and optimistic forecasts of the price of oil led us not to exercise the deferral option but to undertake the project immediately.
Introduction
The theory of investment postulates that a firm should invest in a project as long as the present value of the expected stream of profits the project will generate exceeds or equals the present value of the expenditure stream required to build it. This classic method of net present value (NPV), or discounted cash flow (DCF), is not capable of taking into account some main characteristics of the investment decision, in particular irreversibility, uncertainty, and the possibility of deferring the investment decision, and so it ignores flexibility with regard to the timing of the investment decision.
Several previous studies [Myers (1977), Schwartz, Dixit and Pindyck (1994), Ross (1995), Herder et al. (2010), Haque et al. (2014)] have shown that, in investment analysis under an uncertain environment, the use of traditional NPV or DCF for decision making may produce a biased assessment of an investment program.
Modern finance theory offers useful support for valuing this flexibility and the possibility of deferring the investment, by taking the ideas underlying the pricing model of Black and Scholes (1973) and applying them to real-world projects. Thinking about how future circumstances affect the value of projects has therefore come to be known as the area of the real options approach (ROA).
A real option (RO) is an option generated by an investment project; the owner of the real option, like the holder of a financial option, has the right but not the obligation to take a decision at a future date. This RO is generated by the characteristics of an investment project. When a firm makes an irreversible investment, it effectively exercises an option to invest. Before doing so, the firm has the possibility of waiting for new information that may arrive and affect the decision to invest immediately (expand, defer or abandon). The loss of this option can be considered an opportunity cost that must be included as part of the cost of the investment. McDonald and Siegel (1986) have proven that, even with moderate levels of uncertainty, the value of this opportunity cost can be large, and an investment rule that ignores it will be grossly in error.
Many theoretical models have also been developed to test the pertinence of the ROA. Dixit and Pindyck (1995) developed a pricing model for a petroleum investment and showed that a valuation may significantly underestimate its value if it ignores the flexibility available to the owner over the timing of development. Myers and Majd (1987) incorporated the possibility of abandoning a project through a continuous-time model. Brennan and Schwartz (1985) integrated the possibility of stopping production through the use of a temporary shutdown option. Luehrman (1998), based on an application to pharmaceutical investment, proposed a two-dimensional model for the valuation of real options. Pennings and Lint (1997) demonstrated the relevance of using the ROA in the R&D area.
In the petroleum industry, the ROA was first introduced by Paddock et al. (1988); their study extended financial option theory by developing a methodology for the valuation of an offshore petroleum oilfield, combining financial option pricing tools with a model of equilibrium in the market for petroleum reserves. Like Paddock et al. (1988), several other studies have undertaken the valuation of a petroleum lease through an ROA [Dickens and Lohrenz (1996), Cortazar and Schwartz (1998), Zettl (2002), Brandão et al. (2005)]. The present paper studies the pertinence of using the ROA in the investment decision. More precisely, it develops a theoretical model to compare the NPV method and the ROA in investment decision making, and then explains in which circumstances NPV analysis and the ROA differ as bases for decision making about a Tunisian oil field investment. Compared with other models and previous calculation methods, the model developed in this paper gives rise to a simple and uncomplicated method for determining the value of the option to defer.
The remainder of the paper is organized as follows. In Section 2 we develop the methodology of the ROA and the DCF in energy investment evaluation. Section 3 contains an overview of Tunisian petroleum legislation and the data collected. Section 4 discusses the results of the two methods when applied to the evaluation of the petroleum investment project. Section 5 concludes.
The NPV Evaluation
The DCF method of calculating project net present value is the most widely accepted valuation method. The basic premise of the conventional NPV method is to estimate future cash flows from an investment outlay (revenues and costs) and discount them to a common present time using a hurdle rate (discount rate) or a risk-adjusted rate of return (WACC). The convention is to determine the net difference between estimated discounted revenue and discounted cost, such that if the net is greater than zero, the investment is considered economically attractive. The aim of this deterministic method is to find the expected present value of future income and costs, and to compare this value with the project's investment costs. The decision rule recommended by the NPV method states that if the current risk-adjusted value of expected cash inflows exceeds the value of cash outflows, then the project should be immediately undertaken.
The simple formula of the NPV can be written as follows:

$$NPV = \sum_{t=0}^{T} \frac{CF_t}{(1+k)^t} - I_0 \tag{8}$$

where $CF_t$ is the expected cash flow in period t, k the discount rate, T the project horizon, and $I_0$ the initial investment outlay. This standard NPV approach implicitly assumes that managers will remain passive if circumstances or conditions change. Thus, even if market conditions change, the NPV rule assumes that managers will not alter their level of production or their initial decision in response. In other words, the conventional NPV method treats the investment decision as a static, one-off affair, leaving no scope for managers to react to new information. Furthermore, the NPV method assumes that the initial decision is made immediately, that all cash flow streams are fixed in the future and highly predictable, and that all categories of risk are accounted for by the discount rate.
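As a simple sketch of the rule, with purely illustrative figures:

```python
# NPV: discount expected cash flows at rate k and subtract the outlay I_0;
# invest immediately if NPV > 0. The numbers below are toy values.
def npv(k, initial_outlay, cash_flows):
    pv = sum(cf / (1 + k) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv - initial_outlay

print(npv(0.10, 100.0, [30.0, 40.0, 50.0, 40.0]))   # ~ 25.2
```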
The energy sector is a good example of an industry exposed to high investment and high uncertainty. The level of activity and the profitability of investment are affected by many factors representing uncertainty and risk (oil price; technology; costs and inflation; the amount and quality of the crude oil extracted; the issue of time; etc.).
When applied to petroleum investment, the deterministic NPV method has major weaknesses that inhibit correct lease value determination [Schwartz, Dixit and Pindyck (1994)]. In particular: (1) the choice of timing for the NPV calculations is in most cases arbitrary and therefore subject to error; (2) petroleum companies, as well as the government, may have different assessments of the future statistical distributions, and thus the expected paths, of hydrocarbon prices, none of which need conform to the aggregate expectations held by capital markets, which also mostly leads to divergent valuations; and (3) choosing the correct set of risk-adjusted discount rates in the presence of the complex statistical structure of the cash flows is a complex task, subject to a great deal of subjectivity and error.
All these weaknesses can be partially resolved by the ROA. The option valuation methodology is not subject to these problems, for the simple reason that it is purely a financial valuation tool. Moreover, some investment areas, such as the energy sector, can be valued with more flexibility and detail by the ROA. Indeed, petroleum investment is often characterized by long-lived, irreversible and uncertain investments, and it has been shown that the use of the ROA can reduce information requirements by eliminating the need to estimate future developed reserve values, and also by eliminating the need to determine risk-adjusted discount rates.
Financial Option and Real Option: A Comparison
A real option is generated by the characteristics of an investment project. In the case of an oil reserve, and according to the ROA, the investment decision can be taken sequentially, each phase giving the right to carry out the next. As proposed by Brennan and Schwartz (1985), the investment can be subdivided into three sequential steps: exploration, development, and extraction. The most important phase is probably the development phase, which requires colossal investment amounts to convert an undeveloped reserve into a developed reserve. Developing the reserve can therefore be considered a call option to obtain the underlying asset, the developed oil reserve, upon payment of the exercise price, the cost of development.
Table 1 summarizes a comparison between real and financial options and reports the similarities between the parameters of the Black and Scholes formula and those of a typical real option model. Following the methodology of Brennan and Schwartz (1985), McDonald and Siegel (1986) and Paddock et al. (1988), the rate of return on holding a unit of developed reserve of value $V_t$, denoted $R_t$, is supposed to follow a geometric Brownian motion (GBM), as given by the following equation:

$$R_t\,dt = \frac{\omega(t)\,q(t)\,dt + dV_t}{V_t} = \mu\,dt + \sigma\,dz \tag{1}$$

The net payoff in petroleum investment comes mainly from two sources: (1) the profits from production; and (2) the capital gain on holding the remaining petroleum. Suppose that production from a developed reserve follows an exponential decline at rate $\lambda$:

$$q(t) = q_0 e^{-\lambda t} \tag{2}$$

So, the net payoff can be written as follows:

$$\Pi(t) = \omega(t)\,q(t) \tag{3}$$

where $\omega(t)$ is the after-tax operating profit from selling a unit of petroleum. Substituting (3) and (2) in (1), we obtain the stochastic process generating the value of a producing developed reserve:

$$dV_t = (\mu - \delta)\,V_t\,dt + \sigma\,V_t\,dz \tag{4}$$

where $\delta$ is the payout rate of the producing developed reserve and $\mu - \delta$ is the expected rate of capital gain. In order to evaluate the value of the option to defer, $F(V)$, a standard arbitrage argument implies that the option written on the developed reserve must satisfy:

$$\frac{1}{2}\sigma^2 V^2 F''(V) + (r - \delta)\,V\,F'(V) - r\,F(V) = 0 \tag{5}$$

with $r$ the risk-free rate, assumed to be constant over the entire life of the project.
Following Merton (1973), McDonald and Siegel (1986) and Paddock, Siegel, and Smith (1988), equation (5) is solved subject to the boundary conditions

$$F(0) = 0, \qquad F(V^*) = V^* - D, \qquad F'(V^*) = 1 \tag{6}$$

where $D$ is the development cost (per barrel) of the developed reserve and $V^*$ is the critical value of investment, i.e., the value beyond which the investment decision under the ROA is recommended.
To solve the differential equation analytically, under the assumption that the option to develop the reserve is assimilated to a perpetual option, the solution of the differential equation is given as follows:

$$F(V) = \begin{cases} A\,V^{\beta_1}, & V < V^* \\ V - D, & V \geq V^* \end{cases}, \qquad \beta_1 = \frac{1}{2} - \frac{r - \delta}{\sigma^2} + \sqrt{\left(\frac{r - \delta}{\sigma^2} - \frac{1}{2}\right)^2 + \frac{2r}{\sigma^2}} \tag{7}$$

with $V^* = \frac{\beta_1}{\beta_1 - 1}\,D$ and $A = \frac{V^* - D}{(V^*)^{\beta_1}}$. Accordingly, under the ROA, the decision rule states that we should invest only if $V > V^*$, and not merely if $V > D$. As soon as the current value of the developed reserve exceeds this critical value, the investment decision becomes possible and investment should be undertaken at that date.
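A numerical sketch of this closed form is given below; all parameter values are illustrative assumptions, not the calibration used for the Tunisian field.

```python
# Perpetual option to develop a reserve worth V at cost D (per barrel),
# using the closed form in (7): trigger V* = beta1/(beta1-1) * D.
import math

def defer_option_value(V, D, r, delta, sigma):
    m = (r - delta) / sigma ** 2
    beta1 = 0.5 - m + math.sqrt((m - 0.5) ** 2 + 2 * r / sigma ** 2)
    v_star = beta1 / (beta1 - 1) * D          # critical (trigger) value
    if V >= v_star:
        return V - D, v_star                  # develop immediately
    A = (v_star - D) / v_star ** beta1        # value-matching at V*
    return A * V ** beta1, v_star

F, v_star = defer_option_value(V=12.0, D=10.0, r=0.05, delta=0.04, sigma=0.25)
print(f"option value per barrel = {F:.2f}, trigger V* = {v_star:.2f}")
```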
Legal framework
The State owns hydrocarbon deposits.
Type of licenses
The Hydrocarbon Code provides for four types of licenses:
1) A prospecting authorization enables the holder to perform preliminary prospecting works, with the exception of any seismic works and drilling of wells (1 year).
2) A prospecting permit enables the holder to perform prospecting work (exclusive right) within a defined area (2 years).
3) An exploration permit enables the holder to perform the works set out in the exploration work program (5 years).
4) An exploitation concession is granted when a commercial discovery is made during the period of the exploration permit (30 years, with possible renewals).
Contractual regime
There"s mainly two contractual regimes, (1) a joint venture agreement contract (JVAC) signed by the ETAP" company and the Private Oil Company, (2) a production sharing contract (PCC) entered into between ETAP and the contractor.Within the first type contract , the ETAP benefits from an option to take a participating interest in the exploitation concession and The contractor usually bears the expenses and risks of prospecting and exploration activities (but ETAP may choose to participate in the exploration expenses with the consent of the State while in the second form of contract the ETAP" company is entitled to a share of the hydrocarbon production and must reimburse a percentage of the prospecting and exploration costs as agreed under the production sharing contract.
Income Tax rate
The income tax rate in Tunisia varies between 50% and 75% depending on the "R" factor, which is the ratio between (1) accumulated net revenues and (2) accumulated expenditures.
Royalty tax
An additional tax, a royalty on production, must be paid; it also depends on the R factor (< 15%).
Incentives
As a tax incentive, the company has the right to build up an investment reserve, limited to 20% of taxable income, intended to finance prospecting and exploration activities.
Contractor domestic obligation
The foreign company has a Domestic Market Obligation (DMO), which states that 20% of production must be sold on the domestic market at a discount to international prices.
Collected Data
To evaluate the petroleum investment, we consider the following parameters, collected mainly from the ETAP company: 1) The exploration permit named "ARAIFA" was granted by the Tunisian state in 2011 and terminates in 2018.
2) The investment must begin, if the profitability of the reserve is proved, in 2019; the contract that will be signed is a production sharing contract (PSC).
3) The size of the reserve, or the amount of hydrocarbon, is estimated at 17.528 million barrels in place (which could be increased to 40 million barrels).
4) The expected amount of oil that will be produced per day is at least 15 thousand barrels. The quantity of production will peak in the second year of operation; at that time this amount is estimated at 1.940 million barrels (insert figure 1). 5) Exploration costs amounted to US$ 20 million. (All data, as well as details of the Tunisian petroleum legislation, were obtained from ETAP company experts and from public information available at www.etap.com.tn.)
6) The development cost (the value of the investment) is equal to US$ 50.995 million. The investment will be made sequentially over 5 years (2018-2022) (insert figure 2). 7) Production, if the project proves profitable, starts at the end of 2019.
8) The initial investment will be amortized from the year 2019 at 20% in the first year, 30% in the second and third years, and 20% in the fourth year. 9) The natural gas present in the oilfield is not considered in the valuation, since it constitutes a minor part of project value.
10) The average decline rate is estimated at approximately 6%.
11) The operating life of the oil reserve is estimated to be at least 20 years.
12) The present investment will be funded by a bank loan in U.S. dollars (65%) and the rest (35%) by own funds.
13) The beta factor in the energy and gas sector (mean) is assumed to be 1 (insert figure 3).
DCF Valuation
Considering all the data presented above, the oilfield characteristics and financial parameters, and the calculated forecasted cash flows (insert figure 4), application of formula (8) under the NPV method gives a positive value (11851.350). The decision rule for this method therefore indicates that the investment is profitable: the project is economically justified and should be undertaken immediately. This result, obtained with a deterministic method, must be interpreted with caution because of the many errors that can be committed in using the NPV approach. In particular, (1) the choice of the dates of exploration and development is arbitrary and therefore subject to error, (2) the oil company and the government may have different estimates of the distribution of cash flows, and (3) forecasting the oil price and choosing the discount rate can involve highly complex statistical tools and will mostly be guided by subjective data. All of these errors can be partially corrected by using the ROA.
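As a concrete illustration of this NPV computation, the short Python sketch below implements the formula defined in the notation section. Every numeric input is an assumption made for illustration only; the price path, costs, tax rate and WACC are not the paper's data.

import math

I = 50.995e6                      # development cost, US$ (order of magnitude from the text)
WACC = 0.10                       # risk-adjusted continuous discount rate (assumed)
N = 20                            # operating life, years

def production(t):
    """Barrels in year t: ramp to a peak of 1.94 million in year 2, then ~6% decline."""
    peak = 1.94e6
    return peak * (0.8 if t == 1 else math.exp(-0.06 * (t - 2)))

npv = -I
for t in range(1, N + 1):
    Q = production(t)
    S, X = 60.0, 1.0              # assumed flat Brent price (US$/bbl) and US$ exchange rate
    C = 20.0 * Q                  # assumed total production cost at 20 US$/bbl
    h, D = 0.55, 0.0              # assumed tax rate; depreciation omitted in this sketch
    npv += math.exp(-WACC * t) * ((Q * S * X - C) * (1 - h) + h * D)

print(f"NPV = {npv:,.0f} US$ -> {'invest' if npv > 0 else 'reject'}")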
Moreover, there are two apparent advantages of the ROA over DCF.First, the ROA approach reduces information requirements by eliminating the need to estimate future developed reserve values.Even using the market value of developed reserves, the DCF analyst would still need to make assumptions about the expected rate of appreciation of the value of a developed reserve.Second, the ROA approach eliminates the need to determine risk-adjusted discount rates, because the optimal investment-timing decision must take account of the feedbacks between the investment-timing rule and the risk of the resulting cash flows.
The Parameters of the Option to Defer
In order to value the option to defer, i.e. the value of the undeveloped reserve, we must first determine all the financial parameters needed, in particular the volatility of the developed reserve, the net convenience yield, the present value of the developed reserve and the investment cost required to turn the petroleum reserve into a developed reserve.
Volatility of the Developed Reserve (Stock Volatility): σ²
To calculate the volatility of developed reserves, we would need historical data on developed reserve values, but these data are not available in the Tunisian context, and the value of the volatility affects the value of the option. Determining the volatility of a developed reserve supposes obtaining past data on the market values of developed reserves, or the volatility of a similar existing reserve with the same hydrocarbon quality, cost structure and tax regime. Since this information is not available in the Tunisian context, we choose to approximate the volatility of developed reserves, as in Paddock et al. (1988), by the volatility of crude oil prices. We suppose that the rate of change in crude prices follows a lognormal distribution; under this assumption, the variance of the rate of change in crude prices equals the variance of the developed reserve. This variance is estimated as

x_t = log(P_{t+1} / P_t),   σ² = (1 / (N − 1)) Σ_{t=1}^{N} (x_t − x̄)²,

where P_t is the crude oil price observed in period t and N is the sample size (number of observations).
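The estimator just described can be written in a few lines of Python; the monthly prices below are placeholders, and annualizing by multiplying the monthly variance by 12 is an assumption consistent with independent lognormal returns.

import math
import statistics

# Hypothetical monthly crude oil prices (US$/bbl); placeholders, not the study data.
prices = [79.5, 81.2, 84.0, 86.3, 83.1, 77.4, 75.9, 78.8, 82.5, 85.0, 88.2, 90.1, 92.4]

# Log returns x_t = log(P_{t+1} / P_t)
x = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

var_monthly = statistics.variance(x)   # unbiased sample variance, 1/(N-1)
var_annual = 12 * var_monthly          # annualized under i.i.d. monthly returns
print(f"monthly variance = {var_monthly:.6f}, annualized = {var_annual:.6f}")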
Based on a sample of monthly quotations of crude oil over the period 2010-2018 (insert figure 5), we calculate an annual variance (σ² = 13.55). The convenience yield, as an opportunity cost, implies that the value of waiting and delaying the investment decreases as the benefit of holding crude oil increases; conversely, when the convenience yield decreases or becomes negative, the value of the option to defer investment increases. The calculation of the convenience yield is given in Table 4 (insert Table 4). As in Paddock et al. (1988), we apply the hypothesis of Gruy et al. (1982). This assumption, named "one third", states that the value of a developed reserve tends to be about one third of the crude oil price (V_t = P_t / 3), where P_t is the annual average price of crude oil. Given this assumption, we can use the variance of the rate of change of crude oil prices as a proxy for the variance of the rate of change of developed reserve prices.
Using monthly data for the period 2010-2018, the annualized variance of crude oil is about 13.55 (insert figure 5).
The present value of the developed reserve (the underlying asset) and the exercise price, i.e. the net present value of the development cost, are computed next. Table 5 summarizes all the parameters needed to evaluate the value of the option on the undeveloped reserve.
ROA Decision Rule
Given these results, the project's NPV is positive (NPV = 11851.530) and the critical value (V* = 137577.956) is less than the current value of the developed reserve (V = 242168). The oil company can therefore profit from realizing this oil project; under the decision rule recommended by the real options method, the investment must be made immediately and should not be delayed. In the case of this petroleum investment, we can easily observe that the two methods lead to the same conclusion. But even though the conclusion is the same, there is a large difference between the two approaches.
Under the new investment rule based on the ROA, we must invest not when V > I, but when V > V*, with V* > I. The difference between the critical value of investment (V*) and the value of the investment (I) reflects the value of the option to wait owned by the oil company; this option gives the possibility of deferring the investment decision. The error that would otherwise be committed is to believe that the cost of developing the reserves at the initial date is equal only to the cost of development; the full cost of the investment is larger, being equal to the cost of development plus the value of the option to defer.
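A minimal Python sketch of this decision rule, using the closed-form critical value sketched earlier; r, δ and σ² here are illustrative assumptions, while D and V only echo the magnitudes quoted in the text.

import math

def critical_value(r, delta, sigma2, D):
    """Critical value V* = beta/(beta-1) * D for a perpetual option to invest."""
    a = (r - delta) / sigma2
    beta = 0.5 - a + math.sqrt((a - 0.5) ** 2 + 2 * r / sigma2)
    return beta / (beta - 1) * D, beta

# Illustrative parameters (assumptions, not the paper's calibration).
r, delta, sigma2 = 0.05, 0.04, 0.10
D = 50995.0                      # development cost
V = 242168.0                     # current value of the developed reserve

V_star, beta = critical_value(r, delta, sigma2, D)
rule = "invest now" if V > V_star else "defer"
print(f"beta = {beta:.3f}, V* = {V_star:,.0f} -> {rule}")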
In the case of the investment project dealt with here, although the value of the option to wait is high (the critical value is close to three times the cost of development), the size of the cash flows generated by the investment leads to realizing the investment as soon as possible, even though investing means giving up the value of the option to defer (insert figure 6).
Sensitivity Analysis
As long as we don"t have exactly the true value of the volatility and we are obliged to approximate the value by the volatility observed in crude oil, and as the value of RO increases with volatility and risk, we choose in this paragraph to calculate the sensitivity of the value of the option to defer, the critical value of the investment, and also the decision rule given by the ROA, regarding the change in risk i.e. the variation of the volatility of the developed reserve (insert Table 7).
Conclusion
The inability of the NPV approach to correctly value the different options embedded in investment strategies can lead to poor or even wrong investment decisions, particularly where the environment is characterized by irreversibility, uncertainty and managerial flexibility.
The present paper extended financial option theory by applying the ROA to value a claim on a real asset: a Tunisian offshore petroleum lease. To value the option to defer, we developed a continuous-time model derived from the earlier studies of Brennan and Schwartz (1985), McDonald and Siegel (1986) and Paddock et al. (1988). Compared with other theoretical models and calculation methods developed in the financial literature, the model developed in this paper gives rise to a simple and uncomplicated method for determining the value of the option.
The results obtained show that the two methods (NPV and ROA) lead to the same decision: the project is economically profitable and should be undertaken immediately. Although the option to defer is very valuable, the size of the cash flows generated by the investment leads to realizing it immediately and thus to forgoing the option to defer.
Finally, as underlined by Dixit and Pindyck (1995), there are many other real assets with option-like characteristics. The kinds of informational economies, insights and problems discussed in this paper in relation to valuing petroleum leases may be present in valuing claims on other real assets as well. An interesting extension of this framework would be to include other types of options (an abandonment option, for example), and to study the sensitivity of the NPV and ROA methods to input variables using more sophisticated, but more complicated, methods such as Monte Carlo simulation.
Notation and formulas

NPV: the net present value of the initial investment
CF: the cash flows released by the project
r: a discount rate

With a petroleum investment, and given the specificities of the project, the NPV can be expressed by the following formula:

NPV = −I + Σ_{t=1}^{N} e^{−WACC·t} [ (Q_t S_t X_t − C_t)(1 − h_t) + h_t D_t ]   (8)

where:
I: the net present value of the initial investment
N: the life of the oilfield once production has begun
WACC: the risk-adjusted continuous discount rate
Q_t: the number of barrels of oil to be extracted in year t
S_t: the estimated Brent spot price in year t
C_t: the total cost of production in year t
X_t: the forward exchange rate in year t
h_t: the corporate tax rate
D_t: the planned depreciation in year t

Return on the developed reserve:

R_t dt = ω q_t dt + d(B_t V_t) = (δ + α) B_t V_t dt   (3)

where:
B_t: the number of units of petroleum in a developed reserve
V_t: the unit value (per barrel) of the developed reserve
R_t: the instantaneous per-unit-time net payoff from holding the reserve at time t
μ: the market risk-adjusted (expected) rate of return
σ: the instantaneous per-unit-time standard deviation of the rate of return

Parameters of the option to defer:
F: the value of the option to defer, i.e. the value of the undeveloped reserve
V: the unit value (per barrel) of the developed reserve
Q: the amount of oil existing in the developed reserve
D(Q): the unit cost of development, supposed to be a function of the quantity Q
t: the current time
T: the expiration date

The solution of the differential equation must satisfy the following boundary conditions:

F(0, t) = 0
F(V, T) = max(V − D, 0)   (10)
F(V*, t) = V* − D
F_V(V*, t) = 1
Net Convenience Yield (Stock Dividend Yield)

To evaluate the option to defer attached to the petroleum investment, we must determine the net convenience yield for the producing reserve, as given in equation (7) developed above. Given the Gruy et al. (1982) "one-third" assumption mentioned above, the after-tax per-barrel profit from production (ω) can be calculated from the crude oil price.

Valuing the Option to Defer

All parameters are now available; we are now able to calculate the value of the option to defer attached to the oil reserve. This value is given by formula (10) above.

Figure 1. Forecasted project annual production (ETAP)
Figure 2. Forecasted project cost of development (ETAP)
Figure 4. Forecasted project cash flows
Figure 5. Crude oil prices between 2010 and 2018 (source: www.boursier.com)
Figure 6. Decision rule according to the ROA
Figure 7. Sensitivity analysis of the option to defer to the variance of the underlying asset (developed reserve)
Table 1. Analogy between financial and real options
Table 2. Overview of the petroleum fiscal and legislative landscape in Tunisia
Table 3. Oilfield characteristics and financial parameters used in the DCF valuation (the income tax rate is given by the value of the R factor, source: www.treasury.gov; the WACC has been determined using estimates of the debt ratio, beta and an assumed risk premium to calculate the cost of equity)
Table 4. Determination of the convenience yield value
Table 5. Financial parameters needed to evaluate the option to defer
Table 6. The value of the option to defer according to the ROA
A Study on Economic Prospects and Problems of Terracotta and Pottery Crafts of Assam with Special Reference to Asharikandi Village of Dhubri District
Terracotta and pottery crafts are found among people who have a strong urge for creativity and apply their artistic minds to this specific field of handicraft, giving life to the soil. This kind of human creation is as old as human civilisation. Once considered a poor man's craft, in due course of time it has gained access to and occupied a distinct identity among all classes of people through its aesthetic value. The study attempts to highlight the role of terracotta and pottery crafts in entrepreneurship development. Asharikandi, the craft village of Dhubri district, is a distinct place where both terracotta and pottery crafts are found and practised in the traditional style. The real thrust of the study is to make people aware of the scope available in this specific craft so as to encourage an enterprising culture among potential artisans. The economic viability of the craft and the problems faced by the artisans in a competitive world are further important objectives of this study.
Asharikandi has been able to retain a separate identity for the Asharikandi style of terracotta. The HATIMA DOLL, the nationally and internationally acclaimed unique masterpiece of the late Sarala Bala Devi, has brought much repute to the Asharikandi style of terracotta [Design Clinic Workshop Report 2013].

Historical Background: Before the partition of India, a few potter families from erstwhile East Bengal, at present Bangladesh, migrated to this place, Asharikandi (Madaikhali). It is said that the term Asharikandi derived from the combination of two words, 'ASHAR' and 'KANDI'. 'ASHAR' is the third month in the Assamese calendar and 'KANDI' (an Assamese term) means 'shedding tears'. During ASHAR, heavy rainfall causes floods in this low-lying area, and the dwellers of this place shed tears out of the misery caused by the havoc of the floods. The potters especially have to suffer a lot: during ASHAR, the flood-prone rainy month, they cannot make, dry or burn their products, and cannot even safely store their earlier produced items. The senior-most potters say that their ancestors selected the place for reasons such as the availability of raw materials, cheaper transportation and an important strategic location. HIRAMATI, the soil, is the soul of this craft. The reserve of HIRAMATI lies in nearby areas like Silairpar, which is only four kilometres away from the village. If the raw material had to be brought to the production place by surface transport, it would have been much costlier; owing to the strategic location, the people can avail themselves of the cheapest means of transportation, by boat through the waterways, as the village is just on the bank of the river Gadadhar, a tributary of the mighty Brahmaputra. For selling finished goods, both surface and water transport can be used, and the connection with the river Brahmaputra gives the place an advantage in its marketing network with the major cities. Earlier, the daily utensil needs of the Jamindar (royal) family were catered for from this area, ever since this potter community migrated to the place. The farsightedness of the ancestors of the potters of this place is really laudable and amazing [Assam eagle 2010].

Terracotta and pottery is a unique application of creativity and entrepreneurial processes. Entrepreneurship is the troubleshooter against the economic backwardness of a society; it is a process through which a barren land can be transformed into a fertile one, a phenomenon in which people strive to do something new by taking risks and expecting some economic gain. The term entrepreneur originates from the French word "entreprendre", meaning "to undertake". The term first appeared in the sixteenth century in the French language, where the word was also applied to the leaders of military expeditions [Gupta & Khanka 1996; Thomas et al. 2005]. According to Richard Cantillon, "An entrepreneur is a person who buys factor services at certain prices with a view to selling its product at uncertain prices"; as per Cantillon, the entrepreneur is a non-insurable risk bearer [Mohanty, 2011]. In the words of Schumpeter, an entrepreneur is an innovator who carries out new combinations to initiate the process of economic development through the introduction of new products and new markets, the conquest of new sources of raw materials and the establishment of a new organisation of industry [Mohanty, 2011]. Behind every great work of our civilised human society, there is a creative human intelligence.
Terracotta and pottery crafts are age-old creations of human civilisation. Basically, an entrepreneur is a person who turns his or her own ideas into an operational activity; it involves a process in which one applies one's own creativity to turn a dream into reality. Entrepreneurship can be considered one of the key elements of the economic growth of a nation. It creates sources of income generation for unemployed youth, and it is remarkable that the attitude of the present generation has shifted towards self-employment through entrepreneurship. India is a rural-based country, where 68.84 percent of people live in rural areas and the remaining 31.16 percent live in urban areas (as per the 2011 census). Although the country has enormous scope for growing entrepreneurship, its growth has not yet reached the expected level; lack of application of resources and underutilisation of capabilities are the main hindrances behind this backwardness. In the context of north-east India, Assam is one of the pioneering states in promoting entrepreneurship. In 1973, the state conceived the idea of entrepreneurship and initiated its operational activity, and over time different business schemes and government initiatives have been implemented for the unemployed. In this regard, entrepreneurship covers all aspects of business, in both the service and manufacturing sectors, and there is a sizeable number of business enterprises in the state, ranging from the micro level to large enterprises. In Assam, where a large mass of people live in rural areas, rural-based industry can be set up around each area's characteristic activity. Terracotta and pottery crafts are very old enterprises in which groups of creative people are engaged in different corners of the state.
OBJECTIVES OF THE STUDY:
i. To create awareness about terracotta and pottery crafts and their economic viability. ii. To analyse the socio-economic condition of the artisans. iii. To identify the problems faced by the artisans of terracotta and pottery crafts.
METHODOLOGY:
This study was conducted in the month of September 2017 in Asharikandi village of Dhubri district of Assam. In order to meet the objectives of the study, data were collected from both primary and secondary sources. To bring authenticity to the study, a well-planned questionnaire was designed and administered for data collection from the population on a random basis. Structured and unstructured interviews were also used as important tools to interact with government officials and other targeted respondents from non-governmental organisations. Secondary data from government websites and relevant books and journals were also considered for the purpose of the study. The study is based on 80 artisans surveyed in the study period, out of approximately 250-300 artisans belonging to 80 families.
Research Questions Investigated:
i. Whether their income comes solely from their practised craft or whether they engage in other activities as well.
ii. Is there any training programme for the further development of their craft? iii. Whether support from governmental agencies is sufficient for the progress of the art. iv. Whether appropriate action has been taken by the concerned authorities to counter their problems.
LIMITATIONS OF THE STUDY:
i. The personal judgement of the researcher played an important role in selecting the samples of the study, which may not exactly reflect the situation. ii. Some respondents were reluctant to divulge personal information, which may affect the validity of some responses. iii. Another difficulty of the study is that most of the respondents are not educated.
LITERATURE REVIEW:
Phukan (1987) reported that the Hira, Kumar, blacksmith and goldsmith are the professional artisans of Assam; the Hira and Kumar mainly practise the pottery craft. There is a distinct difference in technique, in the use of raw materials and in the shapes of the various earthenwares made by these two groups. He mentioned that the Hiras manufacture potteries for domestic and other day-to-day use, whereas the Kumars manufacture potteries generally used for religious and other social ceremonies.

A far-reaching ethnographic documentation of the terracotta craft tradition of Bihar and eastern Uttar Pradesh was carried out by Jayaswal (1984) and Jayaswal and Krishna (1986). In their in-depth work they examined the distribution of terracotta forms and styles, geographical factors and the location of the centres, and described in detail the manufacturing techniques of the terracottas produced by the contemporary potter communities of the middle Ganga valley. They classified the terracotta figurines from the area into three categories: ritualistic, decorative and toys for children. The elephant, horse and tiger are the predominant animal figures produced in that area; these were offered in different ritual ceremonies, though the use of the tiger was limited to the small area of Gorakhpur. They also referred to the terracotta of the 'Nauranga style', which is represented by stylised horses, elephants, tigers, camels and rhinoceroses [Jayaswal, et al. 1986].

Gupta (1988) provides insight into the progress and prospects of the pottery industry as developed in India and the various problems it faces. He studied in detail the important aspects of the growth and development of the pottery industry in Phulpur, UP, examining at length the organisational structure, administrative set-up, capital involved, manufacturing techniques, and packing, transportation and marketing problems. The production trends and cost analysis, as well as the problems faced by the workers engaged in the industry, including their wage structure, working conditions and social welfare, were also exhaustively examined.

Bhattacharya (2008) discussed the conditions of the idol makers of West Bengal, concentrating mainly on the potters engaged in religious idol making. From her study it can be understood that these potters have emerged as an occupational group and that rural potters tend to move towards urban areas, as urban areas provide better earning opportunities. She also raised concerns regarding the transformation of society and the survival of the traditional system of idol making.

Individual effort in the Asharikandi style of terracotta and pottery: Sri Dhirendra Nath Paul, the nationally and internationally acclaimed senior-most master craftsman of this craft, and the young talents Mr. Devdas Paul and Gokul Ch. Paul, along with Sri Binoy Bhattacharjee, the coordinator of ATAPDC and the director of NECARDO, have been working for the preservation, promotion and development of the Asharikandi style of terracotta for the last twelve years. Their active role in the formation of ATAPDC and the "Assam Handicraft Artisans' Welfare Fund Board" is worth mentioning.
Present Status:
Once all the people of the Paul Para, the name of the cluster of potters of Asharikandi, used to practice pottery craft. But in course of time, they had to discontinue pottery, their traditional work, due to many problems. Twelve years back only two families had been practicing terracotta and few families had been doing pottery works. But, at present, altogether eighty families of this village are engaged in terracotta and pottery practice. The artisans now work round the year. Terracotta and pottery work is now their main profession. Few years back it was their part-time job. Earlier they used to sell their potteries likepitchers and other utensils in the nearby towns and villages, and terracotta products-like Hatima Doll, Ainar Horse, Elephant, Rhino, and other idols of God and Goddesses on the occasions of local festivals and fairs. Now they go out for selling their terracotta products in selected occasions like trade fair, and sale cum exhibitions organised by the various Govt. Departments and NGOs. Resellers of terracotta and pottery items come to the artisans' cottages and purchase the goods direct from the village. The selling part of the products is also run by the Scheduled Caste, fisherman Barman community people, who are also the residents of Asharikandi village and five hundred families in number. This fishing community, due to the lacking of fishing opportunities, had to left their ancestral-work and has been shifting to terracotta and pottery. In spite of having enormous opportunities the Terracotta and Pottery craft industry of Asharikandi cluster is facing some threats. The imminent threats are from raw materials. The Hiramati is the principal raw material. It is generally found in the low-lying areas. It is a special type of soil having more elastic and more water containing capacity. It is not available everywhere. The Hiramati which is used in Asharikandi style of Terracotta is found in an area of more than one hundred bighas of land at a place known as Silairpar. The area is under Government Khas land. It is just five kilometres away from the cluster. The artisans used to bring the required Hiramati from the said area for a long period of time but during the recent years, the area is encroaching by some illegal occupants. As a result, the procurement of Hiramati is gradually becoming rare and costlier. Eviction of the encroachers and allotment of the Hiramati reserve khas land to the artisans can save the magnificent traditional art and craft from the verge of extinction. Lack of quality education of artisans and unawareness about the potentiality of Asharikandi style of terracotta and pottery is also a route cause for which it is remaining isolated from the global exposure. Unavailability of modern equipment in production procedure is a hindrance of this ethnic cluster which results in time consuming production procedure and unfulfilment of demand of customers. It is worth mentioning that dignitaries like his excellence Lt. Gen (Retd.) AJAI SINGH Ex-Governor of Assam also intimated some concrete step to develop Asharikandi style of terracotta and pottery during his regime. To bring more authenticity to the study an analysis has been made by representation of tables from the data collected through field survey. In the bar diagram above, it is seen that earning of people lies in four segment i.e. 
8 of artisans income is upto 24000, 34 artisans income falls in the category of 24000-60000, where in the range of 60000-100000 there are 25 artisans and only 13 of the artisans falls above 100000 income per year. The above diagram shows that 79% of artisans solely depend on terracotta and pottery for their livelihood and only 21% of artisans" reliability extends to some other source including the craft. As per the table for selling of pottery product 8 people adopt indirect selling where 72 people sale their product directly. It is found that artisans use to promote their product through trade fair (30%), exhibition (65%) and media (5%). The above table shows that only 33% of artisans are able to receive govt. support where 67% is still to receive govt. support. It is seen that only 24% of total artisans are gainer of loan facility from different organisations and 76% is not cover under any loan scheme of any organisation.
FINDINGS:
i. Most of the artisans do not practise both arts, i.e. terracotta and pottery, though the entrepreneurial culture was found to be inherent in them. ii. The vulnerability of their economic condition is due to heavy reliance on this ethnic craft with traditional tools and techniques.
iii. For terracotta and pottery products there are two to three channels of distribution, and government organisations play a vital role in this respect. iv. There is a provision of D.A. for the artisans when they participate in any state- or national-level exhibition, trade fair or expo. This D.A. is provided by D.
RECOMMENDATIONS:
On the basis of the study, the recommendations are as follows: i. The artisans can improve their income levels by understanding the marketability of their beautiful creations.
ii. The railway connectivity from Gauripur to other parts of the country has been modernised through the broad-gauge system, so the artisans can use this means of transporting their products, as it is more economical than roadways. iii. The Government should take the initiative to increase the number of showrooms inside and outside the craft village for better exposure of the products. iv. The Government needs to take concrete steps to develop the infrastructure of the Asharikandi style of terracotta and pottery so that the artisans can better utilise their potential. v. Modern tools and technology are required for faster production of goods so that demand can be met in time. vi. The Government should organise exhibitions inside and outside the state, which would enable better publicity and would also help to improve tourism. vii. These earthen products are eco-friendly and cost-effective too, so the promotion of these products is of utmost importance for the upliftment of the rural economy and for a better environment.
CONCLUSION:
In this study, an effort has been made to introduce the terracotta and pottery craft of Asharikandi village of Dhubri district of Assam. This age-old art has contributed to the needs of society by providing utensils, especially to the Jamindar (royal) families; in this era of modernisation, the artisans have diversified their product range. An attempt has also been made to observe the socio-economic condition of the artisans.
An effort has also been made to assess the government machinery involved in this cluster and its involvement in and encouragement of upgrading this beautiful art to compete with the modern world. The prospect of tourism in nearby areas arising from the terracotta and pottery craft is another concern of this study. Finally, it is necessary to mention that the artisans face acute financial problems in coping with the modern competitive environment, for which reason they remain underdeveloped. Overall, from the researcher's point of view, it has been a knowledge-enhancing experience, undertaken with the intention of familiarising other parts of society with this beautiful creation.
A Generalist Lifestyle Allows Rare Gardnerella spp. to Persist at Low Levels in the Vaginal Microbiome
Gardnerella spp. are considered a hallmark of bacterial vaginosis, a dysbiosis of the vaginal microbiome. There are four cpn60 sequence-based subgroups within the genus (A, B, C and D), and thirteen genome species have been defined recently. Gardnerella spp. co-occur in the vaginal microbiome with varying abundance, and these patterns are shaped by a resource-dependent, exploitative competition, which affects the growth rate of subgroups A, B and C negatively. The growth rate of rarely abundant subgroup D, however, increases with the increasing number of competitors, negatively affecting the growth rate of others. We hypothesized that a nutritional generalist lifestyle and minimal niche overlap with the other more abundant Gardnerella spp. facilitate the maintenance of subgroup D in the vaginal microbiome through negative frequency-dependent selection. Using 40 whole-genome sequences from isolates representing all four subgroups, we found that they could be distinguished based on the content of their predicted proteomes. Proteins associated with carbohydrate and amino acid uptake and metabolism were significant contributors to the separation of subgroups. Subgroup D isolates had significantly more of their proteins assigned to amino acid metabolism than the other subgroups. Subgroup D isolates were also significantly different from others in terms of number and type of carbon sources utilized in a phenotypic assay, while the other three could not be distinguished. Overall, the results suggest that a generalist lifestyle and lack of niche overlap with other Gardnerella spp. leads to subgroup D being favoured by negative frequency-dependent selection in the vaginal microbiome. Supplementary Information The online version contains supplementary material available at 10.1007/s00248-020-01643-1.
Introduction
Gardnerella spp. are an important diagnostic marker of bacterial vaginosis (BV), a dysbiosis of the vaginal microbiome characterized by a shift from the lactobacilli-dominated vaginal microbiome to a more diverse microbiome, containing many aerobic and anaerobic bacterial species, including Gardnerella spp. Gardnerella is a diverse genus and at least four subgroups (A, B, C and D) have been identified using cpn60 universal target barcode sequencing [1], which correspond to four clades defined by Ahmed et al. [2]. Recently, Gardnerella subgroups have been reclassified into thirteen genome species, of which four are now named as G. vaginalis (subgroup C/clade 1), G. swidsinskii and G. leopoldii (subgroup A/clade 4) and G. piotii (subgroup B/ clade 2) [3,4]. These Gardnerella species differ in their phenotypic traits, including sialidase activity and vaginolysin production, which may render some of the subgroups more pathogenic than the others [5][6][7].
Women with vaginal microbiomes dominated by Gardnerella are usually colonized by at least two Gardnerella spp. [4,8]. The relative abundances of these co-occurring species, however, are not equal. Subgroup A (G. swidsinskii and G. leopoldii) and subgroup C (G. vaginalis) are most frequently dominant in reproductive-aged women [4,8]. These two subgroups are also often associated with the clinical symptoms of bacterial vaginosis [4,9,10]. Subgroup B has been suggested to be associated with intermediate microbiota [7,9,10]. Subgroup D, comprised of several unnamed "genome species", has only been detected at low prevalence and abundance [4,10].
Several factors can affect the abundance and co-occurrence of Gardnerella spp. in the vaginal microbiome, including host physiology, host-microbiota interactions, nutrient availability and ecological interactions among bacteria [11,12]. Ecological interactions are perhaps the most important factors which may affect the co-occurrence and ecological succession of Gardnerella species in the vaginal microbiome. Recently, we demonstrated that an indirect, exploitative competition between subgroups of Gardnerella is prevalent in co-cultures in vitro. While the growth rates of isolates in subgroups A, B and C were negatively affected by competition, growth rates of Gardnerella subgroup D isolates increased with the increasing number of competing subgroups in co-culture communities [12].
The strength of microbial interactions between bacterial species can be affected by niche overlap [13,14], and species with similar nutritional requirements will naturally compete over the same resources [15]. In addition to competition for nutritional resources, bacteria may also compete for resources essential for colonizing a specific site. Since isolates from Gardnerella subgroups A, B and C are negatively affected by competition, and subgroup D isolates experienced a boost in growth rate, the degree of niche overlap between subgroups A, B and C is presumably higher than between subgroup D and any of the others.
Although the growth rate of subgroup D increases in cocultures, it does not have an intrinsically high growth rate. In fact, the in vitro growth rate of subgroup D is half of that of subgroup C, which may contribute to its low abundance in the vaginal microbiome [12]. Low-abundance species are often favoured by negative frequency-dependent selection [16,17], which can be governed by nutritional requirements [18]. Bacteria capable of utilizing relatively few, abundantly available nutrients in a particular environment are nutritional specialists in the context of that environment. Generalists, on the contrary, are bacteria capable of utilizing more nutrient sources than their specialist counterparts. In negative frequency-dependent selection, the resources accessible to rapidly growing specialists will dwindle, reducing the fitness of the specialists as their population increases. As a result, the population of more generalist bacteria capable of utilizing a wider range of nutrient sources will expand in a densitydependent manner [18,19]. Generalists can also negatively affect the growth of specialists by competing for the resources that can be utilized by both of them [14].
Although the growth of the rarely abundant subgroup D is facilitated in co-cultures, the degree of overlap in nutrient utilization among the subgroups and the range of nutrient utilization by individual subgroups are yet unknown. The objective of our present study was, therefore, to evaluate the amount of genomic and phenotypic overlap in nutrient utilization among the subgroups of Gardnerella and to determine if subgroup D is a nutritional generalist relative to the three other subgroups. Findings are interpreted in relation to the hypothesis that subgroup D is maintained in the vaginal microbiome through negative frequency-dependent selection.
Bacterial Isolates
Thirty-nine Gardnerella isolates from our culture collection representing all four subgroups (based on cpn60 barcode sequencing) were selected for the study (n = 12 subgroup A, 12 subgroup B, 8 subgroup C and 7 subgroup D isolates) (Table S1). Isolates were streaked on Columbia agar plates with 5% (v/v) sheep blood and were incubated anaerobically at 37°C for 48 h. For broth culture, colonies from blood agar plates were suspended in BHI broth supplemented with 10% horse serum and 0.25% (w/v) maltose.
Whole-Genome Sequencing
Whole-genome sequences for 10 of the study isolates had been published previously, and the remaining 29 were sequenced as part of the current study (Table S1). DNA was extracted from isolates using a modified salting-out protocol [20] and was stored at −20°C. DNA was quantified using Qubit dsDNA BR assay kit (Invitrogen, Burlington, Ontario) and the quality of the extracts was assessed by the A260/A280 ratio. Isolate identity was confirmed by cpn60 barcode sequencing as follows. cpn60 barcode sequences were amplified from extracted DNA with the primers JH0665 (CGC CAG GGT TTT CCC AGT CAC GAC GAY GTT GCA GGY GAY GGH CHA CAA C) and JH0667 (AGC GGA TAA CAA TTT CAC ACA GGA GGR CGA TCR CCR AAK CCT GGA GCY TT). The reaction contained 2-μL template DNA in 1× PCR buffer (0.2 M Tris-HCl at pH 8.4, 0.5 M KCl), 2.5 mM MgCl 2 , 200 μM dNTP mixture, 400 nM of each primer, 2 U AccuStart Taq DNA polymerase and water to bring to a final volume of 50 μL. PCR was carried out with incubation at 94°C for 30 s, 40 cycles of 94°C 30 s, 60°C for 1 min, 72°C for 1 min and a final extension at 72°C for 10 min. PCR products were purified and sequenced by Sanger sequencing and compared with the chaperonin sequence database cpnDB [21] to confirm identity.
Following confirmation of the identity of isolates, sequencing libraries were prepared using the Nextera XT DNA library preparation kit according to the manufacturer's instructions (Illumina, Inc., San Diego, CA). PhiX DNA (15% [vol/vol]) was added to the indexed libraries before loading onto the flow cell. The 500 cycle V2 reagent kit was used for the Illumina MiSeq platform (Illumina, Inc.).
Raw sequences were trimmed using Trimmomatic [22] with a minimum quality score of 20 over a sliding window of 4 and a minimum read length of 40. Trimmed sequences were assembled using SOAPdenovo2 [23] or SPAdes (NR002, NR043, NR044) [24]. Assembled genomes were annotated using the National Center for Biotechnology Information Prokaryotic Genome Annotation Pipeline [25].
Pangenome Analysis
Pangenome analysis of the 39 study isolates and the published genome of G. vaginalis strain ATCC 14019 (Accession number: PRJNA55487) was performed using the micropan R package [26]. We used a "complete" linkage for clustering, and the cutoff value for the generation of clusters was set to 0.75. For initial visualization of the results, the Jaccard index was used to calculate the similarity of patterns of presence and absence of protein clusters among all isolates and a dendrogram was constructed from the results by the unweighted pair group method with arithmetic mean (UPGMA) using DendroUPGMA (http://genomes.urv.cat/UPGMA/).
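The authors used the micropan R package and DendroUPGMA; purely as an illustration of the same presence/absence-to-UPGMA idea, the following Python sketch applies Jaccard distances and average-linkage clustering to a hypothetical matrix (the matrix and isolate names are placeholders, not study data).

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical presence/absence matrix: rows = isolates, columns = protein clusters.
rng = np.random.default_rng(0)
pa = rng.integers(0, 2, size=(8, 200)).astype(bool)
ids = [f"isolate_{i}" for i in range(8)]

# Jaccard distance between isolates (1 - Jaccard similarity).
d = pdist(pa, metric="jaccard")

# UPGMA is average-linkage hierarchical clustering.
tree = linkage(d, method="average")
dendrogram(tree, labels=ids, no_plot=True)  # tree structure only; plotting omitted here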
COG Analysis
Predicted protein sequences from individual genomes were classified into Clusters of Orthologous Groups (COG) categories using WebMEGA (http://weizhonglab.ucsd.edu/ webMEGA). Based on the output from this process, the proportion of proteins in each of the COG categories was calculated for each genome. The distributions of proportional abundances of each category were then used to assess the relationships of the four subgroups in terms of COG category representation.
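The proportional-abundance step is simple enough to show concretely; this pandas sketch uses a toy protein-to-COG assignment table (the genome and COG labels are hypothetical).

import pandas as pd

# Hypothetical COG assignments: one row per predicted protein.
df = pd.DataFrame({
    "genome": ["g1", "g1", "g1", "g2", "g2", "g2"],
    "cog":    ["E",  "G",  "G",  "E",  "E",  "P"],
})

# Proportion of proteins in each COG category, per genome.
prop = pd.crosstab(df["genome"], df["cog"], normalize="index")
print(prop)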
Carbon Source Utilization Assay
Bacterial isolates from freezer stocks were streaked on 5% sheep blood agar plates and were grown for 48 h anaerobically, prior to the inoculation of AN microplates (Biolog Inc, Hayward, CA). Each plate contained 95 carbon sources and one blank well. Colonies of Gardnerella isolates were harvested using a sterile swab and suspended in 14 mL of inoculating fluid supplied by the manufacturer. The cell density was adjusted to 55% T (OD 595 approximately 0.25) using a turbidimeter. Each well was filled with 100 μL of culture suspension and was incubated at 35°C anaerobically for 48 h. All inoculations and incubations were performed in an anaerobic chamber containing 10% CO 2 , 5% hydrogen and 85% nitrogen. All plates were read visually after 48 h of incubation.
If there was no carbon source utilization, the wells remained colourless. A visual change from colourless to purple indicated carbon source utilization. To avoid bias in interpretation, a subset of the plates was read by a second observer who was blinded to the identity of the isolates. There was no disagreement between independent observers. The entire experiment was performed in two biological replicates.
Carbon Source Profiling of Co-cultures
Representative isolates (VN003 of subgroup A, VN002 of subgroup B, NR001 of subgroup C and WP012 of subgroup D) from the four subgroups were co-cultured in the Biolog AN Microplate in a pairwise fashion (n = 6, AB, AC, AD, BC, BD, CD), by combining 50 μL of each isolate suspended in inoculation fluid in each well. The co-cultured AN microplates were incubated at 35°C for 48 h before being assessed visually for colour change. The experiment was repeated on separate days.
Statistical Analysis
The degree of similarity between the isolates in terms of presence/absence of protein clusters generated in the pangenome analysis, proportional abundance of proteins in various COG categories and carbon source utilization patterns was calculated using the Bray-Curtis dissimilarity matrix. Principle components analysis (PCA) was performed on the distance matrices and the significance of relationships was tested using PERMANOVA with the ADONIS function in the vegan package [27]. The SIMPER function was used to identify variables driving the differences between groups.
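For readers who want to reproduce the distance and ordination steps outside R (the authors used the vegan package), a minimal Python sketch follows. The isolate-by-variable matrix is randomly generated here as a placeholder, and PERMANOVA/SIMPER are not re-implemented (scikit-bio's permanova is one available counterpart).

import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
profiles = rng.random((36, 95))                 # hypothetical isolate x variable matrix
groups = np.repeat(["A", "B", "C", "D"], 9)     # hypothetical subgroup labels

# Bray-Curtis dissimilarity between isolates.
d = squareform(pdist(profiles, metric="braycurtis"))

# PCA on the dissimilarity matrix, mirroring the paper's approach.
scores = PCA(n_components=2).fit_transform(d)
for g, (x, y) in zip(groups[:4], scores[:4]):
    print(g, round(x, 3), round(y, 3))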
One-way ANOVA, student's t test and chi-square tests were applied to determine if the utilization of particular carbon sources was associated with specific subgroups.
All statistical analyses were performed in RStudio (version 3.5.2). Figures were generated using GraphPad Prism 8.0 and RStudio (version 3.5.2).
Overlap Between the Subgroups Based on Pangenome and COG Analysis
The purpose of our pangenome analysis was to estimate the degree of niche overlap between Gardnerella subgroups based on comparisons of their predicted proteomes. Hierarchical clustering using complete linkage produced 4868 clusters of predicted proteins in the pangenome of the 40 isolates included (rarefaction curve is shown in Fig. S1). The strict core (defined as the protein clusters present in all isolates) included 176 clusters. Most of these core proteins were related to metabolism, transcriptional control, DNA replication and protein synthesis. The clustering of the genomes by subgroup was apparent in a UPGMA dendrogram based on the presence/absence patterns of the 4868 protein clusters (Fig. 1a). PCA was performed to determine the extent of overlap between the subgroups. The amount of variance explained by the two principal components was 19.4%, based on which the four subgroups were separable (Fig. 1b). The dissimilarity between the four subgroups was significant (pairwise ADONIS, Bonferroni adjusted, p < 0.05, A vs B, R² = 0.45; A vs C, R² = 0.48; A vs D, R² = 0.26; B vs C, R² = 0.34; B vs D, R² = 0.45 and C vs D, R² = 0.55).
Following the identification of core and accessory proteins, we investigated the distribution of functional classifications of proteins encoded by isolates in the four subgroups. COG analysis resulted in the assignment of predicted proteins into 23 functional categories. As expected, the hierarchical clustering of the COG distribution patterns corresponded to subgroup affiliation (Fig. 2a). PCA was performed on the Bray-Curtis dissimilarity matrix, and the differences between all subgroups were found to be significant (pairwise ADONIS, Bonferroni adjusted, p < 0.05, A vs B, R² = 0.31; A vs C, R² = 0.71; A vs D, R² = 0.26; B vs C, R² = 0.48; B vs D, R² = 0.20 and C vs D, R² = 0.74) (Fig. 2b).

Fig. 1 Comparison of predicted proteomes of study isolates. a UPGMA dendrogram based on the presence/absence of protein clusters in the predicted proteomes of Gardnerella isolates. b Principal components analysis (PCA) of Bray-Curtis dissimilarity matrices calculated from protein cluster distributions. The dissimilarity between the four subgroups is significant (pairwise ADONIS p < 0.05, Bonferroni adjusted). Subgroup affiliations of isolates are indicated by colour as shown in the legend between the panels.
We identified the variables that caused the four subgroups to diverge in terms of abundance of different COG categories in a multivariate analysis using SIMPER [28]. SIMPER calculates the contribution of each variable to the dissimilarity observed between two groups and relies on the Bray-Curtis dissimilarity matrix for calculating the proportional contribution of each variable being tested. Thirty-six percent of the differences between subgroup A and subgroup B were accounted for by amino acid transport and metabolism (COG category E), inorganic ion transport and metabolism (category P) and translation, ribosomal structure and biogenesis proteins (category J). The proportion of proteins with functions related to carbohydrate transport and metabolism (category G) was the major factor that differentiated subgroup A from C, contributing 34% of the dissimilarity observed. Carbohydrate transport and metabolism also accounted for 31% of the dissimilarity observed between subgroups B and C and 36% of the dissimilarity between subgroups C and D. The major contributing factors that differentiated subgroups A and D were the proportional abundances of proteins assigned to functional categories H (co-enzyme transport and metabolism) and E (amino acid transport and metabolism) (23%). Subgroups B and D were separated primarily on the basis of functional categories P (inorganic ion transport and metabolism), J (translation, ribosomal structure and biogenesis), G (carbohydrate transport and metabolism) and E (amino acid transport and metabolism), which together accounted for 37% of the dissimilarity observed.

Fig. 2 COG analysis of predicted proteomes. a Hierarchical clustering of the study isolates based on the proportional abundance of COG categories in their predicted proteomes. Abundance is indicated by the blue colour intensity in the heat map. b PCA of Bray-Curtis dissimilarity matrices calculated from the proportional abundance data. Dissimilarity between the four subgroups is significant (pairwise ADONIS, p < 0.05, Bonferroni adjusted). Subgroup affiliations of isolates are indicated by colour as shown in the legend between the panels.
Functional Categories of Proteins Differentiating Subgroups of Gardnerella
We tested whether the proportions of the individual functional categories of proteins that drive the overall separation of the four subgroups in multivariate analysis were significantly different between pairs of Gardnerella subgroups. This analysis revealed that subgroup C has a significantly higher proportion of its encoded proteins associated with carbohydrate transport and metabolism and with transcriptional regulation than the other subgroups (unpaired t test, p ≤ 0.01, Bonferroni adjusted, Fig. 3a and d). The proportion of proteins associated with amino acid transport and metabolism is significantly higher in subgroup D than in subgroups A, B and C (unpaired t test, p ≤ 0.01, Bonferroni adjusted, Fig. 3b). Proteins involved in co-enzyme transport and metabolism were found in significantly higher proportional abundance in subgroup A than in subgroups B, C and D (unpaired t test, p ≤ 0.0001, Bonferroni adjusted, Fig. 3c). Subgroup B has a significantly higher abundance of proteins associated with inorganic ion transport and metabolism than subgroups A and D (unpaired t test, p ≤ 0.0001, Fig. 3e), but the difference between subgroups B and C was not significant. Subgroup B also has a significantly higher proportion of translation, ribosomal structure and biogenesis proteins (unpaired t test, p ≤ 0.001, Fig. 3f) compared with subgroup C.
Carbon Source Utilization Phenotypes
We hypothesized that subgroup D, a slow-growing, rarely detected Gardnerella subgroup, is maintained in the vaginal microbiome at a low level and avoids competitive exclusion through negative frequency-dependent selection, made possible by being a nutritional generalist. We performed carbon source utilization profiling of thirty-six representative isolates (n = 12, subgroup A; n = 9, subgroup B; n = 8, subgroup C (including type strain G. vaginalis ATCC 14018) and n = 7, subgroup D). The number of carbon sources utilized by any Gardnerella strain ranged from 5 to 24. Only 25% (9/36) of the isolates utilized more than 17 carbon sources, including two subgroup C isolates (NR001, NR038) and all subgroup D isolates. Twenty isolates utilized at least 13 carbon sources, including three subgroup A (3/12, 25%), four subgroup B (4/8, 50%), six subgroup C (6/7, 86%) and all seven isolates of subgroup D (100%). The average number of carbon sources utilized by isolates in subgroups A, B, C and D was 10.4 ± 3.1, 11.8 ± 1.75, 13.9 ± 3.6 and 20.3 ± 1.9, respectively (Fig. 4). A one-way ANOVA comparing overall carbon source utilization among the four subgroups showed a significant difference (F(3,32) = 18.15, p < 0.05), and a post hoc comparison revealed that the number of carbon sources utilized by subgroup D was significantly higher than in subgroups A, B and C (Tukey's HSD, p < 0.05). All of the tested Gardnerella isolates were able to utilize pyruvic acid, palatinose and L-rhamnose. The next most frequently utilized carbon sources were D-fructose (32/36, 97%) and L-fucose (32/36, 97%).
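As an illustration of the test reported here, the following Python sketch runs a one-way ANOVA and Tukey's HSD on counts simulated around the reported subgroup means; it reproduces the analysis pattern, not the study's raw data.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Simulated counts of carbon sources utilized, centred on the reported means.
a = rng.normal(10.4, 3.1, 12)
b = rng.normal(11.8, 1.75, 9)
c = rng.normal(13.9, 3.6, 8)
d = rng.normal(20.3, 1.9, 7)

F, p = f_oneway(a, b, c, d)
print(f"one-way ANOVA: F = {F:.2f}, p = {p:.4g}")

values = np.concatenate([a, b, c, d])
labels = ["A"] * 12 + ["B"] * 9 + ["C"] * 8 + ["D"] * 7
print(pairwise_tukeyhsd(values, labels))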
Overall, 31/95 carbon sources were utilized by at least one isolate, and the majority (20/31) of these were sugars (mono- or oligosaccharides). Together, subgroup D isolates (n = 7) utilized more of the sugar substrates (18/37 available) than any other subgroup, including subgroup C (n = 8), which utilized 15/37 available sugars. Utilization of any of the 11 available amino acids was rarely observed, with only two of the subgroup C isolates positive for L-methionine or L-valine utilization.

Fig. 3 Differential abundance of six COG categories that were identified by SIMPER analysis as main drivers of subgroup separation. a Carbohydrate metabolism and transport proteins (category G), b amino acid transport and metabolism proteins (category E), c co-enzyme transport and metabolism proteins (category H), d transcription proteins (category K), e inorganic ion transport and metabolism proteins (category P) and f translation, ribosomal structure and biogenesis proteins (category J). Results of unpaired t tests are indicated where * is p ≤ 0.05, ** is p ≤ 0.01, *** is p ≤ 0.001, **** is p ≤ 0.0001 and ns is not significant.

Fig. 4 Comparison of numbers of carbon sources utilized by isolates in each subgroup. The number of carbon sources utilized by isolates in subgroup D was significantly higher than those in subgroups A, B and C (Tukey's HSD, p ≤ 0.001). Only one-fourth (9/36) of the tested isolates utilized more than 17 carbon sources, including all seven tested isolates of subgroup D.
Overlap in Carbon Sources Utilization Among the Subgroups
To determine if subgroups could be distinguished based on carbon source utilization profiles, a principal component analysis was performed (Fig. 5). The overlap between the representative isolates of subgroups A, B and C was significant. Subgroup D was significantly dissimilar to subgroups A and B (Fig. 5, pairwise ADONIS, A vs D, R² = 0.55; B vs D, R² = 0.55, p < 0.05). Although the dissimilarity between subgroups C and D was not statistically significant after Bonferroni adjustment, 39% (pairwise ADONIS, C vs D, R² = 0.39) of the variation in carbon source utilization could be explained by subgroup affiliation of the tested isolates, which was higher than between subgroups A, B and C (A vs B: 13%, A vs C: 21% and B vs C: 8%).
Association of Carbon Source Utilization Pattern with Subgroups
To identify carbon sources that differentiate the subgroups, we selected twelve substrates that were utilized by more than five but fewer than thirty isolates. Chi-square tests were performed to determine whether the subgroups differed significantly in the utilization of those twelve carbon sources. The four Gardnerella subgroups differed in their use of 3 of the 12 carbon sources: turanose, inosine and uridine 5′-monophosphate (chi-square test, p < 0.05, Bonferroni adjusted) (Fig. 6). For each of these three carbon sources, subgroups A and B combined had a low frequency of use (2/21 (9.5%), 0/21 (0.0%) and 0/21 (0.0%), respectively), subgroup C had a low or intermediate frequency of use (2/8 (25.0%), 4/8 (50.0%) and 5/8 (62.5%)), whereas subgroup D had a high frequency of use (7/7 (100%) for all three).
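A minimal sketch of this per-substrate test, using the turanose frequencies quoted above as the contingency table and a simple Bonferroni adjustment for the twelve candidate substrates (the scipy call is standard; the counts follow the text):

from scipy.stats import chi2_contingency

n_tests = 12                       # twelve candidate substrates
subgroup_sizes = [21, 8, 7]        # A+B combined, C, D (as in the text)

# users vs non-users of turanose per subgroup (2 x 3 contingency table)
users = [2, 2, 7]
table = [users, [n - u for n, u in zip(subgroup_sizes, users)]]

chi2, p, dof, _ = chi2_contingency(table)
p_adj = min(1.0, p * n_tests)      # Bonferroni adjustment
print(f"chi2 = {chi2:.2f}, raw p = {p:.4g}, adjusted p = {p_adj:.4g}")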
Carbon Source Utilization by Co-cultured Isolates
Since the four subgroups co-exist in the same ecosystem, it is possible that mixing them might facilitate the utilization of certain carbon sources. To detect any such facilitation in carbon source utilization, we co-cultured isolates from all four subgroups in six pairwise combinations (A+B, A+C, A+D, B+C, B+D and C+D). The representative isolates of subgroups A-D utilized 11, 13, 19 and 24 carbon sources, respectively, when grown alone, while co-cultures utilized from 12 to a maximum of 22 carbon sources (Table 1). In every case, the co-culture utilized fewer carbon sources than the isolate that utilized the most carbon sources on its own.
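A simple way to screen such data for facilitation is to compare the substrate set utilized by a co-culture against the union of the sets utilized by its members alone; a sketch with hypothetical substrate sets (the names below are illustrative, not the study's raw data):

# Substrates utilized by each representative isolate alone (hypothetical sets).
mono = {
    "A": {"pyruvate", "palatinose", "rhamnose", "fructose"},
    "D": {"pyruvate", "palatinose", "rhamnose", "fructose",
          "turanose", "inosine", "UMP"},
}
cocult = {"pyruvate", "palatinose", "rhamnose", "fructose", "turanose"}

union = mono["A"] | mono["D"]
facilitated = cocult - union          # substrates gained only by mixing
shortfall = max(len(s) for s in mono.values()) - len(cocult)
print(f"facilitated: {facilitated or 'none'}; "
      f"{shortfall} fewer than the best mono-culture")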
Discussion
Rarely abundant species can be maintained in the human microbiome through a variety of mechanisms, which include but are not limited to sequestration of essential nutrients from competing species, diversification of phenotype [29], social cheating [30] and negative frequency-dependent selection [17]. Differences in nutrient utilization among community members can be a key factor that sets the stage for negative frequency-dependent selection [31]. The reproductive fitness of nutritional specialist species will remain high as long as the supply of nutrients usable by the specialists is abundant. As soon as the supply of these nutrients drops, slower-growing generalists, by virtue of their greater utilization capacity, will have increased fitness, which will eventually lead to their dominance in the absence of any other negative influences.

Fig. 5 Subgroup D has minimal overlap with the other subgroups in carbon source utilization. The degree of variation based on carbon source utilization between subgroup D and subgroups A and B was significant (pairwise ADONIS, p < 0.05, after Bonferroni adjustment). Subgroup affiliation explained 39% of the variation in carbon source utilization between subgroups C and D and, overall, 42% of the differences among subgroups (ADONIS, R2 = 0.42, p < 0.05).
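The specialist/generalist dynamic described above can be made concrete with a toy replicator model in which the generalist's fitness declines as its own frequency rises; the fitness values are purely illustrative:

def step(p, w_spec=1.0, a=1.2, b=1.5):
    """One generation of replicator dynamics; p = generalist frequency."""
    w_gen = a - b * p                     # fitness falls with own frequency
    w_bar = p * w_gen + (1 - p) * w_spec  # mean population fitness
    return p * w_gen / w_bar

p = 0.01                                  # generalist starts rare
for gen in range(200):
    p = step(p)
print(f"equilibrium generalist frequency ~ {p:.3f}")  # (a - w_spec)/b = 0.133

Because the generalist's advantage vanishes as it becomes common, it is maintained at a stable low frequency rather than being excluded or taking over, which is the signature of negative frequency-dependent selection.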
Among the four subgroups of Gardnerella spp. that colonize the vaginal microbiome of reproductive-aged women, subgroup D is the rarest in terms of abundance and prevalence among women [4]. Subgroup D is also relatively slow growing, yet shows an increased growth rate when the number of competitors in an in vitro community increases [12]. We have reported previously that resource-based competition is common among Gardnerella spp. and that no evidence for contact-dependent interaction was observed. Therefore, we set out to investigate whether negative frequency-dependent selection is responsible for the persistence of subgroup D, which would require relatively small niche overlap and a more generalist lifestyle than the other Gardnerella spp. in the vaginal microbiome.

Fig. 6 Subgroup D can be differentiated from the other subgroups based on its capacity to utilize inosine (a), uridine 5′-monophosphate (b) and turanose (c). The percentage of isolates in each subgroup that utilize the indicated carbon source is shown. The utilization of these three carbon sources is significantly associated with subgroup D (chi-square test, p ≤ 0.001).
Predicted Niche Overlap Between the Four Subgroups
Niche overlap may lead to competition for nutrients and space [31][32][33][34] and it has been reported that competition is prevalent among metabolically similar bacterial species [14].
Occupying distinct niches can therefore help bacterial species avoid competition for space, growth factors and nutrients, resulting in increased reproductive fitness. Since subgroup D isolates have higher growth rates in vitro in the presence of competitors compared to when grown alone, these isolates presumably occupy a distinct niche. The pangenome analysis showed that the four subgroups differ significantly based on the composition of their predicted proteomes (Fig. 1), with only 176 proteins comprising the strict core of proteins found in all isolates. This finding is not surprising since the genetic diversity among Gardnerella is well established, and genome sequence comparisons formed the basis for the recent reclassification of Gardnerella into 13 genome species [1][2][3].
Comparisons of the entire predicted proteomes do not, however, focus on the key factor in resource-based competition: nutrient utilization potential. Proteins involved in nutrient uptake and metabolism account for only a fraction of the 4868 protein clusters comprising the pangenome. Analysis of the distribution of the various functional (COG) categories of proteins revealed significant differences among subgroups in their predicted capacity to utilize carbohydrates and amino acids (Fig. 3a and b), with subgroup D having significantly more of its proteome dedicated to amino acid transport and metabolism than any of the other subgroups. Since resource-based competition encapsulates competition for space, growth factors and nutrients, our findings from the pangenome and COG analyses suggest that the competition among the four subgroups is not spatial but may be primarily for nutrients, a speculation supported by the previous observation that Gardnerella spp. form multi-subgroup biofilms [12].
Subgroup D Is a Nutritional Generalist Relative to Subgroups A, B and C
The diversity of nutrients available to microbiota in the vaginal microbiome is less than in the gastrointestinal microbiome, where food intake provides a constant source of diverse nutrients that affect the assembly of gut microbiota [35,36]. Vaginal microbiota, by contrast, are largely dependent upon host-derived nutrients, the most abundant of which is glycogen. Glycogen is deposited in the vaginal lumen by epithelial cells under the influence of estrogen [37] and is digested into maltooligosaccharides, maltodextrins and glucose by the combined activities of host and microbial enzymes prior to uptake and metabolism by the microbiota [38][39][40]. Given the relatively narrow range of nutrients available in the vaginal microbiome, it is expected that the resident microbiota, including the four subgroups of Gardnerella, overlap to a considerable extent in their nutrient utilization capacity, resulting in some level of competition among them [32,36,41]. As discussed earlier, subgroup D is an exception, since the growth of these isolates was actually facilitated in co-cultures, suggesting that while it may compete with other Gardnerella spp. over common nutrients like the breakdown products of glycogen, it may be able to utilize a greater overall diversity of nutrients (i.e. it is a generalist).
The AN microplate assay results showed that subgroup D isolates utilized more of the provided carbon sources than isolates in the three other subgroups (Fig. 4). Furthermore, when the patterns of substrate use were considered, subgroups A, B and C were not separable from each other, but subgroup D was significantly different (Fig. 5). The distinct pattern observed in subgroup D was partially driven by the utilization of three particular substrates: turanose, inosine and uridine 5′-monophosphate (Fig. 6). Turanose is an isomer of sucrose, known as a non-accumulative osmoprotectant that aids bacterial growth at high osmolarity [42]. The importance of turanose utilization in the vaginal environment is not yet known, but our observation indicates that subgroup D isolates can metabolize sucrose-like sugars. The two other carbon sources, inosine and uridine 5′-monophosphate, are probably used in purine and pyrimidine biosynthesis in Gardnerella spp.
Some findings of the pangenome and COG analyses could not be reconciled with the phenotypic carbon source utilization assay. For example, although subgroups C and D have higher proportions of their proteomes predicted to be involved in the transport and metabolism of carbohydrates (category G) and amino acids (category E), respectively, than the other subgroups, subgroup C isolates did not utilize the greatest number of available sugar substrates in the AN microplate, and subgroup D isolates did not utilize any of the available amino acid substrates. It is, however, important to consider that the carbon source utilization assay was performed in an artificial (plastic microplate) environment and included only 95 substrates, many of which are not relevant to the vaginal microbiome. More relevant amino acid sources available in the vagina, including those whose abundance is altered in bacterial vaginosis, such as isoleucine, leucine, proline and tryptophan, are not included [43][44][45]. Ideally, this study would have involved a vaginal-microbiome-specific nutrient panel, but such reagents were not available. Even with this limitation, our results suggest that subgroup D is a nutritional generalist relative to other Gardnerella spp. Most of the ecological studies performed to date to elucidate mechanisms shaping the assembly of bacterial communities have included either environmental bacterial species or well-characterized model organisms [14,16,29,36,41,[46][47][48][49][50]. There are understandably fewer studies that focus on interactions among host-associated microbiota [51,52].
Negative Frequency-Dependent Selection in the Vaginal Microbiome
The genomic and phenotypic differences we observed between subgroup D and the three other subgroups, including the potential to utilize more amino acids, the use of a greater number of carbon sources and a distinct pattern of substrate utilization, suggest that subgroup D is a candidate for negative frequency-dependent selection. Why then are these Gardnerella spp. only observed rarely and in low abundance in reproductive-aged women? Among 413 vaginal samples from reproductive-aged Canadian women, genome species comprising subgroup D of Gardnerella were detected in <10% of samples and never accounted for more than 5% of the microbiota [4]. Vaginal environmental dynamics and related host factors, such as menstruation, sloughing of epithelial cells and fluctuating pH, contribute to the turnover of bacterial species, shifting the bacterial population density and changing the nutrients available [53]. A decline in population density would reshuffle the vaginal ecosystem, increasing the supply of abundant nutrients accessible to faster-growing specialists and checking the growth of the slower-growing generalist subgroup D.
Although subgroup D is likely rare due to the factors described above, it could still be a major player in ecological succession and in the transition of vaginal microbiota between a Lactobacillus-dominated community and the overgrowth of anaerobes characteristic of bacterial vaginosis. These organisms may also play a particular role in biofilm formation or in competition for occupancy of the vaginal mucosa. Rarely abundant species often act as keystone species that facilitate colonization by other bacterial species and are essential for maintaining the homeostasis of an ecosystem [54][55][56][57]. Resolving the role of low-abundance Gardnerella spp. will depend on the development and application of experimental systems that more closely model the human vaginal microbiome. Rodent models have shown some promise, especially for studies of specific combinations of organisms [58], but there is also potential in bioreactors [59] and in cell and tissue culture systems that attempt to recapitulate many of the environmental and physiological aspects of the vaginal microbiome [60].
Further study of rare Gardnerella spp. will likely also result in the definition of additional species within this diverse genus.
Liver transplantation in the Intensive Care Unit: twenty years' experience in a medium-income center in Peru
Liver transplantation is the major treatment for end-stage liver disease. Postoperative care is a great challenge for reducing morbidity and mortality in these patients. In this sense, management in the liver ICU allows hemodynamic management, coagulation monitoring, renal support, correction of electrolyte disturbances, respiratory support with early weaning from mechanical ventilation, and evaluation of the liver graft. Objective: The present study shows the results of the management of liver transplant patients over 20 years of experience in a transplant center in a low- to middle-income country. Materials and methods: The medical records of 273 adult patients treated in the ICU in the immediate postoperative period after liver transplantation were reviewed, from March 20, 2000 to November 30, 2020, including the period of the COVID-19 pandemic. Liver-kidney, retransplanted, SPLIT and domino transplant patients were excluded. Results: The most frequent etiology for LTx was NASH (35%); the mean age was 49 years; MELD scores were 15-20 in 47.5%, 21-30 in 46% and >30 in 6.2% of patients. Pre-transplant ICU stay occurred in 7%; the average ICU stay was 7.8 days; the average APACHE score on admission was 14.9 points. Weaning and extubation took place in the ICU for 91.8% of patients and by fast track in 8.2%. The most frequent respiratory complication was atelectasis (56.3%), followed by pneumonia (31.3%); AKI stage 1 occurred in 60.9%, and 11.1% required hemodialysis support (AKI 3). Immunosuppression was based on tacrolimus (8.9%). Post-operative ICU mortality was 6.2%. Conclusions: The management of
INTRODUCTION
Liver transplantation is the most complex medical-surgical process in contemporary medicine, because it involves the care of a multi-organically compromised patient who requires advanced, high-quality intensive support with the participation of a multidisciplinary team. Multi-organ monitoring in a transplant center is essential, since it directly affects the morbidity and mortality associated with the liver graft. In the late 1950s, transplantation models in dogs were developed for all intra-abdominal organs. The most fruitful of these efforts involved the liver (1). Liver transplantation remains the only definitive treatment for patients with fulminant liver failure and irreversible liver injury. In 1963, Dr. Thomas Starzl performed the first liver transplant in humans in Denver, USA (2); in Peru, Dr. Jose Chaman successfully started liver transplantation activity in March 2000 at the Guillermo Almenara National Hospital (EsSalud).
The key factors affecting post-liver transplant survival are the severity of the recipient's pre-transplant disease and the quality of the graft used (3). The more severe the encephalopathy or multisystem organ failure at the time of surgery, the less likely the surgery is to be successful. Several risk factors have been associated with a decreased probability of survival after liver transplantation, including a history of life support, recipient age > 50 years, recipient BMI >= 30 kg/m2, and serum creatinine > 2.0 mg/dL (4). The main causes of death in the post-transplant period are sepsis and multi-organ failure (5).
The different phases of transplantation, from the search and qualification of the recipient to subsequent management, immunosuppression and its complications, have been notably systematized, achieving progressively better results. However, the low organ donation rate in Peru (2.28 pmp) continues to be a challenge, so an uninterrupted commitment is necessary.
Liver transplantation was initially associated with large-volume bleeding and the need for blood products, with a long and difficult post-transplant ICU course; however, advances in surgical and anesthetic techniques have streamlined the intraoperative course to the point of routine, allowing modifications of practice that reduce perioperative complications and costs. The MELD score, a disease severity score, was established for organ distribution in the USA in 2002, adopted by most liver transplant centers, and introduced in Peru in 2008. With greater acceptance of expanded-criteria donors and the increasing acuity of recipients, more patients are admitted to the operating room from the ICU, where they have required support for respiratory failure, hemodynamic instability or severe portosystemic encephalopathy.
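For reference, the original UNOS MELD score is commonly computed from bilirubin, INR and creatinine as in the Python sketch below (lab values under 1.0 are floored at 1.0, creatinine is capped at 4.0 mg/dL and the score at 40; the patient values shown are illustrative, not study data):

import math

def meld(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
    # Floor all lab values at 1.0 so the logarithms stay non-negative.
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    crea = min(max(creatinine_mg_dl, 1.0), 4.0)   # creatinine capped at 4.0
    score = (3.78 * math.log(bili)
             + 11.2 * math.log(inr)
             + 9.57 * math.log(crea)
             + 6.43)
    return min(round(score), 40)                  # scores are capped at 40

print(meld(bilirubin_mg_dl=3.0, inr=1.8, creatinine_mg_dl=1.2))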
More than 20 years of experience have passed with favorable results and with different types of medical-surgical complications. In this sense, the present study aims to present the results of our experience in the Intensive Care Unit from March 23, 2000 to October 31, 2020.
Management in the intensive care unit requires specialists in intensive medicine who provide evidence-based therapeutic interventions and multi-organ support, with a holistic view of the patient. An intensivist trained in the management of transplant complications and immunosuppression is ideally formed in a national or international transplant center and accredited by government entities.
MATERIALS AND METHODS
A descriptive, retrospective, cross-sectional study was conducted that included 243 adult patients with orthotopic liver transplantation. The inclusion criteria were age over 18 years and treatment in the Intensive Care Unit during the immediate postoperative period after liver transplantation; the exclusion criteria were age under 18 years, double (liver-kidney) transplants, retransplants, and SPLIT or domino transplants. The data were obtained from the medical records and the database of the Transplant Department at the Guillermo Almenara National Hospital.
The statistical analysis consisted of a description of the different sociodemographic, clinical and surgical variables of the patients under study, covering the pre-transplant situation and the intra- and post-operative variables in the ICU and during hospitalization. The Microsoft Office Excel and SPSS (version 25) programs were used.
RESULTS
During the study period, 243 patients were included out of a total of 302 transplants.
DISCUSSION
Liver transplantation is the standard treatment for end-stage liver disease, which affects all organs with varying form and intensity; after an optimal postoperative period, multisystem function should recover. Optimal monitoring and management in the operating room and in the Intensive Care Unit are essential, since the use of marginal donors in our country is increasing, in addition to the multi-organ compromise of the recipient that is not reflected in the MELD score. Early evaluation and recognition of the different organ complications is essential in the ICU, since it has an impact on the structure and functionality of the graft and, therefore, on the morbidity and mortality of the patient.
Although no single factor is responsible, advances in surgical and anesthetic techniques, a better understanding of liver physiology, and improved perioperative management have contributed (6) . Most of the postoperative management problems after liver transplantation are unique and require a broader understanding of liver metabolism and the pathophysiology of liver disease (6) .
The predominant etiologies of liver transplantation in our experience were NASH (35%), autoimmune hepatitis (15.3%) and ASH (11.5%). By comparison, a 5-year North American study (7) reported that the most frequent etiology in recent years was ASH (40.3%), followed by NASH (33.9%), in patients without hepatocarcinoma, while in patients with hepatocarcinoma it was HCV infection (35.9%, although decreasing due to the new regimens of direct-acting antiviral agents), followed by NASH (34.7%). In that study the mean age was 56.8 years and women represented 34.8% of recipients, compared with a mean age of 53 years and a similar proportion of women (35.4%) in our study. In a similar Brazilian study (8), the most frequent etiologies were ASH (31%), HCV (20.2%) and NASH (14%); CHILD-B was the most frequent class (44.2%), compared with CHILD-C (53.1%) in ours. Regarding MELD on the waiting list, they reported an average of 21.6 points, whereas in our study the most frequent range was 15-20 points (47.8%). The incidence of hepatopulmonary syndrome was 8.23% in our entire series, similar to the 9.45% reported in a study of the same series with a smaller number of transplants (9). Severe hepatopulmonary syndrome was the most frequent form in our study (50%), compared with only 16.7% in another study (10).
Regarding the intraoperative parameters, the mean transfusion was 9.3 blood packs, compared with a mean of 8.1 blood packs elsewhere (11).
The average APACHE score in our study was 14.9 points, similar to another study (6) with an average of 13.8, which may or may not be associated with the mortality rate according to the studies reviewed.
In our study, 69.9% of the patients were extubated in the first 24 hours, compared with 75.5% undergoing early extubation elsewhere (12). At another center (13), the average duration of mechanical ventilation was 22 hours and the percentage of patients without postoperative respiratory failure was 20%. In addition, 6% of our patients required mechanical ventilation for 8 days or more, a figure comparable with the 7% requiring ventilatory support for 7-30 days after transplantation (14).
Around a quarter (25.7%) of the transplanted patients presented post-reperfusion syndrome, a result consistent with other studies (15). This syndrome was defined as a decrease in mean arterial pressure greater than 30% of the value observed in the anhepatic phase, lasting more than 1 minute during the first 5 minutes after reperfusion of the graft (16), using the piggyback technique. The factors associated with this syndrome are not entirely clear, but intraoperative hemodynamic changes influence the postoperative morbidity of liver transplantation (16,17).
Nosocomial pathogens include gram-negative and gram-positive bacteria; the intensity and time of exposure and the virulence of the pathogen affect mortality and morbidity (18). There are no recent data on the incidence of colonization and infection by ESBL-producing gram-negative organisms. It seems likely that their prevalence varies significantly worldwide; multidrug-resistant bacteria and Burkholderia cepacia have been less studied and their incidence varies by geographical area (19). Given the potential for serious infection-related complications in this population, it is essential to identify patients at risk and provide appropriate treatment in a timely manner.
A major center in France (20) found a twofold higher risk of fungal infection in liver transplant recipients with a MELD score between 20 and 30, and a fourfold higher risk in those with a MELD score > 30. Although the spectrum of infection varies by region and between centers, infections caused by gram-negative bacteria are generally the most common, causing 50-70% of bacterial infections, although nearly a third of infections can be polymicrobial (21); drug-resistant infections carry higher mortality than drug-susceptible infections. With targeted prevention, prophylaxis, and monitoring before and after transplantation, many infections can be prevented or identified early, allowing the rapid initiation of appropriate therapy.
The length of stay in the ICU was 7.8 days in our study, compared with 8.8 days in another study (22), in which arterial lactate was 4.1 mmol/L. In our study, lactate was 2.7 mmol/L, related to intraoperative bleeding of 6393 mL, compared with 4550 mL in the aforementioned study.
Our results show the experience of the most important liver transplant center in Peru. Many factors determine the final outcome of a patient after a liver transplant: a short or long ICU stay, the need for intensive support before transplantation, medical-surgical complications, and the provision of high-quality care in the ICU, a key element of the hospital infrastructure necessary to deliver excellent results.
The Genesis of Supply Chain Risk
Supply chain officers may sympathize with Edward Smith. After years of uneventful supply chain management and after years of striving for more efficient processes, unexpected and sometimes even devastating events have disrupted supply chains. A series of major disruptions, such as Hurricane Katrina, piracy attacks off the coast of Somalia, the global financial crisis, the flooding in Thailand, the European ash cloud and the Japanese earthquake and tsunami, among others, have revealed a lack of preparedness within today's supply and distribution networks [248]. Thus, the management of so-called supply chain risks became an issue.
This chapter outlines the development of modern supply chains, reviews different classes of disruptive triggers and traces the path from the consideration of disruptions to supply chain risk management and the need for quantification.
Logistics Innovations -A Blessing and a Curse
The strategic influence of supply chain management on business performance, including not only overall logistics costs but also customer satisfaction, is well confirmed by companies of almost all industries. Domestic firms can offer products worldwide, and hence they compete not only with local companies on the domestic market, but also with international competitors they encounter on the world market. In order to remain competitive while benefiting from logistics potentials, companies strive to improve and streamline their operational processes [329,334]. As revealed by Computer Sciences Corporation in 2004, companies implemented efficiency-increasing logistics innovations: while 52% of the considered companies registered an increase in their revenues, 72% stated that they benefited from the new developments implemented in their supply chains. Besides these positive effects, the American analyst firm AMR, among others, concluded that these new trends and strategies have a negative counterpart, highlighted by the increasing number of disrupted supply chains [66,129,149,334,348]. Highly efficient operations [329] expose supply chains to an environmental crossfire of different volatile influences. While disruptions do not occur every day, the supply chain strategies that lead to disruptions, also called risk drivers, do. When a firm takes a pure cost minimization approach in order to increase overall efficiency, it reduces the excess capacity and inventory that could otherwise make up for production losses caused by disruptions. Formerly isolated events within the supply chain can today "escalate to wide scale network disruptions" [71]. Popular logistics improvements, which can be derived from three main sources (globalization, lean management principles, and increased information availability), are presented in Table 2.1 [2, 63, 249, 329].
Global sourcing enables companies to follow strategies like low-cost country sourcing (LCCS), or outsourcing and off-shoring, all of which enable companies to implement cost reduction actions and to focus on their core activities. Stretched lead times, limited visibility, and difficult communications can, however, decrease flexibility and response time in case of supply chain failures. Lean management principles like Just-in-Time or Just-in-Sequence and supply base rationalization allow companies to synchronize supply with production demand more efficiently. When efficiency is the sole objective, there is very little buffer to enable recovery after a supply chain disruption has occurred. New technologies like e-business, vehicle telematics, inter-modal systems, tracking systems, and automated handling have improved information availability as well as facilitated purchasing activities for both consumers and procurement managers. However, these developments have also increased customer expectations. Nowadays, consumers increasingly buy products from websites rather than visiting shops [334]. Furthermore, they want immediate gratification; late orders caused by supply chain disruptions often result in lost customers. Additionally, information technology provides transparent and consistent support in planning, monitoring and controlling material as well as financial and information flows. As companies now rely on planning and monitoring tools, a small IT failure could have a tremendous and immediate impact on overall supply chain performance [249].
Logistics innovations have shifted supply and distribution processes from more or less straight chains to globally acting, complex and highly interrelated networks, often operating across company boundaries. Supply chains consequently operate in a volatile environment and are exposed to different types of potential disruptive triggers.
Supply Chain Disruptions
Disruptions like power outages, labor strikes, supplier glitches, epidemics or terrorist attacks were long considered geographically isolated events, which executives heard about in the news but did not need to manage in their own companies or logistics operations. However, due to the increased complexity of worldwide supply chains, a medical crisis in Asia became a problem for a producer of high-tech goods in the Middle East. A labor strike in US harbor facilities affected exports coming from South Korea. Hurricane Katrina hit oil refineries on the Gulf of Mexico, but oil prices increased all over the world [78]. The nature of disruption triggers varies among economic, environmental, geopolitical (as well as societal) and technological categories. Due to the ongoing change of the risk landscape all over the world, the number of triggers resulting in disruptions, especially for supply networks, increases and evolves constantly. Figure 2.1 presents the changes in the global risk landscape from 2012 to 2013, including aspects especially relevant to supply chains.
According to the Supply Chain Risk Initiative of the World Economic Forum, the top five external triggers among these categories in 2012 were natural disasters, extreme weather conditions, conflict and political unrest, terrorism, and sudden demand shocks [249].
Although not yet top-ranked, IT disruptions like IT failures or cyber attacks can be regarded as an emerging trigger that needs more attention. A study by the University of Minnesota concluded that 90% of companies encountering an IT breakdown lasting ten days declared bankruptcy within two years at the latest. In the following paragraphs, the main types of disruptive triggers are discussed further. Additionally, selected disruption incidents are summarized in the table below. (Figure 2.1 plots each risk's likelihood of occurring in the next ten years against its expected impact.)
Name | Incident | Year | Description
Great East Japan earthquake | seaquake and tsunami | 2011 | An undersea earthquake off the Pacific coast of Tohoku occurred in March 2011. The earthquake triggered tsunami waves that reached heights of up to 40.5 meters and flooded inland up to 10 km. Several industries such as high-tech and automotive suffered enormous production shortages.
Thailand flooding | flooding | 2011 | The Thailand flooding in 2011 affected production capacity and inventories, causing stock-outs and price increases. As Thailand is the world's second major exporter of hard disk drives (HDD), an entire industrial sector was hit by the flood. According to the e-commerce tracking site Dynamite Data, the price of HDDs increased by 50% to 150% shortly after the flooding and by up to 300% in the weeks afterwards.
Aisin fire | fire | 1997 | A short circuit provoked a fire at a factory of Toyota's major p-valve supplier Aisin. P-valves are a low-priced product, but essential for production, which was expected to halt for several weeks. Alternative valve suppliers were not immediately available and Toyota faced increased demand, which had already pushed production levels to 115%.
Environmental Disruptions
The recent series of natural and epidemic catastrophes affecting companies worldwide, like the Kobe earthquake in Japan in 1995 or the SARS epidemic, is reflected in the growing number of reported events (Figure 2.2). Certainly, population growth, the spread of valuable assets and improved reporting have influenced this development. The exponential growth in frequency and magnitude, however, cannot be explained solely by these factors [59]. The outlook on the future of supply networks is evident: supply chains remain exposed to deviations evoked by natural and epidemic disruptions.
Economic Disruptions
Currency exchange rate fluctuations, commodity price volatility, global financial crises, sudden demand shocks, supplier glitches and export/import restrictions are examples of economic disruptions.
Globally-spread supply chains connect companies among diverse countries belonging to different currency areas. Fluctuations in currency rates between procurement, production and distribution locations can offset margins. Competitive devaluation, sometimes referred to as currency war (as in 2010, for example), has a great impact on supply chain profitability when procurement or distribution is concentrated overseas [248].
Additionally, cross-border movements are vulnerable to export and import restrictions or border delays evoked by customs regimes, tariff and non-tariff barriers, quota systems, security concerns and infrastructure bottlenecks [248]. Economic system shifts or disruptions are difficult to predict but unfortunately continue to influence globally-spread supply chains as well.
Socio-Geopolitical Disruptions
Geopolitical areas where political stability is missing, where terrorism is prevalent or where law enforcement is restricted impose a threat to supply chain performance [248]. Socio-political turmoil, like the "Arab Spring", decreased political stability in North Africa, increased the fear of interrupted oil supply and hence raised the volatility of oil prices. Similarly, the Iraq War impaired global transport flows: Singapore Airlines, flying over the Middle East, had to choose more southerly routes, which decreased its cargo capacity for products coming from Singapore [66]. Likewise, the Ukraine conflict forced air carriers to change routes. Terrorist attacks threaten democratic societies, western citizens and public institutions, and companies additionally fear attacks on production and distribution hubs. Compared to physical attacks on infrastructure, legislative responses like the safety precautions after the 9/11 attacks are considered a more realistic threat. Strong restrictions at the Mexican and Canadian borders and the aircraft grounding lasting several days interrupted the continuous procurement of production companies; in the fourth quarter of 2001, for example, Ford suffered a decline in car production of almost 13% [198]. There is no reported evidence of upcoming worldwide, lasting political stability. With regard to the current unrest and warlike disturbances in Iraq, Syria, Gaza and Ukraine, one may safely assume that the uncertainty originating from the socio-geopolitical environment will (unfortunately) continue to exist, also for globally-spread supply chains.
Technological Disruptions
Several surveys point out that technological disruptions induced by accidental Information Technology (IT) failure, cyber attacks or corporate espionage are increasingly threatening the efficient operation of supply chains [240,249].
Innovations like automatic identification and data acquisition, mobile devices, and cloud computing resulted in new systems such as tracking and tracing, real-time control applications, or software-as-a-service. Supply chains, therefore, evolved into interdependent systems not only with respect to material, but also information and financial flows [249]. Meanwhile, cyber criminals may utilize these technologies to attack different types of flows within a supply chain. Additionally, decision and analysis tools offer planning support for production, transportation, and supply networks, and EDI systems are used for the exchange of documents between suppliers and manufacturers.
Coping with Risk
Innovations focusing on improving efficiency turned supply networks into globally operating, interrelated and complex systems whose boundaries cross corporate entities. Formerly geographically isolated disruptions and volatilities became an issue. Both the scientific community [270,274,334,349] and practitioners [129,140] became aware of the increased number of perils that have the potential to disrupt a supply chain. This rise in decision makers' awareness led to the application of a well-known (but not well-understood) concept: risk. The transition from disruptions to risks happened quickly and apparently arbitrarily. Since then, the management of supply chain risk has been defined as one of the five fundamental challenges of supply chain management [140]. Quantitative tools are needed to support decision makers in managing those supply chain risks.
Enterprise Risk
Contrary to targeted holistic supply chain risk management, the treatment of perils that impede the efficient execution of atomic (logistics) sub-systems is well established. For example, on the level of small-scale operations such as production tasks, risk is perceived as technical failures or human errors. In this context, event-related deviations from expected parameter levels are addressed by means of scheduling systems. Scheduling tools work on an operational planning level and are executed whenever the decision maker wants to integrate new information. Managerial techniques such as Failure Mode and Effects Analysis (FMEA) [161,257], Event Tree Analysis (ETA) [76] or Fault Tree Analysis (FTA) [156,161] support decision makers in identifying potential triggering events and their related consequences. These techniques are also used for larger logistics systems crossing single locations. However, interruptions within these systems, for example in distribution networks, are quantified by frequency and estimated damage.
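In its simplest form, this frequency-and-damage quantification reduces to an expected annual loss per disruptive trigger, as in the following sketch with purely illustrative figures:

# Expected annual loss = frequency (events/year) x estimated damage per event.
# The trigger names and numbers below are illustrative, not empirical data.
triggers = [
    ("supplier glitch", 2.0,   150_000),
    ("port strike",     0.2, 1_200_000),
    ("IT failure",      0.5,   400_000),
]

for name, freq, damage in sorted(
        triggers, key=lambda t: t[1] * t[2], reverse=True):
    print(f"{name:16s} expected annual loss: {freq * damage:>12,.0f} EUR")

Ranking triggers by expected loss is exactly the kind of atomic, per-event quantification that works for isolated subsystems but, as argued below, does not scale to interrelated supply networks.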
On the enterprise level, risk management tools became necessary and were established after major corporate and accounting scandals in the US resulted in economic crises. In the early 2000s, for example, companies like Enron or Tyco were struck by accounting scandals, which led the US government to enact the so-called Sarbanes-Oxley Act (SOX) in 2002 [242]. SOX asks for the integration of risk management into a proposed Internal Control System (ICS). The Committee of Sponsoring Organizations of the Treadway Commission (COSO) developed an Enterprise Risk Management Integrated Framework including risk elements like event identification, risk evaluation and risk reaction.
In accordance with the situation in the United States, the European Union enacted several EU Directives (the 4th, 7th and 8th), which call for the implementation and control of corporate ICS [175] and led to extensions of national laws, e.g. the German Accounting Law Modernization Act (BilMoG). Compliance with these regulations imposes challenges on enterprises. The occurrence of disruptive triggers is regarded as a threat that affects logistics figures, which in turn influence overall corporate financial figures and need to be governed appropriately.
Today, the management of perils is even more difficult, because enterprises are parts of complex and interrelated supply chains. Disruptions occurring upstream in the supply chain may affect product quality or the reputation of supply chain partners downstream. Along with the growing complexity of modern supply chains, the impact of any willful action or unintended change has become hard or even impossible to predict. Figure 2.3 summarizes these thoughts and visualizes the increasing complexity of logistics systems, from small-scale operations over subsystems crossing single locations and business units up to the concept of a supply chain, together with the related perception of risk.
Following the footsteps of Management
From the literature on supply chain risk management, different streams of interest and focus have developed, concentrating on managing and reducing supply chain risk. Much attention has been devoted to the management of supply chain disruptions, referred to as supply chain disruption risk management [28]. Supply chain disruption management subsumes approaches that identify and assess short-term alternatives that limit the exposure to disruptive triggers and/or return supply chain execution to a normal and/or acceptable status. The relatedness between supply chain disruption management and supply chain risk management is well elaborated by several authors [28,66,192,204,266,282].
A generic managerial approach especially tailored to supply chain risk can be traced through diverse authors [28,100,109,120,133,216,334,345,349]. The approach is arranged as a process cycle, which encompasses the following core management steps, intended to be applied to each potential disruptive trigger that evolves out of the aforementioned main types of disruption (see Figure 2.4):
• Awareness. The acknowledgment and establishment of a corporate risk culture is considered essential for the success of supply chain risk management implementations. Establishing such a culture implies creating risk awareness and information transparency across different business units, management levels, and supply chain partners [133,149,334].
• Identification. To install selective risk countermeasures it is indispensable to know about the threats in the environment of supply chains. Gathering and evaluating information about prospective trends or level shifts are the objectives of the identification step. Therein the supply chain, its boundaries, its processes, and its objectives are defined and described.
• Assessment. Activities within the specified system are then examined in order to identify potential points of weakness as well as threats [334]. Traditionally, the result of this step is a catalog of potentially relevant threats [345]. Based on the predominant understanding of risk, identified supply chain risks are classified, assessed and ranked in accordance with their relevance. The relevance is usually quantified by the probability of a threat and its related impact.
• Learn. It is emphasized by numerous authors that supply chain risk management is an ongoing process: steps interact and new insights need to be considered during execution.
Although this cycle provides a workflow for conducting risk management, it lacks a quantitative treatment of risk that can be adjusted to any logistical subsystem within a supply chain, whether a production task or a corporate subnetwork.
Identification needs Quantification -Quantification needs Definition
Besides logistics operations, the performance of related business units such as finance, sales, purchasing or IT affects and depends on effective supply chain risk management. System-inherent uncertainties that can hardly be allocated to a specific event need to be identified as well. The delimitation of different types of risks is, however, vague and hinders the efficient execution of supply chain risk management.
With the intention of enhancing the supply network's capability to respond appropriately to threats, supply chain risk management has to provide reliable instruments that support the understanding and mitigation of supply chain risks. These instruments include qualitative indicators and quantitative metrics. The expertise of supply chain officers and the evaluation of real cases are valuable for understanding risks. The need for quantitative instruments tailored to supply chains is even higher and continuously increasing, because they allow a systematic approach to risk consideration and surveillance. Consequently, the development of risk quantification metrics represents the most prioritized improvement need for supply chains [248]. The very nature of the term risk and the development of its understanding result in diverse perceptions and definitions of supply chain risk and its management. As threats originate from diverse environmental categories, may affect various parts of the supply chain and have to be handled by different corporate units, this discipline attracts attention from multiple domains and research fields. It follows that definitions of concepts related to supply chain risk depend on the methodological background and interests of research scientists as well as on cultural, industrial or geographical differences, rather than on what is actually needed or reliable.
In the following chapter we review the contemporary understanding of risk and re-define supply chain risk with the intention of providing the basis for what is needed most: supply chain risk quantification.
Evaluation of the Electrostatic Contribution to the Retention of Modified Nucleosides and Nucleobases by Zwitterionic Hydrophilic Interaction Chromatography
This work explores the benefits and limitations, on a quantitative basis, of using zwitterionic hydrophilic interaction chromatography (ZIC-HILIC) for the separation of several modified nucleosides and nucleobases of clinical interest. The target compounds were hydroxylated and methylated derivatives: 8-hydroxy-guanine, 8-hydroxy-guanosine, 8-hydroxy-2′-deoxyguanosine, 1-methyl-guanine, 7-methyl-guanine, and 9-methyl-guanine. A quantitative evaluation of the electrostatic interaction, based on a systematic study of the nature and concentration of the salts in the mobile phase, has been carried out. From the results obtained, it may be concluded that separation is based on a mechanism of partition and interaction through weak electrostatic forces, with the contribution of the electrostatic interaction to the retention of the charged analytes reaching values between 25% and 52% at low salt concentration. However, the electrostatic contribution decreased progressively as the salt concentration rose.
Introduction
Hydrophilic interaction chromatography (HILIC) is a mode of chromatography in which polar compounds are retained using hydrophilic stationary phases combined with mobile phases containing a high concentration of organic solvent and a small amount of water [1]. A partition mechanism between the aqueous layer associated with the stationary phase and the organic component of the mobile phase has been proposed to explain the type of separation that occurs in HILIC [1]. However, later studies have shown that the mechanism of retention involves more complex equilibria [2]. Moreover, the presence of charged sites in the stationary phase in zwitterionic hydrophilic interaction chromatography (ZIC-HILIC) would favor the appearance of other retention mechanisms [3,4].
The chromatographic behaviour of several modified nucleosides and nucleobases (MNNs) with different stationary phases in HILIC mode has been described previously [15], the zwitterionic stationary phase being the most suitable one for their separation. The MNNs selected were hydroxylated and methylated derivatives used as biological markers of several diseases: 8-hydroxy-guanine (8OHGua), 8-hydroxy-guanosine (8OHG), 8-hydroxy-2′-deoxyguanosine (8OH2dG), 1-methyl-guanine (1mGua), 7-methyl-guanine (7mGua), and 9-methyl-guanine (9mGua). The results obtained from the study of the different parameters affecting separation (content of organic solvent in the mobile phase, pH, salt concentration, and temperature inside the column) showed that separation in the ZIC-HILIC column was based on a mechanism of partition between the aqueous phase bound to the stationary phase and the organic component of the mobile phase. However, the mechanism is complex, and it presumably involves other processes such as retention of the MNNs with hydroxyl groups by hydrogen bonding, or interaction through weak electrostatic forces for charged analytes.
In this work, we describe a quantitative evaluation of these electrostatic interactions in ZIC-HILIC in order to elucidate their contribution to the retention of the target analytes. The evaluation was based on a detailed study of the influence of the type and concentration of the salt used.
Thus, the aim of the present work was to contribute to a better understanding of the retention mechanism of these compounds in ZIC-HILIC mode. Working solutions were prepared daily at 5 µg mL−1 for 1mGua, 7mGua, and 9mGua, and at 3 µg mL−1 for 8OH2dG, 8OHG, and 8OHGua.
Experimental
Acetonitrile (ACN) was of HPLC grade (Merck, Darmstadt, Germany). Ultrahigh-quality (UHQ) water was obtained with a Wasserlab (Noain, Spain) Ultramatic water purification system. All other chemicals were of analytical reagent grade.
Instrumentation.
HPLC analyses were performed on an HP 1100 Series chromatograph from Agilent (Waldbronn, Germany). The diode array detector (DAD) recorded spectra in the 190-400 nm range. The analytical column was a ZIC-HILIC column packed with 3.5 µm particles from Merck (Darmstadt, Germany). Analyses were performed under isocratic elution with 80% ACN / 20% formic acid (2.6 mM, w w pH = 3.1) containing a variable concentration of ammonium or potassium perchlorate. The column temperature was set to 20 °C, the flow rate to 0.5 mL min−1, and the injection volume to 50 µL in ACN.
Results and Discussion
Application of the ZIC-HILIC mode to the separation of MNNs has been described previously [15]. A mobile phase containing 80% ACN and 2.5 mM formic acid (w w pH 3.1) allowed appropriate resolution to be achieved. At that pH value, only the methylated MNNs were in the protonated form, and hence electrostatic interactions with the sulfobetaine groups of the stationary phase could be expected for these analytes. The hydroxylated MNNs were not charged at w w pH 3.1, and therefore different behavior than for the methylated MNNs would be expected.
Effect of the Nature and Salt Concentration in the Mobile Phase

Owing to the high organic content of the mobile phase in ZIC-HILIC, the number of salts available is limited to those showing acceptable solubility in organic medium, such as perchlorates or organic salts.
In HILIC mode, increases in salt concentration usually lead to an increase in analyte retention [16]. Additionally, other effects may appear in ZIC-HILIC: for eluents with low salt concentration, an ion-exchange equilibrium should be established between the cations of the mobile phase bound to the stationary phase and the charged analytes.
We have previously reported the behavior of the target compounds in ZIC-HILIC mode for salt concentrations in the 0.25-200 mM range for ammonium perchlorate and 0.25-5.0 mM for potassium perchlorate (because of its poor solubility) [15]. Briefly, two zones showing different behavior were observed. At low salt concentrations (0-2.5 mM), a strong decrease occurred in the retention of all the charged MNNs studied (the methylated MNNs at w w pH 3.1), indicating that a decrease in electrostatic retention had taken place (Figure 1(a)). The electrostatic nature of this decrease was confirmed by the behavior of the hydroxylated MNNs, which were not charged at w w pH 3.1 (Figure 1(b)), and by carrying out the same experiment at w w pH 6.7, where both the methylated and hydroxylated MNNs were uncharged; both experiments resulted in the disappearance of this zone, confirming the electrostatic nature of the decrease. The other zone appeared at high salt concentrations (15-200 mM) and showed an increase in retention, in agreement with the characteristics expected for a partition mechanism.
Quantitative Evaluation of the Electrostatic Contribution to Retention

The retention of a charged analyte (BH+) on the sulfonic group (R-SO3−) of the stationary phase can be described according to

R-SO3− M+ + BH+ ⇌ R-SO3− BH+ + M+ (1)

where M+ represents the cation of the salt present in the eluent.
It has been reported [2,4] that electrostatic interactions fit (2):

k_EI = A [M+]^−1 (2)

where A is a constant similar to that described by McCalley [2,4], which depends on the equilibrium constant K of (1) and the acid-base ionization constant K_a of the analyte (BH+). Accordingly, a plot of the experimental retention factors (k_exp) against the inverse of the salt concentration ([M+]^−1) should be a straight line, proportional to A, with the ordinate at the origin being zero if electrostatic interactions are the only retention mechanism (2). Experimentally, the values found for salt concentrations in the 0.25-2.5 mM range (values of [M+]^−1 ranging from 4000 to 400 M−1) fit a second-order polynomial equation, with r2 values between 0.945 and 0.989 for NH4+ and between 0.913 and 0.957 for K+, indicating the existence of a minor additional effect (Table 1). The presence of other non-electrostatic retention mechanisms (e.g., a partition mechanism) is indicated by the fact that the values for the ordinate at the origin are different from zero (Table 1) and a straight line was not obtained.
Therefore, bearing in mind that the experimental retention factors (k_exp) must be the sum of the contributions of the electrostatic (k_EI) and non-electrostatic (k_NEI) interactions:

k_exp = k_EI + k_NEI (4)

and substituting (2) in (4), we have

k_exp = A [M+]^−1 + k_NEI (5)

According to (4) and (5), extrapolation of the second-order polynomial fittings to the ordinate at the origin ([M+]^−1 → 0) allows the retention factor due to any mechanism other than electrostatic interactions (k_NEI) to be calculated, since when [M+]^−1 → 0, then k_EI = 0. For a hypothetical infinite concentration of salts (when [M+]^−1 → 0 then [M+] → ∞), there would be "infinite competition" (of the salts against the target analytes) for the electrostatically active sites of the stationary phase; this would completely eliminate the electrostatic contribution to the retention mechanism of the analytes.
From these calculated k_NEI values, the percentage contribution of the electrostatic interaction (%EI) to the overall retention was calculated according to (6), for each analyte and for the different salt concentrations studied (Table 2):

%EI = 100 × (k_exp − k_NEI) / k_exp (6)

For the lowest salt concentration, 0.25 mM, the contribution of the electrostatic interactions to the retention mechanism varied between 25% and 52% for NH4+, and between 26% and 49% for K+. These contributions decreased progressively as the salt concentration rose; at a salt concentration of 2.5 mM, contributions were between 6% and 23% for NH4+ and between 6% and 16% for K+. These results are in good agreement with those found by McCalley working with four basic compounds [2] and by Kumar et al. for catecholamines [4].
The retention factors due to electrostatic interactions at a salt concentration of 0.25 mM, (k_EI)_0.25 mM, were also calculated (Table 2). These values can be determined from (4) as the difference between the experimental retention factors at 0.25 mM (k_exp) and the retention factors for an infinite concentration of salts (k_NEI). Here, it should be recalled that these k_NEI values were estimated by extrapolation of the experimental data to the origin (Table 1). Accordingly, the percentage values (Table 2) should be seen as a quantitative approximation of the electrostatic contribution to the overall retention.
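The fitting and extrapolation procedure can be summarized in a short numerical sketch; the retention factors below are hypothetical, but the steps (second-order fit of k_exp against [M+]^−1, intercept as k_NEI, then %EI from (6)) follow the text:

import numpy as np

salt_mM = np.array([0.25, 0.5, 1.0, 1.5, 2.5])   # NH4+ concentrations
k_exp   = np.array([8.0, 6.1, 5.0, 4.6, 4.2])    # illustrative retention factors

x = 1.0 / (salt_mM * 1e-3)                       # [M+]^-1 in M^-1 (4000 to 400)
c2, c1, c0 = np.polyfit(x, k_exp, deg=2)         # second-order polynomial fit

k_nei = c0                       # intercept: retention as [M+]^-1 -> 0 (eq. 5)
pct_ei = 100.0 * (k_exp - k_nei) / k_exp         # %EI per concentration (eq. 6)
for c, p in zip(salt_mM, pct_ei):
    print(f"{c:4.2f} mM: %EI = {p:5.1f}")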
Conclusions
The results obtained in the present work show that, for charged MNNs (modified nucleosides and nucleobases), separation on the ZIC-HILIC column is based on a shared mechanism of partition and interaction through weak electrostatic forces. The electrostatic contribution to retention was about 25-50% at low salt concentration in the eluent, although it should be noted that this contribution decreased significantly as the salt concentration rose. In order to exploit these interactions with a view to enhancing selectivity and resolution in ZIC-HILIC separations, the use of mobile phases at low salt concentrations should be considered.
Table 1: Parameters of the equation for the second-order polynomial fitting of the experimental retention factors (k_exp) versus the inverse of the salt concentration ([M⁺]⁻¹).
Table 2: Contribution of electrostatic interactions to the ZIC-HILIC retention of the positively charged MNNs at different salt concentrations.
Activation of transcription factors by extracellular nucleotides in immune and related cell types
Extracellular nucleotides, acting through P2 receptors, can regulate gene expression via intracellular signaling pathways that control the activity of transcription factors. Relatively little is known about the activation of transcription factors by nucleotides in immune cells. The NF-κB family of transcription factors is critical for many immune and inflammatory responses. Nucleotides released from damaged or stressed cells can act alone through certain P2 receptors to alter NF-κB activity or they can enhance responses induced by pathogen-associated molecules such as LPS. Nucleotides have also been shown to regulate the activity of other transcription factors (AP-1, NFAT, CREB and STAT) in immune and related cell types. Here, we provide an overview of transcription factors shown to be activated by nucleotides in immune cells, and describe what is known about their mechanisms of activation and potential functions. Furthermore, we propose areas for future work in this new and expanding field.
Introduction
Extracellular stimuli regulate gene expression via intracellular signaling cascades that control the activity of transcription factors. Nucleotides such as ATP are now considered essential autocrine/paracrine mediators that influence cell behavior through activation of P2 receptors. Release of nucleotides from injured or stressed cells may serve as 'danger signals' to cells of the immune system [1]. In addition, nucleotides acting through P2 receptors often potentiate responses to other mediators, such as neurotransmitters, growth factors or cytokines [2,3]. P2 receptors are expressed in many cell types, including cells of the immune system. There are two distinct subclasses of P2 receptors: P2X are ligand-gated ion channels permeable to Na + , K + and in many cases Ca 2+ ; whereas P2Y are G protein-coupled receptors [4]. Multiple P2X and P2Y receptors are often expressed on the same cell type and couple to diverse signaling pathways. Gene expression is regulated principally at the level of initiation of transcription. Mammalian genes consist of a protein coding sequence, a proximal upstream promoter (sequence to which basal transcription factors bind) and distant enhancer and/or silencer elements (sequences to which inducible transcription factors bind). Basal transcription factors are proteins that assemble with RNA polymerase to form an initiation complex. Binding of inducible transcription factors to their response elements enhances or sometimes represses formation of this initiation complex. In addition, gene expression can be regulated indirectly via interactions among transcription factors. Some transcription factors recruit chromatin remodeling and modification proteins such as histone acetylases. Acetylation of histones weakens the interaction between the nucleosome and DNA permitting assembly of the initiation complex. Alternately, recruitment of histone deacetylases leads to transcriptional repression [5]. The activity of inducible transcription factors can be regulated by several mechanisms, such as phosphorylation or dephosphorylation, binding of activating or inhibitory factors, or de novo synthesis.
Transcription factors play critical roles in the development and function of the immune system [reviewed in 6-9]. Although the regulation of transcription factors by nucleotides has been studied in detail in several neuronal and muscle cell types, relatively little is known about the responses of immune cells. Here, we review evidence for the regulation of transcription factors by nucleotides in immune and related cell types, as summarized in Table 1. We begin with a discussion of the role of P2 receptors in the regulation of NF-κB in macrophages and macrophage-like cell types, perhaps the best understood system. This is followed by brief descriptions of the regulation of AP-1, NFAT, ATF/CREB and STATs.
Methods for assessing activation of inducible transcription factors
Several approaches can be used to detect the activation of transcription factors. DNA binding can be directly monitored using the electrophoretic mobility shift assay (EMSA), also known as a gel shift assay. The basis of EMSA is that protein-DNA complexes migrate more slowly than free DNA when subjected to nondenaturing gel electrophoresis. To assess transcription factor activity, nuclear or whole-cell extracts are combined with labeled double-stranded DNA sequences containing one or more specific response elements. Retardation of DNA mobility indicates binding of protein. Specificity can be confirmed by including an antibody specific for that transcription factor, which will either block DNA binding or 'super-shift' the complex. Recently, protein-DNA array technology has been developed that permits profiling of the DNA binding activities of large numbers of transcription factors in one assay [21].
The chromatin immunoprecipitation (ChIP) technique allows analysis of protein-DNA interactions in living cells. The technique involves in vivo cross-linking of chromatin-associated proteins to DNA, its fragmentation, immunoprecipitation with specific antibodies and analysis of the DNA sequences obtained. Thus, ChIP permits the characterization of protein interactions with chromatin in its native conformation and reveals the effects of protein-protein interactions that may alter DNA binding [22]. In contrast, EMSA assesses the binding of proteins to short fragments of DNA in vitro. Reporter gene assays are also used to assess transcription factor activity in living cells. Vectors are constructed encoding the appropriate response elements upstream of a gene for an indicator protein, such as β-galactosidase or luciferase. Constructs are then expressed in the cell system of interest. Activation of the transcription factor results in quantifiable changes in the expression of the indicator protein.
There are also several indirect approaches for monitoring the activation of inducible transcription factors. These include: (a) monitoring phosphorylation of transcription factors by immunoblotting using phospho-specific antibodies; (b) assessment of translocation of factors from the cytosol to the nucleus using various techniques, such as immunoblots of cytosolic and nuclear fractions, or in situ immunolabeling; or (c) quantification of levels of transcription factors (for those regulated by de novo synthesis) or regulatory molecules.
Nuclear factor κB (NF-κB) family of transcription factors
The transcription factor nuclear factor κB (NF-κB) plays an important role in many types of immune cells [reviewed in 23,24]. NF-κB was first identified as a protein that binds to a specific DNA sequence (κB element) within the enhancer region of the immunoglobulin κ light chain gene in mature B cells [25]. NF-κB regulates the expression of a wide variety of genes that are involved in the regulation of immune and inflammatory responses, proliferation, tumor growth and cell survival. This family of transcription factors consists of five members in mammals: p65 (RelA), c-Rel, RelB, NF-κB1 (p50/p105) and NF-κB2 (p52/p100) [23]. They all contain an N-terminal DNA binding region known as the Rel homology domain and bind as homo- or heterodimers to κB elements. The Rel homology domain is 300 amino acids in length and, in addition to DNA binding, is responsible for dimerization and interaction with inhibitory IκB proteins. Transcriptional activation domains are found in the C-terminal region of p65, c-Rel, and RelB, and are important for inducing target gene expression. Due to lack of these domains, homodimers of p50 and p52 have no intrinsic ability to drive transcription. Both p50 and p52 are synthesized as longer precursors (p105 and p100, respectively) that are cleaved by the proteasome [26]. The other three members of the NF-κB family are synthesized as mature proteins.
In the classical activation pathway, NF-κB is maintained as an inactive complex in the cytoplasm when bound to inhibitory IκB proteins (which include IκBα, IκBβ, IκBε and IκBγ). Activation of NF-κB involves phosphorylation of IκB by an IκB kinase in response to a variety of stimuli, such as tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β) or lipopolysaccharide (LPS) [23]. Phosphorylation of IκB is followed by its ubiquitination and degradation, unmasking a nuclear import signal on NF-κB and hence permitting its translocation to the nucleus, where it regulates expression of target genes. In the alternative pathway for NF-κB activation, an IκB kinase phosphorylates inactive p100, triggering its processing and generating active p52/RelB heterodimers.
Ferrari and coworkers showed that extracellular ATP led to the activation of NF-κB in the N9 and N13 murine microglial cell lines [10]. Activation induced by ATP or LPS (used as a positive control) was monitored by EMSA to detect binding activity of total cell extracts to κB-specific oligonucleotide probes (Fig. 1a). Interestingly, relatively high concentrations of ATP were required to activate NF-κB, with maximal effects observed at 3 mM (Fig. 1b), consistent with involvement of the P2X7 receptor, which has a relatively low affinity for ATP. In contrast to its effects on NF-κB, ATP suppressed DNA binding activity of the transcription factor AP-1 (Fig. 1c), indicating that ATP has specific effects on different transcription factors.
Involvement of the P2X7 receptor in mediating ATP-induced activation of NF-κB was supported by the agonist specificity of the response. Responses were elicited by ATP, ATPγS or BzATP (a relatively potent P2X7 agonist), but not by the P2Y receptor agonists ADP or UTP. Moreover, activation of NF-κB was inhibited by the P2X7 receptor antagonist, oxidized ATP [10]. However, it is now known that oxidized ATP is not specific for the P2X7 receptor and can suppress NF-κB activation in cells lacking the P2X7 receptor [27,28]. The mechanism through which ATP causes activation of NF-κB is not clear. However, Ferrari and coworkers did show that activation was sensitive to inhibitors of proteasomal degradation and caspase activity. Furthermore, scavengers of reactive oxygen species were found to inhibit ATP-induced activation of NF-κB [10].
It is well established that many inflammatory cells, including microglia, respond to LPS through activation of NF-κB. LPS, which was used by Ferrari and coworkers as a positive control, induced NF-κB activation within 15 min. In contrast, activation of NF-κB was not observed until 3 h following stimulation with ATP [10]. This delay suggested an indirect effect of ATP, perhaps mediated via release of IL-1β or other cytokines. However, the possible role of soluble factors such as IL-1β was excluded using a series of approaches including media transfer experiments. Activation of NF-κB by LPS and ATP also differed in that LPS led to formation of the classical p65/p50 heterodimers and p50 homodimers, whereas ATP induced formation of p65 homodimers. Different NF-κB dimers have distinct actions on gene expression. Thus, activation of NF-κB by ATP in microglial cells likely has different transcriptional effects than NF-κB activation by LPS or other established proinflammatory mediators. BzATP has also been shown to regulate NF-κB signaling in the murine macrophage cell line RAW 264.7 [11]. Individually, LPS and BzATP were found to increase NF-κB DNA binding activity of nuclear extracts (measured using EMSA), while treatment with both agonists resulted in cooperative activation of NF-κB. To examine the mechanism, Aga and coworkers monitored levels of the NF-κB inhibitory protein IκBα by immunoblot (Fig. 2). Within 15 min, LPS alone rapidly induced degradation of IκBα, levels of which returned to normal within 30-45 min. Concomitant stimulation with BzATP delayed the reappearance of IκBα, consistent with more sustained NF-κB activity. The mechanism underlying these effects of nucleotides was thought to involve cross-talk with the Ras/MEK/ERK pathway. Although responses were attributed to P2X7 activation, it is possible that other P2 receptors were involved, since BzATP can activate a number of P2 receptor subtypes in addition to P2X7 [29]. In bacterial infections where both LPS and nucleotides are present, cooperative activation of NF-κB in macrophages may enhance inflammatory responses.
Fig. 1a-c N9 cells were either left untreated (Control) or treated with ATP (3 mM) or LPS (100 ng/ml) as a positive control. After 3 h, total cell extracts were prepared and analyzed with an NF-κB-specific oligonucleotide by EMSA. The NF-κB-DNA complex is indicated by filled arrowheads in a and b; a faster migrating nonspecific complex is marked by circles. b Dependence of NF-κB activation on the concentration of ATP or LPS. N9 cells were treated for 3 h with the indicated concentrations of ATP or LPS and analyzed by EMSA. c The effect of ATP and LPS on the DNA-binding activity of AP-1. The same cellular extracts as in a were analyzed with an AP-1-specific oligonucleotide by EMSA. The AP-1-DNA complex is indicated by the filled arrowhead. In contrast to its effects on NF-κB, ATP suppressed the DNA binding activity of AP-1. Reproduced from [10], with copyright permission of The Rockefeller University Press.

Recent work has shown direct involvement of the P2X7 receptor in mediating ATP-induced activation of NF-κB in osteoclasts. Osteoclasts, the large multinucleated cells responsible for bone resorption, form by the fusion of precursors of the monocyte-macrophage lineage [30]. The essential role of NF-κB in osteoclast development was discovered in genetically modified mice lacking both the p50 and p52 subunits of NF-κB, in which osteoclasts failed to develop, resulting in severe osteopetrosis [31]. Osteoclasts are terminally differentiated cells that cannot be isolated in sufficient numbers or purity to permit assessment of transcription factor activation using conventional assays such as EMSA. To overcome this limitation, Korcok and coworkers used immunofluorescence to monitor localization of the p65 subunit of NF-κB, which upon activation translocates from the cytoplasm to the nucleus [12]. BzATP was applied to osteoclasts isolated from wild-type and P2X7 receptor knockout mice. In wild-type osteoclasts, BzATP increased the proportion of cells with nuclear localization of NF-κB within 30 min (Fig. 3a,d), whereas BzATP had no effect on osteoclasts lacking the P2X7 receptor (Fig. 3c,e). This finding confirmed the involvement of P2X7 receptors in mediating the actions of ATP on NF-κB. Interestingly, in contrast to mouse osteoclasts, which showed maximal effects 30 min following exposure to BzATP, rabbit osteoclasts showed maximal nuclear localization of NF-κB 3 h after exposure to nucleotide [12], similar to the delayed response observed by Ferrari and coworkers in microglial cells [10]. The reason for these differences in activation kinetics among cell types and species of origin is not clear. Ferrari and coworkers found that ATP induced formation of p65 homodimers in microglial cells [10]; whether p65 translocated as a homodimer or as a heterodimer in osteoclasts was not investigated [12].
In addition to P2X7, P2Y6 receptors have been implicated in the activation of NF-κB in osteoclasts and macrophages. To examine P2Y receptor-mediated activation of NF-κB in osteoclasts, immunofluorescence was again used by Korcok and coworkers [13]. Rabbit osteoclasts were exposed to various P2Y receptor agonists. UDP, an agonist at P2Y6 receptors, increased the proportion of osteoclasts displaying nuclear localization of NF-κB with maximal effects at 3 h (Fig. 4a,d). Control osteoclasts showed low levels of nuclear NF-κB for up to 4 h (Fig. 4b).
The extent of NF-κB translocation was dependent on UDP concentration, with maximal effect observed at 10-100 μM UDP (Fig. 4e). Lactacystin, a proteasome inhibitor, abolished the UDP-induced activation of NF-κB. In contrast to its action on osteoclasts, UDP did not induce NF-κB translocation in bone marrow stromal cells (Fig. 4c), indicating specificity for cell type.
In addition to UDP, significant translocation of NF-κB was induced by INS48823 [13], a selective agonist at the P2Y6 receptor [32]. In contrast, no response was elicited by 2-methylthio-ADP (P2Y1 receptor agonist), UTP (P2Y2 receptor agonist) or low concentrations of ATP sufficient to activate multiple P2 receptors including P2Y2 and P2X4, but not P2X7. Thus, the agonist specificity for NF-κB activation indicates the involvement of P2Y6, but not other P2Y receptor subtypes known to be present on osteoclasts.
UDP and INS48823 enhanced survival of osteoclasts in vitro [13], presumably by inhibiting apoptosis-the predominant form of cell death for osteoclasts [33]. NF-κB has been implicated in controlling the survival of a number of cell types [34], including osteoclasts [35]. In this regard, a cell-permeable peptide inhibitor of NF-κB was found to block the effect of P2Y6 activation on osteoclast survival [13]. Enhancement of osteoclast survival through the P2Y6 receptor is consistent with data from 1321N1 human astrocytes, in which activation of P2Y6 receptors prevented TNF-α-induced apoptosis [36].
The predominant P2Y receptor expressed by the J774 murine macrophage cell line has been shown to be P2Y6 [15]. Exposure of these cells to UTP alone, which presumably leads to activation of P2Y6 receptors, does not activate NF-κB; however, UTP does potentiate LPS-induced activation of NF-κB as well as expression of inducible nitric oxide synthase (iNOS) [14]. In a series of studies, Chen and coworkers found that UTP increased LPS-induced phosphorylation and degradation of IκBα, by enhancing activation of IκB kinase [16]. Enhancement of IκB kinase activity appeared to involve several signaling pathways, including P2Y6-mediated elevation of cytosolic free Ca²⁺ and subsequent activation of Ca²⁺/calmodulin-dependent protein kinase (CaM kinase) [14,16].
In summary, NF-κB is critical for immune and inflammatory responses by regulating the expression of key cytokines, growth factors and effector enzymes. The NF-κB pathway is highly conserved among species [37] and plays crucial roles in the development, activity and survival of osteoclasts [31,35,38]. Nucleotides released from damaged or stressed cells can alter gene expression on their own or regulate responses induced by LPS, thus playing potentially important roles in innate immunity. It will be of interest to examine whether nucleotide signaling through P2 receptors modulates NF-κB activation induced by other inflammatory mediators in addition to LPS.

Activator protein-1 (AP-1) family of transcription factors

AP-1 complexes are dimers composed of members of the Jun and Fos families; Jun proteins can form homodimers as well as heterodimers with Fos proteins. Some members of the ATF/CREB family of transcription factors can also participate in AP-1 complexes. With many possible combinations of heterodimers and homodimers comprising the AP-1 complex, a broad diversity in gene regulation can be achieved [39,40].
AP-1 translates extracellular signals in immune and related cell types into changes in the expression of specific target genes, which in turn control proliferation, differentiation and apoptosis [39]. AP-1 activity can be regulated by de novo synthesis of subunits, dimer composition, posttranslational modifications and interactions with other proteins. Two of the components of AP-1 (Jun and Fos) were first identified as viral oncoproteins, so their role in tumorigenesis is also well established.
Fig. 3a-e BzATP acts through P2X7 receptors to induce nuclear translocation of NF-κB in murine osteoclasts. Osteoclasts isolated from wild-type (WT) and P2X7 receptor knockout (KO) mice were treated with BzATP (300 μM) or vehicle for 0-4 h. a-c The p65 subunit of NF-κB was visualized by immunofluorescence (green, left). All nuclei were stained with TOTO-3 (red, middle), with superimposed images of NF-κB and TOTO-3-stained nuclei at right. a BzATP-treated WT osteoclasts showed nuclear localization of NF-κB at 30 min (evident as yellow staining in aiii). b, c In contrast, vehicle-treated WT and BzATP-treated KO osteoclasts showed cytoplasmic localization of NF-κB. d WT osteoclasts treated with BzATP exhibited a significant increase in nuclear translocation of NF-κB at 30 min compared with time 0 (*P<0.05). e In contrast, KO osteoclasts did not show a significant change in NF-κB translocation at any time point after BzATP treatment. Data are the percentage of osteoclasts with nuclear localization of NF-κB. Reproduced from [12], with permission of the American Society for Bone and Mineral Research.

Budagian and coworkers examined the effects of ATP on AP-1 activity in the Jurkat human T-lymphoblastoid cell line [17]. Using immunoblots, they found that expression of Jun and Fos increased 30-60 min following exposure to ATP. Effects were observed only at ATP concentrations of 1 mM or higher, consistent with involvement of the P2X7 receptor. In keeping with increased expression of Jun and Fos, EMSA of nuclear extracts indicated that AP-1 DNA binding activity increased within 60 min following exposure to ATP and continued to intensify for at least 3 h. Supershift analysis confirmed that the AP-1 complex contained Jun and Fos. Surprisingly, ATP transiently suppressed NF-κB DNA binding activity within 1 h of exposure, with recovery by 3 h. Thus, ATP has opposite effects in Jurkat cells compared with its effects in microglial cells, in which ATP activates NF-κB and suppresses AP-1 [10]. Nevertheless, the effects of ATP were presumably mediated by P2X7 receptors in both cell types [10,17].
In Jurkat cells, the mechanism mediating the effect of ATP on AP-1 was shown to involve the tyrosine phosphorylation and activation of p56 lck [17] (a tyrosine kinase essential for signal transduction through the T-cell receptor [41]). In Jurkat cells, but not JCaM1 cells (a mutant Jurkat cell line that lacks p56 lck ), ATP activated ERK and Jun N-terminal kinase and enhanced AP-1 activity, indicating an essential role for p56 lck in regulating AP-1. It is known that AP-1 stimulates expression of IL-2 in T lymphocytes [42,43]. In this regard, Budagian and coworkers found that ATP stimulated IL-2 expression in Jurkat cells but not in JCaM1 cells, consistent with a role for AP-1 in mediating this response to ATP.
In addition to examining the effects of UTP on NF-κB signaling, Chen and coworkers assessed the effects of P2Y6 receptor activation on AP-1 activity in the J774 macrophage cell line [14]. Using EMSA, they found that UTP and LPS each induced AP-1 activation and that their effects were additive. In summary, the ability of nucleotides to activate AP-1 likely depends upon the subtypes of P2 receptors involved as well as cell type. Furthermore, cross-talk with other signaling pathways and transcription factors likely influences the ultimate effects of AP-1 activation on gene expression.
Nuclear factor of activated T-cells (NFAT) family of transcription factors
NFAT is a family of transcription factors that regulate the differentiation and activation of a number of immune and related cell types, including T-cells and osteoclasts [7,8]. This family of proteins consists of five members (NFAT1-5), of which four are regulated by calcium signaling [7]. All NFAT proteins contain a highly conserved Rel-homology region that confers common DNA-binding specificity. Inactive NFAT is maintained in the cytoplasm in a hyperphosphorylated state. Elevation of cytosolic free Ca 2+ stimulates the phosphatase calcineurin, which in turn dephosphorylates multiple serine residues, exposing a nuclear localization signal that permits translocation of NFAT to the nucleus.
Ferrari and coworkers examined the effect of ATP on NFAT activation in the N9 murine microglial cell line [18]. A high concentration of ATP (3 mM) activated NFAT, as determined by assaying nuclear extracts using EMSA (Fig. 5). In contrast to the delayed effect of ATP on activation of NF-κB in these cells, NFAT activity was evident within 1 min, reached a maximum after 15 min and diminished in the following 60 min. As shown previously for the effects of ATP on NF-κB [10], activation of NFAT was mediated by the P2X7 receptor [18]. This conclusion was based on agonist specificity, with responses induced by high concentrations of ATP or BzATP, but not by other nucleotides. In addition, ATP did not activate NFAT in a P2X7 receptor-deficient cell clone N9R17, although NFAT could still be activated in these cells by Ca 2+ ionophore. In this regard, ATP-induced NFAT activation in N9 cells was dependent on the presence of extracellular Ca 2+ and was prevented by the calcineurin inhibitors cyclosporin A and FK506, consistent with NFAT activation through the canonical signaling pathway.
Using competition and supershift assays in N9 cells, Ferrari and coworkers demonstrated that ATP induced activation of NFAT-1 and NFAT-2, but not NFAT-3 or NFAT-4 [18]. These findings were confirmed by immunoblot analyses of nuclear extracts from cells stimulated with ATP. The functional role of ATP-induced NFAT activation in microglial cells is poorly understood; however, it was proposed that microglial NFAT may play a role in inducible expression of cytokines and, thus, mediate proinflammatory responses to ATP in the central nervous system [18].
Most isoforms of NFAT are activated by Ca 2+ elevation, and signaling through many P2X and P2Y receptors involves Ca 2+ increase (by influx or release from stores, respectively). Thus, future studies may reveal NFAT to be downstream of many P2 receptors. The versatility of NFAT as a regulator of gene expression is thought to be due to different binding partners within the nucleus. Thus, the effects of extracellular nucleotides on the activation of NFAT and its various binding partners will be an important area for future research.
Activating transcription factor (ATF)/cyclic AMP response element binding protein (CREB) family of transcription factors

The ATF/CREB family plays important roles in the regulation of a number of cell functions, including proliferation and apoptosis. Members of the ATF/CREB family dimerize with themselves or other family members and bind to the cyclic AMP response element (CRE) on target genes. Activation of CREB involves phosphorylation of a single serine residue (Ser133), which in turn promotes association of CREB with the coactivator CREB-binding protein (CBP), resulting in transcriptional activation [44]. CREB can be activated by multiple kinases, including: protein kinase A, regulated by cyclic AMP; CaM kinase, regulated by cytosolic Ca²⁺; and ribosomal S6 kinase 2 (RSK2), regulated by MAP kinase pathways [45,46]. CREB controls the expression of a number of genes, including members of the AP-1 family of transcription factors.
Brautigam and coworkers have examined signaling pathways activated by extracellular nucleotides in the BV-2 murine microglial cell line [19]. LPS induces expression of iNOS and COX-2 in these cells, a process that was inhibited by ATP and 2-chloro-ATP (a P2Y1 agonist). ATP also suppressed LPS-induced NO production in this microglial cell line, an effect that was reversed by SB 203580, an inhibitor of p38 MAP kinase. CREB activation was assessed indirectly by monitoring Ser133 CREB phosphorylation by immunoblot of whole-cell lysates (Fig. 6). Both ATP (500 μM) and 2-chloro-ATP (10 μM) rapidly induced phosphorylation of CREB and ATF-1 (phosphorylated ATF-1 was also recognized by the antibody used to detect phosphorylated CREB). SB 203580 inhibited ATP-induced phosphorylation of CREB and ATF-1, implicating p38 MAP kinase in their activation. The potential role of CREB in the suppression of iNOS and COX-2 expression is not immediately obvious [47]; however, one possibility considered by Brautigam and coworkers is that activation of CREB sequesters the transcriptional coactivator CBP, preventing its interaction with p65 and thereby inhibiting the expression of NF-κB-dependent genes such as iNOS [19].
Thus, Brautigam and coworkers found that activation of P2Y1 and possibly other P2 receptors suppresses LPS-induced expression of iNOS and COX-2 in a microglial cell line [19]. This is in contrast to the findings of Chen and coworkers that P2Y6 receptors potentiate LPS-induced expression of these inflammatory genes in murine J774 macrophages [14,15]. Further studies of the effects of nucleotides on CREB activation may be informative. In T lymphocytes, engagement of the T-cell receptor leads to phosphorylation of CREB on Ser133 by a pathway that involves activation of p56lck, protein kinase C, Ras, Raf-1, MEK and RSK2 [45]. Since ATP has been shown to activate p56lck in Jurkat cells [17], future studies should examine the possibility that ATP also induces activation of CREB in these cells.
Signal transducer and activator of transcription (STAT) family
The Janus kinase (JAK)/STAT pathway transduces signals from cytokines, growth factors or cellular stress to regulate a number of processes including proliferation, differentiation, cell migration and apoptosis [48]. Binding of a cytokine or growth factor induces dimerization of its receptor. JAK tyrosine kinases are then recruited to these dimerized receptor complexes. JAKs are activated by transphosphorylation and subsequently phosphorylate and activate STATs (latent transcription factors that reside in the cytoplasm until activated). There are seven mammalian STATs that, upon phosphorylation, dimerize and enter the nucleus, where they bind specific regulatory sequences to activate or repress the transcription of target genes [49].
Bulanova and coworkers have recently reported the responses to extracellular ATP of murine bone marrow-derived mast cells and two mast cell lines, MC/9 and P815 [20]. These cells express multiple subtypes of P2X and P2Y receptors. Apoptosis of these cells was induced by ATP (1-3 mM) or BzATP (100 μM), consistent with involvement of the P2X7 receptor. In addition, immunoblotting of whole-cell lysates with phospho-specific antibodies established that ATP (3 mM) induced phosphorylation of STAT6 and weak phosphorylation of JAK2. These effects were rapid and transient with maximal phosphorylation evident 15 min following exposure to ATP. In contrast, there was no apparent change in the other members of the JAK family (JAK1, JAK3, Tyk2) or in STAT1, STAT3 or STAT5, indicating specificity of activation. ATP-induced phosphorylation of JAK2 and STAT6 was abolished by the P2X7 antagonists KN-62 or oxidized ATP. However, due to lack of specificity of these antagonists, it is difficult to interpret these findings. First, although KN-62 is a potent antagonist of the human and mouse P2X7 receptor [50], it is also an inhibitor of CaM kinase II [51]. Second, as mentioned above, oxidized ATP is not a specific P2X7 antagonist [27,28].
Additional findings of Bulanova and coworkers show that high concentrations of ATP or BzATP enhance the production of several cytokines in mast cells, including IL-6 and IL-13, which are known to signal through the JAK/STAT pathway. Therefore, ATP may stimulate the rapid activation and release of cytokines or other signaling molecules that bind to receptors on mast cells in an autocrine fashion to activate JAK/STAT signaling.

(Fig. 6 reproduced from [19], with permission of Elsevier.)

Fig. 5 Kinetics of ATP-induced NFAT and NF-κB activation in a microglial cell line. N9 cells were stimulated for the indicated times with ATP (3 mM) and nuclear extracts were analyzed by EMSA. Filled arrowheads indicate positions of the NFAT and NF-κB DNA complexes. Faster-migrating nonspecific complexes are indicated by circles. NFAT activity was evident within 1 min following exposure to ATP, reached a maximum after 15 min and diminished following 60 min. In contrast, NF-κB activation was relatively delayed and sustained. Reproduced from [18], with permission of the American Society for Biochemistry and Molecular Biology.
Concluding remarks
Evidence is accumulating that specific P2 receptors are involved in the regulation of proliferation, differentiation, activity and survival of immune and related cell types through the activation of distinct transcription factors. The lack of selective agonists and antagonists for many subtypes of P2 receptors has made it difficult to assess their specific roles. Development of genetically modified mouse models (in which receptor subtypes are overexpressed or deleted) will provide critical information on the transcriptional effects of specific subtypes of P2 receptors. Such information will be instrumental in determining the roles of these receptors in inflammation and immunity, as well as related processes such as osteoclastic bone resorption [52]. This knowledge may lead to the identification of P2 receptors or components of their downstream signaling pathways as targets for therapeutic intervention in inflammatory and immune disorders.
Leukocytes express multiple subtypes of P2X and P2Y receptors that play diverse roles in regulating the development of the immune system and in modulating inflammatory and immune responses. Although great progress has been made in sorting out many of the initial signaling events triggered by activation of P2 receptors, relatively little is understood about their role in transcriptional regulation. Given the availability of high throughput transcription factor arrays, ChIP and reporter assay screens, rapid progress in this field is expected. An important question to be answered is whether the effects of nucleotides on transcription factor activation are direct-mediated through P2 receptors-or indirect-mediated by autocrine/paracrine factors whose release is stimulated by P2 receptor activation.
Further studies are clearly needed to elucidate the effects of nucleotides on the transcriptional machinery of various immune cell types, and interactions with pathways activated by cytokines, growth factors, extracellular matrix and other stimuli. The transcriptional effects of nucleotides will undoubtedly depend on the nature of the stimulus, P2 receptor expression, cell type and cellular environment. Therefore, the actions of nucleotides on processes such as proliferation and differentiation must eventually be considered within the context of a complex dynamic network of signaling pathways that are activated in spatially and temporally distinct patterns.
Isotherms and Kinetics Study for Adsorption of Nitrogen from Air using Zeolite Li-LSX to Produce Medical Oxygen
This research investigates the adsorption isotherm and adsorption kinetics of nitrogen from air using a packed bed of Li-LSX zeolite to produce medical oxygen. Experiments were carried out to estimate the produced oxygen purity under different operating conditions: input pressure of 0.5-2.5 bar, air feed flow rate of 2-10 L·min⁻¹ and packing height of 9-16 cm. The adsorption isotherm was studied at the best conditions of input pressure of 2.5 bar, packing height of 16 cm, and flow rate of 6 L·min⁻¹ at ambient temperature; under these conditions the system produced its highest oxygen purity, 73.15 vol% of the outlet gas. The Langmuir isotherm was the best model representing the experimental data, with a maximum monolayer coverage (qm) of 200 mg·g⁻¹ and Kl of 0.00234 L·mg⁻¹. Also, the sorption intensity (n) of 1.435 from the Freundlich isotherm model indicated favorable sorption. The average free energy estimated from the Dubinin-Radushkevich isotherm model was 0.02 kJ·mol⁻¹, which indicated that the adsorption process is physical in nature. The experimental results were consistent with the pseudo-first-order kinetic model.
Pressure-Vacuum Swing Adsorption Process
The COVID-19 outbreak has stimulated researchers to investigate alternative sources of medical oxygen to the conventional cryogenic process, which consumes a large amount of energy. The need for portable medical oxygen concentrators (MOCs) has increased hugely because of pulmonary failure caused by COVID-19, as well as chronic bronchitis, pneumonia and chronic obstructive pulmonary disease (COPD), to avoid problems caused by hypoxemia [1-3]. Adsorption, membranes, and cryogenic separation are the three main ways to separate the components of air [1]. Adsorption is the most promising method for its simplicity, appropriateness in terms of moderate conditions, and low energy consumption; moreover, separation by adsorption can produce very pure oxygen [1]. Much of the equipment that produces medical oxygen is designed to deliver pure oxygen [2]. Different adsorption methods have been tried for selectively adsorbing nitrogen gas from air to produce pure oxygen: pressure swing, vacuum swing, and combined pressure/vacuum swing adsorption (PVSA) processes using N2-selective adsorbents [3]. The temperature swing adsorption (TSA) process has also been investigated [6].
It has been demonstrated that PSA is more feasible than the TSA process [2], because a PSA cycle lasts from one to several seconds, unlike a TSA cycle, which can extend to hours [6].
PVSA is a common way to separate gases: the adsorption step is carried out above atmospheric pressure and the regeneration step is conducted under vacuum [7]. Three points make PVSA superior to PSA. First, the O2/N2 recovery rate and purity of the PVSA process are better than those of the PSA process [3]. The gas exiting the heavy-reflux and light-reflux streams is reused, which is why the PVSA yield exceeds that of PSA; this increases both the productivity of the PVSA process and the capacity of the adsorbents relative to PSA. Second, in terms of total energy use, the PSA process consumes more energy than the PVSA process, because PVSA includes a vacuum pump, which uses less energy than a conventional pump. Third, the mass transfer zones for the light and heavy components often overlap in a conventional dual-reflux PSA process with an intermediate feed; this problem can be solved with the newly integrated PVSA processes. Zeolites are well suited to separating air into its components because the quadrupole moment of the N2 molecule interacts with the extra-framework cations of the zeolite. The lithium (Li⁺) cation on LSX zeolite adsorbs the N2 molecule more strongly than the O2 molecule from air [8]. Consideration of adsorbent capacity and productivity leads to the study of the adsorption isotherms of the system used in this research, since they give information about the full capability of the adsorbent. The study of adsorption kinetics likewise gives a clear picture of the rate of adsorption and the mechanism it follows.
Adsorption Isotherm Models
Adsorption isotherms are mathematical models that describe the distribution of the adsorbed species between the adsorbent material and the fluid phase at equilibrium. Langmuir, Freundlich, Dubinin-Radushkevich, and others [8,9] proposed adsorption isotherm models. The Langmuir model, originally developed to represent gas/solid-phase interaction, has been used to compare and measure the capacity of different adsorbents [8]. The Langmuir isotherm accounts for surface coverage by balancing the relative rates of adsorption and desorption (dynamic equilibrium): adsorption depends on the fraction of the adsorbent surface that is open, while desorption depends on the fraction that is covered [6]. One of several forms of the Langmuir equation is Eq. 1 [6]:

$$q_e = \frac{q_m K_l C_e}{1 + K_l C_e} \quad (1)$$

where qe is the amount of adsorbate adsorbed per gram of adsorbent at equilibrium (mg·g⁻¹), qm the maximum monolayer coverage capacity (mg·g⁻¹), Kl the Langmuir isotherm constant (L·mg⁻¹), and Ce the equilibrium concentration of adsorbate (mg·L⁻¹).
The Freundlich model assumes non-ideal adsorption: the adsorbent has a heterogeneous surface and adsorption is not restricted to the formation of a monolayer. The model can be linearized as Eq. 2 [9,10]:

$$\ln q_e = \ln K_f + \frac{1}{n}\ln C_e \quad (2)$$

where qe is the amount of adsorbate adsorbed per gram of adsorbent at equilibrium (mg·g⁻¹), Kf the Freundlich isotherm constant (mg·g⁻¹), n the adsorption intensity, and Ce the equilibrium concentration of adsorbate (mg·L⁻¹).
Generally, the Dubinin-Radushkevich isotherm is used to describe adsorption with a Gaussian energy distribution on a heterogeneous surface [11]. The model has often fitted data well for high solute activities and the middle range of concentrations. The equation describing this model is Eq. 3 [11]:

$$q_e = q_m \exp\left(-K_{DR}\,\varepsilon^2\right) \quad (3)$$

where qe is the amount of adsorbate adsorbed per gram of adsorbent at equilibrium (mg·g⁻¹), qm the theoretical isotherm saturation capacity (mg·g⁻¹), K_DR the Dubinin-Radushkevich isotherm constant (mol²·kJ⁻²), and ε the Polanyi potential (kJ·mol⁻¹).
ε can be estimated by the following Eq. 4 [11]:

$$\varepsilon = RT \ln\left(1 + \frac{1}{C_e}\right) \quad (4)$$

where R is the gas constant (8.31 J·mol⁻¹·K⁻¹), T the absolute temperature, and Ce the equilibrium concentration of adsorbate (mg·L⁻¹).
The model is applied to distinguish physical from chemical adsorption; the mean free energy E, defined per molecule of adsorbate (the energy required to remove a molecule from its position in the sorption layer to infinity), may be calculated by Eq. 5 [11]:

$$E = \frac{1}{\sqrt{2K_{DR}}} \quad (5)$$

The mean adsorption energy E is less than 8 kJ·mol⁻¹ for physical adsorption, whereas for chemical adsorption the energy is between 8 and 16 kJ·mol⁻¹ [12].
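As an illustration of how the three isotherms can be fitted in their linearized forms, the following Python sketch regresses hypothetical (Ce, qe) data; the numerical values and the assumed ambient temperature are invented for demonstration, so only the procedure, not the numbers, reflects this study.

```python
import numpy as np

R = 8.31e-3  # gas constant, kJ·mol⁻¹·K⁻¹
T = 298.15   # K, assumed ambient temperature

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g), illustrative only.
Ce = np.array([50.0, 120.0, 250.0, 480.0, 900.0])
qe = np.array([20.0, 42.0, 73.0, 105.0, 135.0])

# Langmuir, linearized: Ce/qe = 1/(Kl*qm) + Ce/qm
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_L, Kl = 1.0 / slope, slope / intercept
print(f"Langmuir: qm = {qm_L:.1f} mg/g, Kl = {Kl:.5f} L/mg")

# Freundlich, linearized: ln qe = ln Kf + (1/n) ln Ce
slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
n, Kf = 1.0 / slope, np.exp(intercept)
print(f"Freundlich: n = {n:.3f}, Kf = {Kf:.3f} mg/g")

# Dubinin-Radushkevich: ln qe = ln qm - K_DR*eps^2, eps = RT ln(1 + 1/Ce)
eps = R * T * np.log(1.0 + 1.0 / Ce)
slope, intercept = np.polyfit(eps**2, np.log(qe), 1)
K_DR, qm_DR = -slope, np.exp(intercept)
E = 1.0 / np.sqrt(2.0 * K_DR)   # mean free energy, kJ/mol (Eq. 5)
print(f"D-R: qm = {qm_DR:.1f} mg/g, E = {E:.3f} kJ/mol")
```

Each regression returns the model constants directly from the slope and intercept; comparing the r² of the three fits is then what singles out the best-performing isotherm, as done in Table 3.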
This study focused on the adsorption isotherm models of N2 gas from ambient air on Li-LSX zeolite to produce medical oxygen, as a necessary step in scaling up the system.
Adsorption kinetics
Kinetics studies adsorption rates to explain the mechanism that dominates in a given system. Studying adsorption kinetics means investigating the experimental conditions that affect the rate of adsorption and, in turn, finding the factors that affect reaching equilibrium. Such studies indicate the possible way adsorption works and the different steps that lead to the final adsorbate-adsorbent complex; they also help in formulating suitable mathematical models to describe the interactions. Once the rates and the factors affecting them are clear, they can be used to design adsorbent materials for industrial use and to unravel the complex dynamics of the adsorption process [7]. Adsorption kinetics are very important for determining the equilibrium adsorption capacity and the rate constants. The most common kinetic models are the pseudo-first-order and pseudo-second-order models [13]. In the pseudo-first-order model, the adsorption capacity is related to the rate of adsorption as in Eq. 7 [7]:

$$\ln(q_e - q_t) = \ln q_e - k_1 t \quad (7)$$

where qe is the adsorption capacity at equilibrium (mg·g⁻¹), qt the adsorption capacity at any time t (mg·g⁻¹), and k1 the rate constant of the pseudo-first-order adsorption process (min⁻¹).
This model is found to fit the initial 20 to 30 min of interaction between the adsorbate and the adsorbent, but not the overall extent of contact [14].
The pseudo-second-order kinetic model can be written in the linearized form of Eq. 8 [8]:

$$\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \quad (8)$$

where k2 is the pseudo-second-order rate constant (g·mg⁻¹·min⁻¹). The initial phase of the adsorption process is assumed to be described by the pseudo-first-order model, but over the whole range of adsorption it is likely that the process follows a non-linear model representing the complex mechanism of interaction between the adsorbate and the adsorbent.
For porous adsorbents, the diffusion of adsorbate molecules into the pores needs to be considered when looking for a good kinetic model for the process. In many cases, the rate at which an adsorbate is taken up may be controlled by intra-particle diffusion, represented by the following well-known expression [16]:

$$q_t = k_i\, t^{0.5} \quad (9)$$

The crucial aspect of this formula is that the linear plot of qt versus t^0.5 must pass through the origin (zero intercept). Consequently, the intra-particle diffusion model is easily testable, demonstrating whether the diffusion process dominates the kinetics. The slope of the graph can be used to calculate the rate coefficient ki (mg·g⁻¹·min⁻⁰·⁵) [17].
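The following minimal Python sketch shows how the three kinetic models can be compared on hypothetical uptake data qt(t); the time series and the assumed experimental qe are illustrative values, not measurements from this work.

```python
import numpy as np

# Hypothetical uptake data: time (min) and adsorbed amount qt (mg/g).
t = np.array([1.0, 2.0, 4.0, 8.0, 15.0, 30.0])
qt = np.array([12.0, 22.0, 36.0, 48.0, 55.0, 58.0])
qe_exp = 58.5  # assumed experimental equilibrium capacity, mg/g

# Pseudo-first order (Eq. 7): ln(qe - qt) = ln qe - k1*t
mask = qt < qe_exp                       # ln() needs qe - qt > 0
s1, i1 = np.polyfit(t[mask], np.log(qe_exp - qt[mask]), 1)
k1, qe_pfo = -s1, np.exp(i1)
print(f"PFO: k1 = {k1:.3f} 1/min, qe = {qe_pfo:.1f} mg/g")

# Pseudo-second order (Eq. 8): t/qt = 1/(k2*qe^2) + t/qe
s2, i2 = np.polyfit(t, t / qt, 1)
qe_pso, k2 = 1.0 / s2, s2**2 / i2
print(f"PSO: k2 = {k2:.4f} g/(mg·min), qe = {qe_pso:.1f} mg/g")

# Intra-particle diffusion (Eq. 9): qt = ki*sqrt(t); intercept should be ~0
si, ii = np.polyfit(np.sqrt(t), qt, 1)
print(f"IPD: ki = {si:.2f} mg/(g·min^0.5), intercept = {ii:.2f}")
```

Checking which linearization gives the higher r², and whether the intra-particle intercept is close to zero, is the same reasoning used in Table 4 to select the dominant kinetic model.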
Materials
Zeolite Li-LSX was used in the experiments; its technical specifications are listed in Table 1.
Equipment
The experimental setup is shown in Fig. 1. Fig. 2 shows photos of the experimental setup.
All the equipment used is listed in Table 2. At the beginning, the adsorbent (zeolite Li-LSX) was heated for 45 minutes in an oven at 110 °C to eliminate moisture and other impurities. The zeolite was then packed randomly in the column. Helium gas was passed through the packing to refresh it and prepare it to adsorb N2. Air was compressed to a specific pressure through the drum to keep the air flow in the column stable during the experiments. The air was also passed through a filter filled with silica gel to remove moisture and impurities. The flow was set to a given flow rate with a flow meter. Nitrogen gas was adsorbed on the zeolite and oxygen-rich gas was produced at the outlet. The oxygen produced was split into two streams: one to the sensor, which measured oxygen purity as a volume percentage of the generated gas, and the other to the storage cylinder. The nitrogen volume percentage at the inlet and outlet was determined by subtracting the oxygen percentage from 100%. There was also a purge stream to manage any unusual increase in the outlet pressure. After each adsorption experiment, a desorption operation was performed to regenerate the zeolite; it was carried out under a vacuum pressure of -0.9 bar for 2 minutes. The amount of adsorbed nitrogen was estimated from the difference between the inlet and outlet concentrations, after converting the partial pressures to concentrations by treating nitrogen as an ideal gas at the experimental conditions.
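For the conversion mentioned in the last step, a short sketch of the ideal-gas calculation is given below; the outlet pressure and temperature used are assumptions for illustration, not reported operating values.

```python
R = 0.08314      # gas constant, L·bar·mol⁻¹·K⁻¹
T = 298.15       # K, assumed ambient temperature
M_N2 = 28.0134   # molar mass of N2, g·mol⁻¹

def n2_concentration(total_pressure_bar: float, n2_vol_frac: float) -> float:
    """Ideal-gas N2 concentration (mg/L) from total pressure and N2 volume fraction."""
    p_n2 = total_pressure_bar * n2_vol_frac   # partial pressure of N2, bar
    molar_conc = p_n2 / (R * T)               # mol/L, from pV = nRT
    return molar_conc * M_N2 * 1000.0         # mg/L

# Example: inlet air at 2.5 bar with 78 vol% N2, versus an outlet stream
# assumed near 1 bar containing 73.15 vol% O2 (the remainder taken as N2).
c_in = n2_concentration(2.5, 0.78)
c_out = n2_concentration(1.0, 1.0 - 0.7315)
print(f"inlet N2: {c_in:.0f} mg/L, outlet N2: {c_out:.0f} mg/L")
```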
Response Surface Methodology
The response surface methodology was used to investigate the interactions between the variables that affected the purity of O2, and to optimize and scale up the current laboratory setup. In this respect, experiments were designed using the Box-Behnken design (BBD). Fifteen experiments were carried out with various combinations of the studied variables (inlet pressure, packing height, and flow rate) to determine which factors and which of their interactions had the major influence on the purity of the generated oxygen [18].
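A three-factor Box-Behnken design of this kind can be enumerated directly, as the sketch below shows: it generates the 15 coded runs (12 edge midpoints plus 3 center points). The mapping of the coded levels to the actual ranges of this study (0.5-2.5 bar, 9-16 cm, 2-10 L/min) is an assumption for illustration.

```python
from itertools import combinations

factors = ["pressure", "height", "flow"]  # coded levels: -1, 0, +1

def box_behnken(k: int, n_center: int = 3):
    """Box-Behnken design for k factors: +/-1 combinations over each factor
    pair with the remaining factors held at 0, plus center points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs += [[0] * k for _ in range(n_center)]
    return runs

design = box_behnken(3)   # 12 edge runs + 3 center points = 15 runs
for run in design:
    print(dict(zip(factors, run)))
```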
Isotherm Model
The Langmuir, Dubinin-Radushkevich, and Freundlich isotherm models were used to examine the adsorption data. The adsorption isotherms were applied at the best operating conditions obtained in this study, 2.5 bar pressure and 16 cm packing height, which gave the highest purity of oxygen (73.15 vol% of the outlet gas) and were the basis for optimizing the results toward the purity required for medical purposes [18]. These models relate the quantity of nitrogen adsorbed on the solid surface to its equilibrium concentration. Fig. 3 to Fig. 5 show the fits of the adsorption isotherm models (Freundlich, Dubinin-Radushkevich, and Langmuir), with the results in Table 3. The results in Table 3 show that all the models fit the experimental data well, but the Langmuir model has the maximum value of the correlation coefficient (R² = 0.917). This led to the conclusion that the adsorption process followed the Langmuir hypotheses of a monolayer of adsorbed molecules with no interaction forces between them [15]. The Langmuir parameters were a maximum adsorption capacity of 200 mg·g⁻¹, which indicates a large capacity for adsorption, and Kl of 0.00234 L·mg⁻¹, which indicates favorable adsorption [16]. Also, the apparent energy (E) was found to be +0.02 kJ·mol⁻¹ from the Dubinin-Radushkevich isotherm model. This low positive value agrees well with the heat of physical adsorption in the gas phase, and n equal to 1.435 from the Freundlich model points in the same direction [10,12].
Adsorption Kinetics
The kinetic data for the adsorption of nitrogen on the zeolite Li-LSX adsorbent were fitted to three fundamental kinetic models: the pseudo-first-order model, the pseudo-second-order model, and the intra-particle diffusion model. Fig. 6 through Fig. 8 show the three models, and Table 4 lists their correlation coefficients and other characteristics. The three models showed acceptable values of the correlation coefficient (R²), with that of the pseudo-first-order model (0.9834) somewhat higher than the others. The equilibrium capacity from the pseudo-first-order model, 63.79 mg·g⁻¹, was close enough to the experimental value of 58.5 mg·g⁻¹ to conclude that the pseudo-first-order model represented the kinetics of this system well. The mechanism and speed of adsorption therefore reflect a nonlinear interaction between nitrogen gas molecules and the Li-LSX zeolite.
Conclusions
The equilibrium sorption was investigated in a packed bed of zeolite Li-LSX at a pressure of 2.5 bar, flow rates of 2-10 L·min⁻¹, and a packing height of 16 cm for the adsorption of N2 from air to produce medical oxygen. The adsorption isotherm models were studied to obtain the information about the adsorbent material needed to confirm its feasibility for industrial application. The sorption data were fitted to the Langmuir, Freundlich, and Dubinin-Radushkevich isotherms. The Langmuir adsorption model was the correlation that best represented the experimental data. All the models indicated that the adsorption was physical in nature and that the concept of a single adsorbed layer was applicable. The experimental data correlated well with the pseudo-first-order kinetic model, which indicated a nonlinear interaction between nitrogen molecules and the Li-LSX zeolite.
Stability of blocked replication forks in vivo
Replication of chromosomal DNA must be carried out to completion in order for a cell to proliferate. However, replication forks can stall during this process for a variety of reasons, including nucleoprotein 'roadblocks' and DNA lesions. In these circumstances the replisome copying the DNA may disengage from the chromosome to allow various repair processes to restore DNA integrity and enable replication to continue. Here, we report the in vivo stability of the replication fork when it encounters a nucleoprotein blockage in Escherichia coli. Using a site-specific and reversible protein block system in conjunction with the temperature sensitive DnaC helicase loader and DnaB replicative helicase, we monitored the disappearance of the Y-shaped DNA replication fork structures using neutral-neutral 2D agarose gels. We show the replication fork collapses within 5 min of encountering the roadblock. Therefore, the stalled replication fork does not pause at a block in a stable conformation for an extended period of time as previously postulated.
INTRODUCTION
Cell viability requires the complete and precise duplication of the entire genome in a timely manner. Replication of chromosomes can be impeded during cell growth by the presence of DNA lesions, excessive or tightly bound proteins on the DNA or unusual DNA structures that obstruct the progression of the replisome (1). If a replication fork encounters any of these roadblocks, the replisome may disengage, at least partially, from the DNA, allowing processing of the DNA into a structure that facilitates reloading of the replication proteins and restart of replication. This process may allow access of DNA repair factors, accessory helicases and homologous recombination proteins, which can repair or bypass the blocking lesions. In bacteria, the regularity with which the replication fork encounters these impediments that lead to dissociation can be inferred from the key role that the PriA protein plays in the survival of the cell (2), with the most frequent cause of dissociation thought to be nucleoprotein blocks (3). The fate of the replication proteins when encountering such impediments is uncertain; however, the replisome is thought to remain stable for an extended period of time at protein roadblocks before it is removed from the DNA (4-6). Similarly, evidence suggests replisomes that have stalled owing to head-on collisions with transcription complexes remain stable for 60 min or more (7).
Initiation of replication of the Escherichia coli chromosome occurs at a unique origin of replication, oriC (5). The initiator protein DnaA melts an AT-rich region within oriC, allowing binding of a DnaB-DnaC complex onto each of the separated DNA strands (8). DnaC is essential for the loading of the replicative helicase DnaB onto the DNA but subsequently dissociates when the primase DnaG interacts with DnaB (9,10). The hexameric DnaB encircles the DNA and separates the strands to allow synthesis of the first RNA primer, resulting in the assembly of the DNA polymerase III holoenzyme (PolIII). The core polymerases within the holoenzyme are tethered to the separated DNA strands by the β-sliding clamp, a processivity factor, and synthesise the DNA in either a continuous (leading strand) or discontinuous (lagging strand) manner. Subsequently, the circular chromosome is replicated by the two independent replisomes moving bidirectionally from oriC (11). While the lagging strand polymerase in each replisome dissociates from DNA upon completion of an Okazaki fragment (12), overall the complex remains bound to the DNA because of the long half-life of the β-sliding clamp (13) and the multiple interactions between polymerases, the clamp loader complex and the DnaB helicase. However, the evidence for the fate of the typically stable replisome upon meeting a roadblock is conflicting. Previously, it has been shown using an in vitro nucleoprotein roadblock formed from multiple copies of the lacI/lacO repressor/operator that a paused replisome has a half-life of 6 min (14). This is in line with earlier in vitro data that a stalled replisome blocked by torsional strain in the DNA has a half-life of 4 min (15). Conversely, in vivo data have suggested that a stalled replisome may be stable for hours, suggesting that in vivo external factors colocalise with a stalled replisome to prevent this rapid dissociation (4,5). Using a transcriptional repressor protein bound to an array of operator sites in the E. coli chromosome, it was seen that replication forks could be efficiently blocked throughout a cell population. When the DNA was examined, it was found that Y-shaped DNA was abundant at the array, representing a site-specific replication fork block, and the level of the Y-shaped signal remained constant over hours. Furthermore, it was found that within 5 min of addition of the gratuitous inducer for the repressor protein, the replication forks had restarted and replication had moved through the array. It was, therefore, proposed that the replisome remained bound and stable over this period, allowing for the rapid restart of replication.
Here, we have investigated the in vivo stability of the replisome at a site-specific protein roadblock created in E. coli. A temperature-sensitive allele of the dnaC gene (dnaC2) was used to prevent reloading of the replisome once dissociation occurred. A temperature-sensitive allele of the DnaB replicative helicase was also used to rapidly inactivate the replisome. The timing of DNA replication fork collapse and subsequent processing of the DNA in these mutants and in a wild-type strain was visualised by neutral-neutral 2D agarose gels. Our results show that the replication fork collapses rapidly upon encountering the roadblock, with a half-life of <5 min, suggesting the arrested replisome at a nucleoprotein roadblock in vivo is more transient than previously supposed, and is more similar to the in vitro situation.
MATERIALS AND METHODS

Cells were transformed with a plasmid (pKM1) which encodes the TetR-YFP repressor under the control of the Para promoter. To produce pKM1, the psi site from pSC101 was amplified by PCR and inserted into the HindIII restriction site of the previously published pLau53 (17).
Production of the fluorescent repressor TetR-YFP was induced by addition of 0.1% arabinose once cells reached an OD600 above 0.05. Cells were then incubated for 1 h and examined using a fluorescence microscope to confirm the extent of replication blockage throughout the population. At least 100 cells of each strain were examined and foci enumerated. Cells were then shifted to 42°C to induce replisome collapse in the temperature-sensitive strain dnaBts and to prevent new rounds of replication in the temperature-sensitive strain dnaCts. The gratuitous inducer anhydrotetracycline (AT; 100 ng ml⁻¹) was used to relieve tight repressor binding. To determine viability, a ten-fold serial dilution series was generated and 5 µl of each dilution spotted onto agar containing appropriate antibiotics and anhydrotetracycline if required. The same dilutions were spread to determine CFU ml⁻¹. All plates were grown at 30°C overnight.
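The conversion from colony counts to CFU ml⁻¹ is simple arithmetic; as a minimal sketch, the Python snippet below shows the calculation for one plate of the dilution series. The colony count and dilution used are hypothetical, purely for illustration.

```python
# Minimal sketch: converting a colony count from a ten-fold serial
# dilution series into CFU per ml. The numbers below are hypothetical.

def cfu_per_ml(colonies: int, dilution: float, volume_ml: float) -> float:
    """CFU/ml = colonies / (volume plated in ml * dilution plated)."""
    return colonies / (volume_ml * dilution)

# e.g. 42 colonies from 5 ul (0.005 ml) of the 10^-5 dilution:
print(cfu_per_ml(colonies=42, dilution=1e-5, volume_ml=0.005))  # 8.4e+08
```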
Microscopy
For microscopy, cells were transferred to a slide mounted with a 1% (w/v) agarose layer and visualised with a 100× NA 1.4 objective on a Zeiss Axioskop2 equipped with a Hamamatsu Orca-AG CCD camera. eYFP was observed through Chroma filter set 41028. The images were taken, analysed and processed with MetaMorph (Molecular Devices) and Adobe Photoshop CS6.
2D DNA gels and Southern hybridisation
Samples of cells were taken at the indicated time points, 0.1% (final) sodium azide was added and the cells were put on ice. Cells were harvested, embedded in 0.4% agarose plugs and subsequently incubated in EC lysis solution (10 mM Tris-HCl [pH 8], 1 M NaCl, 100 mM EDTA, 0.2% sodium deoxycholate, 0.5% Sarkosyl, 100 µg ml⁻¹ lysozyme, 50 µg ml⁻¹ RNase A) at 37°C for 2 h. The EC lysis solution was replaced with ESP (0.5 M EDTA, 1% Sarkosyl, 1 mg ml⁻¹ proteinase K) and incubation was continued overnight. Following extensive washing, the DNA was digested with either EcoRV for visualisation of the array region, or EcoRI for visualisation of the 4.6 kb region directly upstream of the array. 2D gel conditions were as described previously (20). DNA was subsequently transferred to Zeta-Probe nylon membranes (Bio-Rad) and detected using either radiolabelled tetO array or a PCR product amplifying the region immediately upstream of the array as probe. Blots from at least two independent experiments were analysed by phosphor imaging with a Typhoon TRIO Variable Mode Imager (Amersham Biosciences) and Adobe Photoshop CS6. Replication intermediate DNA was quantified by area and intensity using MetaMorph (Molecular Devices).
RESULTS

A protein roadblock causes replication forks to collapse
To assess the stability of the replisome on the DNA when it encounters an obstruction to replication, a system to create a protein roadblock in vivo was utilised. In a strain carrying 240 copies of the tetO sequence 15 kb counterclockwise of oriC, the arabinose-induced overproduction of TetR-YFP generates a site-specific obstruction that the replisome cannot proceed through (4). The replication blockage was confirmed using 2D gels that demonstrated a Y-shaped DNA structure resulting from replication being blocked within the first 500 bases of the array. This signal was observed to be stable over 4 h (4). Upon addition of anhydrotetracycline the replication fork blockage was released within 5 min, allowing all the blocked forks to resume replication. Based on this evidence it was previously proposed that the replisome was intact and stable over the 4 h time period.
A temperature-sensitive allele of the replicative helicase (dnaBts) was introduced into the strain carrying the operator array. Previous studies have shown that at the non-permissive temperature this allele leads to replisome collapse and subsequent fork reversal and processing (21). A strain was also made by addition of a dnaCts allele. This allele was initially identified as a 'slow-stop' mutant that is able to continue replicating at a non-permissive temperature until DnaB needs to be reloaded onto the DNA (19,22). In fact, it has been shown that strains carrying dnaCts alleles that were initially characterised as 'fast-stop' are actually able to continue replicating at non-permissive temperatures, and behave similarly to the 'slow-stop' mutants (23,24). These studies indicate DnaC is not necessary for an active replisome, and evidence suggests DnaC dissociates from the DNA once DnaB interacts with DnaG (10); indeed, active priming complexes do not contain DnaC (25) and replication has been found to proceed in vitro in the absence of DnaC (7,26,27). Therefore, in the dnaCts strain used here the replisome is able to continue ongoing replication at non-permissive temperatures if it is not otherwise impeded. However, under these conditions DnaB cannot be re-loaded once it dissociates from the DNA (22).
The three strains carrying the replication-blocking array were grown and TetR-YFP production was induced for an hour with arabinose. Upon microscopic examination, replication was deemed blocked in an average of 73% of the population, as judged by the presence of one focus per cell (Figure 1). The focus is formed by TetR-YFP binding to the tandem tetO sequences within the array. When replication is able to proceed, multiple copies of the array will exist within the cell and multiple foci will be visualised. The proportion of the population with one focus is comparable to that seen previously (5). The remaining population had two foci per cell that were well segregated, and the cells were elongated. This suggests the array had already been replicated upon induction with arabinose and the round of replication would have completed, but the cells had yet to divide. If so, then these cells would not be able to replicate in the next round (see Supplementary Figure S1 for representative images). At this stage the population was deemed to have replication sufficiently blocked to continue the analysis of the effects of the block on foci count and viability. To test whether the replication block could be reversed with the addition of anhydrotetracycline, a sample was taken 10 min after the gratuitous inducer was added and the number of cells having one focus, two foci, or more than two foci was counted. For the replisome to proceed, sufficient repressor has to have been removed from the DNA. Multiple foci within a cell signify that the array has been successfully replicated and sufficient time has passed to allow the loci to move apart, overcoming any sister chromosome cohesion that was present. The majority of cells in all three strains at 30°C in the presence of anhydrotetracycline were shown to have successfully restarted replication: >80% of cells in each of the strains showed two foci or more.
The cells that had been deemed sufficiently blocked (+ara only) were shifted to 42°C. After 30 min, the foci number within the cells was determined. The size of the population with one focus was increased in all three strains in comparison to the 30°C sample, confirming that the cells with two foci in the former population had indeed been unable to replicate in the next round. The addition of anhydrotetracycline to the cells after they had been at 42°C for 30 min (Figure 1) or 1 h (Supplementary Figure S2) only enabled the restart of replication in the wild-type strain, suggesting the replisome was no longer functional in either the dnaBts or dnaCts strains. Despite the inability of these strains to restart replication at 42°C, when the blocked cells were shifted back to permissive temperature and anhydrotetracycline added, replication was able to restart within 10 min in all three strains (Figure 1). There was a slightly increased percentage of the population with single foci, and a slight reduction in the number of cells with >2 foci, in the temperature-sensitive strains that had undergone a temperature shift in comparison to the corresponding sample that had only been grown at 30°C. This indicates that replication restart was either not able to occur as rapidly after the temperature shift, or possibly not at all, in some of the ts mutant cells.
The effect of replication blockage and restart on cell viability in these populations was also determined. Cells that had been blocked and released at 30°C, as well as those subjected to 42°C prior to release of the replication block, were spread onto arabinose-free agar and the colonies counted after incubation overnight at 30°C (Figure 2). Cells with arabinose added (+ara) had considerably decreased viability (2 to 3 orders of magnitude lower) compared to cells that either had never had arabinose added or had subsequently been treated with anhydrotetracycline (+AT), owing to the replication blockage present. The cells to which anhydrotetracycline was added showed recovery of viability that was nearly equivalent to the non-treated sample (compare −ara to +ara/AT) for all three strains (Figure 2). Cells that had undergone a temperature shift and had the block subsequently relieved did not have viability significantly different from the cells that had not been temperature shifted, suggesting that despite a larger proportion of cells still having one focus after 10 min (Figure 1), these cells were still able to restart replication, and no loss of colony-forming units occurred.
These results confirm that the reversible replication roadblock was fully functional in all three strains, and that the replisome could be inactivated by a temperature shift to 42°C in the ts strains. However, once these cells were returned to permissive temperature a full recovery of viability was observed, with the majority of cells showing replicated and segregated foci within 10 min of return to 30°C in the presence of anhydrotetracycline.
The structure of the DNA at the roadblock within these cell populations was subsequently visualised using 2D neutral-neutral gel electrophoresis and Southern hybridisation. Digestion of the DNA with EcoRV yields a 5.5 kb and a 6.7 kb fragment of the array region (Figure 3A). At 30°C, the absence of a roadblock means that replication passes through the region unimpeded, and the DNA is almost exclusively seen as linear, visualised as a distinct spot for each of the fragments (Figure 3B). The lower spot represents the 5.5 kb section of the array that is closest to the origin. The presence of arabinose results in the cells of the population becoming blocked at a similar position within the array (4). This is visualised as an elongated spot on the Y-arc. The 6.7 kb fragment remains constant as a spot corresponding to linear DNA, as the replication fork cannot progress into this fragment, whereas the intensity of the 5.5 kb spot decreases concomitantly with the increase in Y signal. Replication forks in the wild-type strain that had been transferred to growth at 42°C for 30 or 60 min remained blocked at approximately the same proportion at both time points, as shown by the remaining signal on the Y-arc. The DNA signals for the blocked Y and the linear spots were quantified, and the proportion of the signal contained in the Y-shaped structure was calculated (Figure 3C). There is no significant difference between the proportions of Y-signal at the different time points. Therefore, in this strain the temperature shift to 42°C did not appear to affect replication fork stability.
When the same analysis was carried out for a strain carrying dnaBts, the prolonged blocked structure was not seen at 42°C (Figure 3B). The elongated spot of Y-shaped DNA indicative of a replication fork blockage is present prominently at 30°C (∼60% of the DNA is in the Y-shaped signal), but disappears within 30 min of the shift to 42°C. This suggests that the forked DNA structure is being processed in some way that leads to the Y-signal being converted back to a linear signal. One possible processing event that could be occurring is replication fork reversal (RFR), which migrates the branch point out of the restriction fragment being examined, leaving only linear DNA upon restriction enzyme digestion (Figure 3A). Presumably DnaBts dissociates from the DNA upon shift to non-permissive temperature and the other replisome components may also disengage as a consequence. This leaves the Y-shaped DNA open to processing by other enzymes, leading to loss of that signal (RFR or nuclease digestion). The absence of the Y-arc signal at 30 min indicates this happens in all blocked cells within the population within that timeframe.
DnaC does not associate with the replicating replisome (28) and, therefore, its deactivation at 42°C in the dnaCts strain should not cause replisome dissociation. However, replication forks that do collapse in this strain should not be able to re-load the DnaB helicase at the non-permissive temperature. Furthermore, new rounds of replication from oriC should not be able to initiate due to the lack of functional DnaC. Consequently, this variant gives an indication of the stability of replisomes that run into the block in an otherwise wild-type strain. It has previously been assumed that the stalled replisome remains associated with the DNA at this type of impediment over the course of several hours (4). Although the dnaCts strain produced a level of replication blockage equivalent to the wild type at 30°C (∼68% Y-shaped DNA) (Figure 3B), the Y-shaped structures were seen to disappear at the non-permissive temperature within 30 min.
Replication fork collapse leads to replication fork reversal
To address whether RFR was occurring when the temperature-sensitive mutants were shifted to non-permissive temperature, the structure of the DNA upstream of the array region was visualised. Duplicate samples of those analysed in Figure 3B were digested with EcoRI and the DNA subsequently analysed by 2D gel electrophoresis and Southern blot (Figure 4). This digest yields a 4.6 kb fragment, 0.9 kb upstream of the array (Figure 4). The EcoRI site closest to the array is 300 bp downstream of the EcoRV fragment, and this overlap of the fragments ensures that all DNA directly upstream of the array is visualised over the two blots. In the unblocked (−ara) samples, only linear DNA was seen for all three strains. In the blocked (+ara) samples of all three strains, a Y-arc is visualised along with an adjacent cone signal/spike. This signal is indicative of Holliday junction (HJ) formation (29), the expected outcome of RFR; the fork has regressed towards oriC and the nascent DNA strands have annealed to form the four-arm HJ. The presence of the Y-arc could be due to degradation of the fourth arm of this HJ by RecBCD to reform a Y-shaped DNA structure, or to replication that has restarted, with forks proceeding through the region. The HJ signal is present at times where the replication fork block has been established (compare Figures 3B and 4), indicating that the nucleoprotein block causes Holliday junction formation upstream. The Y-arc and HJ signals are also present in the wild-type samples taken after 30 min and 60 min at 42°C; either HJs are formed and not processed, or the signal represents a steady state of turnover and re-formation of HJs. Faint cone signals adjacent to the Y-arc are visible at 30 min at non-permissive temperature in both the DnaBts and DnaCts mutants; a faint signal is also visible at 60 min in the DnaCts mutant. This low signal correlates with the weak blocked signal seen in Figure 3B for these samples, indicating the HJ is directly related to the formation, and processing, of the blocked signal. Therefore, the disappearance of the Y-signal (Figure 3B) reflects replication fork processing, and the HJ signal is evidence for RFR occurring. The substantial disappearance of the Y-signal in the DnaCts strain suggests that the replisome collapses in this strain within 30 min, allowing processing of the forked DNA despite all the replisome components at the fork being wild type.
Taken together, the data show that the dnaBts and dnaCts strains have their replication forks blocked by the protein-DNA Fluorescent Repressor Operator System (FROS) array and that the shift to non-permissive temperature leads to processing of the fork, which may be accompanied by dissociation of some or all of the replisome. However, upon return to permissive temperature, replication is able to restart throughout the population within 10 min, and viability is not affected. Replication fork collapse, processing and restart must be occurring very efficiently in these cells. Replication fork processing by RFR appears to be a major pathway, although this does not rule out that other processing is also occurring.
The replication fork collapses at a similar rate with a wild-type replisome as with a temperature-sensitive one
To further determine the time it takes for a replication fork to collapse, the wild-type replisome in the dnaCts strain was compared, over a shorter time frame, to one that is synthetically forced to dissociate in the dnaBts strain. Cells were grown at 30°C, transferred to 42°C and samples taken at the indicated time points for analysis by 2D gel electrophoresis (Figure 5A). Within 10 min of the shift to non-permissive temperature, only 14% of the DNA in the dnaCts variant remained at the blocked signal. In comparison, the wild-type strain had 44% and the DnaBts strain had 4% (Figure 5B). This suggests that the dnaBts mutation does rapidly lead to the replication fork being processed upon inactivation by exposure to non-permissive temperature. Whilst nothing is known as to the state of the replisome in these cells, it is a reasonable assumption that the inactivation of DnaB might lead to partial or complete dissociation of the replisome from the DNA. When strains carrying the dnaB8 allele are shifted to non-permissive temperature, DNA synthesis ceases (19,30). The A130V mutant protein is presumed to undergo a conformational change in response to the temperature shift, leading to its dissociation from the DNA. As an integral component of the replisome, dissociating DnaB could cause at least some other components of the replisome to also dissociate. However, regardless of the occupancy of the replisome, the Y-shaped DNA becomes accessible to processing proteins.
It is also clear that there is considerable collapse of the replication fork DNA in the dnaCts strain at the higher temperature. Furthermore, it can be seen that the decrease in the Y-shaped signal began to occur within 5 min of the temperature shift in both dnaBts and dnaCts. In a wild-type strain, the signal corresponding to blocked forks does not dissipate during the time course (Figure 5B). If the dnaCts mutation does not affect the stability of the stalled replisome directly, then the difference between wild type and dnaCts must be due to the inability of the mutant to re-load the replisome at non-permissive temperature. This indicates that a wild-type replisome is not as stable as currently presumed; the wild-type replisome must be continually dissociating and re-associating with the DNA at the roadblock. The 'stable' Y-shaped replication block signal seen previously (Figure 5B, (4)), therefore, represents the equilibrium state of replication forks that have encountered the roadblock and not yet collapsed, together with forks which have undergone RFR, processing and then re-loading of the replisome, which then encounters the tetO roadblock again. This process must be in a fairly rapid equilibrium. This view is supported by the visualisation of HJs upstream in the wild type (Figure 4), showing that the collapse and processing of forks is occurring.
The half-life of a stalled replisome is less than 5 min
To more precisely define the time at which the replication forks collapse, the half-life of the replisome at the roadblock was determined following a shift from 30°C to 42°C for both the dnaBts and dnaCts strains. Samples of each culture were taken at 1 min intervals and examined using 2D gel electrophoresis (Figure 6). Within 3 min, more than half of the DNA that had been present in the Y-arc of both strains was seen to revert to the size of linear DNA (Figure 6C). The calculated half-lives of the replisomes in vivo from these experiments are 3.0 min and 3.1 min for dnaCts and dnaBts, respectively. These figures are likely to be slight over-estimates of the replication fork stability because there will be a small time delay for the culture to reach non-permissive temperature upon the transfer to 42°C.
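As an illustration of how such a half-life can be extracted, the sketch below fits an exponential decay to the percentage of Y-shaped DNA quantified from 2D gels at each time point; the time course values are invented for illustration and are not the measured data.

```python
# Sketch: estimate the half-life of the blocked-fork signal by fitting
# an exponential decay to the quantified percentage of Y-shaped DNA.
# The time course below is hypothetical.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 1, 2, 3, 4, 5, 10], dtype=float)       # min after shift to 42 C
y = np.array([60, 48, 38, 29, 23, 18, 6], dtype=float)  # % Y-shaped DNA

def decay(t, y0, k):
    return y0 * np.exp(-k * t)

(y0, k), _ = curve_fit(decay, t, y, p0=(y.max(), 0.2))
print(f"half-life ~ {np.log(2) / k:.1f} min")            # ~3 min for these values
```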
The dnaCts mutation does not affect the ability of a replisome to function
A further experiment was carried out as a control to directly determine whether the dnaCts mutation affected the ability of a replisome to function at non-permissive temperature within this 5 min time period. Cells of all three strains had their replication blocked at the array as above. Each strain was then shifted to 42°C for 2 min, and anhydrotetracycline was added to release the replication block. The cells were kept at the non-permissive temperature for a further 10 min to allow replisomes to continue through the array if they were functional. Cells were then examined under the fluorescence microscope and the percentage that had managed to duplicate the array was determined (Supplementary Figure S3). At this time, in the wild-type strain 81% of cells showed duplication of the array (two or more foci per cell), compared to 61% in the dnaCts strain and only 17% in the dnaBts strain. The percentage of cells able to restart in the dnaCts strain agrees well with the proportion of Y-shaped DNA that was seen to remain (∼60%) after 2 min at 42°C (Figure 6). If, instead, each strain was shifted back to 30°C at the point of addition of anhydrotetracycline, then they all displayed over 80% of cells with ≥2 foci after 10 min. This confirms that the inactivation of DnaCts does not prevent existing replisomes from functioning, but inactivation of DnaBts does. It also suggests that the Y-shaped DNA seen in 2D gels has a functional replisome associated with it.
DISCUSSION
This study has determined the stability of a replication fork in vivo that has stalled because of a nucleoprotein block formed by an array of tetracycline repressor-operator complexes. This FROS system is able to cause a replisome to stall at a known location on the chromosome, and the replication status of the array can then be determined visually using fluorescence microscopy and verified with neutral-neutral 2D agarose gels. In wild-type cells the blocked replication forks appear stable, as judged by a relatively constant level of Y-shaped replication forks present at the block over time. However, when a mutant in the replicative helicase loader protein, DnaCts, is introduced, the blocked signal appears stable over time at permissive temperature, but when DnaCts is inactivated at 42°C the Y-shaped DNA rapidly disappears. In the absence of DnaC, the helicase DnaB cannot be re-loaded onto the DNA if it dissociates, and DnaB is a key protein in replisome assembly both at the replication origin and when re-loading the replisome by PriA/PriC away from the origin (31). It is not thought that DnaC itself is present at the replisome, and so inactivation of the protein should not lead to changes in replisome stability or activity (Supplementary Figure S3). Therefore, the loss of the Y-shaped DNA replication fork must be due to its natural collapse over time and the failure to then re-load or re-activate the replisome without the activity of DnaC. We can estimate the half-life of the blocked replication fork to be around 3 min, which agrees well with earlier in vitro studies showing that the E. coli replisome has a half-life of ∼5 min when it encounters a nucleoprotein block (14). It is also in agreement with the half-life observed for in vitro reconstituted replisomes during rolling circle replication, which show a mean processivity of ∼85 kb and a speed of 535 bp/s; this means the average time an elongating replisome spends on DNA is around 2 min 40 s (32).
Using the FROS array in combination with the dnaBts allele, we can stop replication at a known position on the chromosome in a population of cells and then, by temperature shift, cause the replisome to rapidly dissociate from the DNA. This could prove to be a highly useful tool for future studies on replication fork collapse and RFR, and on the proteins and pathways involved in RFR and the subsequent processing and reloading of the replisome.
What actually happens to the replisome when it is stalled, and does it dissociate when the replication fork is processed? Previous studies using a fluorescent fusion of DnaQ have revealed that around 80% of cells show co-localisation of the replisome with the fluorescent repressor array. We have shown that 46-68% of the DNA at the array can be detected as being Y-shaped, depending upon the strain and conditions used (Figure 3B). Further, upon addition of anhydrotetracycline, the blocked array is rapidly replicated. This suggests that the majority of the Y-shaped DNA signal at the block is either associated with a functional replisome that is paused and able to resume replicating once the protein roadblock is removed, or, upon removal of the block, replisome reloading occurs rapidly to allow replication to resume. The re-activation of the replisome seen after 2 min at 42°C in a dnaCts mutant argues that the majority of the Y-shaped fork is indeed associated with a stalled, but otherwise intact, replisome (Supplementary Figure S3). When the replisome is inactivated using a dnaBts allele, the Y-shaped arc rapidly disappears, and we believe that this is mostly due to RFR, given the prominent HJ signal seen upstream of the blocking array. The notion that the replication machinery has dissociated from the DNA in the strains carrying dnaBts or dnaCts alleles is supported by the observation that the addition of anhydrotetracycline to the cells after prolonged incubation at 42°C does not result in duplication of the YFP focus (Figure 1; Supplementary Figure S1). At 30°C, multiple foci are observed in these strains because anhydrotetracycline causes TetR-YFP to be released from the array, enabling the fork to proceed through the former blockage and duplicate the array. In a wild-type cell, this duplication is able to occur at either 30°C or 42°C. If the replisome were still present in the dnaCts cells at 42°C, then it should be able to continue replicating without requiring re-loading by DnaC. This is what is seen at short times of incubation at non-permissive temperature (Supplementary Figure S3), but with prolonged exposure at the higher temperature this does not occur, and we infer that the replisome has left the DNA. We can be confident that the requirement for both DnaB and DnaC to be functional for restart to occur following RFR means that the DnaB helicase must be actively re-loaded, probably via PriA. Whether the entire replisome completely dissociates or not is unknown, and it is possible that some replisome components remain associated with the DNA whilst others (including DnaB in these experiments) dissociate. However, the complete removal of the replisome would allow unfettered access to the DNA for the subsequent repair processes and is an attractive model. Once the repair proteins have dissociated, PriA- or PriC-dependent reloading of DnaB will occur; DnaB serves as a key anchor for recruitment of the remaining replication proteins, allowing the functional replisome to be reconstituted.
Using the same methodology as employed in this study, it has previously been observed that the signal representing the stalled replication fork is stable for an extended period of time (4 h) in vivo (4). Given the current results, and the absence of any inhibition of replisome reloading in that earlier work, it is almost certain that the prolonged signal obtained was an equilibrium view resulting from the turnover of forks; it is now known that the replication forks collapse within a short timeframe (<5 min), and, therefore, the constant collapse, reloading of the replisome and reformation of the fork would not have been discernible with the methodology previously used (4).
The turnover of stalled replication forks is likely to also have had an effect on our results. We visually determined the replication status of the cells using fluorescence microscopy prior to inducing replication fork collapse at non-permissive temperature (Figure 1). We found 73% of cells had one focus, implying replication fork arrest, and the remaining population had two distinct foci that were well separated, suggesting replication blockage had occurred since the array was last duplicated. We would, therefore, expect between 73% and 100% of the DNA at the array to be Y-shaped when analysed by 2D gels. Instead, the results of the 2D gels indicated that ∼50% of the 5.5 kb DNA was linear before shifting to non-permissive temperature. This lower than expected Y-signal in the 2D gels is likely due to a combination of effects: some Y-shaped forks may have fallen apart during DNA extraction, whilst RFR would also convert some of these forks into HJs, which could then migrate outside the region being probed. These would then appear as linear DNA in the array region and as HJs in the upstream region. It is also possible that some replication forks were able to proceed through the array but cohesion resulted in a single focus, or that some cells in the population may not have been undergoing replication at all at the time of sampling, but we believe that these would represent a minor sub-population.
Fluorescently tagged replication proteins have been shown to colocalise at positions of nucleoprotein block (4,5). A persistent (4 h) colocalisation of SSB at repressor-induced stalled forks has been observed (4), and it must now be presumed that either SSB is staying associated with the DNA or it is in a steady state of association/dissociation with the DNA. The colocalisation of DnaQ (the ε subunit of PolIII) at a replication fork blockage (5) suggests the replisome is present at the blockage. However, 19% of those cells were not found to have DnaQ colocalised. This was reasoned to be owing to the cells being in the G1 cell cycle stage and therefore having an inactive replisome. Given the current data, we propose that at least some of that 19% of the population had undergone replisome dissociation and replication fork processing at the time of imaging. If DnaQ and SSB had been dissociating and re-associating, this would not have been discernible with the methodology used in these studies.
From our work presented here, we conclude that the half-life of a stalled replication fork is ∼3 min. The half-lives that we have obtained may vary somewhat from other studies because of the exact experimental conditions. Overproduction of TetR-YFP to obtain the roadblock occurs at 30°C and the determination of the timed collapse occurs by shifting the culture to 42°C. Activity of the proteins may, therefore, vary from what is seen in other studies, where incubation of cells is often at 37°C. Nonetheless, the half-life that we have obtained is in line with previous work that obtained a half-life of 6 min for a replication fork at a nucleoprotein blockage in vitro and a half-life of 4 min for replication forks blocked by accumulation of torsional strain in the DNA (14,15). It is also in line with the calculated half-life of extending replisomes in vitro (32), which suggests that perhaps blockage of the replisome does not alter the rate at which the replisome falls off DNA. The authors of the earlier in vitro work on replication blockage by DNA-bound proteins suggested that either stabilising factors were present in vivo or, alternatively, the replisome was being continually reloaded once it had dissociated, as a way of reconciling the short half-life with the evidence at the time of a stable replisome in vivo (4,14). The consistency of the half-lives obtained in vitro and in the current study in vivo suggests that neither stabilising factors nor external factors that assist in the disengagement of the replisome components are present in vivo. This implies that the rate of collapse is inherent to the stalled fork, and indeed to some essential component(s) of the replisome, and that the replisome may undergo repeated rounds of re-loading to produce the apparently stable structures seen in a wild-type strain. It also implies that replication forks will very seldom manage to replicate an entire chromosome without the need for re-loading.
The current understanding of DNA replication has evolved to view the replisome as a dynamic structure with dissociation of subunits during extension (33-36). In particular, the core polymerase may dissociate and be replaced with another PolIII or, if required, either PolII or PolIV (37). Furthermore, the lagging strand core polymerase dissociates from the DNA on completing the synthesis of an Okazaki fragment. However, although it has dissociated from the DNA, it does not necessarily dissociate from the replisome complex (27). On completion of an Okazaki fragment, the clamp loader loads a new β clamp onto the RNA primer to enable synthesis of the next fragment. Unlike the core polymerase, this clamp is highly stable, with a half-life at 37°C of ∼1 h (38). Therefore, it remains possible that the short half-life of the replication fork observed here may be limited to a replisome encountering a roadblock; a blocked replisome may dissociate more readily than an actively elongating one in vivo. In our model of events, the replication fork stalls because the combination of the DnaB replicative helicase and the accessory helicases Rep and UvrD is unable to dissociate the upstream proteins. The replisome subsequently dissociates and RFR takes place to allow for processing and subsequent replisome reloading. The trigger for the entire replisome to dissociate is not yet known and may be innate to the DNA-replisome complex itself; nor is it known whether some subunits remain associated with the DNA.
One prediction of our model is that mutants that cannot reload the replisome following replication fork collapse should show the same fork instability (rapid loss of Y-signals in 2D gels) as observed with inactivation of DnaCts. The replication forks will fall apart with the described half-life, and the absence of re-loading means that replisomes would not be replaced; the equilibrium seen in wild-type cells is a balance between fork collapse and re-loading. However, it has not been possible to test replication restart mutants with our current system due to their severe viability defects. priA and dnaT mutants, part of the major restart pathway in E. coli, are sensitive to rich media, constitutively activate the SOS response, are sensitive to UV, and show poor viability and small colony size (39,40). Mutants in either priB or priC show almost no phenotype individually due to redundancy in their functions, whereas the double priBC mutant shows even more severe growth and viability defects than priA (39). Furthermore, the priAC double mutant is lethal. These phenotypes reflect the vital role that replication restart plays in bacteria, consistent with a replisome that has a half-life significantly shorter than the time required to completely replicate a chromosome.
The relatively low stability of a stalled E. coli replisome described here is in stark contrast to that of a stalled eukaryotic replisome. The previously held conclusion that the prokaryotic replisome was stable when a replication fork met a nucleoprotein blockage (4) was in part influenced by the evidence in eukaryotes, where the replisome remains intact and associated with the fork at the site of the blockage (41). When the replisome stalls, Mec1/ATR is recruited to the fork by an interaction with single-stranded DNA (42). Checkpoint mediator protein complexes involving Mrc1 and Tof1 are subsequently phosphorylated and Rad53 is activated (43). The activation of the checkpoint proteins inhibits late firing of origins, preventing further replication from initiating (42). Subsequent work has found that in addition to the prevention of new replication forks from being formed, individual forks that are currently replicating may also be slowed (44). Previously, the stability of the eukaryotic replisome was thought to be dependent upon checkpoint proteins that are absent in prokaryotes (41), but it has since been shown that the replisome remains intact at the fork under hydroxyurea-induced replication stress even in the absence of ATR/Rad53 proteins (44). The repair of eukaryotic DNA following replication fork stalling, including RFR, seemingly takes place with the replisome intact (45-47). A system analogous to bacterial PriA has not yet been found in eukaryotes and, therefore, if the usually stable eukaryotic replisome does dissociate from the DNA, the DnaB homolog, CMG, cannot be reloaded (48). Rather, a fork from another origin of replication will replicate the DNA to completion. The cause of the difference in stability between the prokaryotic and eukaryotic systems is still unknown, but the prokaryotic replisome may simply be an innately more dynamic complex than the larger eukaryotic version.
Nucleoprotein blockages such as those studied here are thought to be the major contributor to replisome stalling (3). However, other types of replication blockages, such as UV lesions, can also cause replication fork stalling or collapse. DnaB appears stably bound to the DNA on encountering a lesion following UV irradiation, while the polymerase subunits dissociate to allow for processing (49). However, similar to our findings with a nucleoprotein blockage, DnaC has been shown to be required for replication restart following UV irradiation, suggesting DnaB does at some point disengage from the DNA after the encounter with the lesion (24). It is uncertain what has caused the variation in these findings, but it does highlight that differences in replication blockages may lead to repair pathways distinct from our model. On encountering a UV-induced lesion, multiple repair pathways have been proposed. The replisome can bypass the lesion and reinitiate downstream, either with or without replisome reloading (50,51). Alternatively, the replisome may dissociate to allow for DNA processing, including RFR, to remove the source of the blockage (reviewed in (52)), and the extent of DNA damage may contribute to the pathway that is utilised.
This study highlights the speed with which a replication fork is processed following stalling at a replication block. Such blocks are predicted to be the most common source of impediment the replisome is likely to encounter (3). While further investigation is required to determine the precise extent of replisome dissociation, these results highlight the importance, and the frequency of utilisation, of the pathways that process these stalled forks and reload the replisome to enable the continuation of replication.
Figure 1. Proportions of cells containing single or multiple foci, representing the tetO array, following overproduction of TetR-YFP. Left-most set of graphs: (A) wild-type, (B) DnaBts or (C) DnaCts cells were grown at 30°C in the presence of 0.1% arabinose (ara) for 1 h (dark grey bars) and then anhydrotetracycline was added for 10 min to a subpopulation to release the replication block (light grey bars). Middle set of graphs: cells that had been treated only with arabinose at 30°C were shifted to 42°C (a non-permissive temperature for DnaBts and DnaCts) for 30 min (dark grey bars) and then anhydrotetracycline was added for 10 min to a subpopulation (light grey bars). Right-most set of graphs: the arabinose-only treated cells (42°C, 30 min) were shifted back to permissive temperature (dark grey bars) and anhydrotetracycline was added for 10 min to a subpopulation (light grey bars). See supplementary material for representative micrographs.
Figure 2. Viability following creation of a replication roadblock. Wild-type, DnaBts or DnaCts cells were grown at 30°C in the absence or presence of 0.1% arabinose (1 h) to induce replication blockage. Subpopulations of the blocked (+ara) cells were either incubated for 10 min in the presence of anhydrotetracycline (AT) or shifted to 42°C (a non-permissive temperature for DnaBts and DnaCts) for 1 h before also being incubated with anhydrotetracycline for 10 min. Cells were serially diluted 10-fold and either spotted or spread onto agar containing ampicillin only (−/+ara samples) or ampicillin with anhydrotetracycline (+AT samples) and grown at 30°C to determine cell viability. Top: representative plates showing colonies at indicated dilutions. Bottom: graphs showing the average results ± SEM.
Figure 3. Visualisation of DNA replication fork collapse. (A) Schematics of the EcoRV digest of the array region and the subsequent signals visualised by Southern hybridisation with a radioactive probe to the array. Replication forks entering the array from the origin become blocked within the 5.5 kb fragment. Cells with a replication block at this position will have the signal corresponding to the 5.5 kb fragment located on the Y-arc. Restriction sites are indicated with arrows. (B) 2D gel analysis of EcoRV-digested DNA following replication block (+ara) and subsequent shift to 42°C, a non-permissive temperature for DnaBts and DnaCts. (C) Percentage of 5.5 kb DNA within the blot located in the Y-arc of the wild-type strain (error bars are SEM from three independent experiments).
Figure 4. Holliday junctions are seen upstream of the tetO array. 2D gel analysis of EcoRI-digested DNA immediately upstream of the array following replication block (+ara) and subsequent shift to 42°C. Schematics show the EcoRI sites 0.9 kb and 5.5 kb upstream of the array region (indicated by arrows) and the subsequent signals observed by Southern hybridisation with a radioactive probe within the 4.6 kb fragment. Holliday junction (HJ) formation is visualised as a cone signal at the top of the Y-arc and a spike from the linear DNA at the end of the Y-arc.
Figure 5. Replication fork stability at a replication roadblock. (A) 2D gel analysis of EcoRV-digested DNA following replication block (+ara) and subsequent shift to 42°C, a non-permissive temperature for DnaBts and DnaCts. (B) Percentage of 5.5 kb DNA within the blot located in the Y-arc.
Figure 6. The half-life of a replisome at a nucleoprotein roadblock. (A) 2D gel analysis of EcoRV-digested DNA following replication block (+ara) and subsequent shift to 42°C, a non-permissive temperature for DnaBts and DnaCts. (B) Percentage of 5.5 kb DNA within the blot located in the Y-arc. (C) Percentage of 5.5 kb Y-shaped DNA plotted over time with an exponential decay curve fitted.
A comparison of functional summary statistics to detect anisotropy of three-dimensional point patterns
The growing availability of three-dimensional point process data calls for the development of suitable analysis techniques. In this paper, we focus on two recently developed summary statistics, the conical and the cylindrical $K$-function, which may be used to detect anisotropies in 3D point patterns. We give some recommendations on choosing their arguments and investigate their ability to detect two special types of anisotropy. Finally, both functions are compared on some real data sets from neuroscience and glaciology.
Introduction
In some situations, the spatial correlation between the points in a point pattern is not only a function of the distances between the points, but also of the direction of the vector connecting them. Classical functional summary statistics such as Ripley's K-function or the nearest neighbor distance distribution function fail to detect such anisotropies. Hence, there is some interest in developing methods which allow for the detection and characterization of the degree of anisotropy in a spatial point pattern.
In the literature, the case of two-dimensional point patterns has been at the focus of interest up to now. Various approaches for anisotropy analysis have been introduced, including spectral methods (Bartlett, 1964; Mugglestone and Renshaw, 1996; Renshaw, 2002), wavelet transformations (D'Ercole and Mateu, 2013a,b, 2014), and an anisotropy test based on the asymptotic joint normality of the sample second-order intensity function (Guan et al., 2006). In addition, directional versions of functional summary statistics have been introduced in Ohser and Stoyan (1981), Stoyan and Beneš (1991), Stoyan (1991), and Stoyan and Stoyan (1995). Moreover, Møller and Toftaker (2014) introduced geometric anisotropic pair correlation functions.
At least some of these methods may be transferred to the three-dimensional case in theory. In practice, however, their application might be hampered, e.g. by problems in finding a suitable partition of the unit sphere. Furthermore, the visualization and verification of results is more challenging. Recently, two directional counterparts of Ripley's K-function for the analysis of three-dimensional point patterns were introduced. The common idea of both approaches is to replace the ball used in the definition of the K-function by a structuring element which is sensitive to direction. The motivation of the work in Redenbach et al. (2009) was to detect anisotropy introduced by the compression of a regular point pattern. For this purpose, the mean numbers of points contained in cones centered in the typical point and pointing to different directions were investigated. The motivating data sets were point patterns of bubble centers extracted from tomographic images of polar ice cores. In contrast, Møller et al. (2015) studied some data from neuroscience where the points are believed to be organized in linear columns. Hence, they decided to use a cylinder instead of a cone.
These examples illustrate how the development of methods may be triggered by the particular shape of anisotropy that should be detected. In the current paper, we want to investigate the generality of the two directional versions of the K-function. For this purpose, we will apply them to both real and simulated point patterns with different sources and various degrees of anisotropy.
In Section 2 we introduce the data sets used throughout the paper. Section 3 defines the conical and cylindrical K-functions. Based on a non-parametric isotropy test, some simulation-based recommendations on the parametrization of these summary statistics are given in Section 4. Finally, in Section 5, we apply these recommendations when comparing the functions in detecting the anisotropy in real pyramidal cell and ice data sets as well as in realizations of models mimicking the structure of these data.
Data sets
In this section, we introduce the data sets used for the subsequent analyses. We start by presenting the real data studied in Møller et al. (2015) and Redenbach et al. (2009), which provided the motivation for the development of the two versions of the directional K-function. To allow for an investigation of the performance of the methods under varying degrees of anisotropy, our analysis is extended to simulated data sets. The models are chosen to reproduce the type of anisotropy present in the neuroscience data and the ice data, respectively.
Pyramidal cell point patterns
The first set of data consists of four samples containing the locations of the pyramidal cells from the Brodmann area 4 of the gray matter of the human brain, collected by the Center for Stochastic Geometry and Advanced Bioimaging, Denmark. According to the minicolumn hypothesis in neuroscience (see e.g. Mountcastle (1957) and Rafati et al. (2015)), the point patterns are expected to be anisotropic due to the linear arrangement of the cells in a direction perpendicular to the pial surface of the brain, i.e., the xy-plane here. For more details on these data sets, see Rafati et al. (2015). A visualization of one sample is shown in Figure 1.
Ice data
The second set of data consists of a subset of the samples investigated in Redenbach et al. (2009). The point patterns consist of the center locations of air bubbles extracted from tomographic images of the Talos Dome ice core. The data were provided by the Alfred-Wegener-Institute for Polar and Marine Research, Bremerhaven, Germany. Details on the acquisition and the processing of the data can be found in Redenbach et al. (2009). Here, we consider 14 samples taken from a depth of 505 m where the anisotropy is most prominent. The point patterns can be interpreted as realizations of a regular point process. Anisotropy is introduced by a compression of the point pattern along the z-axis. Due to the location of the drilling site for this ice core, isotropy within the xy-plane can be assumed. A visualization of one sample is shown in Figure 1.

Figure 1: From left to right: a sample of the pyramidal data sets within an observation window of size 508 × 140 × 320 µm³, a sample of the ice data within an observation window of size 11.68 × 11.92 × 13.81 mm³, a realization of the PLCPP model for ρ = 500, σ = 0.001, ρ_L = 200, α = 2.5, and the compressed center locations of a random ball packing for ρ = 500, R = 0.05, c = 0.7.
Poisson line cluster point processes
Motivated by the pyramidal cell data, a Cox process model called the Poisson line cluster point process (PLCPP) for anisotropic spatial point processes was developed in Møller et al. (2015). The anisotropy of the realizations of this model is caused by the linear arrangement of the points. The construction starts with an anisotropic Poisson line process with intensity ρ_L and a given directional distribution of lines. On each line l_i contained in this process, a homogeneous Poisson process Y_i with intensity α is independently generated. Finally, the points of the Y_i are displaced in a plane orthogonal to l_i by, e.g., a zero-mean normal distribution with standard deviation σ, yielding independent Poisson processes X_i whose superposition forms the PLCPP model X. The parameter σ controls the distances between the points and the lines. The intensity of X, i.e. the parameter ρ, is equal to the product of the intensity ρ_L of the Poisson line process and the intensity α of the Poisson processes Y_i on the lines.
Our investigations are based on PLCPP models with intensity ρ = 500, α = 2.5, and ρ L = 200, where the lines are parallel to the z-axis. We consider a high (σ = 0.001), medium (σ = 0.01), low (σ = 0.02), and very low (σ = 0.04) degree of linearity. Figure 1 shows a realization of a PLCPP model with a high degree of linearity. For the simulation study reported in the following, m = 1000 realizations were generated for each set of parameters.
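As a concrete illustration, the sketch below (our own implementation, not the authors' code) simulates one realization of this special case in the unit cube, with all lines parallel to the z-axis; edge effects from displaced points leaving the cube are ignored here for simplicity.

```python
# Sketch: simulate a PLCPP realization in the unit cube with lines
# parallel to the z-axis (the setting used in the text).
import numpy as np

rng = np.random.default_rng(1)

def simulate_plcpp(rho_L=200, alpha=2.5, sigma=0.01):
    points = []
    n_lines = rng.poisson(rho_L)                 # vertical lines hitting [0,1]^2
    xy = rng.uniform(0, 1, size=(n_lines, 2))    # line positions
    for cx, cy in xy:
        n_pts = rng.poisson(alpha)               # Poisson points on the segment
        z = rng.uniform(0, 1, size=n_pts)
        dx = rng.normal(0, sigma, size=n_pts)    # displacement orthogonal to line
        dy = rng.normal(0, sigma, size=n_pts)
        points.extend(zip(cx + dx, cy + dy, z))
    return np.asarray(points)

X = simulate_plcpp()
print(len(X))   # around rho_L * alpha = 500 points on average
```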
Compressed regular point patterns
As discussed in Redenbach et al. (2009), the structure of the ice data can be modelled via compression of isotropic regular point processes. To represent different degrees of regularity, we consider both a Matérn hard-core process (low regularity; Illian et al., 2008, Section 6.5.2) and the center locations of balls in a dense packing simulated using the force-biased algorithm (high regularity; Illian et al., 2008, Section 6.5.5). In both cases, the intensity was chosen as ρ = 500 and the hard-core radius was R = 0.05. Anisotropy was then introduced by applying a volume-preserving linear transformation T_c = diag(1/√c, 1/√c, c), 0 < c < 1, to these isotropic regular point patterns. This implies that the data are compressed by a factor c in the z-direction while they are isotropically stretched by a factor 1/√c in the xy-plane.
As in the case of the PLCPP models, m = 1000 realizations for each model and each set of parameters were generated within the unit cube. Different degrees of compression were realized by choosing c = 0.7, 0.8, and 0.9. Figure 1 shows a realization of a point pattern obtained from a ball packing compressed by a factor c = 0.7.
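A minimal sketch of this construction is given below (our own implementation; the primary intensity of the Matérn type II simulation is illustrative and would in practice be tuned so that the retained pattern has intensity ρ = 500).

```python
# Sketch: Matern type II hard-core pattern in the unit cube, then the
# volume-preserving compression T_c = diag(1/sqrt(c), 1/sqrt(c), c).
import numpy as np

rng = np.random.default_rng(2)

def matern_II(rho_primary=2000, R=0.05):
    n = rng.poisson(rho_primary)
    pts = rng.uniform(0, 1, size=(n, 3))
    marks = rng.uniform(0, 1, size=n)            # independent birth times
    keep = np.ones(n, dtype=bool)
    for i in range(n):                           # delete a point if an older
        d = np.linalg.norm(pts - pts[i], axis=1) # point lies within distance R
        if np.any((d > 0) & (d < R) & (marks < marks[i])):
            keep[i] = False
    return pts[keep]

def compress(points, c=0.7):
    T = np.diag([1 / np.sqrt(c), 1 / np.sqrt(c), c])
    return points @ T.T

Y = compress(matern_II(), c=0.7)
```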
Conical and cylindrical K-functions
Ripley's K-function is a well-known summary statistic which is defined as the mean number of further points within a circle/sphere with radius r centered in a typical point of the point pattern, divided by the intensity. Naturally, anisotropy cannot be detected using this function due to its symmetric structuring element. Redenbach et al. (2009) generalized the 2D directional K-function (see e.g. Stoyan and Stoyan (1995)) to the three-dimensional case by replacing the sector of a circle by a double cone. For a unit vector u, the conical K-function is defined as

K_u,cn(r_cn, θ) = (1/ρ) E_o^![X(C_u(r_cn, θ))],    (1)

where ρ is the intensity, E_o^! denotes the reduced Palm expectation with respect to a typical point of the point process X, and C_u(r_cn, θ) denotes a double spherical cone in the direction u with a slant height of length r_cn and an apex angle of size 2θ, centered in 0 (see Figure 2). Briefly speaking, ρ K_u,cn(r_cn, θ) is the mean number of further points within a cone x_0 + C_u(r_cn, θ) centered in a typical point x_0 of the point pattern. In Redenbach et al. (2009) the function K_u,cn was called the directional K-function. Here, we will call it the conical K-function to distinguish it from the cylindrical K-function introduced below.
Møller et al. (2015) introduced a summary statistic, called the cylindrical K-function, to detect anisotropy of point patterns with columnar structure. It is a version of the space-time K-function (Diggle et al., 1995) and is defined via

K_u,cl(r_cl, h) = (1/ρ) E_o^![X(Z_u(r_cl, h))],    (2)

where Z_u(r_cl, h) denotes a cylinder with center 0, base radius r_cl, and height 2h in the direction u (see Figure 2). Briefly speaking, ρ K_u,cl(r_cl, h) is the mean number of further points within a cylinder x_0 + Z_u(r_cl, h) centered in a typical point x_0 of the point pattern.
For more details on the cylindrical K-function, see Møller et al. (2015).
Ratio-unbiased non-parametric estimates of the functions are, respectively, given by

K̂_u,cn(r_cn, θ) = (1/ρ̂²) Σ^{≠}_{x_1, x_2 ∈ X ∩ W} 1{x_2 − x_1 ∈ C_u(r_cn, θ)} / w(x_1, x_2)

and

K̂_u,cl(r_cl, h) = (1/ρ̂²) Σ^{≠}_{x_1, x_2 ∈ X ∩ W} 1{x_2 − x_1 ∈ Z_u(r_cl, h)} / w(x_1, x_2),

where n is the number of points in the point pattern, ρ̂² = n(n − 1)/|W|² is an unbiased estimate of ρ² (see e.g. Illian et al. (2008)), and w is the translation edge correction factor defined as w(x_1, x_2) = |W ∩ W_{x_2 − x_1}|, in which W_x denotes the translation of the observation window W by the vector x (see e.g. Stoyan and Stoyan (1995)).
Both estimators can easily be evaluated using a spherical or cylindrical coordinate system, respectively. However, unlike in the two-dimensional case, it is impossible to partition the unit sphere into equally sized cones or cylinders pointing to different directions. In practice, the directional K-functions can be evaluated for a set of directions evenly distributed on the unit sphere. Approaches for deriving such sets of directions are discussed in Altendorf (2011). If possible, the choice of the number of directions should be based on prior knowledge on the main directions of anisotropy.
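To make the estimators above concrete, the following sketch (our own; it assumes a cuboidal window, for which the translation correction |W ∩ W_v| has the closed form ∏_i max(L_i − |v_i|, 0)) evaluates either directional K-function for a given direction u. All parameter values are illustrative.

```python
# Sketch: translation-corrected directional K estimators for a point
# pattern in a box [0, L1] x [0, L2] x [0, L3].
import numpy as np

def directional_K(pts, L, u, contains):
    """pts: (n, 3) array; u: unit direction; contains(v, u): indicator
    that the lag vector v lies in the structuring element along u."""
    n = len(pts)
    rho2_hat = n * (n - 1) / np.prod(L) ** 2     # unbiased estimate of rho^2
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            v = pts[j] - pts[i]
            if contains(v, u):
                w = np.prod(np.maximum(L - np.abs(v), 0.0))  # |W cap W_v|
                total += 1.0 / w
    return total / rho2_hat

def in_cylinder(v, u, r_cl=0.05, h=0.15):
    a = v @ u                                    # signed height along u
    return abs(a) <= h and np.linalg.norm(v - a * u) <= r_cl

def in_double_cone(v, u, r_cn=0.16, theta=np.pi / 8):
    d = np.linalg.norm(v)
    return 0 < d <= r_cn and abs(v @ u) >= d * np.cos(theta)

# Example: K_hat in the z-direction for a pattern X in the unit cube:
# directional_K(X, L=np.ones(3), u=np.array([0., 0., 1.]), contains=in_cylinder)
```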
While the classical summary statistics for point processes, e.g. Ripley's K-function, depend on one parameter, the summary statistics introduced above depend on two parameters, which makes the investigations more challenging. For the cone, it seems natural to fix the parameter θ in advance such that the conical K-function only depends on the parameter r_cn. In practice, θ should be chosen depending on the number of directions to be investigated and the intensity of the point pattern. In Redenbach et al. (2009) an angle of θ = π/4 was chosen when considering only coordinate directions. For larger sets of directions, θ should be reduced to avoid overlap of the cones for different directions. Additionally, the angle should be large enough to observe a reasonable number of points within the cones.
For the cylinder, the situation is more complicated as there are three ways to expand a cylinder (see Figure 3) depending on the two parameters r_cl and h. A priori, none of these methods seems more natural than the others. In Rafati et al. (2015), the height of the cylinder was fixed while expanding its radius. In the present study, we are interested in a comparison of the cylindrical and the conical K-function. Hence, the expansion scenario should be chosen such that both functions behave similarly in some sense. Two possible approaches are discussed in the following sections.
Equal volume
The first parametrization is based on the fact that sets of equal volume will contain a similar number of points. Hence, we suggest parametrizing the functions such that the volumes of the cone and the cylinder are equal. The details are as follows.
Recall that r_cn and r_cl refer to the radius of the cone (i.e. the radius of the circumscribed sphere) and the radius of the cylinder, respectively (see Figure 2). The volumes of the cylinder, of a cone with base radius r_cn sin θ and height r_cn cos θ, and of a spherical cap of height d = r_cn(1 − cos θ) are, respectively, given by

$$V_{cl} = 2\pi r_{cl}^2 h, \qquad V_{cone} = \frac{\pi}{3}\, r_{cn}^3 \sin^2\theta \cos\theta, \qquad V_{cap} = \frac{\pi d^2}{3}\,(3 r_{cn} - d).$$

Hence, the volume of the double cone (used as the structuring element of the conical K-function), consisting of two cones each closed by a spherical cap, is given by

$$V_{cn} = 2\,(V_{cone} + V_{cap}) = \frac{4\pi}{3}\, r_{cn}^3\, (1 - \cos\theta).$$

Using the above formulas, those values of r_cn, r_cl, and h satisfying

$$r_{cl}^2\, h = \frac{2}{3}\, r_{cn}^3\, (1 - \cos\theta) \qquad (3)$$

lead us to the equation V_cn = V_cl, i.e., the equality of the volumes of the structuring elements of the two functions.
Equation (3) leaves two degrees of freedom. In practice, it can be accompanied by further constraints such as the choice of an aspect ratio for the cylinder (see below).
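As a small numerical illustration of equation (3), assuming the reconstruction of the volume formulas given above, the following Python snippet solves the equal-volume constraint for r_cn given the cylinder parameters:

```python
import numpy as np

def equal_volume_r_cn(r_cl, h, theta):
    """Solve eq. (3), r_cl**2 * h = (2/3) * r_cn**3 * (1 - cos(theta)),
    for the cone radius r_cn given the cylinder parameters r_cl and h."""
    return (1.5 * r_cl**2 * h / (1.0 - np.cos(theta))) ** (1.0 / 3.0)

# Example: cylinder with r_cl = 0.01, h = 0.02, cone apex half-angle pi/8.
print(equal_volume_r_cn(0.01, 0.02, np.pi / 8))
```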
Equal shape
An alternative approach is to require that similar regions of the data are scanned in the sense that the shapes of the structuring elements are similar. This is achieved by placing the cone inside the cylinder as shown in Figure 2. In this case, the following equations hold:

$$\frac{h}{r_{cl}} = \cot(\theta) \qquad (4)$$

and

$$r_{cn}^2 = h^2 + r_{cl}^2. \qquad (5)$$
Following the recommendation given in Møller et al. (2015) on using an elongated cylinder, i.e. where h > r_cl, the right hand side of equation (4) can be considered as an aspect ratio. It is clear that when this ratio is equal to one, no anisotropy is expected to be detected by this function. Taking an aspect ratio cot(θ) = a > 1 and using equations (4) and (5) results in

$$h = a\, r_{cl} \qquad (6)$$

and

$$r_{cn} = r_{cl}\, \sqrt{a^2 + 1}, \qquad (7)$$

which provides us with an alternative relationship between the three parameters r_cn, r_cl, and h. In the following, we will use the parametrization based on equations (6) and (7).
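A minimal Python sketch of the equal-shape parametrization of equations (6) and (7):

```python
import numpy as np

def equal_shape_params(r_cl, a):
    """Map the cylinder radius r_cl and aspect ratio a = cot(theta) > 1
    to the cylinder half-height h (eq. (6)) and cone radius r_cn (eq. (7))."""
    h = a * r_cl                         # eq. (6)
    r_cn = r_cl * np.sqrt(a**2 + 1.0)    # eq. (7), from r_cn^2 = h^2 + r_cl^2
    theta = np.arctan(1.0 / a)           # apex half-angle of the inscribed cone
    return h, r_cn, theta

print(equal_shape_params(0.01, 2.0))     # a = 2 is the choice used below
```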
Isotropy test
Redenbach et al. (2009) introduced a non-parametric method to detect anisotropies in point patterns as follows. Assuming isotropy in the xy-plane and knowing that the anisotropy is directed along the z-axis, the isotropy test for m replicated point patterns is based on the statistics

$$T_{z,i} = \int_{r_1}^{r_2} \Big(\hat{S}_{z,i}(r) - \tfrac{1}{2}\big(\hat{S}_{x,i}(r) + \hat{S}_{y,i}(r)\big)\Big)^2\, dr, \qquad T_{xy,i} = \int_{r_1}^{r_2} \big(\hat{S}_{x,i}(r) - \hat{S}_{y,i}(r)\big)^2\, dr, \qquad i = 1, \ldots, m,$$

where [r_1, r_2] is a given interval, and Ŝ_x, Ŝ_y, and Ŝ_z are estimates of a summary statistic (here, either the conical or the cylindrical K-function) in the directions of the x-, y-, and z-axis, respectively. Here, r_cl or r_cn is chosen as the integration variable, while the remaining parameters h and θ are chosen by any of the approaches discussed above.
In case of isotropy, these three estimates should behave similarly, while Ŝ_z should clearly differ from Ŝ_x and Ŝ_y if the anisotropy is directed along the z-axis. Hence, the null hypothesis of isotropy is rejected at significance level α if the value of T_{z,i} corresponding to the i-th point pattern is larger than 100(1 − α)% of the estimated T_{xy,i} values. The performance of the test is evaluated via its power, estimated by the relative frequency of rejections of the null hypothesis in 1000 repetitions of the test. Note that the values of r_1 and r_2 should be chosen depending on the type of anisotropy. We fix r_1 = 0 and investigate the effect of different choices of r_2 on the power of the test (see also Redenbach et al. (2009)).
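The decision rule can be sketched as follows in Python, assuming the squared-difference form of the statistics given above and directional estimates evaluated on a common grid of r values; this is an illustration, not the original implementation.

```python
import numpy as np

def isotropy_test(S_x, S_y, S_z, r, r1, r2, alpha=0.05):
    """Isotropy test for m replicated patterns.

    S_x, S_y, S_z: (m, len(r)) arrays of directional K-function estimates
    on the grid r; anisotropy is suspected along the z-axis.
    Returns a boolean array: rejection of H0 for each pattern.
    """
    mask = (r >= r1) & (r <= r2)
    dr = np.gradient(r)[mask]                 # weights for the integral
    avg_xy = 0.5 * (S_x + S_y)
    T_z = np.sum((S_z - avg_xy)[:, mask]**2 * dr, axis=1)
    T_xy = np.sum((S_x - S_y)[:, mask]**2 * dr, axis=1)
    # reject if T_z exceeds the 100(1 - alpha)% quantile of the T_xy values
    return T_z > np.quantile(T_xy, 1.0 - alpha)
```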
When using the equations obtained in the above sections, one should also decide on an appropriate aspect ratio a. Figure 4 shows plots of the power of the isotropy test at level 5% versus the parameter r_2 for the cylindrical K-function (and the corresponding r_2 for the conical K-function obtained using (7)) for aspect ratios a from 1.5 to 3 in increments of 0.5, based on m = 1000 realizations under the PLCPP model introduced in Section 2.2.1.
The results indicate that the use of longer cylinders results in larger power of the isotropy test. This supports the recommendation given in Møller et al. (2015) on using an elongated cylinder. In each plot, the maximum power is obtained for approximately the same value of r_2, no matter which h is chosen. For higher degrees of linearity, the power of the test is generally higher and, furthermore, less sensitive to the choice of r_2.
Application
Even though the findings presented in the previous section suggest using a cylinder as long as possible, we have chosen an aspect ratio of a = 2 for the subsequent analyses, for the following reasons. When using a very long cylinder, serious edge effects may occur already for small values of r_cl, resulting in poor estimates of the cylindrical K-function. In addition, increasing the length of the cylinder would mean reducing the angle used for the cone. As already mentioned, one should make sure that the cone is not too narrow, as in this case it will contain only very few points.
Hence, in the applications we chose θ = arctan(1/2) ≈ 0.4636, which corresponds to a = 2, i.e. the case where the height of the cylinder is twice the diameter of its base. Figure 5 shows the means of the estimated conical and cylindrical K-functions for 1000 realizations of the simulated data sets introduced in Sections 2.2.1 and 2.2.2. The mean values are obtained using the ratio estimation method described in Baddeley et al. (1993). The x-axis of the plot shows the values of r_cl (the corresponding parameters r_cn and h are obtained from equations (6) and (7) to get comparable scales). With the exception of the Matérn case, where the anisotropy is only weakly pronounced, both functions are able to detect the anisotropy. However, it is not easy to see which function is more sensitive to the structure of the anisotropy. Therefore, we made a comparison based on the power of the isotropy tests as follows.
The first four panels of Figure 6 show plots of the power of the isotropy test at a 5% significance level using m = 1000 simulations under the PLCPP models with four degrees of linearity as mentioned in Section 2.2.1. The power of the test is slightly higher when using the cylindrical K-function than when using the conical one. The shape of the two curves is similar in all plots. In contrast, the last two rows of this figure show that the conical K-function is more powerful than the cylindrical one in detecting the anisotropy caused by compression of the regular point patterns when choosing r_2 close to the hard-core radius, while the cylindrical K-function is better for large r_2.

Figure 6 (caption): The powers of the isotropy test at level 5% as a function of r_2 with respect to r_cl when using the cylindrical (black) and conical (red) K-function, for the realizations of the PLCPP with σ = 0.04, 0.02, 0.01, 0.001 (first four panels from top left to bottom right), Matérn hard-core (third row), and random packing of balls (last row). The last two rows correspond to the factors c = 0.9, 0.8, 0.7, from left to right, respectively.
Additional information provided by the plots is that the power of the test attains its maximum where the whole column in the point patterns with columnar structure is captured. As an example, the fourth panel, which corresponds to realizations of a PLCPP with σ = 0.001, matches our expectation of the diameter of a cylindrical cluster of points being approximately 4σ = 0.004 (by definition of the PLCPP models). The same pattern is observed for the other three values of σ as well. In case of the regular point patterns, the maximum is obtained for r_2 close to the hard-core radius of R = 0.05, which corresponds to the findings in Redenbach et al. (2009). Figure 7 shows the estimated K-functions for samples of the pyramidal cell and ice data sets introduced in Sections 2.1.2 and 2.1.1, using the parametrization obtained in equations (6) and (7). As expected based on the power of the isotropy test, the conical K-function is more powerful than the cylindrical one in detecting the anisotropy in the ice data. On the other hand, the cylindrical K-function is stronger than the conical one in detecting the anisotropy caused by the linear arrangement of the pyramidal cells. Note that we observed the same behavior for the remaining samples.
Discussion
In this paper, we have presented a comparison of two directional versions of Ripley's K-function using a cone or a cylinder as structuring element. We derived a parametrization that makes both functions comparable. Then, both functions were applied to data sets with different sources of anisotropy. The cylindrical K-function is generally more powerful than the conical one in case of columnar anisotropy, and vice versa in case of compression. In situations where the anisotropy is clearly pronounced, it can be detected by both functions, but the cylindrical K-function is clearly more powerful than the conical one in detecting columnarity.
Our application examples show quite different model geometries: points clustered in linear patterns in the minicolumn data and compressed regular point patterns in the ice data. In order to get a comparable testing scenario, we decided to use the non-parametric setting suggested in Redenbach et al. (2009). While this approach is quite general, it requires replicated data, which are not always available in practice. Nevertheless, an investigation of plots of the directional K-functions for different directions may give an indication of existing anisotropies. In cases where a suitable model for the data is available, the test could be replaced by a model-based Monte Carlo test.
The examples given in this paper emphasize the importance of an appropriate choice of the combination of the parameters (r_cl, h) and (r_cn, θ), as well as of the integration interval in the test. An unfavorable choice may result in poor performance of the functions in detecting the anisotropy of a point pattern. In practical situations, prior information on the structure of the anisotropy, e.g. the diameter of the clusters of points in case of the pyramidal cells or the hard-core radius in the regular data, can be used to determine interesting ranges of r values.
Throughout this paper, we assume that the main anisotropy directions are known and fixed. An approach for estimating the main directions in case of the cylindrical K-function was discussed in Møller et al. (2015). A similar investigation for the ice data has been done in Rajala et al. (2016).
Linking in silico MS/MS spectra with chemistry data to improve identification of unknowns
Confident identification of unknown chemicals in high resolution mass spectrometry (HRMS) screening studies requires cohesive workflows and complementary data, tools, and software. Chemistry databases, screening libraries, and chemical metadata have become fixtures in identification workflows. To increase confidence in compound identifications, the use of structural fragmentation data collected via tandem mass spectrometry (MS/MS or MS2) is vital. However, the availability of empirically collected MS/MS data for identification of unknowns is limited. Researchers have therefore turned to in silico generation of MS/MS data for use in HRMS-based screening studies. This paper describes the generation en masse of predicted MS/MS spectra for the entirety of the US EPA’s DSSTox database using competitive fragmentation modelling and a freely available open source tool, CFM-ID. The generated dataset comprises predicted MS/MS spectra for ~700,000 structures, and mappings between predicted spectra, structures, associated substances, and chemical metadata. Together, these resources facilitate improved compound identifications in HRMS screening studies. These data are accessible via an SQL database, a comma-separated export file (.csv), and EPA’s CompTox Chemicals Dashboard.
Vendors further provide empirical spectral data for users to purchase (with matching algorithms executed within vendor software), but access and coverage remain limited 13. To address these gaps, researchers have developed in silico fragmenters and MS/MS prediction models, including MetFrag 7, CSI:FingerID 14, and CFM-ID 8, among a number of others available commercially (e.g. ACD/MS Fragmenter 15, Mass Frontier 16). Use of predicted MS/MS spectra in identification workflows has proven effective 5, but requires the incorporation of command line utilities and/or on-the-fly processing of data for single chemicals. Prediction of MS/MS spectra en masse, and mapping of pre-computed spectra to structures and metadata within chemistry databases, can enhance identification schemes and enable integration into various software systems and workflows.
The US EPA's DSSTox database is a comprehensive chemistry resource, containing more than 760,000 distinct chemical substances, associated chemical structures, and metadata 17, and serves as the underpinning for EPA's CompTox Chemicals Dashboard (https://comptox.epa.gov/dashboard) 18. Among its many functionalities, the Dashboard enables searching of masses and formulae generated from HRMS experiments. The data and algorithms associated with Dashboard searching have been shown to outperform the much larger ChemSpider database (ca. 67 million chemicals as of July 2018) using data source ranking for the identification of unknowns 6. As an example, consider a search for the formula C15H16O2, which produces a total of 263 results. Rank-ordering the results based on data source or literature reference counts brings the most likely chemical (Bisphenol A) to the top of the search results (Fig. 1).
Additional metadata are now being optimized in a combined ranking scheme to further improve identifications. To improve Dashboard capabilities that support NTA research, we are generating, storing, and mapping predicted MS/MS spectra for all structures in the database.
Herein we describe: (1) the generation and storage of predicted MS/MS spectra for all chemical structures contained within DSSTox; (2) the validation and mapping of spectra to structures and substances; and (3) the publication of the comprehensive dataset for public dissemination (including the complete SQL database and schema). MS/MS spectra were predicted using competitive fragmentation modelling (CFM) and the open command line tools developed by Allen et al. 8,19,20 and named CFM-ID (available here: http://sourceforge.net/projects/cfm-id). All remaining data are sourced from the US EPA's DSSTox database and available via the EPA's CompTox Chemicals Dashboard (https://comptox.epa.gov/dashboard). Open and accessible data, integrated and provided in this dataset, give NTA practitioners an improved means of small molecule identification when using MS/MS data from HRMS experiments.
Generation of predicted MS/MS Data.
To maximize use of predicted MS/MS data, both for our processes 13,21 and the mass spectral community at large, "MS-Ready" structures were used in the prediction model. An MS-Ready structure represents the form of a structure that would be observed via HRMS; these structures are de-salted, de-solvated, and processed such that chemical mixtures are separated 22. These structures are stored in the DSSTox database with unique chemical identifiers (DTXCIDs) and linked to unique substance identifiers (DTXSIDs) to enable use of the structures and associated substance-level metadata in HRMS applications.
MS/MS spectra were predicted using CFM-ID with pre-trained parameters as defined by CFM-ID literature and described by Allen et al. 8,19,20. All source code was downloaded from the CFM-ID SourceForge site: http://sourceforge.net/projects/cfm-id. The input data were 843,113 MS-Ready chemical structures as SMILES strings. Additional data associated with chemical structures included DTXCIDs, molecular formulas, standard InChIKeys generated using the Indigo Toolkit (http://lifescience.opensource.epam.com/indigo/), and monoisotopic masses. The obtained chemicals were saved in a local tab-separated file.
MS/MS spectra were generated for each structure in the following ionization modes: electrospray ionization in both positive and negative modes (ESI+ and ESI−, respectively) at three collision energies (Energy0, 10 eV; Energy1, 20 eV; Energy2, 40 eV), and electron impact ionization (EI). Spectra were predicted using the standard parameters provided with the software and available via the CFM-ID SourceForge site, with no limits placed on the number of MS/MS spectra calculated for a given structure.
The mass spectra calculations were performed on a large-scale Linux cluster at the US EPA National Computer Center (https://www.epa.gov/greeningepa/national-computer-center). A master shell script was used to generate over 4,000 Slurm (https://slurm.schedmd.com/) queueing system run scripts, each calculating EI, ESI+, and ESI− MS/MS spectra for 200 chemicals. A small fraction of chemicals (<700) was excluded from CFM-ID calculations due to missing data and/or structural issues expected to fail in processing (such as SMILES notations of radicals, e.g. CC(C=C)=C[Al] |^3:5|). An additional 56 chemicals failed during calculation of all three prediction types. This was believed to occur due to the structural constraints of the models and ionization types, as many of the failed chemicals were permanently charged species and metals ("Chemical Structures that failed during mass spectral prediction", data available at https://doi.org/10.23645/epacomptox.7776212.v1) 23. Mode-specific failures occurred as follows: ~1,000 chemicals failed during calculation of EI spectra, ~2,000 failed during calculation of ESI+ spectra, and ~18,000 failed during calculation of ESI− spectra. The substantially higher number of failures in ESI− mode is primarily driven by permanently charged species unlikely to ionize in negative electrospray.
For each type of mass spectra (EI, ESI+, and ESI−), the log files were merged and a Python script was used to separate the contents into a final output file (metadata followed by mass spectrum data for each chemical) and an error file (CFM-ID error messages for failed and timed-out calculations). The final output file was a .dat ASCII file for each ionization mode ("Predicted EI-MS Spectra of CompTox Chemicals Dashboard Structures", "Predicted MS/MS Spectra in ESI-positive mode of CompTox Chemicals Dashboard Structures", "Predicted MS/MS Spectra in ESI-negative mode of CompTox Chemicals Dashboard Structures", data available at https://doi.org/10.23645/epacomptox.7776212.v1) 23.
Data storage and database structure. The raw output of the predicted MS/MS data described above required parsing and manipulation in order to generate MySQL-loadable data. A Java application was developed to parse the data and generate MySQL load statements to load the database (described below). The resulting database required ~137 GB of storage and took 10 hours to load.
Mapping to chemical metadata with DSSTox and associated databases. MS-Ready structures, denoted by individual DTXCIDs, are stored in a structure relationship mapping table linking MS-Ready structures to original DSSTox structures and associated chemical substances (DTXSIDs). Chemical substances are associated with a variety of identifiers (e.g. InChI strings and keys, synonyms, database identifiers) and data (e.g. physicochemical properties, toxicity data, bioactivity data). Additional details regarding the relationship between DTXCIDs and DTXSIDs are explained in more detail elsewhere 18 .
The CompTox Chemicals Dashboard (https://comptox.epa.gov/dashboard/) enables users to search and peruse the data contained within multiple databases (see Table 2 in Williams et al. 18 for a list of all databases). Many of the data contained within these databases are of value for ranking candidate chemicals in search results, including the number of data sources associated with a chemical in PubChem (https://pubchem.ncbi.nlm.nih.gov/), the number of associated articles in PubMed (https://www.ncbi.nlm.nih.gov/pubmed/), and the number of unique consumer product categories associated with a chemical in the Chemical and Products Database (CPDat; https://www.epa.gov/chemical-research/chemical-and-products-database-cpdat) 24. As discussed above, ranking based on such metadata sources has already proven to be a valuable approach 6.
To facilitate search and identification of unknowns using HRMS data, an export file from DSSTox was generated to include all DTXCIDs used to generate MS/MS data, along with valuable metadata, described below. Access to both substance-level metadata and predicted MS/MS data is made possible through the linked DTXCID identifier and database structure.
Data Records
The data described in this work are available in three primary formats: a SQL relational database, .dat ASCII files containing all predicted spectra, and a complete export file in comma-separated format (.csv). Two types of data are presented to facilitate compound identification: predicted MS/MS spectral data and chemical metadata, described below and defined as data linked to a chemical structure. Access and use of the data are enabled by the inclusion of unique chemical identifiers (DTXCIDs) within all records to connect chemical structures to their associated data.
Spectral data. MS/MS spectra were generated for each structure in the following ionization modes: ESI+, ESI−, and EI. Each data record generated for a structure in ESI+ and ESI− contains MS/MS predictions for three collision energy levels, while each record for EI contains results from a single collision energy only. Collision energy levels predicted for ESI are as follows: Energy0 (10 eV), Energy1 (20 eV), and Energy2 (40 eV). Preceding the spectral predictions for a given structure are chemical structure metadata fields (see an example in Fig. 2).

SQL database. In addition to raw files containing the predicted MS/MS spectra, data were stored in a SQL relational database ("Database of Predicted Spectra of CompTox Chemicals Dashboard Structures", data available at https://doi.org/10.23645/epacomptox.7776212.v1) 23. Each chemical structure processed through CFM-ID resulted in MS/MS data from multiple ionization modes and collision energies. This collection of data (chemical structure, identifier, fragments and intensities) is identified as a single job.
These relationships are reflected in the Enhanced Entity Relationship (EER) Diagram (see Fig. 3) and provided as an SQL schema in a separate file ("Database Schema File of Predicted Spectra of CompTox Chemicals Dashboard Structures", data available at https://doi.org/10.23645/epacomptox.7776212.v1) 23 . The "chemical" table contains the list of all processed chemicals, denoted by a unique DTXCID. The "job" table represents the processing of a chemical for a selected spectrum and provides links into the "peak" and "fragment" tables. In addition, the "peak" table is linked to the "fragintensity" table which contains the fragment intensities and structural annotations for a given peak.
Access to the database is made available through a Python script. In addition to querying the database, the script is also capable of ranking the matched chemicals according to their cosine dot product score 25,26. Relevant information, including the mass of the parent ion, the DTXCID of the parent mass, the masses and intensities of the fragments, and the collision energy, is provided by the querying script to perform the ranking. The MySQL database is accessed through the PyMySQL module in Python. A query is constructed to combine the fragmentation information from different tables, based on an initial search of the mass of the parent ion or the chemical formula. When the mass is searched, an accuracy level (typically within 10 ppm) is provided. The query then searches for all chemicals with masses within the defined accuracy window, and the predicted fragments for all three collision energies are provided. This information is then loaded into a DataFrame using the Pandas 27 module in Python, and further calculations, including relative intensities, cosine dot product, and ranking of the matched chemicals, are performed.
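The cosine dot product scoring described here can be sketched in a few lines of Python; this is an illustrative re-implementation, and the exact fragment-matching and normalization details of the released script may differ.

```python
import numpy as np

def cosine_score(pred, obs, ppm=10.0):
    """Cosine dot product between a predicted and an observed spectrum.

    pred, obs: (k, 2) arrays of (m/z, intensity) pairs. For each observed
    fragment, the closest predicted fragment within `ppm` parts per million
    is treated as a match; unmatched fragments contribute zero.
    """
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    matched = np.zeros(len(obs))
    for j, mz in enumerate(obs[:, 0]):
        d = np.abs(pred[:, 0] - mz) / mz * 1e6     # deviations in ppm
        i = np.argmin(d)
        if d[i] <= ppm:
            matched[j] = pred[i, 1]
    denom = np.linalg.norm(pred[:, 1]) * np.linalg.norm(obs[:, 1])
    return float(matched @ obs[:, 1] / denom) if denom > 0 else 0.0
```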
Chemical metadata. Chemical metadata linked through the DTXCID are provided for all records for which predicted MS/MS spectra exist. An example of chemical metadata for a subset of structures is provided in Table 1. Metadata are provided in the "CFM-ID_metadata_DTXCID.csv" file for the following categories ("Chemical Metadata from the CompTox Chemicals Dashboard Linked to Predicted Spectra", data available at https://doi.org/10.23645/epacomptox.7776212.v1) 23:

• DTXCID: the unique DSSTox chemical identifier for the structure
• DTXSID: the unique DSSTox substance identifier
• Preferred Name
Technical Validation
The reliability and accuracy of predicted MS/MS spectra generated using CFM-ID have been reviewed and validated in multiple publications 19,20,26 and subsequent applications 5,9. Therefore, to verify the accuracy and ultimate utility of the present work, simple and small-scale comparisons were conducted between predictions generated using the CFM-ID web application (http://cfmid.wishartlab.com/) and our own implementation of the command line tools. MS/MS spectra for three randomly selected structures in all three ionization types (for a total of nine comparison points) were predicted using each method and saved as text files (Supplementary Files 1 and 2). Supplementary Files 1 and 2 present the output data copied from each source for a single collision energy for each ionization type. The CFM-ID web application truncates the number of predicted peaks in its output 19, and as such slight differences in predicted relative intensities and total number of peaks between the web application and our implementation were expected. As expected, comparison indicated exact output matching for smaller structures with fewer fragments (e.g. DTXCID107640/OC(CC(O)=O)C(O)=O) and highly similar outputs when spectra were truncated in the web application output (e.g. DTXCID00224961/NC(N)=NCCCC(NC=O)C(O)=O). In the instances where exact replication was not observed, only the relative intensities differed, and did so by ~1%. Predicted fragments in all cases had identical m/z values between the two sources, indicating agreement between our implementation and the web application output.
Chemical metadata validation results from structural curation efforts and mapping within DSSTox between structural identifiers. To certify appropriate mapping between predicted spectra, chemical structures, and selected chemical metadata, a semi-automated process is conducted to link unique chemical identifiers with curated data. Mappings between MS-Ready DTXCIDs and linked DTXSIDs are stored in a structure relationship mapping table to facilitate access to pertinent chemical metadata associated with a DTXSID. The DSSTox database structure, MS-Ready linkages, and chemistry data have been previously described and validated 18 .
Usage Notes
Predicted MS/MS data are often used by researchers to compare an unidentified chemical (observed via HRMS) to a list of potential candidate chemicals. Empirically collected MS/MS data are scored against predicted spectra of a list of candidate chemicals to identify the best match. Spectral match scores provide an important piece of confirmatory data towards ultimate compound identification. A match score can be calculated between two sets of peaks using a variety of mathematical formulas 25,26,31, any of which can be executed with simple queries of the present data. The most common use case will require a user to first query the database (or the exported file converted to a data frame, for example) based on the parent mass or molecular formula of interest (i.e. observed via HRMS experimentation). The resulting set of structures from the defined search parameters will contain predicted MS/MS data. These data must then be parsed, and the ionization mode identified (if desired), in order to match and ultimately score peaks. Here we provide the means to conduct these searches using code developed in Python, with match scores computed using the cosine dot product (https://github.com/USEPA/CFM-ID_generation_of_CompTox_Chemicals_Dashboard_Structures_Paper). The matched chemicals, along with their fragments and the corresponding intensities at specific collision energies, are fed into a Python script that matches predicted with experimental spectra. A mass accuracy window (within a few ppm) is needed to search for matches between the fragments of the two spectra. Fragments that fall within this accuracy window are considered a match and are used in the final calculation of the cosine dot product score. The calculation as implemented in our work is computed at all three predicted energy levels. The matched chemicals are then ranked based on individual energy scores or their sum, depending on the user's preference; a sketch of this candidate-ranking step is given below.
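As an illustration of this use case (column names such as `monoisotopic_mass`, `frag_mz`, and `frag_intensity` are hypothetical placeholders, not the released schema; `cosine_score` is the helper sketched earlier):

```python
import numpy as np
import pandas as pd

def rank_candidates(candidates: pd.DataFrame, obs_spectrum, parent_mass,
                    ppm=10.0):
    """Rank candidate structures against an observed MS/MS spectrum.

    candidates: one row per (DTXCID, collision energy), with list-valued
    columns of fragment m/z values and intensities (illustrative schema).
    """
    window = parent_mass * ppm / 1e6            # ppm-based parent mass window
    hits = candidates[np.abs(candidates["monoisotopic_mass"]
                             - parent_mass) <= window].copy()
    hits["score"] = [
        cosine_score(np.column_stack([row["frag_mz"], row["frag_intensity"]]),
                     obs_spectrum, ppm)
        for _, row in hits.iterrows()
    ]
    # sum the scores over the three collision energies for each structure
    return (hits.groupby("dtxcid")["score"].sum()
                .sort_values(ascending=False))
```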
Another, potentially less common, use case with these data involves a user interested in the predicted MS/MS spectra of a single structure. In this case again, a simple query of the database using structural identifiers will return the desired result. Ultimately, users will be able to conduct the aforementioned queries and calculations within a web interface via the CompTox Chemicals Dashboard. Development is in progress as of December 2018, and the prototype (with the scoring algorithm implemented in Java) enables users to input a mass or formula along with observed MS/MS data and query the database for matches. Users with experience in Python and/or with data requiring customization of the match code will find the Python code of greater value, while the Dashboard represents the most accessible means with which to access these data.
Additional chemical metadata linked via structural identifiers present more options for users to increase the certainty of identifications of unknowns. These data can be accessed directly by querying the full comma-separated export using candidate chemicals. Once retrieved, data source counts associated with candidate chemicals can be used to rank within the set: the greater the number of data sources, the more likely the chemical is to occur in a sample 6,32. Preliminary research indicates that combining data source counts contained within DSSTox with CFM-ID match scores substantially boosts the number of correct identifications of unknowns. Optimization of combined scoring metrics is under development for implementation via the Dashboard.
Code Availability
All code for predicting the MS/MS spectra, including model parameters and settings, is available via http://sourceforge.net/projects/cfm-id. Additional scripts used to implement the prediction algorithm and query the compiled database are available on GitHub (https://github.com/USEPA/CFM-ID_generation_of_CompTox_Chemicals_Dashboard_Structures_Paper).
In Vitro and in Vivo Phosphorylation of Rat Liver 3-Hydroxy-3-methylglutaryl Coenzyme A Reductase and Its Modulation by Glucagon*
Rat liver 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMG-CoA reductase, EC 1.1.1.34) has been purified to homogeneity by a rapid procedure which employs HMG-CoA affinity chromatography. The purification of HMG-CoA reductase from solubilized enzyme can be completed in less than 24 h with a yield of 35%. Isolated HMG-CoA reductase migrated as a single band on sodium dodecyl sulfate gel electrophoresis with an apparent molecular weight of 51,000 ± 1,800. Antibodies were prepared against purified HMG-CoA reductase. The in vitro phosphorylation of HMG-CoA reductase was studied utilizing a purified enzyme system containing electrophoretically homogeneous HMG-CoA reductase and reductase kinase. With this system, phosphorylated HMG-CoA reductase contained approximately 4 mol of phosphate/tetramer of 200,000 molecular weight. Purified 32P-labeled HMG-CoA reductase could be dephosphorylated with a phosphoprotein phosphatase.
To demonstrate that HMG-CoA reductase undergoes phosphorylation in vivo, rats were injected with 32P and hepatic HMG-CoA reductase isolated by immunoprecipitation with a monospecific antibody to HMG-CoA reductase and by purification of the enzyme to homogeneity. Analysis of 32P-labeled immunoprecipitates and purified HMG-CoA reductase by sodium dodecyl sulfate electrophoresis revealed a single peak of radioactivity co-migrating with purified HMG-CoA reductase, establishing that HMG-CoA reductase can undergo phosphorylation in vivo.
Glucagon administration in vivo resulted in a 10- to 12-fold increase in hepatic cyclic AMP content with no change in [32P]ATP specific activity, a 2-fold increase in 32P incorporation into HMG-CoA reductase, and a 35 to 40% decrease in enzymic activity.
The enzymic activity of microsomal reductase kinase increased 2-fold following glucagon administration. Dephosphorylation of reductase kinase was associated with loss of enzymic activity.
These combined results represent the initial demonstration of the modulation of the in vivo phosphorylation of hepatic HMG-CoA reductase and suggest that phosphorylation-dephosphorylation of HMG-CoA reductase is an important physiological mechanism for the short term regulation of cellular cholesterol biosynthesis.

* This work was presented in part at the meetings of the XIth International Congress of Biochemistry, July 8 to 13, 1979, and the Symposium on the Regulation of HMG-CoA Reductase, July 7, 1979 (Toronto, Canada). The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

‡ To whom correspondence should be addressed.
We have reported previously that the catalytic activity of rat liver 3-hydroxy-3-methylglutaryl coenzyme A reductase (HMG-CoA reductase, EC 1.1.1.34), the rate-limiting enzyme of cholesterol biosynthesis, can be modulated in vitro by a phosphorylation-dephosphorylation reaction mechanism (1). In those studies, we utilized partially purified HMG-CoA reductase and reductase kinase for analysis of the phosphorylation reaction. To demonstrate the phosphorylation of HMG-CoA reductase, partially purified enzyme was radiolabeled with 32P and the 32P-labeled HMG-CoA reductase isolated by immunoprecipitation with a monospecific antibody prepared against chicken liver HMG-CoA reductase. The radiolabeled immunoprecipitated enzyme had an identical electrophoretic migration as purified HMG-CoA reductase when analyzed by sodium dodecyl sulfate electrophoresis (1).
In the present report, we describe an extremely simple and rapid method for the purification of rat liver HMG-CoA reductase and demonstrate the phosphorylation of purified HMG-CoA reductase by reductase kinase. In addition, we also present evidence that hepatic HMG-CoA reductase undergoes phosphorylation in vivo. The in vivo phosphorylation of HMG-CoA reductase was shown to be increased by glucagon administration, indicating that polypeptide hormones may modulate the enzymic activity and degree of phosphorylation of hepatic HMG-CoA reductase.
HMG-CoA reductase activity was determined as described earlier (1,2) except that the incubation mixture contained bovine serum albumin (5 mg/ml), 0.2 M KCl, and 0.15 M potassium phosphate (pH 6.9). Antibodies to purified rat HMG-CoA reductase (500 μg, two injections) were raised in a goat by intradermal injection. The γ-globulin fraction of the antiserum was isolated by the method of Steinbuch and Audran (4). Immunodiffusion was performed in 1% agar gel (Hyland immunodiffusion plates). Polyacrylamide gel electrophoresis was carried out as reported (5). Protein concentrations were determined by the method of Bradford (6) using bovine serum albumin as standard.
In vivo 32P-labeled HMG-CoA reductase was isolated from solubilized hepatic microsomal membranes obtained from rats injected with 32P. The solubilized radiolabeled protein samples were incubated with antiserum prepared against purified HMG-CoA reductase (antibody concentration in excess of equivalence) for 30 min at 37°C and 18 h at 4°C, respectively, and the samples centrifuged (1500 rpm, 45 min). The immunoprecipitates were washed three times with 0.9% NaCl, dissolved in 10 mM Tris-HCl (pH 8.0) containing 1% SDS, 40 mM dithiothreitol, 1 mM EDTA, and 10 μg/ml of pyronin Y, incubated for 45 min at 70°C, and analyzed by SDS-gel electrophoresis (5.6% acrylamide, pH 7.4) (7). Following electrophoresis, gels were either sliced (1- to 2-mm sections) and 32P radioactivity determined, or stained for protein with Coomassie blue. The migration of the radiolabeled protein was compared to the electrophoretic position of purified HMG-CoA reductase and other proteins of known molecular weight.
Male Sprague-Dawley rats, used for isolation of HMG-CoA reductase, were housed for 3 weeks in a light-controlled room (dark cycle, 6:30 a.m. to 6:30 p.m.). The animals had free access to food (Purina Chow) and water. Rats were decapitated at 12:30 p.m. and hepatic microsomes isolated as described earlier (1).
RESULTS
Purification of HMG-CoA Reductase—HMG-CoA reductase was solubilized from rat liver microsomes as described earlier (8) except that Buffer A contained glycerol as reported by Edwards et al. (9). The solubilized enzyme (198 mg) was fractionated with ammonium sulfate (35 to 50%). The precipitate (64 mg) was dissolved in 5 ml of 50 mM KCl, 40 mM potassium phosphate, 30 mM EDTA, 0.1 M sucrose, and 2 mM dithiothreitol, pH 7.2 (Buffer A), containing 1 M KCl and 30% glycerol. The solution was heated at 65°C for 30 min and centrifuged (15,000 × g). The supernatant was diluted (2-fold) with Buffer A and precipitated with ammonium sulfate (0 to 50%). This precipitate (5 mg) was dissolved in 50 ml of diluted (1:1) Buffer A containing 20% glycerol (Buffer B) and applied to an affinity column of HMG-CoA (1.2 × 2.5 cm) equilibrated with Buffer B at room temperature. The column was washed with 50 ml of Buffer A containing 20% glycerol, followed by 25 ml of Buffer A containing 200 μM HMG-CoA. All HMG-CoA reductase activity was confined to the fractions eluted with HMG-CoA. The enzyme was concentrated (0.2 mg/ml) and stored in Buffer A containing 50% glycerol at −70°C.
A summary of the purification scheme for rat liver HMG-CoA reductase is shown in Table I. HMG-CoA reductase was purified approximately 6,000-fold with an overall yield of 35%. This procedure can be completed within 24 h. Aqueous polyacrylamide gel electrophoresis of purified HMG-CoA reductase demonstrated a single electrophoretic band (Fig. 1A), and analysis of enzymic activity in gel slices (2 mm) revealed that greater than 98% of the reductase activity was associated with the single protein band. On SDS-gel electrophoresis, purified HMG-CoA reductase migrated as a single electrophoretic band with a monomer molecular weight of 51,000 ± 1,800 (Fig. 1B). An antibody prepared in a goat against purified rat liver HMG-CoA reductase formed a single immunoprecipitin line of identity with microsomal, solubilized, and purified HMG-CoA reductase (Fig. 1C).
The purified enzyme had a specific activity of 5,205 nmol/min/mg of protein, and apparent Km values for D-HMG-CoA and NADPH of 0.86 μM and 38 μM, respectively. The specific activity of HMG-CoA reductase purified by this procedure from cholestyramine-fed rats (5 weight % diet, 3 days) was higher than that of cycled normal rats and ranged from 10,000 to 15,000 nmol/min/mg of protein.
Inactivation and Reactivation of Homogeneous Rat HMG-CoA Reductase—When electrophoretically homogeneous HMG-CoA reductase was incubated with purified reductase kinase, 4 mM ATP plus 10 mM MgCl2, a time-dependent inactivation of reductase activity was observed (Fig. 2). Incubation of inactivated HMG-CoA reductase with phosphoprotein phosphatase was associated with a time-dependent increase in the enzymic activity of HMG-CoA reductase (Fig. 2). These results are consistent with the inactivation-reactivation of microsomal HMG-CoA reductase reported earlier (1).
In Vitro Phosphorylation of Homogeneous Rat HMG-CoA Reductase—Purified HMG-CoA reductase did not undergo phosphorylation in the presence of ATP plus MgCl2, indicating that the isolated enzyme did not undergo autophosphorylation and that HMG-CoA reductase had been purified free of contamination with reductase kinase. This is in contrast to purified reductase kinase, which readily undergoes autophosphorylation (data not shown). Incubation of purified HMG-CoA reductase with reductase kinase and [γ-32P]ATP plus MgCl2 was associated with a time-dependent increase in protein-bound radioactivity and decrease in enzymic activity. The inhibition of enzymic activity was dependent on ATP concentration, with a decrease in HMG-CoA reductase activity (30 min) of 28% and 78% with 0.2 mM and 1 mM [γ-32P]ATP, respectively. To confirm that HMG-CoA reductase had been radiolabeled with 32P, 32P-labeled samples were analyzed by SDS-gel electrophoresis, and radioactivity (>98%) was associated with the single electrophoretic band of purified HMG-CoA reductase (Fig. 3). Assuming that HMG-CoA reductase is a tetramer of molecular weight 200,000, the phosphorylated enzyme contained approximately 4 mol of phosphate/tetramer.

Legend to Fig. 4: Panel A, 32P-labeled HMG-CoA reductase was partially purified (Table I) and dialyzed against 10 mM potassium phosphate buffer (pH 7.4) containing 0.2 mM dithiothreitol. [32P]HMG-CoA reductase was immunoprecipitated utilizing a monospecific antibody, and the immunoprecipitates were washed and dissolved in buffer containing SDS as described under "Experimental Procedures." The 32P-labeled immunoprecipitates of control and glucagon-treated rats were analyzed by SDS-gel electrophoresis (5.6% acrylamide) in a Tris-sodium acetate-EDTA-SDS (pH 7.4) system (7). No radioactivity was detected in the immunoprecipitates when nonimmune goat serum was employed. The electrophoretic mobility of purified HMG-CoA reductase is shown in the gel inset. Panel B, hepatic HMG-CoA reductase was purified to homogeneity from control and glucagon-treated rats by the procedure outlined in Table I. All buffers contained 50 mM NaF to inhibit endogenous phosphoprotein phosphatase, which would be anticipated to dephosphorylate HMG-CoA reductase during purification and analysis. 32P-labeled HMG-CoA reductase (20 μg) isolated from control and glucagon-treated rats was analyzed by SDS-gel electrophoresis as discussed in Panel A.
In Vivo Phosphorylation of HMG-CoA Reductase and Its Modulation by Glucagon—In order to demonstrate that HMG-CoA reductase undergoes phosphorylation in vivo, rats were injected with 32P, liver microsomes were isolated, and HMG-CoA reductase was either partially purified and immunoprecipitated or purified to homogeneity by affinity chromatography as described under "Experimental Procedures." Radiolabeled HMG-CoA reductase protein and enzymic activity eluted from the affinity column as a single symmetrical peak. The ratio of specific activity of HMG-CoA reductase from control and glucagon-treated rats was constant throughout the various steps of the purification procedures. Recovery of enzymic activity during purification from control and glucagon-treated rats averaged 32% (three experiments).
Examination of the 32P-labeled immunoprecipitates and purified enzyme by SDS-gel electrophoresis revealed a single peak of radioactivity migrating with an apparent molecular weight of 51,000 (±1,800), coincident with the migration of purified HMG-CoA reductase (Fig. 4, A and B). No immunoprecipitable radioactivity was detected on the gel when 32P-labeled HMG-CoA reductase was incubated with nonimmunized goat serum. These results establish that HMG-CoA reductase can undergo phosphorylation in vivo.
The effect of glucagon on the enzymic activity and extent of phosphorylation of HMG-CoA reductase was determined in rats injected with 32P and glucagon (1.5 mg/kg). The administration of glucagon was associated with a 35 to 40% decrease in the enzymic activity of HMG-CoA reductase and a 10- to 12-fold increase in hepatic cyclic AMP content (Table II). Dephosphorylation of HMG-CoA reductase from control and glucagon-treated rats resulted in an increase in total enzymic activity to nearly identical levels. 32P-labeled HMG-CoA reductase was isolated by immunoprecipitation from partially purified enzyme from liver microsomes of rats injected with 32P and glucagon as outlined above. A comparison by SDS-gel electrophoresis of the 32P radioactivity of immunoprecipitated HMG-CoA reductase from control and glucagon-treated rats is illustrated in Fig. 4A. These findings were substantiated by the purification of hepatic HMG-CoA reductase to homogeneity from control and glucagon-injected rats (Fig. 4B). A single peak of 32P radioactivity coincident with purified HMG-CoA reductase was found when the samples were analyzed by SDS-gel electrophoresis (Fig. 4B). The incorporation of 32P radioactivity into hepatic HMG-CoA reductase was 2-fold greater in glucagon-treated than control rats (Table II, Fig. 4B). No change in hepatic ATP specific activity following glucagon administration was observed (Table II).
The ATP-dependent inactivation of the enzymic activity of HMG-CoA reductase by microsomal reductase kinase was increased approximately 2-fold in glucagon-treated rats as compared to control (Table III). Treatment of active (phosphorylated) reductase kinase with phosphoprotein phosphatase was associated with loss of enzymic activity and of the ability to inactivate HMG-CoA reductase (Table III).

Legend to Table II (Effect of glucagon on the enzymic activity and phosphorylation of HMG-CoA reductase): Rats were injected with 32P and glucagon as outlined in Fig. 4. Livers were removed, minced, and homogenized in 2.5 volumes of 0.3 M sucrose containing 5 mM EDTA, 2 mM dithiothreitol, and 50 mM NaF (pH 7.0). Isolation of liver microsomes and determination of HMG-CoA reductase activity were performed as described in the text. The 32P incorporated into HMG-CoA reductase was determined following purification of the enzyme to homogeneity by the procedure detailed in Table I. NaF (50 mM) was added to Buffer A during the solubilization and purification of HMG-CoA reductase. Hepatic cyclic AMP content was determined in perchloric acid extracts of freeze-clamped livers by the method of Frandsen and Krishna (11). Labeled ATP was isolated from trichloroacetic acid-methanol extracts of freeze-clamped liver. [32P]ATP was quantified by thin layer chromatography utilizing 0.1-mm cellulose MN 300 polyethyleneimine plates (Brinkmann Instruments) in 1 M KH2PO4 (pH 4.5). Chemical ATP was determined according to the method of Cheer (12). Each value represents the mean ± S.E.

Legend to Table III (Effect of glucagon on enzymic activity of microsomal reductase kinase): Rats were injected with saline or glucagon as outlined in Fig. 4. Hepatic microsomes were isolated in a buffer containing 5 mM EDTA and 50 mM NaF, as described under Table II. Microsomal reductase kinase was prepared as described (2). For the preparation of dephosphorylated (inactive) reductase kinase, NaF was removed by dialysis against 50 mM imidazole, 5 mM EDTA, 1 mM dithiothreitol (pH 7.4) buffer, and the preparation was preincubated in the presence of phosphoprotein phosphatase at 37°C for 1 h (2). Control samples contained phosphoprotein phosphatase plus NaF. Active reductase kinase (0.3 mg) and phosphoprotein phosphatase-inactivated reductase kinase (0.6 mg) were assayed. One unit is defined as the amount of reductase kinase that catalyzes 1% inactivation of HMG-CoA reductase in the presence of ATP plus MgCl2 in 1 min under standard assay conditions.
DISCUSSION
The purification of solubilized HMG-CoA reductase from rat liver was achieved with a 12- to 16-h procedure employing affinity chromatography on HMG-CoA. The overall yield of approximately 35% is significantly higher than that of other published methods (10, 13–16). Ness et al. (13) independently reported the use of HMG-CoA affinity chromatography in the purification of HMG-CoA reductase. Their method, however, requires a time-consuming repetitive solubilization procedure and an additional step (Affi-Gel Blue chromatography) (13).
The present report describes the first demonstration of the in vitro phosphorylation of HMG-CoA reductase employing a system which utilizes electrophoretically homogeneous HMG-CoA reductase and reductase kinase. The demonstration and quantitation of the extent of phosphorylation of HMG-CoA reductase by a system which uses purified HMG-CoA reductase and reductase kinase eliminates several of the limitations encountered in the microsomal system (1, 17), e.g. other specific kinases (the cAMP-dependent reductase kinase (2)) and nonspecific kinases, as well as sodium fluoride-resistant phosphatases, all of which may influence the degree of phosphorylation of HMG-CoA reductase.
Results obtained in the present study suggest that phosphorylation of HMG-CoA reductase occurs in vivo and may be important in the intracellular regulation of HMG-CoA reductase enzymic activity. In addition, the extent of phosphorylation and the enzymic activity of hepatic HMG-CoA reductase were modulated in rats following glucagon administration. The increase in the degree of phosphorylation, as well as the decrease in enzymic activity, of HMG-CoA reductase following glucagon administration was associated with an increase in intracellular concentrations of cyclic AMP, with no change in [32P]ATP specific activity. We have previously reported the presence of both a cyclic AMP-dependent and a cyclic AMP-independent reductase kinase which catalyze the phosphorylation of HMG-CoA reductase, and the reversible phosphorylation of reductase kinase (2). An elevation in cellular cyclic AMP concentration would be anticipated to enhance the phosphorylation of HMG-CoA reductase by the cyclic AMP-dependent reductase kinase. Furthermore, elevated cyclic AMP levels may also affect the activity of the phosphoprotein phosphatase inhibitory protein, thereby decreasing the dephosphorylation of HMG-CoA reductase and reductase kinase as proposed earlier (2). Both of these mechanisms would increase the phosphorylated, or inactive, form of HMG-CoA reductase, thereby decreasing intracellular cholesterol biosynthesis. However, other possible mechanisms by which glucagon may have been associated with an increased incorporation of 32P, including subcellular compartmentation of [32P]ATP, cannot be ruled out definitively (for review see Ref. 18).
Recently, Ingebritsen et al. (19) reported changes in the enzymic activity of HMG-CoA reductase and reductase kinase, as well as in cholesterol synthesis, in in vitro suspensions of rat liver cells following treatment with insulin and glucagon. Although the extent of phosphorylation of HMG-CoA reductase and reductase kinase was not demonstrated directly, the changes in enzymic activity correlate well with the in vivo data presented in the present report.
Recently, Brown et al. (20) proposed that changes observed in the enzymic activity of HMG-CoA reductase isolated from rats subjected to long term manipulation (e.g. prolonged feeding of cholesterol or cholestyramine, fasting, and stress) were not attributable to changes in the degree of phosphorylation of the enzyme. These authors did not examine acute effects on enzyme activities, and their findings are consistent with our earlier model (1), in which short term regulation involves reversible phosphorylation, whereas long term physiological modulation involves changes in the quantity of enzyme protein.
The combination of in vitro and in vivo results described in this report supports our concept that the enzymic activities of HMG-CoA reductase and reductase kinase are modulated by reversible phosphorylation in a bicyclic cascade system (2). This mode of regulation may represent an important short term mechanism for the regulation of cellular cholesterol biosynthesis (1, 2).
Associations between Psychopathology in Mothers, Fathers and Their Children: A Structural Modeling Approach
This study investigated associations between parental and child psychopathology with parenting stress as a possible mediator, in order to gain more insight into mothers' and fathers' roles in the development of psychopathology in children. Parents of 272 clinically referred children (aged 6–20, 66% boys) reported about their own and their child's behavioral problems, and about parenting stress. Data were analyzed using Structural Equation Modeling. Outcomes of path models demonstrated that mothers' higher internalizing and externalizing problems were associated with children's higher internalizing and externalizing problems, respectively. Fathers' higher externalizing problems were associated with both children's higher internalizing and externalizing problems, but fathers' internalizing problems were only associated with children's lower externalizing problems. Parenting stress fully mediated the relation between mothers' and children's externalizing problems, and partly mediated the relation between mothers' and children's internalizing problems. For fathers, parenting stress partly mediated the relation between fathers' internalizing problems and children's externalizing problems. Findings indicate that for mothers, the association between parental and child psychopathology is specific, whereas for fathers it is non-specific. Furthermore, results suggest that reducing parenting stress may decrease child problem behavior. Longitudinal studies are needed in order to gain more insight into the direction and underlying mechanisms of the relation between parental and child psychopathology, including parenting stress.
(psychopathology) and parenting and rearing practices, family structure, organisation and system dynamics, social support and friendships, etc., all interact and play a role in child (mal)adaptation (e.g., mental disorder). Similarly, Cummings et al. (2000) describe psychopathology as a dynamic interplay of a constantly changing individual in an ever changing environment. Considering child psychopathology, the role of the parent(s) has gained much attention, and several ways have been suggested in which parents may play a role. First, parents pass their genes to their offspring, which makes their child more or less susceptible to a particular disorder. For example, autism developmental disorders seem to have a high genetic component, with heritability indices up to 0.92 (e.g., see review of Miles 2011). Second, parents use parenting strategies that may contribute to the development (or maintenance) of childhood mental disorders. For example, parental control has been found to be associated with childhood anxiety (meta-analysis of McLeod et al. 2007; Van Der Bruggen et al. 2008) and corporal punishment has been associated with several child outcomes (e.g., increased child delinquent and antisocial behaviour, decreased mental health; see meta-analysis of Gershoff 2002). Also, children may learn certain behaviours from parents (e.g., via conditioning or operant learning procedures like punishment or reinforcement, or via imitation or listening). Third, parents' own characteristics, such as coping, personality and/or psychopathology, may play a role. For example, parental stress has been found to predict child behaviour problems (e.g., Ashford et al. 2008). Thus, parents may play a role in the development of their child's psychopathology in multiple ways, and presumably these different ways interact and may exacerbate each other. It may therefore be hardly surprising that research has demonstrated consistently that parental psychopathology and child psychopathology are associated (e.g. Connell and Goodman 2002; Ha et al. 2008; Hodge et al. 2010; Hicks et al. 2004; Van Meurs et al. 2009).
Associations between parent and child psychopathology can be specific (i.e., specific parental disorders are associated with specific disorders in their children) or non-specific (i.e., having a parent with a mental disorder is associated with a higher risk for children to develop any mental disorder), and both patterns have gained some empirical support. For example, with regard to specific associations, it has been found that anxious parents are more likely to have anxious children and vice versa (e.g., Beidel and Turner 1997; Last et al. 1991), and that anxious/depressed behavior, somatic problems and rule breaking behavior of the child were best predicted by the same problem behavior of the parent (Van Meurs et al. 2009). However, regarding non-specificity, it has been found that children of depressed mothers are at elevated risk of developing not only depression themselves, but also conduct problems (e.g., Beck 1999; Goodman and Gotlib 1999), and children whose parents have externalizing disorders (conduct disorder or drug/alcohol dependence) are at increased risk for developing both externalizing disorders and internalizing problems (e.g., Bierut et al. 1998; Clark et al. 1997; Hicks et al. 2004; Luthar et al. 1993). More knowledge about whether associations are specific or not may lead to better insights into how to conceptualize psychopathology and how to develop prevention or treatment programs. For example, if the associations are non-specific, this may support a more dimensional view of psychopathology (e.g., see Caspi et al. 2014, who consider a p-factor as a general psychopathology factor for psychiatric disorders) and more general, symptom-broad (at least not disorder-specific) prevention or treatment programs than would be appropriate if associations were specific.
There are (at least) two reasons why the associations between parents' and their children's psychopathology may differ for fathers and mothers. First, females have been found to differ from males in the prevalence and symptom presentation of psychiatric disorders (e.g., Alonso et al. 2004; WHO 2002), and second, fathers and mothers may have different roles in child development and child psychopathology. For example, it has been proposed that fathers' role is to challenge the child and prepare it for the outside world, while mothers' role is to nurture the child (Bögels and Phares 2008; Bögels and Perotti 2011). Partial support for these different roles comes from a study in which fathers' fulfilment of their evolutionary role to challenge the child was linked to less social anxiety in young children (Majdandžić et al. 2014). Thus, theoretically, the parent-child associations may differ for fathers and mothers. However, previous research has mostly focused on mothers, and (the role of) fathers tended to be neglected for a long time (see the reviews of Phares and Compas 1992 and Cassano et al. 2006). From studies that have included fathers (alongside mothers), it has become clear that fathers' psychopathology is also associated with child psychopathology: (1) most paternal psychiatric disorders (such as depression and substance use) are associated with an increased risk for the development of emotional and behavioural problems in their children, independent of maternal psychiatric disorders (see the review of Ramchandani and Psychogiou 2009), and (2) a meta-analysis demonstrated that externalizing problems in fathers and mothers were comparably associated with externalizing problems in their children, whereas internalizing problems of both mothers and fathers were associated with internalizing problems in their children, with a stronger association for mothers (Connell and Goodman 2002).
A factor that is frequently linked to child psychopathology is parenting stress. Although different studies use various definitions of parenting stress, most include the parent's perception of their capacity to cope with the demands of parenthood (Abidin 1992). Research has found associations between parenting stress and child psychopathology, as well as between parenting stress and parental psychopathology (Anastopoulos et al. 1992; Ashford et al. 2008; Costa et al. 2006; Crnic and Greenberg 1990; Morgan et al. 2002; Webster-Stratton and Hammond 1988). It has been hypothesized that parenting stress leads to poor parenting behaviors (e.g., more authoritarian, harsh and negative parenting) and that these parenting practices in turn cause child maladaptation (e.g., see the review of Deater-Deckard 1998). In addition, parents' psychopathology and personality, among other variables, are hypothesized to play a role in parenting stress (e.g., see Abidin 1990). Further, parents with psychopathology may be more vulnerable to parenting stress (as they have fewer coping resources), which leads to more negative parenting; in turn, parenting stress may interfere with their ability to inhibit negative parenting behaviors (e.g., avoidance or withdrawal when they are high on the internalizing spectrum, and/or aggression when they are high on the externalizing spectrum) (e.g., Bögels et al. 2010). Viewed this way, parenting stress may be an important mediator of the association between parent and child psychopathology. Moreover, if parenting stress is indeed found to mediate this association, it may be a relatively accessible and important target for treatment (e.g., mindful parenting has been found to be an effective intervention for managing parenting stress; Bögels et al. 2014).
The present study tests path models in order to examine the strength of the associations between parental psychopathology and child psychopathology, for both mothers and fathers. In addition, parenting stress is examined as a potential mediator (see Fig. 1). In line with Connell and Goodman's meta-analysis (2002), we investigated the broadband syndromes of internalizing and externalizing behaviour problems (as there appears to be a high rate of comorbidity within each of these broadband syndromes). Consistent with the Child Behavior Checklist manual (CBCL; Achenbach and Rescorla 2001), internalizing problems were viewed as behaviours and emotions directed inwards, including anxiety, somatic and depressive symptoms, and externalizing problems were conceptualized as behaviours and emotions directed outwards, including aggressive, oppositional and rule-breaking behaviours. Based on previous research (see above), we expected internalizing problems in both mothers and fathers to be associated with internalizing problems in their children, and externalizing problems in both mothers and fathers to be associated with externalizing problems in their children. No other hypotheses were formulated, as previous research is lacking or has reported inconsistent results.
Participants
The sample of the present study consists of children and their families from an urban area who were referred to UvA minds, a community mental health care center and academic treatment center for parents and children in the Netherlands. The center offers outpatient mental health care to children who have behavioural or emotional problems such as ADHD, anxiety disorders, posttraumatic stress disorder and autism spectrum disorders. All children function at a normal cognitive level (i.e., IQ > 70). Before the family's first appointment at the treatment center, parents of all children are asked by email to fill in several online questionnaires at home. If they do not complete the online questionnaires at home, they are asked to fill in these forms at the treatment center, before or immediately after their first appointment. Parents are informed about the academic purpose of the treatment center, are guaranteed anonymity, and may withdraw from participation in the study.
Procedure
Data were gathered from July 2010 until the end of June 2012, during which 414 mothers and 306 fathers completed the questionnaires. For the current study, data were used when (1) both biological parents participated in the research, and (2) the child was at least 6 years old (for children under 6, a different (preschool) version of the questionnaire on child behavior is used). There were no exclusion criteria; that is, all parents of referred children were asked to complete the questionnaires, irrespective of the reason for referral or diagnosis. Two families did not give their consent to use the completed questionnaires for research aims, and their data were therefore removed. The final sample consisted of 272 children (180 boys, 66%; mean age = 10.35 years, SD = 2.80, range 6–20), 272 mothers (mean age = 42.97 years, SD = 5.28, range 24–57) and 272 fathers (mean age = 45.65 years, SD = 5.98, range 28–64). Of the total sample, 82% of the parents were living together. Concerning educational level, 66% of the mothers and fathers had a bachelor's or master's degree. Ethnicity was based on the mothers' and fathers' country of origin: the ethnic composition of the sample of children was 71% Dutch, 17% mixed (Dutch and another country), 12% other (Morocco, Surinam, Turkey, and other western and non-western countries), and 1% unknown.
Child psychopathology
Children's internalizing and externalizing behavior problems were measured with the Child Behavior Checklist (CBCL; Achenbach and Rescorla 2001). The questionnaire contains 113 items which are rated on a 3-point Likert scale (0 = 'not true', 1 = 'sometimes true' or 'somewhat true', 2 = 'often true' or 'very true'). The narrowband scales anxious/depressed behavior, withdrawn behavior and somatic complaints were used to form the latent construct internalizing problems. The narrowband scales aggressive behavior and rule breaking behavior were used to form the latent construct externalizing problems. The good reliability and validity of the American version of the CBCL have been confirmed for the Dutch translation (De Groot et al. 1994).

Fig. 1 Conceptual model of the mediating role of parenting stress for the transmission of psychopathology from parent to child (based on Abidin 1990 and Deater-Deckard 1998)
CBCL T-scores were used in order to make the scores comparable across children of different ages and genders. Both mothers and fathers completed the questionnaire; therefore their mean scores were used. Cronbach's alphas were calculated for the subscales for both mothers' and fathers' reports. The alphas ranged between .72 (somatic problems) and .81 (anxious/depressed behavior) for mother report, and between .69 (rule breaking behavior) and .90 (aggressive behavior) for father report.
Parent psychopathology
Mothers and fathers reported on their own internalizing and externalizing problems by completing the Adult Self Report (ASR; Achenbach and Rescorla 2003; Ferdinand et al. 1995). The ASR is a 123-item questionnaire based on the CBCL. The narrowband scales anxious/depressed behavior, withdrawn behavior and somatic complaints were included in the latent construct internalizing problems, while the narrowband scales aggressive behavior, rule breaking behavior and intrusive behavior were included in the latent construct externalizing problems. ASR T-scores were calculated. Cronbach's alphas for the ASR subscales ranged between .72 (withdrawn behavior) and .90 (anxious/depressed behavior) for mother report, and between .62 (rule breaking behavior) and .90 (anxious/depressed behavior) for father report.
Parenting stress
Parenting stress was measured with the competence scale of the Nijmegen Parenting Stress Index (De Brock et al. 1992), which assesses the degree to which the parent feels capable of dealing with the child. Mothers and fathers filled in the questionnaire about their own experiences of parenting stress. The competence subscale consists of 15 items, such as "I have many more problems raising children than I expected". Parents rated their agreement with each item on a 6-point Likert scale, ranging from (1) 'completely disagree' to (6) 'completely agree'. A higher score indicates more stress experienced by the parent concerning their perceived capability in parenting the child. For mothers and fathers, scores above 31 and 33, respectively, can be interpreted as above average. Cronbach's alphas for maternal and paternal parenting stress in this study were .87 and .90, respectively.
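Since internal consistency is reported for each scale above, a minimal sketch of how Cronbach's alpha is computed from an item-score matrix may help; the simulated scores below are purely illustrative and are not study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale sum
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Simulated data: a common 'stress' factor plus item noise, with 15 items
# as in the competence scale (values are hypothetical).
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
scores = trait + rng.normal(scale=1.0, size=(200, 15))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```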
Data Analyses
In order to analyze the proportions of children and parents falling in the subclinical range (T-scores from 60 to 63) and the clinical range (T-scores above 63), the scales internalizing and externalizing behavior problems were constructed by calculating the sum score of their narrowband scales. Structural Equation Modeling (SEM) was used for the evaluation of the research questions, because it is an appropriate statistical method for mediation analyses and factor analyses with latent variables. Observed covariance matrices were used as input for the analyses. The maximum likelihood estimation method was used to obtain estimates of factor loadings, covariances and residual variances. Several fit indices were used to evaluate the fit of the factor models (see Fig. 2 for the six-factor model) to the data. The Chi-square (χ²) test is a measure of exact fit; a significant χ²-value at an alpha level of 0.05 indicates that the model does not fit the data. The χ², however, is sensitive to sample size and is not very accurate (Browne and Cudeck 1992), and therefore the Root Mean Square Error of Approximation (RMSEA) and the Comparative Fit Index (CFI) were also considered. The RMSEA is a measure of approximate fit: RMSEA values higher than 0.10 indicate bad fit, values lower than 0.08 indicate satisfactory fit, and values lower than 0.05 indicate close fit (Browne and Cudeck 1992). The CFI ranges from 0 to 1, where 1 indicates best fit (Hu and Bentler 1999). Likelihood-based confidence intervals were used to test the significance of the direct and indirect effects. Coefficients of the direct and indirect effects are standardized, so values of 0.1, 0.3, and 0.5 can be interpreted as 'small', 'medium', and 'large' effects, respectively (Cohen 1992). The analyses were conducted with the computer program OpenMx (Boker et al. 2011); covariances were added to the models when correlation residuals (i.e., differences between the observed and predicted covariances) exceeded .10. Multiple models were run. First, to investigate the strength of the association between mother-child and father-child psychopathology, separate models were run for the mother-child and father-child associations. Second, to examine whether one of the parents had stronger associations with child psychopathology, i.e., to test the strength of the mother-child and father-child associations while controlling for the effect of the other partner, a structural regression model was constructed in which the psychopathology of both parents was included in the same model. Third, in order to test whether parenting stress mediated the relation between parental and child psychopathology, parenting stress was added to the structural regression models for fathers and mothers separately. In the mediation models, direct effects represent the associations between parent psychopathology (maternal and paternal internalizing and externalizing problems) and child psychopathology (the child's internalizing and externalizing problems), while indirect effects represent the mediating influence of parenting stress (i.e., whether the association between parent and child psychopathology is partly or fully explained by parenting stress). It would have been interesting to examine the indirect effect via parenting stress in the model with mothers and fathers together; however, this model did not converge.
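As an illustration of the cutoffs and fit criteria described above, the following sketch hard-codes the T-score bands and the Browne and Cudeck RMSEA thresholds from the text; the 'borderline' label for RMSEA values between .08 and .10 is our own naming, as the text leaves that interval unlabeled.

```python
import math

def tscore_band(t):
    """Normal / subclinical / clinical banding used for the broadband scales."""
    if t > 63:
        return "clinical"
    if t >= 60:
        return "subclinical"
    return "normal"

def rmsea(chi2, df, n):
    """Point estimate of the RMSEA from chi-square, df and sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def fit_label(value):
    """Browne and Cudeck (1992) rules of thumb cited in the text."""
    if value < 0.05:
        return "close fit"
    if value < 0.08:
        return "satisfactory fit"
    if value > 0.10:
        return "bad fit"
    return "borderline (0.08-0.10)"  # interval unnamed in the text

# E.g., the mother-child model: chi2(37) = 67.788 with N = 272 families
# reproduces the reported RMSEA of .055.
est = rmsea(67.788, 37, 272)
print(f"RMSEA = {est:.3f} ({fit_label(est)})")
```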
Severity of Behavior Problems and Parental Stress
The average rating of children's internalizing problems fell in the subclinical range, M = 60.63 (SD = 8.92). The average rating of children's externalizing problems fell in the normal range, M = 56.28 (SD = 9.38). Of the total sample, 30% of the children were reported by their parents to have scores falling within the normal range for both internalizing and externalizing problems. The percentages of children whose internalizing or externalizing behavior problems were rated in the subclinical range, using the mean scores of the mothers and fathers, were 17% and 12%, respectively. The percentages of children whose internalizing or externalizing behavior problems were rated in the clinical range were 42% and 24%, respectively.
The mean scores of parents' internalizing problems (mothers: M = 49.25, SD = 10.47; fathers: M = 48.23, SD = 11.45) and externalizing problems (mothers: M = 48.06, SD = 9.54; fathers: M = 47.54, SD = 9.91) fell in the normal range. The scores did not differ statistically between mothers and fathers. Of the parents, 16% and 11%, respectively, rated themselves as having internalizing and externalizing problems in the subclinical or clinical range. The mean levels of parenting stress reported by mothers and fathers were 32.88 (SD = 11.57) and 31.77 (SD = 11.96), respectively, and were not statistically different. These mean scores indicate that, on average, both mothers and fathers experienced above-average parenting stress. The percentages of mothers and fathers reporting above-average parenting stress were 48% and 49%, respectively.
Associations between Parent and Child Psychopathology
The six-factor model is presented in Fig. 2. The model consisted of the factors internalizing problems and externalizing problems of mothers, fathers and children. See Table 1 for the correlation matrices with standard deviations of the observed variables included in the factor and structural regression models. See Table 2 for the correlations between the latent factors internalizing problems and externalizing problems of parents and children.

Figure 3a, b display the models in which the strength of the association of mothers' and fathers' psychopathology with child psychopathology is examined. The model for mothers (χ²(37) = 67.788, p = .001, RMSEA = .055 (95% CI = [.028, .079]), CFI = .96) showed two small-sized associations: more internalizing problems in mothers were related to more child internalizing problems (β = .27, p < .05), and more externalizing problems in mothers were related to more child externalizing problems (β = .29, p < .05). The model explained 11% of the child's internalizing problems and 7% of the child's externalizing problems. For fathers (χ²(37) = 89.896, p < .001, RMSEA = .066, CFI = .95), one large-sized association (more externalizing problems in fathers were related to more child externalizing problems, β = .50, p < .05) and two medium-sized relations (more externalizing problems in fathers were related to more child internalizing problems, β = .38, p < .05; and more internalizing problems in fathers were related to less child externalizing problems, β = −.32, p < .05) were found. The model explained 15% of the child's internalizing problems and 10% of the child's externalizing problems.

Figure 4 displays the structural regression model in which the strengths of the associations of mothers' and fathers' psychopathology with child psychopathology can be compared with each other. Concerning the RMSEA, there was satisfactory fit of the model to the observed covariance matrix, χ²(102) = 210.947, p < .001, RMSEA = .063 (95% CI = [.048, .077]), CFI = .93. Results of the combined parent model were similar to the separate mother-child and father-child models, with the exception that the positive association between mothers' internalizing problems and children's internalizing problems no longer reached significance. The small-sized positive association between mothers' externalizing problems and children's externalizing problems (β = .27, p < .05) remained. With respect to fathers, results were similar to the father-child model: two medium-sized positive associations (between fathers' externalizing problems and children's externalizing problems, β = .46, p < .05; and between fathers' externalizing problems and children's internalizing problems, β = .36, p < .05) and a medium-sized negative association (between fathers' internalizing problems and children's externalizing problems, β = −.32, p < .05) were found. The model explained 22% of the child's internalizing problems and 15% of the child's externalizing problems.
Parenting Stress as Mediator
Figures 5a and 5b show the partial mediation models for the relation between parental and child psychopathology via parenting stress for mothers (χ²(39) = 67.678, p = .003) and fathers, respectively. (Table 1 note: the correlations above the diagonal concern the fathers; the correlations below the diagonal concern the mothers.)

Mothers' parenting stress was a small but significant mediator of the association between mothers' internalizing problems and both children's internalizing (β = .08, p < .05) and externalizing problems (β = .21, p < .05), and of the association between mothers' externalizing problems and both children's externalizing (β = .17, p < .05) and internalizing problems (β = .07, p < .05). The direct effects between mothers' (internalizing and externalizing) problems and children's (internalizing and externalizing) problems were not significant. The model explained 16% of the child's internalizing problem behavior and 47% of the child's externalizing problem behavior. In the mediation model for fathers, a medium-sized negative direct effect between fathers' internalizing problems and children's externalizing problems (β = −.49, p < .05) was found, and this relation was mediated by parenting stress. Contrary to the negative direct effect of fathers' internalizing problems on children's externalizing problems (i.e., more internalizing problems in fathers were related to fewer externalizing problems in their children), the indirect effect of paternal internalizing problems via parenting stress was positive, but small (β = .20, p < .05). Furthermore, there were medium-sized positive direct effects of fathers' externalizing problems on children's externalizing problems (β = .45, p < .05) and children's internalizing problems (β = .35, p < .05) (i.e., more externalizing problems in fathers were related to more externalizing and internalizing problems in their children). The model explained 17% of the child's internalizing problem behavior and 21% of the child's externalizing problem behavior. A mediation effect of parenting stress for the association between fathers' externalizing problems and children's psychopathology was not tested, since fathers' externalizing problems did not appear to be associated with fathers' parenting stress.
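To make the direct/indirect decomposition concrete, the sketch below forms a standardized indirect effect as the product of the two constituent paths and labels mediation as full or partial in the sense used above; the path values in the example are hypothetical stand-ins, not the model estimates.

```python
def effect_size_label(beta):
    """Cohen (1992) conventions for standardized coefficients."""
    b = abs(beta)
    if b >= 0.5:
        return "large"
    if b >= 0.3:
        return "medium"
    if b >= 0.1:
        return "small"
    return "negligible"

def mediation_summary(a, b, direct, direct_significant):
    """a: parent problems -> parenting stress; b: stress -> child problems;
    direct: remaining parent -> child path with the mediator in the model."""
    indirect = a * b
    kind = "partial" if direct_significant else "full"
    return (f"indirect = {indirect:.2f} ({effect_size_label(indirect)}), "
            f"direct = {direct:.2f}, total = {direct + indirect:.2f}, "
            f"{kind} mediation")

# Hypothetical paths a = .50 and b = .42 yield an indirect effect of .21,
# the same order of magnitude as reported for mothers above.
print(mediation_summary(0.50, 0.42, direct=0.05, direct_significant=False))
```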
Discussion
This study examined the associations between parental psychopathology and child psychopathology using Structural Equation Modeling, and investigated the mediating effect of parenting stress. The results showed that psychopathology in both mothers and fathers is substantially associated with psychopathology in children. However, the pathways of these associations differed for mothers and fathers. That is, specificity was found for mothers (mothers' internalizing problems were positively related to child internalizing problems, and the same applied to externalizing problems), whereas non-specificity was found for fathers (fathers' externalizing problems were positively related to both child internalizing and externalizing problems, and fathers' internalizing problems were negatively related to child externalizing problems). Considering the strengths of the associations of mothers and fathers in the combined parent model, the associations between fathers' and children's psychopathology were stronger than those between mothers' and children's psychopathology. Moreover, the relation between mothers' and children's internalizing problems disappeared when fathers' psychopathology was controlled for. Regarding parenting stress, maternal parenting stress fully mediated the associations between maternal psychopathology and child psychopathology, while for fathers, parenting stress was found to mediate the association only partly, and direct effects remained. Taking the results of all models together, it appears that mothers' internalizing problems function as a risk factor for the development of child (internalizing) psychopathology only when fathers' psychopathology is not accounted for, and that mothers' externalizing problems play only an indirect role via parenting stress. By contrast, fathers' externalizing problems do appear to function as an important risk factor for child (internalizing and externalizing) psychopathology, whereas their internalizing problems appeared to be a protective factor against child externalizing problems.
Concerning child internalizing problems, no associations were found with parental internalizing problems, which contrasts with the finding of a small but significant association between both maternal-child and paternal-child internalizing problems (e.g., see the meta-analysis of Connell and Goodman 2002), and with the studies demonstrating that anxious parents are more likely to have anxious children and vice versa (see the review of Ginsburg and Schlossberg 2002). An explanation for this contrast with previous studies is that we included both maternal and paternal psychopathology in the same model, while previous studies analyzed mother and father data separately. Importantly, and in line with previous studies (e.g., the meta-analysis of Connell and Goodman 2002), in the separate mother-child model a small but significant positive relation was found between maternal and child internalizing problems. However, in the separate father-child model this relation was not significant. What was found to be significant, in both the separate father-child model and the combined parent model, was the relation between paternal externalizing problems and child internalizing problems. Thus, more externalizing problems in fathers were related to more child internalizing problems. Note that more externalizing problems in fathers were also associated with more child externalizing problems, suggesting that fathers' externalizing problems may be a more general risk factor for child (internalizing as well as externalizing) behavioral problems. Alternatively, fathers with externalizing problems may be less involved, and paternal involvement has been found to be associated with child internalizing as well as externalizing behavioral problems (e.g., see the review of Barker et al. 2017).
Concerning externalizing problems in children, three significant associations were found. First, in line with previous research (e.g., the meta-analysis of Connell and Goodman 2002), a positive association for externalizing problems was found between mothers and their children; however, this association was fully mediated by maternal parenting stress. Thus, externalizing problems in mothers may have an indirect effect (via parenting stress) rather than a direct effect on their child's externalizing behaviors. In support, higher levels of parenting stress have been found to be associated with more dysfunctional parenting, which in turn is related to more child problem behaviors (e.g., see the review of Morgan et al. 2002). Second, fathers' externalizing problems were found to be associated with their child's externalizing problems, and, in contrast to the results for mothers, these associations were only partly mediated by parenting stress. Explanations that may be offered for this difference between the mother-child and father-child associations are: (a) the direct effect between fathers' and children's externalizing problems may reflect a stronger genetic component in the father-to-child transmission of "externalizing genes", and/or fathers may be more of a role model when it comes to externalizing behavior such as displaying aggression, for which partial support comes from studies finding that males exhibit more externalizing problems than females (Alonso et al. 2004; WHO 2002); and/or (b), as the study used a cross-sectional design and no causal inferences can be made, the fully mediated effect of parenting stress in mothers may be explained by a higher susceptibility in mothers than in fathers to child psychopathology and parenting stress. Support for this explanation comes from research involving parents of children with various disorders (e.g., autism spectrum disorder, ADHD, disruptive behavior) that demonstrated higher levels of parenting stress in mothers than in fathers (Baker 1994; Calzada et al. 2004; Moes et al. 1992; Oelofsen and Richardson 2006). Third, fathers' internalizing problems were found to have a direct effect on children's externalizing behaviors, but in such a way that paternal internalizing problems may serve as a protective factor against children's externalizing behaviors. Possibly, fathers with more internalizing problems have children who are more inhibited or less sensation-seeking, and therefore these children may be less susceptible to developing externalizing problems (Kimonis et al. 2006; Williams et al. 2009).
A specific strength of this study is the inclusion of both fathers and mothers, and the possibility to examine both separate mother-child and father-child models (to examine the strength of the association between mother/father and child psychopathology) and a combined parent model (to examine the parent-child associations while controlling for the effect of the other partner). Another strength is that this study was based on a large clinical sample that was not self-selected. This makes the results relevant for clinical practice, since the outcomes can be generalized to a large group of children with emotional-behavioral problems who are in need of treatment. However, limitations also need to be considered. First, the cross-sectional design of the current study did not provide the opportunity to consider the direction of the relation between parental and child psychopathology. The focus of this paper was to examine whether parental psychopathology and child psychopathology are related, and the starting point was the assumption that parent psychopathology (whether or not mediated by parenting stress) influences child psychopathology. However, bi-directional relations between parent and child psychopathology, between parent psychopathology and parenting stress, and between parenting stress and child psychopathology likely exist (e.g., Pettit and Arsiwalla 2008; Neece et al. 2012), and longitudinal studies are necessary to investigate the causality between these factors. Second, only parents reported on children's problem behavior, which could have biased the results, since the presence of psychopathology in parents might bias their rating of their child's behavior (De Los Reyes and Kazdin 2005). Multiple informants and observations of child and parent psychopathology should be included in order to generate more confidence in the results. Third, it was remarkable that 30% of the children were reported by their parents to have scores in the normal range for both internalizing and externalizing problems on the CBCL, while all children had been referred to the mental health care center for diagnostics and/or treatment, and all children were diagnosed with at least one disorder according to the DSM-IV-TR (suggesting at least some impairment and/or emotional-behavioral difficulties). An explanation for this finding may be that not all emotional-behavioral problems are captured by the CBCL internalizing or externalizing scales. For example, the externalizing scale of the CBCL does not include the attention problems subscale, while scores on that subscale have been found to be predictive of ADHD (e.g., Hudziak et al. 2004). Likewise, children diagnosed with posttraumatic stress disorder may not score very high on a general measure of child problem behaviors (Sim et al. 2005). Finally, a limitation of the study is that it was restricted to the association between parental psychopathology and child psychopathology, and that only parenting stress was examined for its mediating influence. The relation between parent and child psychopathology, however, is complex (e.g., Cummings et al. 2000; Engel 1980) and depends on multiple variables, including child characteristics. As an example, McBride et al. (2002) found that the association between child temperament and parenting stress differed not only by the gender of the parent, but also by the gender match of the parent with the child.
With respect to future research, it is important to replicate the findings of this study using a longitudinal design, in order to make causal inferences about the relation between parental and child psychopathology. In addition, research is needed to further investigate the mechanisms of the relation between parental psychopathology, child psychopathology and parenting stress, including the observed inverse relation between fathers' internalizing and children's externalizing symptoms. Furthermore, exploring the influence of possible mediating or moderating factors such as the child's gender, parental cognitions, co-parenting and marital functioning will contribute to more insight into pathways to (mal)adaptive child outcomes. Such relations between the child and its environment can be expected to be interdependent and bi-directional (Garbarino and Ganzel 2000; Sameroff 2010). For example, problem behavior of a parent may lead to more parenting stress, resulting in less supportive parenting, which may result in more problem behavior in their children. In turn, children's behavior problems may lead to more parenting stress, probably resulting in less supportive parenting as well as more parental behavior problems. From the current study, no inferences can be made with respect to causality or bidirectionality; however, the findings do highlight the importance of including both mothers and fathers when studying the associations between parental psychopathology, child psychopathology and parenting stress.
Informed Consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Protein Kinase C-δ Transactivates Platelet-derived Growth Factor Receptor-α in Mechanical Strain-induced Collagenase 3 (Matrix Metalloproteinase-13) Expression by Osteoblast-like Cells*
Matrix metalloproteinase-13 (MMP-13, or collagenase 3) has been shown to degrade intact collagen and to participate in situations where rapid and effective remodeling of collagenous ECM is required. Mechanical strain induction of MMP-13 is an example of how osteoblasts respond to high mechanical forces and participate in the bone-remodeling mechanism. Using MC3T3-E1 osteoblast-like cells, we dissected the signaling molecules involved in MMP-13 induction by mechanical strain. Reverse transcription-PCR and zymogram analysis showed that the platelet-derived growth factor receptor (PDGFR) inhibitor AG1296 inhibited the mechanical strain-induced MMP-13 gene expression and activity. However, the induction was not affected by anti-PDGF-AA serum. Immunoblot analysis revealed time-dependent phosphorylation of PDGFR-α, with up to 2.7-fold increases within 3 min under strain. Transfection with shPDGFR-α (at 4 and 8 μg/ml) abolished PDGFR-α and reduced MMP-13 expression. Moreover, time-dependent recruitment of phosphoinositide 3-kinase (PI3K) by PDGFR-α was detected by immunoprecipitation with anti-PDGFR-α serum followed by immunoblot with anti-PI3K serum. AG1296 inhibited PDGFR-α/PI3K aggregation and Akt phosphorylation. Interestingly, the protein kinase C-δ (PKC-δ) inhibitor rottlerin inhibited not only PDGFR-α/PI3K aggregation but also PDGFR-α phosphorylation. The sequential activations were further confirmed with the mutants ΔPKC-δ, ΔAkt, and ΔERK1. Consistently, primary mouse osteoblast cells used the same identified signaling molecules to express MMP-13 under mechanical strain. These results demonstrate that, in osteoblast-like cells, MMP-13 induction by mechanical strain requires the transactivation of PDGFR-α by PKC-δ and cross-talk between the PDGFR-α/PI3K/Akt and MEK/ERK pathways.
Mechanical strain on bone is considered important for the maintenance of bone integrity and architecture. The process of bone (re)modeling under mechanical loading may repair fatigue damage and improve bone strength (1)(2)(3). Such (re)modeling requires bone resorption and deposition through the concerted efforts of osteoblasts and osteoclasts. Several studies have demonstrated that, in the absence of systemic and local factors, mechanical loading of osteoblasts in vitro is able to increase prostaglandin release (4), stimulate cell division (5), alter collagen synthesis (6), and promote collagenase activity (7). Other induced proteins, such as insulin-like growth factors I and II, transforming growth factor-β, osteocalcin, osteopontin, nitric-oxide synthase, and cyclooxygenase-2, have also been reported (8).
Previously, we reported that mechanical strain induces collagenase 3 (MMP-13) expression by MC3T3-E1 osteoblast-like cells (9). The MMP-13 mRNA induction is transient, stable, and does not require de novo protein synthesis, suggesting that an immediate action is taken by strained osteoblasts to participate in the resorption phase of matrix (re)modeling. MMP-13 is a neutral proteinase capable of degrading native fibrillar collagens in the extracellular space (10,11). It may be involved in situations where rapid and effective remodeling of collagenous extracellular matrix is required. Hence, MMP-13 can be detected in primary fetal ossification during bone morphogenesis and in remodeling of the mature skeletal tissue (12,13).
Mechanical strain induction of MMP-13 may be mediated through a process of mechanotransduction, converting physical forces into biochemical signals and integrating these signals into cellular responses. In our stretch chamber system, we showed that this mechanotransduction utilizes the MEK/ERK signaling pathway to implement MMP-13 expression (9). However, the transduction mechanism involved remains unclear and awaits further investigation. Three lines of study prompted us to investigate the platelet-derived growth factor receptor (PDGFR) as a potential mechanoreceptor in the MMP-13 induction: PDGF-BB induces MMP-13 expression in osteoblasts (14,15), whereas in vascular smooth muscle cells mechanical strain increases PDGF-B and PDGFR-β expression (16) and activates PDGFR-α (17).
The PDGFRs, including PDGFR-α and -β, are membrane glycoproteins of approximately 170 and 180 kDa, respectively. Their structures are similar to those of the colony-stimulating factor-1 receptor and the stem cell factor receptor. The extracellular part of the PDGFR consists of five immunoglobulin-like domains, among which the three outermost domains are for ligand binding, and domain 4 is for direct receptor-receptor interactions. The intracellular part contains a tyrosine kinase domain with characteristic inserted sequences without homology to kinases (18). PDGFR-α binds all combinations of the PDGF-A/-B forms, whereas PDGFR-β binds only PDGF-BB. Ligand binding induces dimerization of the PDGFR, leading to activation via autophosphorylation of tyrosine residues in the PDGFR kinase domain. Inside the kinase domain, autophosphorylation increases the kinase activity, whereas outside of it, autophosphorylation creates docking sites for the recruitment of cytoplasmic molecules containing SH domains, such as the enzyme PI3K or the adaptor protein Grb2.
To dissect the sequential signaling involved in the MMP-13 induction by mechanical strain, we applied mechanical stretching to MC3T3-E1 osteoblast-like cells grown on a collagen-coated flexible membrane in the presence of inhibitors and dominant mutants of interest. We found that, in osteoblast-like cells, the mechanical strain-induced MMP-13 expression requires transactivation of PDGFR-α by PKC-δ.
EXPERIMENTAL PROCEDURES
Materials-The murine MC3T3-E1 cell line was used as a homogeneous source of non-transformed osteoblast-like cells. Primary osteoblast cells were obtained from the calvaria of neonatal mice (ICR-CD1) through a standard protocol of collagenase digestion (19).
In Vitro Equibiaxial Stretch Device-The equibiaxial stretch chamber (9), modified from the work of Lee et al. (20), was used to deliver uniform, isotropic, and static tensile strain to osteoblast-like cells in the absence of shear, as previously described. The chambers were H2O2 gas-sterilized before use. Similar to a study of gene expression that applied a cyclic 8% stretch at a frequency of 0.5 Hz to human chondrosarcoma cells (21), in the present experiments we used an 8% stretch, with the optimal expression of MAPKs as a reference. Nevertheless, to exclude the release of intracellular MMP-13 as a result of cell injury, medium conditioned by control or tested cells was assayed for lactate dehydrogenase (22,23). There was no significant difference in the release of lactate dehydrogenase into the medium, and no slippage of the strained cells from the collagen-coated membrane over a prolonged period of time (9).
The MC3T3-E1 osteoblast-like cells were grown in flasks to sub-confluence in α-MEM containing 10% fetal bovine serum before plating into the stretch chamber. The elastic sheets of the chamber were coated overnight with a solution of 0.01% type I collagen to promote cell attachment. The MC3T3-E1 cells were then plated at a density of 1 × 10⁵ cells/cm² onto the collagen-coated sheet and grown to confluence. After conditioning in serum-free α-MEM medium overnight, quiescent adherent cells were stretched under the testing condition. For experimental purposes, the selective inhibitors (all at 10 μM) were added 1 h before testing. Control cells were treated in an identical fashion to test cells, yet without being stretched. Test and control experiments were carried out simultaneously with the same pool of cells in each experiment to match the temperature, CO₂ content, and pH of the medium for the test and control cells.
Zymogram Analysis-Briefly, aliquots of the control and test media were electrophoresed on a 10% SDS-polyacrylamide gel containing 1.25% gelatin. Afterward, the gel was washed with 2.5% Triton X-100 to remove SDS, rinsed with 50 mM Tris-HCl, pH 7.5, and then incubated overnight at room temperature with the developing buffer (50 mM Tris-HCl, pH 7.5, 5 mM CaCl₂, 1 μM ZnCl₂, 0.02% thimerosal, 1% Triton X-100). The zymographic activities were revealed by staining with 1% Coomassie Blue followed by destaining of the gel, and were quantified by laser densitometry of the corresponding bands in the linear response range of the gelatin zymogram.
RNA Isolation, Reverse Transcription, and PCR-The adherent cells were harvested after being stretched for the time indicated. Total RNA was isolated using TRIzol reagent according to the manufacturer's instructions and quantified by optical density. 1 μg of total RNA was added to a reverse transcriptase (RT) reaction in RT buffer containing 20 mM Tris-HCl (pH 8.4), 50 mM KCl, 2.5 mM MgCl₂, 10 mM dNTPs, 0.1 M dithiothreitol, 0.5 mg of oligo(dT) primer, 200 units of SuperScript II RT, and RNase H. 5 μl of cDNA from the RT was added directly to a 50-μl PCR containing 20 mM Tris-HCl (pH 8.4), 50 mM KCl, 25 mM MgCl₂, 10 mM dNTP, and 2.5 units of Taq DNA polymerase. The amplification conditions for MMP-13 were as follows: 94°C/1 min, 62°C/1 min, and 72°C/2 min, for 30 cycles. Oligonucleotide primers were designed to span at least one intron to detect any contaminating genomic DNA carried over from the RNA isolation step. The β-actin primer sequences have been described previously (Ambion), and the MMP-13 primer sequences were derived from the mouse MMP-13 sequence (24) as follows: MMP-13 sense primer, 5′-GGT CCC AAA CGA ACT TAA CTT ACA-3′, and antisense primer, 5′-CCT TGA ACG TCA TCA TCA GGA AGC-3′, yielding a product of 445 bp. Conditions were established so that the PCR was stopped in the linear range, and the reaction products could be accurately quantified and compared. PCR products were electrophoresed on 1.5% agarose gels. Ethidium bromide staining of the bands corresponding to MMP-13 was photographed and digitized. Density analysis was performed using the UN-SCAN-IT gel program (Silk Scientific, Inc., Orem, UT). The levels of MMP-13 mRNA were normalized to those of β-actin RNA to correct for differences in loading and/or transfer.
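The densitometric normalization described above (MMP-13 band density divided by that of β-actin, then expressed relative to the unstrained control) could be sketched as follows; the density readings are hypothetical.

```python
import numpy as np

def normalized_expression(mmp13_density, actin_density):
    """Normalize MMP-13 band densities to beta-actin, then express each
    lane relative to the first (unstretched control) lane."""
    ratio = np.asarray(mmp13_density, float) / np.asarray(actin_density, float)
    return ratio / ratio[0]

# Hypothetical densitometry readings (arbitrary units) for a control lane
# and two stretched lanes.
mmp13 = [1200, 3300, 3100]
actin = [5000, 5100, 4900]
print(normalized_expression(mmp13, actin))  # fold induction vs. control
```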
Preparation of Cell Extracts and Immunoblot Analysis of Signaling Molecules-Unless mentioned otherwise, protein concentrations were determined (25) with bovine serum albumin as the standard. MC3T3-E1 cells in the control or test (8% stretch) groups were incubated for various times before being subjected to cell lysis as described previously (9). At the termination of mechanical stimulation, cells were rapidly washed twice with ice-cold phosphate-buffered saline and lysed on ice in 0.2 ml of lysis buffer (containing 25 mM Tris-HCl, pH 7.4, 25 mM NaCl, 25 mM NaF, 25 mM Na₄P₂O₇, 1 mM Na₂VO₄, 2.5 mM EGTA, 2.5 mM EDTA, 0.05% Triton X-100, 0.5% Nonidet P-40, 0.5% SDS, 0.5% deoxycholate, and protease inhibitors such as 5 μg/ml leupeptin, 5 μg/ml aprotinin, and 1 mM phenylmethylsulfonyl fluoride). The lysates were centrifuged at 45,000 × g for 1 h at 4°C to yield the cell extract. Equal amounts of sample were electrophoresed on a 10% polyacrylamide gel and then blotted onto a nitrocellulose membrane. Subsequently, the membrane was incubated at room temperature with 5% bovine serum albumin in TTBS (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 0.05% Tween 20) for 1 h. The total protein profiles and the phosphorylated forms of the kinases were identified by immunoblot analysis with antisera raised against the signaling molecules or their phosphorylated forms. Briefly, membranes were incubated with a 1:1000 dilution of specific anti-PDGFR-α, anti-phospho-PDGFR-α, anti-phospho-Akt, anti-phospho-p42/p44 MAPK, anti-PI3K, or anti-GAPDH antibodies, and then with the second antibody (anti-rabbit horseradish peroxidase antibody in 1% bovine serum albumin/TTBS; 1:1500 dilution). Immunoreactive bands were visualized using an enhanced chemiluminescence (ECL) system.
Coimmunoprecipitation Assay-Cell lysates containing 1 mg of protein were incubated with 2 μg of anti-PDGFR-α antibody at 4°C for 1 h, and then 10 μl of 50% protein A-agarose beads was added and mixed for 16 h at 4°C. The immunoprecipitates obtained with the anti-PDGFR-α serum were collected, washed three times with lysis buffer without Triton X-100, dissolved in 5× Laemmli buffer, and then subjected to electrophoresis on 10% SDS-PAGE. Immunoblot analysis was performed using anti-PI3K and anti-PDGFR-α sera.
Plasmids and Transfection-The plasmids encoding ΔERK1 K52R, ΔMEK K97R, and ΔPKC-δ (dominant negative mutants of ERK1, MEK1/2, and PKC-δ, respectively) were prepared using Qiagen plasmid DNA preparation kits. For transfection, the amount of plasmid (1 μg) was kept constant for each experiment. The DNA PLUS-Lipofectamine reagent complex was prepared according to the instructions of the manufacturer (Invitrogen). The adherent MC3T3-E1 cells grown to 70% confluence were washed once with phosphate-buffered saline and then with serum-free α-MEM medium, transfected and incubated with plasmid in serum-free α-MEM (0.8 ml) and DNA PLUS-Lipofectamine reagent (0.2 ml) at 37°C for 5 h, and then incubated with α-MEM (1 ml) containing 10% fetal bovine serum overnight. After 24 h of transfection, cells were washed twice with phosphate-buffered saline and maintained in α-MEM containing 10% fetal bovine serum for an additional 24 h. Before application of an 8% stretch, cells were washed once with phosphate-buffered saline and incubated with serum-free α-MEM for 24 h.
Statistical Analysis-Data are presented as mean ± S.D. Statistical comparisons of the control group with treated groups were carried out using the paired-sample t test, with p values corrected by the Bonferroni method. Comparisons among three or more groups were made by one-way analysis of variance followed by Dunnett's post hoc analysis. An effect was considered significant when p < 0.05.
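A minimal sketch of the paired t tests with Bonferroni correction described above, assuming scipy is available; the activity readings are simulated, not experimental data.

```python
from itertools import combinations
import numpy as np
from scipy import stats

def bonferroni_paired_tests(groups, alpha=0.05):
    """Paired-sample t tests with Bonferroni-corrected p values.
    Arrays must be matched (same cell pools across conditions)."""
    pairs = list(combinations(groups, 2))
    m = len(pairs)  # number of comparisons
    for a, b in pairs:
        t, p = stats.ttest_rel(groups[a], groups[b])
        p_adj = min(p * m, 1.0)  # Bonferroni correction
        flag = "significant" if p_adj < alpha else "n.s."
        print(f"{a} vs {b}: t = {t:.2f}, corrected p = {p_adj:.3f} ({flag})")

# Hypothetical normalized MMP-13 activity readings for three conditions.
rng = np.random.default_rng(1)
data = {
    "control": rng.normal(1.0, 0.1, 6),
    "stretch": rng.normal(2.5, 0.3, 6),
    "stretch+AG1296": rng.normal(1.3, 0.2, 6),
}
bonferroni_paired_tests(data)
```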
Activation of the PDGFR-α Leads to the MMP-13 Induction by Mechanical Strain-In this study, we investigated whether signaling molecules at the level of plasma membrane-associated receptor tyrosine kinases are involved in MMP-13 expression induced by mechanical strain. The inhibitors used as diagnostic tools included genistein and herbimycin A for blocking protein-tyrosine kinases in general and, specifically, AG1296 and AG1478 for blocking the receptor tyrosine kinases PDGFR and EGFR, respectively. The zymogram revealed that the mechanical strain-induced MMP-13 activities were partly blocked by pretreatment with AG1296 (Fig. 1A). However, AG1478 failed to inhibit the activities. Moreover, RT-PCR analysis corroborated the AG1296 finding by matching the reduction of the MMP-13 transcript with that of activity under mechanical insults (Fig. 1B). Taken together, these data suggested that the PDGFR participates in the mechanical strain-induced MMP-13 expression.
Because PDGF-A mRNA expression has been reported to be induced by mechanical strain within 2 h in MC3T3-E1 cells (28), one may question whether the MMP-13 expression was a secondary effect resulting from PDGF-AA production induced by mechanical insult to the cells. Here, pretreatment with anti-PDGF-AA serum did not interfere with the MMP-13 activities and protein expression induced by mechanical strain (Fig. 1C). The ability of the anti-PDGF-AA serum to neutralize PDGF-AA was verified by the following experiment: MC3T3-E1 cells were pretreated with PDGF-AA (10 ng/ml) to induce MMP-13 expression in the presence or absence of anti-PDGF-AA serum (Fig. 1D). The results showed that MMP-13 expression increased with prolonged incubation with PDGF-AA alone; however, the increase in MMP-13 caused by PDGF-AA returned to the basal level in the presence of anti-PDGF-AA serum. Taken together, these data suggested that the mechanical induction of MMP-13 is independent of a PDGF-AA effect.
Receptor Phosphorylation under Mechanical Strain Condition-The PDGFR consists of the PDGFR-α and PDGFR-β forms. We examined which of the PDGFRs might be involved in the mechanical strain-induced MMP-13 expression by immunoblot analysis, first with anti-PDGFR-α or anti-PDGFR-β serum to determine the amount of the proteins present, then with anti-phospho-PDGFR-α and -β serum to detect the state of activation, and lastly with anti-GAPDH serum as an internal control (Fig. 2). We noted that only trace amounts of the PDGFR-β were present within cells; the PDGFR-α form was overwhelmingly dominant in quantity over the PDGFR-β form (Fig. 2A, rows 2 and 4). When challenged by mechanical strain, the PDGFR-α form became phosphorylated and activated. The receptor activation of the phosphorylated PDGFR-α was a fast process: it peaked at 3 min with a 2.7-fold increase and then declined gradually by 30 min (Fig. 2B). Within such a short time, the total amounts of both PDGFR-α and GAPDH obviously did not change. On the other hand, neither PDGFR-β nor EGFR (data not shown) appeared to be activated throughout the course of the challenge. Thus, these results indicated that, under a static stretch condition, PDGFR-α phosphorylation occurred in a time-dependent manner.

PDGFR-α Participates in Mechanical Strain-induced MMP-13 Expression-To confirm the receptor's role, we used a short hairpin RNA construct (shPDGFR-α, Fig. 3A) to knock down the PDGFR-α in vivo and then traced the consequences after mechanical insults. Cells were transfected with shPDGFR-α, vector alone (pTOPO-U6II), or small interfering RNA targeting firefly luciferase (Ffl) (27), incubated for an additional 24 h, and then conditioned in serum-free medium overnight before testing for 30 min.
The MMP-13 activities were enhanced by mechanical strain in control cells and in cells transfected with either vector or Ffl. These enhancements were diminished in cells transfected with shPDGFR-α at doses of 4 and 8 μg/ml (Fig. 3, B and C). Immunoblot analysis with anti-PDGFR-α or anti-MMP-13 serum revealed that the shPDGFR-α knocked down PDGFR-α protein expression and, correspondingly, attenuated the mechanical strain-induced MMP-13 expression (Fig. 3C, lanes 7-10). In contrast, vector or Ffl alone did not alter the amount of PDGFR-α expression (Fig. 3C, lanes 3-6). In sum, the results confirmed that PDGFR-α, at the level of a plasma membrane-associated receptor tyrosine kinase, participated in the mechanical strain-induced MMP-13 expression. In line with this, the role of PDGFR-α was further defined by the subsequent investigations of the signaling relay leading to MMP-13 expression.
The Downstream Effectors of PDGFR-α: PI3K-Akt and ERK-Upon activation by stimuli, PDGFR-α recruits cytoplasmic molecules containing conserved SH domains, such as PI3K. The enzyme PI3K was identified in our stretch system via immunoprecipitation of tested MC3T3-E1 lysates with anti-PDGFR-α serum followed by immunoblot analysis with anti-PI3K serum. Even though the amounts of PDGFR-α remained constant with or without strain, the recruitment of PI3K by PDGFR-α started within 1 min of testing and reached 3-fold increases or higher from 3 to 5 min (Fig. 4A). Such recruitment was blocked by pretreatment with AG1296 (Fig. 4B, lane 6), indicating an association of PI3K with PDGFR-α. Pretreatment with AG1478, LY294002, or IgG (Fig. 4B, lane 11) did not affect the association. The blockade by rottlerin (Fig. 4B, lanes 9-10) is described separately in the next section.
To confirm the finding that the PDGFR-α/PI3K pathway was involved in MMP-13 induction, sequential activation of the PDGFR-α and PI3K complex was analyzed by immunoblot with anti-PDGFR-α and anti-phospho-PDGFR-α sera (Fig. 5A). In this experiment, pretreatment with AG1296 inhibited the mechanical strain-stimulated phosphorylation of PDGFR-α, whereas LY294002 did not (Fig. 5A, lane 6), confirming that PI3K is downstream of PDGFR-α. On the other hand, both inhibitors blocked the MMP-13 gene expression induced by mechanical strain (Fig. 5B). The effects of the PKC inhibitors, including GF109203X, Gö6976, and rottlerin, on PDGFR-α phosphorylation and MMP-13 gene expression (Fig. 5, A and B) are described below. In sum, we concluded that mechanical strain-induced MMP-13 expression is mediated via PDGFR-α/PI3K signaling.
To trace the signaling relay downstream of PI3K, the PDGFR/PI3K/Akt (29) and p42/p44 MAPK pathways were studied. The phosphorylation and activation of Akt occurred transiently within 1 min and peaked within 5 min after strain application (Fig. 6A), corresponding to the optimal time frame of the PDGFR-α/PI3K recruitment. The mechanical strain-stimulated phosphorylations of Akt and p42/p44 MAPK were blocked by pretreatment with AG1296 or LY294002 (Fig. 6, B and C), suggesting that PDGFR-α/PI3K is upstream of Akt and p42/p44 MAPK in this signal transduction pathway.
The Upstream Modulator of PDGFR-α: PKC-δ-To investigate whether, and which, isoform of the PKCs may participate in the PDGFR-α/PI3K pathway, PKC inhibitors were used: the broad-spectrum GF109203X, the calcium-dependent-PKC inhibitor Gö6976, and the selective inhibitor of the calcium-independent PKC-δ, rottlerin. Mechanical strain-induced MMP-13 gene expression was attenuated by pretreatment with GF109203X and rottlerin (Fig. 5B, lanes 5 and 7), but not by Gö6976 (Fig. 5B, lane 6), suggesting the involvement of the calcium-independent PKC-δ (a novel PKC) in MMP-13 induction.
Mechanical Strain-induced MMP-13 Expression in a Primary Mouse Osteoblast Cell Model-To examine whether a similar pathway regulates MMP-13 expression in normal osteoblastic cells, we obtained osteoblasts from the calvaria of neonatal mice through a standard protocol of collagenase digestion (19). The resulting MMP-13 expression, with and without specific inhibitors, was assessed by zymography, Western blot, and RT-PCR (Fig. 8A). The inhibitors were rottlerin, AG1296, LY294002, and U0126 (for MEK1/2). All of the inhibitors blocked the MMP-13 expression induced by mechanical strain in primary mouse osteoblasts. In sum, like MC3T3-E1 cells, normal osteoblasts respond to high-impact mechanical strain by synthesis and secretion of MMP-13, which participates in the bone healing process.
DISCUSSION
In this study we analyzed the mechanisms by which mechanical force is translated into biochemical signals and identified PKC-δ, PDGFR-α, PI3K, Akt, and ERK1/2 as the signaling molecules responsible for MMP-13 gene expression. In particular, transfection with shPDGFR-α completely knocked down the PDGFR-α molecule and inhibited both MMP-13 zymographic activity and gene induction by mechanical strain. In response to mechanical stimuli, the time-dependent activation of PDGFR-α and the time-dependent recruitment of PI3K by PDGFR-α outlined the PDGFR-α/PI3K lineage. Finally, the signaling relay was verified through sequential phosphorylation events.
During our study, we first considered PDGFR-α to be a candidate receptor for MMP-13 induction by mechanical strain. However, in searching for its downstream effectors, we found that PKC-δ in fact acts as an upstream regulator of PDGFR-α. Accordingly, the transactivation of PDGFR-α by PKC-δ is discussed first, and the PDGFR-α/PI3K/Akt pathway thereafter.
PKC activation has been shown to participate in growth factor-dependent cellular responses. For example, PKC-δ activation is a principal rate-limiting event in the basic fibroblast growth factor-dependent stimulation of MMP-13 in human articular chondrocytes (30), whereas stimulation of the PDGFR-β signaling pathway activates PKC-δ in PDGFR-β-mediated monocytic differentiation (31). Furthermore, stretch-induced expression of vascular endothelial growth factor appears to be mediated by a PI3K/PKC pathway (32). In osteoblasts, PKC activation was reported in PDGF-BB-induced MMP-13 expression (14); nevertheless, the specific PKC isoform in this reaction remained unknown. In our stretch system, we identified PKC-δ as being involved in mechanical strain-induced MMP-13 expression. Rather interestingly, instead of being activated by a PDGFR-α-dependent mechanism, PKC-δ activated PDGFR-α. This was supported by the following findings: first, pretreatment with rottlerin inhibited assembly of the PDGFR-α·PI3K complex and Akt phosphorylation; second, pretreatment with rottlerin and with a dominant negative PKC-δ mutant (ΔPKC-δ) inhibited PDGFR-α phosphorylation. On the other hand, the inhibition by rottlerin or ΔPKC-δ of mechanical strain-induced p42/p44 MAPK phosphorylation suggested molecular cross-talk between MAPK and PKC-δ (30). In fact, we also found that cross-talk occurred between the PDGFR-α/PI3K and p42/p44 MAPK pathways.

FIGURE 5. Involvement of signaling molecules up- or downstream of PDGFR-α in MMP-13 induction by mechanical strain. Adherent MC3T3-E1 cells were subjected to 8% stretch for 3 or 30 min to examine PDGFR-α phosphorylation and MMP-13 gene expression. Immunoblot analysis was performed using anti-p-PDGFR-α, anti-PDGFR-α, or anti-GAPDH (control) antibody (A). Total RNAs from control and strained cells were amplified by RT-PCR using MMP-13 and β-actin mRNAs as templates; representative mRNA levels of the MMP-13 and β-actin (internal control) genes are shown (B). The inhibitors were AG1296 (for PDGFR-α), LY294002 (for PI3K), GF109203X (for total PKC), Gö6976 (for calcium-dependent PKCs), and rottlerin (selective for calcium-independent PKC-δ). Data are summarized and expressed as mean ± S.E. of at least three independent experiments (bar graph). *, p < 0.05; #, p < 0.01, compared with stretch alone.
Reports that PDGF-BB induces MMP-13 expression in osteoblasts (14,15) prompted us to examine the involvement of PDGFRs in mechanical strain-induced MMP-13 expression. Two classes of PDGF receptors are reported to recognize different isoforms of PDGF (33): the β receptor binds only the BB dimer, whereas the α receptor binds the AA, BB, and AB dimers. In other words, PDGF-BB engages all three dimeric combinations of α- and β-receptors. Other reports point out that, in vascular smooth muscle cells, mechanical strain not only increases PDGF-B and PDGFR-β expression (16) but also activates PDGFR-α (17). Consistent with these findings, we found that only PDGFR-α participated in MMP-13 expression induced by mechanical strain in these cells.
One might speculate that mechanical strain changes cellular morphology, leading to altered receptor conformation, and thus induces autophosphorylation of surface membrane receptors and subsequent signal transduction. However, in the identical stretch chamber system, we did not detect activation of the other membrane-associated receptors examined, PDGFR-β and EGFR. Instead, we propose that PKC-δ, induced by the strain-altered conformational change at the cell membrane, transactivates PDGFR-α. The receptor then dimerizes and undergoes a conformational change affecting the ATP-binding domain, which can be blocked by AG1296 at the ATP-binding site (34). One plausible mechanism involving PKC-δ is illustrated by G-protein-coupled receptor-induced EGFR transactivation (35), a ligand-independent process with several cytoplasmic players acting as mediators of this inter-receptor cross-talk. Whether cross-talk exists between PDGFR-α and G-protein-coupled receptors after the strain-altered conformational change in our stretch chamber system remains to be defined.
Outside the PDGFR kinase domain (in the kinase insert region or the C-terminal domain) lie tyrosine-phosphorylated residues that create docking sites for signal transduction molecules containing SH2 domains. These molecules may be enzymes, such as PI3K and phospholipase Cγ (36), or adaptor molecules devoid of enzymatic activity, such as Grb2, which in vascular smooth muscle cells links the receptor with downstream catalytic molecules (17). Despite our efforts to immunoprecipitate the complexes using anti-PDGFR-α or anti-Grb2 serum as the primary antiserum, we failed to detect both PDGFR-α and Grb2 within the precipitated complex (data not shown). Nevertheless, turning to the enzymatic candidate PI3K, we established the PDGFR-α/PI3K connection by co-immunoprecipitation. The association was further confirmed by subsequent experiments using inhibitors to interrupt complex formation.
The mechanism of PDGFR-α/PI3K signaling in MMP-13 induction by mechanical strain can be outlined as follows (Fig. 8B). Transactivation of PDGFR-α by PKC-δ induces dimerization of the PDGFR, leading to activation via autophosphorylation of tyrosine residues in the PDGFR kinase domain. The autophosphorylation creates docking sites for the recruitment of PI3K. Blocking the kinase activity of PKC-δ would leave PDGFR-α inactive and without docking sites; accordingly, no subsequent recruitment of PI3K would occur. Because PDGFR-α binds the p85 regulatory subunit of PI3K, whereas the inhibitor LY294002 inhibits the catalytic subunit of PI3K, it is logical that, in the presence of LY294002, PDGFR-α still binds PI3K (37). Moreover, experiments with LY294002 (a PI3K-specific inhibitor) suggested that p42/p44 MAPK phosphorylation lies downstream of PI3K activation rather than of Akt activation. Transfection with a dominant negative Akt mutant had no significant effect on p42/p44 MAPK phosphorylation, indicating no interaction between Akt and p42/p44 MAPK. Thus, we concluded that cross-talk occurs between the p42/p44 MAPK pathway and the PDGFR-α/PI3K pathway (rather than PDGFR-α/PI3K/Akt).
PDGFR-α has been demonstrated to play an essential, cell-autonomous role in the development of cranial and cardiac neural crest cells (38,39). Moreover, PDGFR-α has been reported to be maintained at a relatively constant level throughout the osteoprogenitor and pre-osteoblast stages (40). It is, however, up-regulated late in the differentiation stage and decreases roughly in concert with osteoblast-associated markers, such as bone sialoprotein and osteocalcin, in more mature cells. Because PDGFR-α, present in large quantities over a long duration, binds all combinations of PDGF-A/-B forms, one may infer that PDGFR-α meets the broader demands of widespread biological functions throughout osteoblast development. Hence, it appears natural for the "strain" signal to adopt PDGFR-α in mediating MMP-13 expression.
Bone remodeling to repair micro-cracks, micro-damage, or fatigue damage resulting from habitual or functional demand has been extensively discussed and reported (1,3,6,41-43). Nevertheless, bony repair driven by intentional "stress fracture healing" receives less attention, even though such protocols have been implemented in patient care for some time. Therapeutic means of delivering intermittent heavy mechanical loading include distraction osteogenesis in patients with Pierre Robin syndrome, rapid maxillary expansion, and orthodontic tooth movement (44) with or without dento-alveolar corticotomy. Even static application of high mechanical strain is illustrated by surgical bilateral sagittal split osteotomy in patients with mandibular deficiencies (45). All of these operative procedures with high-impact loading demand an immediate bone healing/remodeling response.
Under high-impact loading, how do osteoblasts adapt and respond at the molecular level? Our working model of transient MMP-13 expression in osteoblasts subjected to 8% stretch offers a first exploration of the underlying mechanism of bone repair. The strategies adopted include the following: 1) the MMP-13 molecule, which extracellularly is responsible for degradation of native collagen in bone remodeling (10) and intracellularly acts as an immediate early response gene (9), similar to c-fos gene induction by mechanical stretch in cardiac muscle cells (22); 2) the PDGFR-α molecules, which are abundant and active throughout osteoblast development (40); 3) the model of PDGFR transactivation by PKC-δ, similar to PDGFR transactivation in vascular smooth muscle cells, albeit via a G-protein-coupled receptor-mediated mechanism (46); and 4) the PI3K molecule, a critical control point in PDGFR/PI3K/Akt signaling that determines the differences between EGF and PDGF in stimulating human mesenchymal stem cell differentiation (47). By integrating these existing modules to operate the synthesis and secretion of MMP-13, osteoblasts respond promptly to meet the therapeutic demands of repair in bone remodeling.

FIGURE 8. Effects of inhibitors on MMP-13 expression induced by mechanical strain in primary mouse osteoblasts (A) and schematic summary of the signal transduction mechanism involved in MMP-13 induction by mechanical strain (B). Osteoblasts were obtained from the calvaria of neonatal mice through a standard protocol of collagenase digestion (19). The resulting MMP-13 expression, with and without specific inhibitors, was assessed by zymography, Western blot, and RT-PCR. The inhibitors were rottlerin, AG1296, LY294002, and U0126 (for MEK1/2), all of which interfere with the signal transduction mechanism leading to MMP-13 induction by mechanical strain (A). When mechanical stimuli are applied, strained mechano-sensors at the cell membrane surface receive the message and initiate a signal transduction mechanism dictating MMP-13 biosynthesis. The signal cascade begins with PKC-δ transactivating PDGFR-α; the activated PDGFR-α then takes over the subsequent signal relay to MMP-13 expression. The well-established PDGFR-α-PI3K-Akt and MEK-ERK signaling pathways jointly mediating MMP-13 gene expression are depicted. The processing of MMP-13 is independent of de novo protein synthesis; accordingly, MMP-13 synthesis and secretion into the extracellular matrix can be transient and immediate in response to mechanical stimuli (B).
Exploring the QCD phase transition in core collapse supernova simulations in spherical symmetry
For finite chemical potential, effective models of QCD predict a first order phase transition. To aid the search for such a phase transition in nature, we construct an equation of state for strange quark matter based on the MIT bag model. We apply this equation of state to highly asymmetric core collapse supernova matter at finite temperatures and large baryon densities. The phase transition is constructed using the general Gibbs conditions, which results in an extended coexistence region between the pure hadronic and pure quark phases in the phase diagram, i.e. the mixed phase. The supernovae are simulated via general relativistic radiation hydrodynamics based on three-flavor Boltzmann neutrino transport in spherical symmetry. During the dynamical evolution, temperatures above 10 MeV, baryon densities above nuclear saturation density and a proton-to-baryon ratio below 0.2 are obtained. Under these conditions the phase transition is triggered, which leads to a significant softening of the EoS for matter in the mixed phase. As a direct consequence of the stiffening of the EoS again for matter in the pure quark phase, a shock wave forms at the boundary between the mixed and the pure hadronic phases. This shock is accelerated and propagates outward, releasing a burst of neutrinos dominated by electron anti-neutrinos due to the lifted degeneracy of the shock-heated hadronic material. We discuss the radiation-hydrodynamic evolution of the phase transition for several low and intermediate mass Fe-core progenitor stars and illustrate the expected neutrino signal from the phase transition.
Introduction
The investigation of the QCD phase diagram via heavy-ion experiments at BNL's RHIC, CERN's LHC and the FAIR facility at GSI is poised to explore the properties of QCD matter under extreme conditions. Three of the most important aspects are the behaviour and position of the critical points in the phase diagram, the phase transition from hadronic matter to quark matter at finite chemical potentials, and the properties of the predicted quark phases. In the search for these aspects, observations of astronomical objects and astrophysical processes that are assumed to contain quark matter could be helpful. In this respect, cold isolated or binary neutron stars (NS) have long served as powerful objects to probe the equation of state (EoS) for hadronic as well as quark matter. In the latter case the NS is called a hybrid star if a hadronic envelope is present in addition to the quark core. The astrophysical processes that leave an isolated NS as the final remnant are core collapse supernovae of low and intermediate mass Fe-core progenitor stars. Naturally the question arises at which stage during the dynamical evolution from a collapsing Fe-core to an isolated NS the thermodynamic conditions are such that the appearance of quark matter is favoured. Further, in the case of a phase transition from hadronic matter to quark matter, which hydrodynamic evolution can be expected, and is there a relation to observations? The two intrinsically different scenarios are either during the early post bounce phase, when the temperatures are moderately high, or during the cooling of the remnant, where deleptonisation causes a temperature decrease and a density increase. The present article discusses the first case, due to the critical conditions given by the quark matter EoS, where a relation to the explosion mechanism is explored. Therefore, a detailed study of core collapse supernovae including radiation hydrodynamics with spectral neutrino transport and a sophisticated EoS for hot and dense asymmetric matter is required to simulate the matter conditions accurately.
The first study of the QCD phase transition in core collapse supernovae was published by Takahara and Sato (1988), suggesting a relation between the multi-peak neutrino signal from supernova 1987A (see Hirata et al. (1988)) and the appearance of quark matter. Using general relativistic hydrodynamics, they modelled the phase transition via a polytropic EoS. Due to the absence of neutrino transport, they could neither confirm nor exclude the suggested prediction of a neutrino signal from the phase transition. Additional microphysics was included in the study by Gentile et al. (1993), where general relativistic hydrodynamics is coupled to a description of deleptonisation during the collapse phase of a progenitor star. Applying the MIT bag model for the description of the quark phase, they find the formation of a (second) shock wave as a direct consequence of an extended co-existence region between the hadronic phase and the quark phase with a significantly smaller adiabatic index. The second shock wave follows and merges with the first shock from the Fe-core bounce after a few milliseconds. However, due to the lack of neutrino transport in the post bounce phase, they were also not able to confirm the predictions made for a possible neutrino signal from the phase transition. The recent investigation by Nakazato et al. (2008) is based on general relativistic radiation hydrodynamics with spectral three-flavour Boltzmann neutrino transport. They investigate very massive progenitors of ∼ 100 M⊙ which collapse to a black hole. Applying the MIT bag model for quark matter, the time after bounce for black hole formation is shortened and corresponds to the appearance of quark matter, where the central mass exceeds the maximum stable mass given by the quark EoS.
We follow a similar approach and apply the MIT bag model for the description of quark matter in general relativistic radiation hydrodynamics simulations based on spectral three-flavour Boltzmann neutrino transport. Our simulations are launched from low and intermediate mass progenitors for which no explosions could be obtained in spherical symmetry. We investigate the dynamical effects and discuss the possibility of an observable signature of the QCD phase transition in the neutrino signal.
The manuscript is organised as follows. In §2 we present the standard core collapse supernova scenario, and in §3 we describe our neutrino radiation hydrodynamics model, including both the hadronic and quark EoSs. In §4 we discuss the appearance of quark matter during the early post bounce phase of core collapse supernovae of intermediate mass Fe-core progenitors, and we summarise the results in §5.
Core collapse supernova phenomenology
The Fe-cores of massive progenitor stars in the mass range of 9 − 75 M⊙ collapse at the final stage of nuclear burning due to photodisintegration and deleptonisation, which reduces the proton-to-baryon ratio given by the electron fraction Y_e. During the collapse, the density and temperature increase. The collapse continues, and at about ρ ≈ 10^13 g/cm^3 neutrinos become trapped. At nuclear densities of ρ ≈ 2 − 4 × 10^14 g/cm^3, the collapse halts; the central core is by then highly deleptonised, with Y_e ≈ 0.3. The core bounces back and a shock wave forms, which travels outwards with positive velocities. The central object formed is a hot and lepton-rich protoneutron star (PNS). The shock suffers continuously from the dissociation of infalling heavy nuclei into free nucleons and light nuclei. In addition, as the shock crosses the neutrinospheres, which are the neutrino energy- and flavour-dependent spheres of last scattering, additional electron captures release a burst of electron-neutrinos. This deleptonisation burst carries away energy at a rate of several 10^53 erg/s (depending on the progenitor model) on a timescale of 10 − 20 ms. This energy loss turns the expanding shock into a standing accretion shock (SAS) with no significant matter outflow already about 5 ms after bounce.
It has long been investigated how to revive the SAS via neutrino reactions, leading to neutrino driven explosions (Bethe and Wilson (1985)). Unfortunately, no explosions have been obtained in spherical symmetry for progenitors more massive than 8.8 M⊙ (Kitaura et al. (2006), Fischer et al. (2009b)). A possible solution has been suggested and explored only recently via multi-dimensional effects such as rotation, convection and the development of known fluid instabilities (see for example Miller et al. (1993), Herant et al. (1994), Burrows et al. (1995) and Janka and Mueller (1996)). Such effects increase the neutrino heating efficiency and help to understand aspherical explosions; see for example Bruenn et al. (2006), Janka et al. (2008) and Marek and Janka (2009).
The model
Our core collapse supernova model is originally based on Newtonian radiation hydrodynamics and spectral three-flavour Boltzmann neutrino transport, developed by Mezzacappa & Bruenn (1993a-c). It has been extended to solve the general relativistic equations for both hydrodynamics and radiation transport, as documented in Liebendörfer et al. (2001a,b). Special emphasis has been devoted to accurately conserving energy, momentum and lepton number (Liebendörfer et al. (2004)). The standard set of neutrino reactions is considered, where N ∈ (n, p, ⟨A, Z⟩) and ν ∈ (ν_e, ν_µ/τ). Nuclei are treated via a representative nucleus with average atomic mass number A and charge Z. The calculation of the reaction rates for these reactions is based on Bruenn (1985). N–N bremsstrahlung has been implemented based on Hannestad and Raffelt (1998). The annihilation of trapped electron-neutrino pairs was implemented recently and is documented in Fischer et al. (2009a).
The properties of (nuclear) matter in core collapse supernovae are modelled via an EoS. The two standard (hadronic) EoSs for matter in nuclear statistical equilibrium (NSE) are that of Lattimer and Swesty (1991), based on the compressible liquid drop model including surface effects, and that of Shen et al. (1998), based on the RMF approach and the Thomas-Fermi approximation. The former can be applied with three different compressibilities, 180, 220 and 375 MeV, and a low asymmetry energy of 29.3 MeV. It is considered a rather soft EoS and was distributed to the community as a subroutine and recently as a table as well. The EoS contains contributions from electrons and positrons as well as photons. The latter EoS has a significantly larger asymmetry energy of 36.9 MeV as well as a high compressibility of 281 MeV, which results in a stiff EoS. It is distributed to the community as a baryonic EoS table. For matter in non-NSE, the approximation of an ideal gas of Si-nuclei was formerly used for the baryon EoS. This leads to an increasingly inaccurate internal energy evolution, especially in long-term simulations of explosion models where the explosion shock passes through the different composition layers of the progenitor. Recently, a nuclear reaction network has been implemented (see Fischer et al. (2009b)) to handle the nuclear composition of the progenitor more accurately and to include more mass of the progenitor, up to the hydrogen envelope (depending on the progenitor model). Additionally, contributions from electrons and positrons as well as photons and ion-ion correlations (only for non-NSE) are added based on Timmes and Arnett (1999) and Timmes and Swesty (2000). The quark EoS applied in the present investigation is based on the widely applied MIT bag model. The two values chosen for the bag constant are B^{1/4} = 162 MeV and B^{1/4} = 165 MeV, which lead to a critical density close to nuclear saturation density at low Y_e and finite temperatures, and to stable maximum gravitational masses of M = 1.56 M⊙ and M = 1.50 M⊙ respectively. This is in agreement with the most precise NS mass measurement of 1.44 M⊙ (Hulse-Taylor pulsar). The EoS describes three-flavour quark matter, where up- and down-quarks are considered massless and a strange-quark mass of 100 MeV is assumed. The behaviour of the critical density is illustrated in Fig. 1 for the two values of the bag constant. For high temperatures and low Y_e, the critical density decreases. This behaviour might change if a different quark matter description is applied. The mixed phase is constructed using the Gibbs conditions, which leads to an extended co-existence region and a smooth phase transition where the entropy per baryon and the lepton number are conserved.
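For orientation, the thermodynamics behind these numbers can be sketched with the textbook form of the MIT bag model (a minimal sketch in natural units, treating all three flavours as massless and neglecting strong-coupling corrections; in the actual EoS the strange quark carries m_s = 100 MeV, which modifies the strange-quark terms):

```latex
% One massless quark flavour (N_c = 3, quarks + antiquarks):
p_f(T,\mu_f) \;=\; \frac{7\pi^2}{60}\,T^4 \;+\; \frac{T^2\mu_f^2}{2} \;+\; \frac{\mu_f^4}{4\pi^2},
\qquad \varepsilon_f = 3\,p_f .

% Quark-phase pressure and energy density with bag constant B:
p_Q \;=\; \sum_{f=u,d,s} p_f \;-\; B ,
\qquad
\varepsilon_Q \;=\; \sum_{f=u,d,s} \varepsilon_f \;+\; B .
```

The bag constant B acts as a negative vacuum pressure: raising B^{1/4} from 162 to 165 MeV shifts the pressure balance with the hadronic phase and hence the critical density, consistent with the trend described above.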
The nucleon and charge chemical potentials as well as the nucleon mass fractions are constructed from the quark chemical potentials and fractions, which are then used to calculate the neutrino reaction rates and the transport based on the hadronic description of Bruenn (1985). Since matter is opaque to neutrinos at the densities where quark matter appears, this approximation can be applied as long as the hydrodynamic timescale is shorter than the timescale for neutrinos to diffuse out of the PNS. This is the case for the post bounce scenario considered here, where the hydrodynamic timescale is 1 − 100 ms and the diffusion timescale is of the order of seconds. In addition, the timescale to establish β-equilibrium is much shorter than both the hydrodynamic and the diffusion timescales. Hence, β-equilibrium is obtained instantaneously for quark matter.
The QCD phase transition during the early post bounce phase

The appearance of quark matter in core collapse supernova simulations is monitored using the quark matter volume fraction x_Q (its standard definition is recalled below). This is a standard procedure in nuclear physics, where a transition between two different phases is constructed based on two different nuclear physics descriptions. The conditions for the onset of the mixed phase, as illustrated in Fig. 1 (thick lines), are already reached at core bounce for the 10, 13 and 15 M⊙ progenitor models from Woosley et al. (2002) under investigation. However, the quark fraction decreases again after bounce due to the density decrease in the expanding regime. Only when the expanding bounce shock stalls and turns into the SAS does the continued mass accretion cause the central density to increase again. The quark fraction rises on a timescale depending on the mass accretion rate and is related to the compression behaviour of the central PNS given by the EoS. The adiabatic index is reduced for matter in the mixed phase. Consequently, the smaller compressibility results in a higher central density for matter in the mixed phase during the post bounce accretion phase, on a timescale of 100 ms. The higher central density results in a different degeneracy due to the different β-equilibrium, where matter is found to be more neutron rich, reducing the electron fraction. When a certain amount of the PNS mass, typically ∼ 0.8 M⊙, is converted into the mixed phase, the reduced adiabatic index for matter in the mixed phase causes the PNS to become gravitationally unstable and contract. The contraction is illustrated in Fig. 2 and proceeds into a supersonic adiabatic collapse. Due to the contraction, density and temperature increase, which results in an increased degeneracy, and hence the electron fraction is further reduced (see Fig. 2, graphs (b) and (c)). The collapse halts when a sufficient amount of matter (depending on the progenitor) inside the PNS has been converted into the pure quark phase via compression and the EoS stiffens again due to the increased adiabatic index. A stagnation wave forms, which propagates outwards and turns into a shock wave at the sonic point. This scenario differs from the bounce of the collapsing Fe-core, where the forming shock wave travels outwards with positive velocities immediately after its formation due to the overshooting of the hydrostatic equilibrium configuration. Here, the forming shock wave is not related to a matter outflow and can be considered a pure accretion shock at the moment of its formation and shortly after. This accretion shock propagates outwards because the thermal pressure behind the shock is much larger than the ram pressure ahead of it. The large thermal pressure behind the shock corresponds to the conversion of hadrons into quarks, since the shock spatially separates the mixed and hadronic phases. As soon as the expanding accretion shock inside the PNS reaches the PNS surface, it is accelerated along the decreasing density gradient and detaches from the mixed phase. This behaviour, as well as selected hydrodynamic variables, is illustrated in Fig. 3 (left panel). This scenario is again different from the early post bounce behaviour of the stalling bounce shock: since the bounce shock suffers from the dissociation of infalling heavy nuclei, matter falling onto the second shock is already dissociated and the nucleons are only shifted to higher Fermi levels. In this sense the second shock wave can accelerate quasi-freely.
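A standard form of the x_Q definition, consistent with the Gibbs construction described in §3, is the following (a sketch of the usual convention, not necessarily the paper's exact expression):

```latex
% Quark volume fraction in the mixed phase:
x_Q \;=\; \frac{V_Q}{V_Q + V_H}, \qquad 0 \le x_Q \le 1 ,

% with extensive quantities interpolated linearly, e.g.
\varepsilon \;=\; x_Q\,\varepsilon_Q + (1-x_Q)\,\varepsilon_H ,
\qquad
n_B \;=\; x_Q\,n_B^{Q} + (1-x_Q)\,n_B^{H} ,

% and phase equilibrium enforced by the Gibbs conditions
p_Q = p_H , \qquad T_Q = T_H , \qquad \mu_B^{Q} = \mu_B^{H} , \qquad \mu_C^{Q} = \mu_C^{H} .
```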
Due to the large density decrease at the PNS surface, over several orders of magnitude from 10^15 to 10^9 g/cm^3 (see Fig. 3(b)), the expanding shock wave reaches relativistic velocities of the order of ∼50% of the vacuum speed of light (see Fig. 3(a)). In other words, not only are general relativistic effects important for the hydrodynamics equations as well as for the neutrino transport in the presence of the strong gravitational field inside the PNS, but kinetic relativistic effects also become important due to the high matter outflow velocities.
The previously shock-heated and highly deleptonised material accumulates onto the second expanding shock, where it is shock-heated again and the degeneracy is reduced, such that β-equilibrium is established at a larger value of the electron fraction (see Fig. 3(d)). This lifted degeneracy corresponds to a large number of electron-antineutrinos that are emitted but cannot yet escape, because the matter conditions correspond to the trapping regime. Only when the second shock propagates across the neutrinospheres can the additionally produced neutrinos escape, which becomes observable in the neutrino spectrum as a second neutrino burst. It is accompanied by a significant increase of the mean neutrino energies. This burst is dominated by electron-antineutrinos due to the lifted degeneracy, again different from the deleptonisation burst at bounce, which is due to electron-neutrinos only (see Fig. 4). The post bounce time of the second burst contains correlated information about the contraction behaviour of the central PNS, which in turn depends on the (compressibility and asymmetry energy of the) hadronic EoS and the progenitor model, as well as on the critical conditions for the QCD phase transition. For the same progenitor model, a lower critical density (i.e. the smaller bag constant) corresponds to an earlier phase transition. The critical conditions for the quark matter phase transition can thus be related to the delay of the second neutrino burst. This delay depends on the central density increase given by the mass accretion rate and the hadronic EoS. The expanding second shock wave finally merges with the first shock, which remained unaffected by the happenings inside the PNS at ∼ 100 km, and an explosion is obtained. After the explosion has been launched, the luminosities and mean neutrino energies decay. In addition, a region of neutrino cooling develops between the expanding explosion ejecta and the PNS surface at the centre. This can be seen in the velocity profiles in Fig. 3, graphs (a) (right panel), where an initial mass inflow develops into an additional weak accretion shock. This expanding and contracting shock determines the oscillating luminosity behaviour after the second neutrino burst, illustrated in Fig. 4. The subsequent dynamical evolution of the explosion ejecta can be approximated by an adiabatic expansion. However, the later appearance of the neutrino driven wind and its dynamical impact on the composition of the ejecta will have to be analysed consistently in explosive nucleosynthesis investigations. The results of this first investigation of the QCD phase transition in core collapse supernova simulations of low and intermediate mass Fe-core progenitor stars are summarised in Table 5. The simulations are launched from progenitor stars of 10, 13 and 15 M⊙ from Woosley et al. (2002). The post bounce times t_pb correspond to the appearance of the second neutrino burst in the spectra, and the PNS masses M_PNS are taken at the electron-neutrinospheres at late post bounce times when the simulations are stopped. The central thermodynamic conditions, density ρ_c, temperature T_c and electron fraction Y_e, correspond to the onset of the PNS collapse. A special case is the 15 M⊙ progenitor using B^{1/4} = 165 MeV, where the maximum mass is reached shortly after the QCD phase transition. Hence, the PNS collapses to a black hole before the already formed second shock is accelerated to positive velocities.
Due to our co-moving coordinate choice, no stable solution of the equations of energy and momentum conservation can be obtained in this case, and t_pb determines the time of black hole formation after bounce.
Summary
The significant softening of the EoS for matter in the mixed phase causes the PNS to collapse. As a direct consequence of the stiffening of the EoS for matter which reaches the pure quark phase, a second shock wave forms. The second shock accelerates and finally merges with the first shock, launching an explosion. The explosion energies E_expl are given in Table 5. This mechanism was explored and found to produce explosions even in spherical symmetry, using general relativistic radiation hydrodynamics based on spectral three-flavour Boltzmann neutrino transport, where otherwise no explosions could be obtained. The explosion energies in Table 5 are evaluated at late post bounce times when the simulations are stopped and might not yet have converged to their final values; the simulations will have to be carried out for longer. Nevertheless, a moderate explosion energy of 10^51 erg is found for the 10 M⊙ model using B^{1/4} = 165 MeV (i.e. the later phase transition); otherwise lower explosion energies are obtained. The explosion energies might be shifted to larger values when multi-dimensional effects such as convection and rotation are taken into account.
The second shock forms in the high-density regime where neutrinos are fully trapped. Hence, no direct observational identification of the QCD phase transition can be found in the emitted neutrino spectra. This would be different if the gravitational waves emitted directly by the phase transition were analysed (Abdikamalov et al. (2009)). Unfortunately, gravitational waves have proven difficult to detect. Nevertheless, an indirect observation can be found in the emitted neutrino spectra. A second neutrino burst is released when the second shock, formed during the PNS collapse, propagates across the neutrinospheres. This second burst is, due to the lifted degeneracy of the shock-heated hadronic material, dominated by electron-antineutrinos. The burst is accompanied by a significant increase of the mean neutrino energies, which might become resolvable by neutrino detector facilities such as Super-Kamiokande for a future Galactic core collapse supernova explosion if quark matter appears.
One of the most important observables from supernova explosions is the composition of the ejecta, which is determined by explosive nucleosynthesis investigations and is model dependent (see for example Fröhlich et al. (2006a-c)). Of special interest is the production of heavy elements, for which rapid neutron captures (the r-process) in supernova explosion models have long been investigated (see for example Woosley and Baron (1992), Woosley et al. (1994), Witti et al. (1994), Otsuki et al. (2000), Wanajo et al. (2006a,b), Arcones et al. (2007), Panov and Janka (2009)). The abundances are calculated via post-processing of mass trajectories from explosion models and compared with the well-known solar abundances. Unfortunately, the most recent neutrino driven explosion models fail to provide conditions favourable for the r-process, namely high entropies per baryon, fast expansion timescales and generally neutron-rich material. Especially the latter aspect is a subject of ongoing research. Although the explosion models obtained via the QCD phase transition in the present article have yet to be analysed consistently with respect to explosive nucleosynthesis, a considerable amount of the ejected matter is found to be neutron-rich, with Y_e ≈ 0.35 − 0.45. In addition, moderate entropies per baryon and a fast expansion timescale are obtained. The conditions found may indeed be favourable for the r-process.
Label-free tomography of living cellular nanoarchitecture using hyperspectral self-interference microscopy
Quantitative phase imaging (QPI) is an ideal method for achieving long-term cellular tomography because it is label-free and quantitative. However, in current QPI instruments, interference signals from different layers overlay each other and impede nanoscale optical sectioning. Integrated incubators and improved configurations also require further investigation for QPI instruments. In this work, hyperspectral self-interference microscopy is proposed to achieve label-free tomography of living cellular nanoarchitecture. An optical description and a tomography reconstruction algorithm are proposed so that the quantitative morphological structure of an entire living cell can be acquired with 89.2 nm axial resolution and 1.91 nm optical path difference sensitivity. A cell incubator was integrated to culture living cells for in situ measurement, and expensive precision optical components are not needed. The proposed system can reveal native and dynamic cellular nanoscale structure, providing an alternative approach for long-term monitoring and quantitative analysis of living cells.
Nanoarchitecture imaging is still a challenging task. For QPI techniques, the measured phase shift results from the summation of the phases of electromagnetic waves as the incident light propagates through the entire imaging depth. Due to the spatial point spread function, interference signals from different layers are collected into the aperture of the objective lens and overlay each other in the image plane. The compound phase shift cannot be resolved to reveal the original mappings of the different layers [9]. For optically thick or multiply scattering specimens, the phase shift may even become imperceptible. Therefore, current QPI techniques cannot provide nanoscale axial resolution, even though the interference phenomenon offers sub-nanometer sensitivity. Usually, the axial resolution of a QPI instrument is more than half the wavelength, which limits the reconstruction results. Furthermore, common QPI instruments use PC or DIC configurations to generate the coherent signal and exploit a spatial light modulator (SLM) or liquid crystal filter (LCF) to modulate the phase or wavelength. These components are all quite expensive and require precise adjustment, which restricts the practical applications of QPI.
The interferometric reflectance imaging sensor (IRIS) is a label-free molecule detection platform that has gained extensive attention and interest [10-14]. This method records interferometric reflectance spectra for biomolecular mass measurements. Interference signals from different layers within the specimen can be separated into independent peaks in the Fourier domain [15]. Meanwhile, the phase information at each frequency reveals tiny fluctuations in both the refractive index and the morphological structure. Based on this principle, various biosensors have been developed for nanoscale biomolecule detection, such as nucleic acid, protein, and biological nanoparticle detection [16-18]. In addition, IRIS biosensors do not rely on precise instruments to generate solvable coherent signals, and thus costly optical configurations or components are not necessary. Therefore, IRIS represents a promising route to solving the existing problems of QPI. In this work, hyperspectral self-interference microscopy (HSM) is proposed to achieve label-free tomography of living cellular nanoarchitecture. A transparent specimen is placed on a monocrystalline silicon substrate. The incident light is reflected by both the heterogeneous cell and the silicon substrate. The reflected light generates a coherent signal that is a cosine function of the wavenumber. A mathematical model describing the multi-layer interference signal was established, and a Fourier-transform-based algorithm is presented to reconstruct the nanoscale structure of the specimen. Moreover, based on this method, an integrated imaging system with long-term cell culture ability was developed. Compared to similar techniques, the proposed microscopy method possesses three major merits. First, the quantitative morphological structure of whole living cells can be acquired without any exogenous labels. Second, the summation of phase is resolved, which enables nanoscale tomography at 89.2 nm intervals. Third, a cell incubator was integrated to culture living cells for in situ measurement. The proposed microscopy method was able to reveal native and dynamic cell structure with nanoscale sensitivity, providing an alternative approach to long-term monitoring and quantitative analysis of living cells.
HSM principle
In this paper, living cells (HeLa cells, purchased from ATCC, item number EY-X0129) were cultured on a monocrystalline silicon substrate, and absorption was negligible. As Fig. 1(a) shows, the substrate can be divided into the silica layer (500 nm thickness) and the silicon layer. Living cells were immersed in the culture medium and grew on the silica surface. The coherent signal was generated by three fields: the light reflected by the cell membrane U_1, the scattered signal of the cell content collected by the objective U_2, and the light reflected by the silicon wafer U_3. The intensity of the multi-layer interference signal is then given by the coherent superposition of these three fields [19].
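A generic three-beam interference expression of the kind described here would read as follows (a sketch, with k = 2π/λ the wavenumber, Δ_ij the optical path difference between fields i and j, and φ_ij their relative phase offsets; the exact amplitudes depend on the layer reflectivities):

```latex
I(k) \;=\; \bigl|U_1 + U_2 + U_3\bigr|^2
     \;=\; \sum_{i=1}^{3} |U_i|^2
     \;+\; 2\sum_{i<j} |U_i|\,|U_j|\cos\!\bigl(k\,\Delta_{ij} + \varphi_{ij}\bigr) .
```

Here the cosine arguments scale with the optical thicknesses of the layers.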
Expanding this superposition shows that the reflectance spectrum R(λ) of a living cell reduces to a constant plus a series of cosine functions of the wavenumber (Eq. (5)), so that a fast Fourier transform (FFT) separates the contributions of the different layers into distinct frequency peaks.
The integrated HSM system
In Fig. 1(b), a diagram of the integrated HSM system is shown. The basic optical setup is an epi-illumination microscope. All of the lenses were purchased from Edmund Optics (USA). The light source was a tungsten-halogen lamp (71PT250, Saifan, China), which provided a broadband spectrum of 200-1100 nm. Compared with common wide-field optical microscopy, the main difference is that the HSM system uses a fiber spectrograph (Maya2000, Ocean Optics, USA) to collect the hyperspectral self-interference signal of the samples. The motion platform consists of two parts: two pairs of step motors (E28H49-05, Haydon, USA) and lead screws (HGH25CA, Freud, China) for sample positioning, and a two-dimensional piezo-ceramic platform (P-734, Physik Instrumente, Germany) for nanoscale scanning. A two-dimensional scanning task was performed so that the interferometric spectrum could be obtained pixel by pixel. In the following experiments, the scanning step was set to 500 nm and the integration time to 10 ms. Long-term cell imaging without disturbance is very important for dynamic and long-term biomedical experiments. For example, dynamic quantitative imaging during cell proliferation and attachment is very important in biomedical research, since this process is related to cancer growth and metastasis. Without a custom incubator, such long-term cell imaging experiments could not be carried out because living cells could not survive for such a long time. In order to make the system more practical, we designed a custom incubator to culture the living cells. The smart incubator was integrated into the HSM system, as shown in Fig. 1(c). In this figure, the humidifier is inclined to make the heating film and the Pt100 sensor visible. The incubator is located on the motion platform and is able to control temperature, humidity, and carbon dioxide density. A transparent cover made of indium-tin-oxide-coated glass is not shown in the figure. The glass helps heat the cover of the incubator to prevent condensation.
A control system was developed to control the motion platform and the smart incubator. Briefly, a Wheatstone bridge was used to transform the temperature into a voltage signal. An analog-digital converter (AD7705, Analog Devices, USA) then transmitted the voltage to the microcontroller (STM32F103ZET6, STMicroelectronics, Italy). A proportional-integral-derivative (PID) algorithm was utilized to determine the output [23]. Sequence signals were provided to control the two motors through general-purpose input/output pins. A direct digital synthesizer (AD9854, Analog Devices, USA) helped to generate an adjustable voltage and control the nanoscale platform.
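To make the control loop concrete, a minimal sketch of such a PID iteration is given below in Python. The gains, setpoint, sampling period and output scaling are illustrative assumptions, not values taken from the paper:

```python
# Hypothetical PID temperature loop for the incubator heating film.
# KP, KI, KD, SETPOINT_C and DT are assumed placeholder values.

KP, KI, KD = 8.0, 0.2, 1.0   # assumed controller gains
SETPOINT_C = 37.0            # incubator temperature setpoint (deg C)
DT = 1.0                     # control period (s)

def pid_step(measured_c, state):
    """One PID iteration; returns a heater duty cycle in [0, 1]."""
    error = SETPOINT_C - measured_c
    state["integral"] += error * DT
    derivative = (error - state["prev_error"]) / DT
    state["prev_error"] = error
    raw = KP * error + KI * state["integral"] + KD * derivative
    return max(0.0, min(1.0, raw / 100.0))  # clamp to the PWM range

state = {"integral": 0.0, "prev_error": 0.0}
duty = pid_step(36.2, state)  # e.g. sensor reads 36.2 deg C
```

In the actual firmware this logic would run on the STM32, with the Wheatstone-bridge/ADC chain supplying measured_c and a PWM output driving the heating film.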
Tomography reconstruction
First, standard wafer samples (500 nm silica thickness) were used for out-of-focus correction. We recorded the interferometric spectra of the wafer while scanning the objective along the axial direction. Each spectrum had only one frequency component, and thus displayed a single peak after FFT. With this method, the relationship between the amplitude and the focal plane position was determined. The amplitude correction factor for each out-of-focus plane was defined as the decay rate of its amplitude relative to that of the in-focus plane.
As shown in Eq. (5), the reflectance spectra of the living cells are just a constant plus a series of cosine functions. Here, we utilized a bilinear interpolation function to resample the reflectance spectra onto evenly distributed 1 × 1024 vectors. The FFT algorithm was used to calculate the frequency-domain results. Then, each amplitude was divided by the corresponding correction factor. The largest peak in the frequency domain indicated the thickness of the cell membrane, and the amplitude of each frequency indicated the refractive index of each layer. Thus, information on each layer could be acquired, and tomography images of whole living cells could be reconstructed pixel by pixel.
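A compact sketch of this per-pixel pipeline in Python is given below. Variable names and the correction-factor interface are assumptions for illustration; the paper does not publish reference code:

```python
import numpy as np

N = 1024  # spectra are resampled to 1 x 1024 points, as described above

def reconstruct_pixel(wavenumbers, reflectance, correction):
    """Return (peak_bin, corrected_amplitudes) for one scanned pixel.

    wavenumbers : measured spectral axis (assumed increasing, not evenly spaced)
    reflectance : raw interferometric reflectance spectrum at this pixel
    correction  : per-frequency out-of-focus amplitude correction factors
    """
    # 1) resample onto an evenly spaced wavenumber grid
    k_even = np.linspace(wavenumbers.min(), wavenumbers.max(), N)
    r_even = np.interp(k_even, wavenumbers, reflectance)

    # 2) FFT; subtract the mean so the constant term does not dominate
    amp = np.abs(np.fft.rfft(r_even - r_even.mean()))

    # 3) divide each amplitude by its out-of-focus correction factor
    amp_corr = amp / correction[: amp.size]

    # 4) the largest remaining peak encodes the cell-membrane height at
    #    this pixel; the other amplitudes encode the per-layer refractive index
    peak_bin = int(np.argmax(amp_corr[1:])) + 1  # skip the DC bin
    return peak_bin, amp_corr
```

Looping this routine over the 500 nm scanning grid yields the pixel-by-pixel tomograms described in the text.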
Principle
In this experiment, the main differences between the two incubators were the rise times of the temperature control and of the carbon dioxide concentration control (Fig. 4). Another obvious difference was that the temperature control curve of the smart incubator was rougher. Both phenomena arise because the volume of the smart incubator is much smaller than that of the commercial incubator: the responses of the heating and the carbon dioxide inlet are far more rapid for a smaller volume.
Dynamic imaging of HSM
In this experiment, dynamic tomography during cell division was demonstrated. We chose a HeLa cell that was dividing into two cells. Tomography images of four layers (z = 3.3 μm, z = 3.39 μm, z = 4.73 μm, and z = 4.82 μm) were recorded over 60 min. In Fig. 5, tomography of different layers during cell proliferation is shown. At 0 min, the cell had not yet divided into two isolated cells; the chromosomes were present, and their mean refractive index exceeded 1.45, while that of the cytoplasm was only about 1.4. At 20 min, two cells had formed and the chromosomes had decondensed; the refractive index inside the cells decreased to ~1.4. At 40 min, the two cells had completely separated. The refractive index near the membrane greatly increased, indicating that the cells were synthesizing protein and preparing for adherence [26]. At 60 min, the two cells had adhered to the substrate, and the contrast between the nucleus and the cytoplasm became distinct again. The average refractive index of the cell nucleus was 1.57, approaching that of protein, while the average refractive index of the cytoplasm was 1.39, approaching that of water.
Discussion
In Fig. 2(a), the axial resolution of HSM was demonstrated as the average thickness increase when the position of the peak after FFT increases by one bin. In fact, the axial resolution can also be calculated theoretically from the frequency resolution of the FFT, where T_max is the maximal imaging thickness, f_max is the maximal frequency of the FFT and f_s is the sampling frequency.
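Although the equation itself is not reproduced here, an equivalent relation can be checked against the numbers quoted in the paper (a sketch, assuming the spectrum is sampled over wavenumbers k = 2π/λ across the full 200-1100 nm source range and a cellular refractive index n = 1.37):

```latex
\delta T \;=\; \frac{\pi}{n\,\Delta k}
         \;=\; \frac{\lambda_{\min}\,\lambda_{\max}}{2\,n\,(\lambda_{\max}-\lambda_{\min})}
         \;=\; \frac{200 \times 1100}{2 \times 1.37 \times 900}\ \text{nm}
         \;\approx\; 89.2\ \text{nm} .
```

This reproduces the 89.2 nm axial resolution stated in the text.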
It has to be pointed out that the refractive index of biological material is not homogeneous; hence both the calculated axial resolution and the calculated maximal imaging thickness are merely approximate values. The precondition for confirming these two important performance indexes is to assume a refractive index of 1.37 for the living cell.
In Fig. 2(b), the refractive index of the silica layer was assumed to be 1.47 [15]. The largest thickness measurement error of the silica layer in this experiment was 1.3 nm, so the imaging sensitivity in optical path difference can be calculated as 1.3 nm × 1.47 ≈ 1.91 nm. Here, we chose an indirect method to quantify the uncertainty in the measured values of the refractive index. In this experiment, we showed that the optical path difference sensitivity was 1.91 nm and the axial resolution was 89.2 nm. These two parameters mean that the proposed imaging method can resolve a 1.91 nm optical path difference in every tomographic slice, each of which is 89.2 nm thick. The refractive index uncertainty can therefore be calculated as 1.91 nm ÷ 89.2 nm ≈ 0.02.
Due to the small volume of the proposed incubator, temperature control and carbon dioxide concentration control were much more difficult. In Section 3.3, we demonstrated the detailed control performance to validate that the incubator could culture living cells with performance similar to that of the commercial incubator; the similar overshoot, steady-state error and cell concentration values support this. The temperature rise time and carbon dioxide concentration rise time of the proposed incubator were much shorter, again simply due to the volume difference between the two incubators. These two indexes are, however, very important for the HSM system. Commercial incubators are usually run without even a temporary break, so their rise time is not critical. An imaging system, by contrast, is used only during experiments, with frequent shutdowns and restarts. No matter how long an experiment lasts, researchers have to power on the incubator, wait for the temperature and carbon dioxide concentration to rise, and then place the living cells. With our system, researchers do not have to prepare for a long time. We believe that this will be very helpful in biomedical experiments.
In Section 3.4, we chose a time interval of 20 min because after such an interval the imaging results change appreciably during cell proliferation. In this experiment, the scanning area was 20 μm × 100 μm, the scanning step was 500 nm and the exposure time for each spectrum was 10 ms, so the total imaging time for the whole cell is 83.4 s, which is the minimum imaging time interval.
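The quoted 83.4 s is consistent with these scan parameters; the ~3 s beyond the bare exposure budget is presumably stage-motion and readout overhead, which the paper does not break down:

```latex
\frac{20\ \mu\text{m}}{0.5\ \mu\text{m}} \times \frac{100\ \mu\text{m}}{0.5\ \mu\text{m}}
  \;=\; 40 \times 200 \;=\; 8000\ \text{pixels},
\qquad
8000 \times 10\ \text{ms} \;=\; 80\ \text{s} .
```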
The maximum total imaging time is determined by how long it takes the cells to cover the whole silicon substrate. The system can culture living cells and supply culture medium to them, but when the cells cover the whole substrate, they have to be digested with trypsin and transferred to a new substrate. This time is related to the cell type and initial cell concentration; generally, the total imaging time can be longer than 24 hours. In this experiment, the broadband light source was powered on only while the spectra were being collected, which lasted for less than 1.5 min in total. For the rest of the time, only white light was used for routine cell imaging with the CCD, so phototoxicity was negligible for long-term imaging. Figure 5 also shows that cell proliferation and attachment proceeded normally, validating that phototoxicity did not heavily affect the cells.
Conclusion
Long-term tomography of living cellular nanoarchitecture is an indispensable and powerful tool for deciphering dynamic cellular morphology and metabolism. In this work, HSM is proposed to achieve label-free tomography of living cells. As a label-free method, HSM can reveal the nanoarchitecture of the whole cell with 89.2 nm axial resolution and 1.91 nm OPD sensitivity. The compound phase shift can also be resolved to reveal the original mappings of different layers, which makes this method an alternative choice for optically thick or multiply scattering specimens. HSM does not rely on expensive and precise optical components; a common epi-illumination setup meets the imaging requirements. A cell incubator was integrated to culture living cells in an uninterrupted way, so that long-term and dynamic imaging can be achieved.
A stratification strategy to predict secondary infection in critical illness-induced immune dysfunction: the REALIST score
Background: Although multiple individual immune parameters have been demonstrated to predict the occurrence of secondary infection after critical illness, significant questions remain with regard to the selection, timing and clinical utility of such immune monitoring tests. Research question: As a sub-study of the REALISM study, the REALIST score was developed as a pragmatic approach to help clinicians better identify and stratify patients at high risk for secondary infection, using a simple set of relatively available and technically robust biomarkers. Study design and methods: This is a sub-study of a single-centre prospective cohort study of immune profiling in critically ill adults admitted after severe trauma, major surgery or sepsis/septic shock. For the REALIST score, five immune parameters were pre-emptively selected based on their clinical applicability and technical robustness. The predictive power of different parameters and combinations of parameters was assessed. The main outcome of interest was the occurrence of secondary infection within 30 days. Results: After excluding statistically redundant and poorly predictive parameters, three parameters remained in the REALIST score: mHLA-DR, the percentage of immature (CD10− CD16−) neutrophils and the serum IL-10 level. In the cohort of interest (n = 189), the incidence of secondary infection at day 30 increased from 8% for patients with a REALIST score of 0 to 46% in patients with a score of 3 abnormal parameters, measured at D5–7. When adjusted for a priori identified clinical risk factors for secondary infection (SOFA score and invasive mechanical ventilation at D5–7), a higher REALIST score was independently associated with increased risk of secondary infection (42 events (22.2%), adjusted HR 3.22 (1.09–9.50), p = 0.034) and mortality (10 events (5.3%), p = 0.001). Interpretation: We derived and present the REALIST score, a simple and pragmatic stratification strategy which provides clinicians with a clear assessment of the immune status of their patients. This new tool could help optimize care of these individuals and could contribute to designing future trials of immune stimulation strategies. Supplementary Information: The online version contains supplementary material available at 10.1186/s13613-022-01051-3.
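For readers who want the scoring logic at a glance, the sketch below implements the score as the abstract describes it: a count (0-3) of abnormal parameters measured at D5-7. The cut-off values are placeholders, not the study's validated thresholds, which are reported in the full paper:

```python
# Illustrative REALIST score: count of abnormal immune parameters at D5-7.
# All three cutoffs below are HYPOTHETICAL placeholders for illustration.

MHLA_DR_LOW = 8000        # mHLA-DR, AB/C (assumed cutoff)
IMM_NEUTRO_HIGH = 10.0    # % immature CD10- CD16- neutrophils (assumed)
IL10_HIGH = 20.0          # serum IL-10, pg/mL (assumed)

def realist_score(mhla_dr, imm_neutro_pct, il10_pg_ml):
    """Return the REALIST score (0-3): one point per abnormal parameter."""
    score = 0
    score += int(mhla_dr < MHLA_DR_LOW)             # depressed mHLA-DR
    score += int(imm_neutro_pct > IMM_NEUTRO_HIGH)  # elevated immature neutrophils
    score += int(il10_pg_ml > IL10_HIGH)            # elevated IL-10
    return score

# Example: a patient with all three parameters abnormal scores 3,
# the stratum with 46% incidence of secondary infection at day 30.
print(realist_score(mhla_dr=5000, imm_neutro_pct=15.0, il10_pg_ml=35.0))  # -> 3
```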
Introduction
For patients with critical illness, occurrence of secondary infection is a major and frequent complication, affecting between 15 and 40% of patients after an Intensive Care Unit (ICU) admission [1][2][3][4][5][6]. Such infections are associated with increased morbidity and mortality and represent a high burden of care with longer ICU length of stay and overall greater healthcare costs [2,7]. In addition, they contribute to higher rates of microbial resistance through extensive use of antibiotic and antifungal agents, a pressing and worldwide issue [8][9][10] which has recently been further highlighted amidst the COVID-19 pandemic [11].
Among factors leading to acquisition of secondary infection in the ICU, the contribution of critical illness-induced immune dysfunction is now well recognized. Although this phenomenon, which affects both innate and adaptive immune responses, has been mainly described in sepsis [12,13], similar immune alterations have been described in various aetiologies of critical illness [14][15][16], suggesting a somewhat common immune pathway. The REALISM study [17] (REAnimation Low Immune Status Marker) was performed to describe deep immune profiling of the injury-induced immune response in a variety of critical illnesses, and among other findings has further reinforced the concept of a common global immune response to various types of severe injury.
Although multiple immune parameters have been shown to have some degree of predictive power for occurrence of secondary infection, significant heterogeneity exists regarding which test to use, with which cutoff values, at which timepoint and in which population [14,18]. As such, there is a need for clinically relevant stratification tools to assess the occult immune status of critically ill patients to better tailor care of such fragile individuals. Of note, in REALISM, the occurrence of secondary infection was somewhat late (median day 9 [IQR 6–15] after ICU admission) and predominantly occurred in patients who were still in the ICU.
As a sub-study of REALISM, the REALIST score was thus developed as a pragmatic approach to help clinicians better identify and stratify patients at high risk for secondary infection after the initial phase of resuscitation, using a simple set of relatively commonly available and technically robust biomarkers. The main objective of this study is to explore the predictive power of the REALIST score regarding subsequent secondary infection.
Methods
This is a sub-study of REALISM [17], for which a detailed protocol has been previously published [19]. In summary, REALISM is a prospective, observational cohort study of critically ill patients admitted with sepsis, severe trauma or planned surgery, performed from 2015 to 2018 at the Edouard Herriot Hospital (Hospices Civils de Lyon, France). The study protocol was approved by the institutional review board (Comité de Protection des Personnes Sud-Est II) under number 2015-42-2.
Inclusion criteria for REALISM were: adult patients admitted to the ICU with a clinical diagnosis of sepsis as defined by the 2016 SEPSIS-3 consensus guidelines [20]; or severe trauma with injury severity score (ISS) > 15; or surgical patients undergoing major surgeries, such as eso-gastrectomy, bladder resection with Bricker's reconstruction, cephalic pancreaticoduodenectomy and abdominal aortic aneurysm surgery by laparotomy. Exclusion criteria were any of the following: a pre-existing condition or treatment that could influence the patient's immune status, pregnancy, institutionalized patients and inability to obtain informed consent. Written informed consent was obtained from every patient or their representative upon inclusion in this protocol. When only the informed consent of a third party had been obtained at the time of inclusion, patients were informed of their participation as soon as possible and asked to give their own consent to continue in the study.
Sampling and clinical data collection
Regarding the present sub-study, samples were collected three times during the first week after enrollment: at day 1 or 2 (D1-2), D3-4 and D5-7, with the latter pre-emptively selected as the timepoint of interest. Peripheral whole blood was collected in one ethylenediaminetetraacetic acid (EDTA) tube at each timepoint for each patient. Tubes were immediately transferred to the lab and processed within 3 h after blood sampling for flow cytometry immune phenotyping and plasma cytokine level measurements.
The main cohort consisted of all patients initially enrolled in the REALISM study. As most secondary infections in the ICU occur more than 1 week after admission (median 9 [IQR 6–15] days in the REALISM study) and in an effort to select patients with persistently high risk of events, a predefined cohort of interest was formed, consisting of all patients who were still alive and in the ICU at D5-7. Patients who developed a secondary infection prior to their sampling day were excluded.
Patients' demographics, comorbidities, diagnosis, severity and clinical outcomes were prospectively collected and longitudinal follow-up was performed for 90 days. The following data were recorded: demographic information (age, gender, body mass index (BMI)), disease severity measured by the Simplified Acute Physiological Score (SAPS) II at ICU admission [21] and the Sequential Organ Failure Assessment (SOFA) score [22] measured at D1, D3-4 and D5-7. ISS was collected for trauma patients [23]. Hospital and ICU lengths of stay and survival were measured until day 90 after admission. Follow-up location after ICU discharge was recorded. During the hospital stay, patients were screened daily for exposure to invasive mechanical ventilation and for secondary infection occurrence. The main outcome of interest was the occurrence of secondary infection by day 30 after ICU admission, and prespecified secondary outcomes were mortality at day 30, days free from ICU at 30 days and days free from hospital at 30 days.
Definition of secondary infections
Information related to infections was collected by research nurses and reviewed and validated by a dedicated adjudication committee composed of three clinicians who were not involved in patients' recruitment or care and who scrutinized the data simultaneously. Confirmation of secondary infection occurrence by this committee was based on guidelines defined by the European Center for Disease Prevention and Control [24] and the Infectious Diseases Society of America [25]. "Definite" and "likely" infections were included and only the first secondary infection episode was considered in the analyses. The adjudication committee was blinded to the results of the immune parameters. Patients who died without being identified as having a "definite" or "likely" secondary infection were not censored from the analysis.
Score derivation
Five biological parameters were initially selected for analysis, based on their established association with critical illness immune perturbations, availability and cost in the clinical setting, and technical robustness and reproducibility outside expert centers. These 5 parameters were: monocyte HLA-DR by flow cytometry, percentage of immature neutrophils by flow cytometry (CD10−CD16low) [26], IL-6 and IL-10 concentration by enzyme-linked immunosorbent assay (ELISA) and total lymphocyte count by hemocytometer. Technical details may be found in the main study protocol [19].
Determination of cutoff points
Receiver operating characteristic (ROC) curves were constructed for each of the 5 parameters at each timepoint, with the relevant clinical endpoint defined as secondary infection at day 30. Parameters with poor predictive ability (area under the curve (AUC) < 0.6 at each timepoint) were removed from the model and excluded from further analysis. For the remaining parameters, optimal cutoff points were derived using the top-left index (minimal distance to the top left corner of the ROC curve) and each parameter was binarized into "low risk" and "high risk" at each timepoint.
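As a rough illustration of this cutoff procedure, the sketch below computes an AUC and a top-left-index threshold for a synthetic biomarker. The study's own analysis was done in R; this Python/scikit-learn version, and the `marker` and `infected` arrays, are purely hypothetical.

```python
# Illustrative only: synthetic data standing in for one immune parameter.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
infected = rng.integers(0, 2, size=200)           # 1 = secondary infection by D30
marker = rng.normal(loc=0.8 * infected, scale=1)  # hypothetical biomarker values

auc = roc_auc_score(infected, marker)  # parameters with AUC < 0.6 at every
                                       # timepoint would be dropped

# Top-left index: the threshold minimizing the Euclidean distance from the
# ROC point (FPR, TPR) to the perfect classifier at (0, 1).
fpr, tpr, thresholds = roc_curve(infected, marker)
cutoff = thresholds[np.argmin(np.hypot(fpr, 1 - tpr))]
high_risk = (marker >= cutoff).astype(int)        # binarized "high risk" flag
print(f"AUC = {auc:.2f}, top-left cutoff = {cutoff:.2f}")
```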
Finally, pairwise Cox association models were performed, "adjusting" parameters with each other, thus giving a model for each pair of parameters to identify complementary or redundant ones. Redundant parameters were excluded from the model and from further analysis, to ensure that each parameter independently brought information to the model (excluding the redundant parameter with the lowest individual HR). A score was thus constructed as the sum of the remaining binarized parameters (with 1 point for "high-risk" and 0 points for "low-risk" markers).
Score
The predictive power of the resulting combination (score) was evaluated at the prespecified timepoint of interest, D5-7, in patients still in the ICU at that timepoint (cohort of interest), with the absolute risk of secondary infection presented for each category. Univariate and adjusted Cox proportional hazards models were performed to adjust for a priori identified clinical risk factors for secondary infection, i.e. the physiological severity of illness (SOFA score at the timepoint of interest) and disruption of normal barriers by invasive mechanical ventilation at the timepoint of interest. Unadjusted and adjusted hazard ratios were computed.
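To make the adjustment step concrete, here is a minimal sketch using the Python lifelines package rather than the R software the authors used. All data below are simulated, and the effect built into the simulation (higher score leads to earlier infection) is for illustration only.

```python
# Simulated cohort: a 0-3 score plus the two clinical adjustment covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
score = rng.integers(0, 4, size=n)    # REALIST-style score (0-3) at D5-7
sofa = rng.integers(0, 12, size=n)    # SOFA score at D5-7
imv = rng.integers(0, 2, size=n)      # invasive mechanical ventilation at D5-7

time = np.minimum(rng.exponential(scale=40.0 / (1 + score)), 30.0)
event = (time < 30.0).astype(int)     # infection observed before D30 censoring

df = pd.DataFrame({"time": time, "event": event,
                   "realist_score": score, "sofa_d57": sofa, "imv_d57": imv})
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                   # HR for the score, adjusted for SOFA + IMV
```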
Data are presented as numbers and percentages (qualitative variables) and medians and 25th/75th percentiles (quantitative variables). Chi-square or Fisher's exact tests were used to compare qualitative variables. Quantitative variables were compared with the Mann-Whitney U test. The level of significance was set at 5%. Statistical analyses were computed with R software v3.6.2.
The cohort of interest (patients alive and still in the ICU at D5-7) consisted of 189 patients after excluding individuals with missing data. From this cohort of interest, 42 (22.2%) developed subsequent secondary infections by D30 (Additional file 1: Table S1). A detailed description of patients' characteristics is presented in Table 1.
Predictive power of individual parameters
ROC curves were computed for each prespecified immune parameter (mHLA-DR, percentage of immature neutrophils, IL-6 and IL-10 concentration and total lymphocyte count; see e-Table 1). All parameters had an AUC above 0.6 at least at one timepoint except for lymphocyte count (AUC 0.46, 0.56 and 0.52 at the three timepoints, respectively), which was thus excluded from further analysis. Cutoff values were computed for the four remaining parameters (mHLA-DR, percentage of immature neutrophils, IL-6 and IL-10). For consistency, we only used the cutoff values computed at D5-7 from the cohort of interest. Values were dichotomized ("high-risk" vs "low-risk"). When adjusted for each other through pairwise association, every parameter brought complementary information to the models except for IL-6 and IL-10, which were considered redundant. Because IL-6 had the lowest predictive power for occurrence of secondary infection (unadjusted HR 1.82 (0.95-3.45), p = 0.069), it was excluded from further analysis.
The three remaining parameters were thus mHLA-DR, percentage of immature neutrophils and IL-10 (Table 2). When measured at days 5-7 in the cohort of interest, all three parameters had excellent predictive power for occurrence of secondary infection at day 30, with percentage of immature neutrophils performing best. Predictive power for these three parameters was also computed for the earlier timepoints (D1-2 and D3-4; see Additional file 1).
The REALIST score
For these remaining parameters (mHLA-DR, immature neutrophils and IL-10), a point was given for each "high-risk" result as measured at D5-7. As such, for each patient still in the ICU at D5-7 (n = 189), a score between 0 and 3 was obtained, with 3 representing the highest risk of secondary infection. Incidence of secondary infection increased from 8% for patients with a score of 0 to 46% in patients with a score of 3 (Fig. 1). A higher REALIST score was also associated with increased mortality at 30 days (p = 0.001 by Fisher's exact test).
When adjusted for a priori identified clinical risk factors for secondary infection (SOFA score and invasive mechanical ventilation at the timepoint of interest), a higher REALIST score was independently associated with increased risk of secondary infection (Table 3). For instance, patients with a score of 3 were 3.2 times more likely to develop secondary infection than patients with a score of 0, independent of clinical risk factors (adjusted HR 3.22 (1.09-9.50), p = 0.034).
Discussion
As a sub-study of the REALISM project, the REALIST score was developed as a pragmatic and clinically applicable stratification strategy to identify patients with occult immune dysfunction. In our cohort of mixed ICU patients, the REALIST score was able to identify patients at high risk of secondary infection, an association that was independent of major clinical risk factors for infection. As such, this approach demonstrated that genuinely occult immune dysfunction can be identified in ICU patients with tools that are readily available to the frontline critical care physician outside expert research centers [27,28].
Insights from the REALISM study
The REALISM study outlined how the immune response to injury engages all components of the immune system and does not significantly vary with the type of injury (infectious vs sterile). The initial response is not associated with increased risk of death and secondary infection, illustrating that the initial pro-inflammatory immune response induced by injury should not necessarily be seen as a deleterious factor per se but rather represents an adaptive response to the injury. As induction of the pro-inflammatory effector response is associated with the concomitant development of regulatory mechanisms to protect the host from an overwhelming immune response, this also illustrates the complex interplay between the effector and regulatory mechanisms of the immune system in setting up a coordinated immune response to injury. This initial host response likely aims at controlling the aggression and at protecting the host from deleterious off-target effects of this tremendous immune response.
Thus, after the initial physiologic immune response to injury, it is the persistence (or delayed recovery) of immune alterations that predisposes patients to deleterious infectious events, independently of usual confounding factors. In this subgroup of patients, this persistently dysregulated immune profile cannot be considered as part of the physiologic response to injury but rather as a maladaptive evolution of the immune response.
Therefore, as tempting as it might be to try to predict subsequent infection in patients soon after ICU admission, this might be neither practical nor pertinent. A promising approach to immune monitoring, therefore, seems to be to target the persistence of immune alterations at the end of the first week of ICU stay, identifying patients in whom immune homeostasis is pathologically compromised. Knowledge of such occult immune dysfunction is not only interesting, it can also directly influence and hopefully optimize patients' care, whether through enhanced clinical surveillance, accelerated start of antimicrobials in case of suspected infection, removal of potentially superfluous invasive devices, or eventually through immune stimulation strategies [29]. In parallel, a REALIST score of 0 in an otherwise clinically stable patient would provide further reassurance and, in the right clinical context, possibly support de-escalating antimicrobial treatment. Finally, such immune function scoring could be used to enrich a study population with patients at high risk for secondary infection in the context of an eventual randomized controlled trial of immune stimulation.
Our study echoes the work of Conway-Morris et al. in the important INFECT study [14], in which the authors elegantly presented an immune score based on levels of mHLA-DR, Treg lymphocytes (CD4+/CD25++/CD127−) and dysfunctional neutrophils (nCD88) by flow cytometry after strict standardization across different centers. Their score was shown to predict secondary infection in critically ill patients with organ dysfunction (unadjusted HR 4.30 [CI 1.70-10.20] when measured at days 4-6), which interestingly is quite similar to the performance of the REALIST score (unadjusted HR 4.41 [CI 1.63-11.98] at D5-7). Even though Conway-Morris et al. selected different immune parameters than ours, and even though patients in the INFECT study had higher illness severity than patients in REALISM, these similar results tend to validate the concept and pertinence of combining parameters to tackle immune monitoring in the ICU across a wide range of patients, even those whose clinical status might seem somewhat reassuring.
Recently, Fang et al. described and validated a similar immune dysfunction score performed at day 1 of ICU admission to help predict mortality in critically ill patients. In that study [30], the combination of monocyte HLA-DR, IL-10 levels, G-CSF levels and the ratio of segmented neutrophils to monocytes predicted 28-day mortality with an AUC of 0.789. Interestingly, even though the timepoint and outcome of interest are different from those in the REALIST score, there are significant similarities between the chosen immune parameters, further reinforcing the rationale behind immune monitoring in the critically ill.
Choice of immune parameters
Besides mHLA-DR, which is the most studied and validated biomarker in the field with widespread standardization across laboratories [31,32], Conway-Morris et al. also used the level of Treg lymphocytes (CD4+/CD25++/CD127−) and neutrophil surface expression of CD88. Neutrophil CD88, a receptor for the complement anaphylatoxin C5a, has rarely been described in the critical care context and has only been reported in expert research centers [33][34][35]; it was not measured in the REALISM study. In parallel, the phenotypical identification of Treg lymphocytes is notoriously problematic, and the standardization of their staining by flow cytometry is challenging even with modern techniques in expert centers [36][37][38]. Of note, in the REALISM study, the percentage of Treg lymphocytes was not associated with occurrence of secondary infection at D30 in univariate analysis, whether measured at days 1-2, 3-4 or 5-7 (best HR at D3-4: 1.08 (0.84-1.38), p = 0.552).
As immunophenotyping has historically suffered from lack of standardization and reproducibility [39], particular importance must be attributed to these aspects if one hopes for immune monitoring tools to permeate into clinical practice. Thus, for the REALIST score, we purposely and pre-emptively chose immune parameters based on their immediate applicability in clinical practice outside expert centers. As such, technical robustness and reproducibility were major drivers for selecting otherwise relevant immune parameters from the REALISM study.
To complement mHLA-DR, we selected the expression of CD10 and CD16 on neutrophils as a technically simple marker of dysregulated granulopoiesis and inadequate granulocyte maturation. In other words, CD10lowCD16low neutrophils are immature and quite probably the immunophenotypic equivalent of band cells [26,40,41], although variability and imprecision in band cell measurement [42] have precluded such a definite association. Like band cells [43], an increase in CD10lowCD16low neutrophils has been associated with poor outcomes in sepsis patients, namely occurrence of secondary infection and death [41], and might also directly contribute to impaired T cell function [40]. Our study supports these past findings, as a higher proportion of circulating CD10lowCD16low neutrophils at days 5-7 was independently associated with occurrence of secondary infection at day 30. Lymphopenia has been found to be associated with poor outcome after sepsis and in other critical illnesses, and therapeutic interventions to increase lymphocyte levels after sepsis have been proposed and are under investigation [44]. Surprisingly, low lymphocyte levels were not associated with secondary infection or mortality at any timepoint in the REALISM cohort, a finding that might be due to the lower severity of illness and relatively low event rate.
IL-6 and IL-10 levels were both associated with occurrence of secondary infection in our study, although they brought redundant information in pairwise analysis. As IL-6 is a pro-inflammatory cytokine and IL-10 a globally anti-inflammatory cytokine, it was somewhat expected that elevated IL-10 levels would be associated with immune dysfunction, which was confirmed in this study. Of note, IL-10 levels are strongly correlated with IL-6 levels at all timepoints, as shown in the REALISM study [17], a finding that reflects the intricate and immediate interplay between the effector and regulatory limbs of the immune system. As such, IL-6 levels that fail to return towards normal at days 5-7 suggest immune dysfunction through impaired homeostatic mechanisms and/or an uncontrolled inflammatory focus with associated higher disease severity. Of note, the latter hypothesis is supported by the finding that the association between higher IL-6 levels at days 5-7 and secondary infection was not statistically significant after controlling for SOFA score and the presence of an invasive device.
Strengths and limitations
In our study, we assessed the performance of multiple parameters at multiple timepoints in a relatively large cohort of mixed critically ill patients. We tailored our score to be easily applicable in clinical practice, with a fixed and clear timepoint, reliable and technically robust parameters with a strong track record, and simple computation by bedside clinicians. We also demonstrated a strong association with secondary infection even after controlling for SOFA score and the presence of invasive mechanical ventilation. We elected to control for both of these variables, even though they are partially redundant (as respiratory support is included in the SOFA score), because of the strong and clinically important association between invasive mechanical ventilation and risk of infection (namely, pneumonia). This represents a more stringent statistical correction than seen in other similar studies, further supporting the claim that our score is not a mere marker of disease severity but a genuine immune monitoring tool that provides new and previously occult information to the clinician.
Among significant limitations, our study was single-centre and suffers from relatively low disease severity and a low event rate. Although this reduced the strength of the association between parameters and outcome, it also suggests that the score is applicable to a wide array of patients with varying disease severity, as secondary infections can occur even in patients with low disease severity and genuinely occult immune dysfunction. Importantly, the REALIST score will have to be further validated in a separate multicentre cohort.
Interpretation
In conclusion, we derived and presented the REALIST score, a simple and pragmatic stratification strategy which provides critical care clinicians with a clear and useful assessment of the occult immune status of their patients. This new tool could help optimize care of these fragile individuals and could contribute to designing future trials of immune stimulation strategies. Ultimately, we believe this score, in conjunction with the main REALISM study, provides important didactic value, as the question of critical illness-induced immune dysfunction warrants widespread discussion within the critical care community if we are to adapt our practice to this complex phenomenon and provide lasting improvements in care.
Author contributions JAT designed the study, analysed and interpreted the data and wrote the manuscript. FP and LK contributed substantially to the study design and performed the statistical analyses. JT, FV and TR contributed substantially to the study design, data analysis and in writing the manuscript. KBP, ACL, LQ, CV and LKT helped in data interpretation and in writing the manuscript. GM was the instigator of the study, contributed significantly to the study design, data analysis and interpretation and in writing the manuscript. JAT and GM are both guarantors of this manuscript, taking responsibility for the integrity of the work as a whole.
Funding
JAT was funded by the Royal College of Physicians of Canada and by the Fonds de Recherche Québec Santé. FP is an employee of bioMérieux SA, an in vitro diagnostics company, and works in a joint research unit co-funded by the Hospices Civils de Lyon and bioMérieux. LK works in a joint research unit co-funded by the Hospices Civils de Lyon and bioMérieux. JT is an employee of bioMérieux SA and works in a joint research unit co-funded by the Hospices Civils de Lyon and bioMérieux. KBP is an employee of bioMérieux SA. ACL works in a joint research unit co-funded by the Hospices Civils de Lyon and bioMérieux. LQ is an employee of Sanofi Pasteur. CV is an employee of BIOASTER. LKT is an employee of and holds stock and shares in GlaxoSmithKline. FV, TR and GM work in a joint research unit co-funded by the Hospices Civils de Lyon and bioMérieux. This study received funding from the Agence Nationale de la Recherche through a grant awarded to BIOASTER (Grant number #ANR-10-AIRT-03) and from bioMérieux, Sanofi and GSK.
Availability of data and materials
Original data will be made available upon reasonable request to the corresponding author.
Declarations Ethics approval and consent to participate
The study protocol was approved by the institutional review board (Comité de Protection des Personnes Sud-Est II) under number 2015-42-2.
Consent for publication
Not applicable.
Pretibial dystrophic epidermolysis bullosa associated with aberrant exon splicing of type VII collagen
Epidermolysis bullosa (EB) is a group of genetic disorders characterized by blisters and erosions/ulcerations in response to otherwise benign mechanical forces applied to the skin. Pretibial epidermolysis bullosa (PEB) is a form of EB that most often presents with blistering, ulceration, scarring, and milia localized to the bilateral legs. We report a case of a man in his 50s presenting with blistering and scarring of his bilateral legs caused by PEB, found to be associated with a mutation in COL7A1 that results in exon skipping.
INTRODUCTION
Epidermolysis bullosa (EB) is a group of genetic disorders characterized by blisters and erosions/ulcerations in response to otherwise benign mechanical forces applied to the skin. Pretibial epidermolysis bullosa (PEB) is a form of EB that most often presents with blistering, ulceration, scarring, and milia localized to the bilateral legs. We report a case of a man in his 50s presenting with blistering and scarring of his bilateral legs caused by PEB, found to be associated with a mutation in COL7A1 that results in exon skipping.
CASE REPORT
A 57-year-old man with type 2 diabetes mellitus presented with a four-year history of blistering on the lower extremities. Every 2 to 3 weeks, he had burning, painful red plaques that blistered over the course of hours and eventually ruptured, leaving crusted erosions and, ultimately, violaceous scars with milia. He denied oral ulcers or blisters in any other body region. No other family members were affected.
Prior biopsies found dermal fibrosis and dilated blood vessels as well as scar and regenerative changes. Before presenting at our site, his condition had been treated with compression stockings, topical steroids, and a prednisone taper, none of which provided improvement. Previous laboratory tests included antinuclear antibody, anti-dsDNA, C3, C4, hepatitis B serologies, hepatitis C serologies, rheumatoid factor, angiotensin-converting enzyme, anti-CCP and anti-U1RNP, all of which were negative/normal. A lower extremity vascular Doppler ultrasound scan was performed and read as normal.
Physical examination found tense bullae and well-circumscribed violaceous plaques of varying sizes in the pretibial region as well as on the calves, some with milia and ulceration (Fig 1, A-C). Lower extremity nail plates showed miniaturization, thickening, and yellow discoloration.
A biopsy of a tense bulla on the right calf was performed. Subepidermal separation with dermal fibrin and underlying scar with inflammation were noted. Direct immunofluorescence microscopic studies showed linear trace IgG and C3 along the dermoepidermal junction. A split skin study was negative, as were indirect immunofluorescence studies.
Whole exome sequencing was subsequently performed, showing a heterozygous splice mutation in COL7A1 on chromosome 3p21.1, specifically the deletion of 2 nucleotides before coding exon 108 (c.7984-2delA). This alteration, which abolishes the native acceptor splice site, is consistent with a diagnosis of dystrophic epidermolysis bullosa (DEB) with autosomal dominant inheritance. Polymerase chain reaction (PCR) analysis of cDNA from our patient was consistent with loss of exon 108 in the mRNA from one of the patient's COL7A1 alleles (Fig 2). The patient continues to report waxing and waning of his disease with wound care and avoidance of trauma to the lower extremities.

Fig 2. A, Depicted is the genomic DNA (center) with exons illustrated as boxes and intervening introns illustrated as a solid black line. A red X marks the expected variation site for our patient. The expected mRNA is depicted as labeled for wild type (top) and for mRNA with a skipped exon 108 (bottom). PCR amplification products (dotted lines) are shown with the primers that were used (black boxes); resulting product lengths are labeled. B, Products of the indicated PCR reactions are shown. Lane numbers correspond to the DNA ladder, with white arrowheads indicating ladder band lengths: (1) ladder, (2) no-template PCR reaction, (3) healthy control/wild-type genomic template, and (4) PEB patient genomic template. A blue arrowhead is shown at approximately 251 base pairs, where the wild-type template product is expected; a red arrowhead is shown at approximately 188 base pairs, where the exon 108-skipped template product is expected.
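As a quick sanity check on the gel result, the product lengths quoted in the figure imply the size of the skipped exon; the sketch below makes that arithmetic explicit. The 63 bp figure is our inference from 251 − 188, not a value stated in the report.

```python
# Back-of-the-envelope check of the RT-PCR gel result described above.
# Product sizes are taken from the figure; the implied exon length is
# inferred from their difference.
WT_PRODUCT_BP = 251
SKIPPED_PRODUCT_BP = 188

exon108_len = WT_PRODUCT_BP - SKIPPED_PRODUCT_BP
print(f"Implied exon 108 length: {exon108_len} bp")

# 63 is a multiple of 3, so skipping the exon would preserve the reading
# frame -- an in-frame internal deletion of type VII collagen.
assert exon108_len % 3 == 0
```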
DISCUSSION
The symptoms of DEB, a mechanobullous disease, are prompted by dysfunction of the anchoring fibrils within the basement membrane caused by a pathogenic mutation in type VII collagen. 1 When the skin fragility, blistering, and resultant scarring are localized to the pretibial region, often with pruritus and nail dystrophy (especially of the toenails), a diagnosis of PEB, a rare subtype of DEB, should be considered. 1 As in our patient, this diagnosis may not be made until later in life, although PEB can also present in childhood. 2 The disease tends to be progressive, with some patients reporting worsening at the onset of puberty or in heat or humidity. 3 Marked improvement in adulthood, however, has been reported in 2 female patients. 3 Familial cases of dominant inheritance in Taiwan, Japan, China, Finland, and Belgium have been studied, with pedigrees displaying glycine substitution mutations in the C-terminal portion of COL7A1. [3][4][5] Recessive cases have been reported at about half the rate of dominant cases, with mutations distributed along the entire length of the COL7A1 gene. 5 Prior to our patient, two patients with splice site mutations had been reported. 5,6 We provide a previously unreported dominant mutation associated with exon skipping, supporting genomic investigation of similar dermatologic cases presenting during adulthood.
Surgical site wound infection and pain after laparoscopic repeat hepatectomy for recurrent hepatocellular carcinoma
Abstract This study aimed to compare the effects of laparoscopic repeat liver resection (LRLR) and open repeat liver resection (ORLR) on surgical site wound infection and pain in recurrent hepatocellular carcinoma. PubMed, EMBASE, Cochrane Library, China National Knowledge Infrastructure, and Wanfang Data were systematically searched for studies comparing LRLR with ORLR for the treatment of recurrent hepatocellular carcinoma, with a search timeframe from their inception to December 2022. Two investigators independently screened the literature, extracted information, and evaluated the quality of the studies according to the inclusion and exclusion criteria. This study was performed using RevMan 5.4 software. A total of 20 publications with 4380 patients were included, with 1108 and 3289 patients in the LRLR and ORLR groups, respectively. The results showed that LRLR significantly reduced the surgical site wound infection rate (1.71% vs. 5.16%, odds ratio [OR]: 0.32, 95% confidence interval [CI]: 0.18-0.56, P < .001), superficial wound infection rate (1.29% vs. 4.92%, OR: 0.29, 95% CI: 0.14-0.58, P < .001), bile leakage (3.34% vs. 6.05%, OR: 0.59, 95% CI: 0.39-0.90, P = .01), organ/space wound infection rate (0.4% vs. 5.11%, OR: 0.23, 95% CI: 0.07-0.81, P = .02), and surgical site wound pain (mean difference: −2.00, 95% CI: −2.99 to −1.02, P < .001). Thus, the findings of this study showed that LRLR for recurrent hepatocellular carcinoma significantly reduced wound infection rates and improved postoperative wound pain.
• LRLR group significantly reduced postoperative wound pain compared with the ORLR group
| INTRODUCTION
Primary liver cancer is the seventh most common cancer in the world, and hepatocellular carcinoma is the main pathological subtype of primary liver cancer, accounting for 75% to 85% of primary liver cancers. Hepatocellular carcinoma is also the second leading cause of cancer-related deaths worldwide. 1 Hepatectomy is one of the mainstays of treatment for primary liver cancer; however, its efficacy is not optimal, with a recurrence rate of >70% at 5 years postoperatively. 2-5 Conventional open liver resection (OLR) has become highly advanced; however, this surgical method is highly traumatic and has a long recovery time, especially for older patients and those with cirrhosis or other underlying diseases. This procedure carries various risks of complications and the prognosis is poor, which affects patients' quality of life. With the rapid development of minimally invasive technology in recent decades, laparoscopic liver resection (LLR) has been increasingly used for the treatment of hepatocellular carcinoma. Studies have shown that there is no significant difference in the long-term prognosis of patients treated with LLR and OLR; however, LLR can significantly reduce the incidence of postoperative complications, including surgical site wound infection. 6,7 Despite various preventive measures, the incidence of postoperative surgical site wound infection ranged from 5% to 20% in patients undergoing hepatectomy. 10 Compared with OLR, LLR may reduce complications associated with postoperative surgical site wound infections. 6,7 However, most previous studies have had potential selection bias in terms of cases, and laparoscopic repeat liver resection (LRLR) has not been widely adopted because of the technical challenges of resecting recurrent tumours in patients with a history of hepatectomy. This study aimed to compare the effect of LRLR with that of open repeat liver resection (ORLR) on wound infection and pain in recurrent hepatocellular carcinoma using a meta-analysis, to provide a clinical reference.
| Inclusion and exclusion criteria
The inclusion criteria were as follows: (1) studies involving recurrent hepatocellular carcinoma; (2) studies comparing LRLR with ORLR for recurrent hepatocellular carcinoma; and (3) outcome indicators including surgical site wound infection, superficial surgical site infection (SSI), deep SSI, bile leakage, organ/space SSI, and visual analogue scale (VAS) score. The exclusion criteria were as follows: (1) studies that did not compare LRLR with ORLR; (2) studies for which the full text was not available; and (3) conference papers, abstracts, reviews, case reports, or duplicate publications.
| Data extraction and quality assessment
All retrieved studies were screened independently by two authors according to the inclusion and exclusion criteria described above. In case of disagreement, a third investigator was involved to discuss and resolve disputes. The data extracted included the author, year of publication, country, study sample size, sex, age, and outcome indicators. The quality of the included studies was assessed using the Newcastle-Ottawa scale (NOS) in terms of selectivity, comparability, and exposure.
| Statistical analysis
The meta-analysis was performed using RevMan 5.4 software. The odds ratio (OR) and 95% confidence interval (CI) were selected to evaluate the count data, and the mean differences (MDs) and their 95% CIs were selected to evaluate the measurement data. The heterogeneity of the included studies was assessed using the χ² test and I², and if the statistical heterogeneity between the results of the studies was large (P < .1 or I² > 50%), a random-effects model was used for the meta-analysis; otherwise, a fixed-effects model was used. A sensitivity analysis was performed using a sequential exclusion method to assess the robustness of the results. Funnel plots were used to assess publication bias when more than 10 papers were included.
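For readers unfamiliar with fixed-effect pooling of dichotomous outcomes, the sketch below computes a Mantel-Haenszel pooled OR and repeats it with each study left out, mirroring the sequential exclusion method. The actual analysis was run in RevMan 5.4; this Python re-implementation is illustrative, and the per-study 2x2 counts (events/total, LRLR vs ORLR) are hypothetical.

```python
# Illustrative Mantel-Haenszel pooling with a leave-one-out sensitivity check.
def mh_pooled_or(studies):
    """Mantel-Haenszel pooled odds ratio from (a, n1, c, n2) tuples,
    where a/n1 are events/total in one arm and c/n2 in the other."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        b, d, n = n1 - a, n2 - c, n1 + n2
        num += a * d / n
        den += b * c / n
    return num / den

studies = [(2, 60, 12, 240), (1, 50, 9, 200), (3, 90, 15, 300)]  # hypothetical
print(f"Pooled OR: {mh_pooled_or(studies):.2f}")

# Sequential exclusion: re-pool with each study omitted in turn.
for i in range(len(studies)):
    loo = studies[:i] + studies[i + 1:]
    print(f"Excluding study {i + 1}: OR = {mh_pooled_or(loo):.2f}")
```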
| Surgical site wound infection
Eleven studies reported surgical site wound infections, with 703 and 2830 patients in the LRLR and ORLR groups, respectively. There was no statistical heterogeneity among the studies (I² = 0%, P = .71); therefore, a fixed-effects model was used for the meta-analysis. The results of the meta-analysis showed that the incidence of postoperative surgical site wound infection was significantly lower in the LRLR group than in the ORLR group (1.71% vs. 5.16%; OR: 0.32; 95% CI: 0.18-0.56, P < .001) (Figure 2).
| Superficial wound infection
Four studies reported superficial wound infections, with a total of 541 and 2683 patients included in the LRLR and ORLR groups, respectively. There was no statistical heterogeneity among the studies (I² = 0%, P = .96); therefore, a fixed-effects model was used for the meta-analysis. Results of the meta-analysis showed that the incidence of postoperative superficial wound infection was significantly lower in the LRLR group than in the ORLR group (1.29% vs. 4.92%, OR: 0.29, 95% CI: 0.14-0.58, P < .001) (Figure 3).
Figure 2. Forest plot of the comparison of LRLR versus ORLR for surgical site wound infection.

| Deep wound infection

Three studies reported deep wound infections, with a total of 96 and 165 patients included in the LRLR and ORLR groups, respectively. There was no statistical heterogeneity among the studies (I² = 0%, P = .62); therefore, a fixed-effects model was used for the meta-analysis. Results of the meta-analysis showed that the incidence of postoperative deep wound infection was higher in the LRLR group than in the ORLR group; however, the difference between the two groups was not statistically significant (3.13% vs. 2.42%, OR: 1.24, 95% CI: 0.30-5.16, P = .77) (Figure 4).
| Organ/space wound infection
Five studies reported on organ/space wound infection, with 248 and 352 patients included in the LRLR and ORLR groups, respectively. There was no statistical heterogeneity among the studies (I² = 0%, P = .96); therefore, a fixed-effects model was used for the meta-analysis. Results of the meta-analysis showed that the incidence of postoperative organ/space wound infection was significantly lower in the LRLR group than in the ORLR group (0.4% vs. 5.11%, OR: 0.23, 95% CI: 0.07-0.81, P = .02) (Figure 5).
| Bile leakage
Eleven studies reported on bile leakage, with a total of 808 and 3007 patients included in the LRLR and ORLR groups, respectively. There was no statistically significant heterogeneity among the studies (I² = 16%, P = .29); therefore, a fixed-effects model was used for the meta-analysis. Results of the meta-analysis showed that the incidence of postoperative bile leakage was significantly lower in the LRLR group than in the ORLR group (3.34% vs. 6.05%, OR: 0.59, 95% CI: 0.39-0.90, P = .01) (Figure 6).
| VAS
Five studies reported VAS scores, with 186 and 180 patients in the LRLR and ORLR groups, respectively. Statistically significant heterogeneity was observed among the studies (I² = 95%, P < .001); therefore, a random-effects model was used for the meta-analysis. The pooled result showed that postoperative surgical site wound pain was significantly lower in the LRLR group (MD: −2.00, 95% CI: −2.99 to −1.02, P < .001) (Figure 7).
Figure 6. Forest plot of the comparison of LRLR versus ORLR for bile leakage.

Figure 7. Forest plot of the comparison of LRLR versus ORLR for VAS.

Figure 8. Funnel plots for publication bias. A, Surgical site wound infection. B, Bile leakage.
| Sensitivity analysis and publication bias
A sensitivity analysis using sequential exclusion showed that the heterogeneity of the VAS outcome, although high, did not change significantly when each study was excluded and the analysis repeated, and the final results were unaffected, indicating that the results of this study are reliable. The results of the publication bias tests for the bile leakage and surgical site wound infection outcome indicators are shown in Figure 8. The funnel plots showed a symmetrical distribution among the studies, indicating no significant publication bias in the results of this study.
| DISCUSSION
Recurrence of hepatocellular carcinoma is an important factor affecting the prognosis of patients with liver cancer; it is also a difficult area in the treatment of hepatocellular carcinoma. 29 For patients with recurrent liver cancer who meet certain indications, secondary surgical hepatectomy can achieve higher long-term survival rates than non-surgical treatment. 30 Moreover, hepatectomy for recurrent hepatocellular carcinoma is more technically challenging than that for primary hepatocellular carcinoma. Surgery carries a high risk of bleeding and intestinal injury because of cirrhosis and abdominal adhesions, further complicated by the altered anatomy and major vascular dissection caused by previous surgery. 3 LRLR and ORLR are the two main clinical approaches to hepatic resection of recurrent hepatocellular carcinoma. Compared with ORLR, LRLR reduces intraoperative bleeding, decreases the incidence of postoperative complications, and shortens the length of hospital stay. 6,7,10 This meta-analysis included 20 studies that examined the effects of LRLR and ORLR on wound infection and pain in recurrent hepatocellular carcinoma. The results showed that the incidences of postoperative surgical site wound infection, superficial SSI, and organ/space SSI were significantly lower in the LRLR group than in the ORLR group. Shirai et al. compared the incidence of postoperative infection in patients with hepatocellular carcinoma who underwent LLR with that in patients who underwent OLR. The results showed that the incidences of organ/space SSI (1.2% vs. 7.8%) and remote infection (0.3% vs. 5.1%) were significantly reduced with LLR, which is consistent with our findings. 31 Matsukuma et al. showed that partial resection and left hepatic tumour resection using a laparoscopic approach reduced the rate of superficial SSI, which is also consistent with the results of this study. 32 Organ/space SSI after hepatectomy occurs mainly due to bacterial infection in the abdominal cavity near the liver cut surface. 33 Intra-abdominal infections are associated with bacterial contamination due to microbial invasion from intestinal bile or skin flora during surgery. 34 Some reports suggest that postoperative bile leakage and repeat liver resection are independent risk factors for postoperative organ/space SSI. 35,36 Repeat hepatectomy has also been reported as a risk factor for bile leakage, especially after three or more repeat hepatectomies. 37 The results of this study showed that the incidence of postoperative bile leakage was significantly lower in the LRLR group than in the ORLR group, a finding that may be associated with the high incidence of wound infection after ORLR. 35,36 In terms of postoperative recovery, the results of this study showed that surgical site wound pain was significantly reduced in the LRLR group, which may be because laparoscopic surgery causes less trauma than open surgery. A large abdominal incision and prolonged exposure of the surgical field during open surgery can easily cause electrolyte disturbances, and a large incision with extensive debridement of previous surgical adhesions can increase the risk of postoperative incisional infection. Zhang et al. showed that the LLR group had low postoperative pain ratings and patients were able to get out of bed sooner, thus accelerating the recovery of postoperative gastrointestinal function and improving their postoperative quality of life. 38
This systematic meta-analysis is the first to compare the effects of LRLR and ORLR on wound infection and pain in recurrent hepatocellular carcinoma. The included studies involved several countries and regions worldwide, which increases the generalisability of the present findings and provides a reference for clinical treatment. However, this meta-analysis has some limitations. (1) The included studies were all non-randomised controlled trials; although they were all high-quality studies, this limits the generalisation of the findings to some extent. (2) Significant heterogeneity was observed for some outcome indicators, which may be due to differences in the number of cases, study design, baseline patient characteristics, and surgical techniques among the included studies. No randomised controlled trials were included, and most studies were cohort or case-control studies. (3) The sample size of the included studies was small, and the resulting lower statistical power may hinder the generalisation of the study results.
| CONCLUSION
In conclusion, compared with ORLR, LRLR as a treatment for recurrent hepatocellular carcinoma significantly reduced the incidence of postoperative wound infection and improved postoperative surgical site wound pain. Therefore, we believe that LRLR is a safe and feasible alternative to open hepatectomy for recurrent hepatocellular carcinoma in clinical centres with experience in laparoscopic and liver surgery.
Figure 1. Flow chart of study identification and selection.

Figure 3. Forest plot of the comparison of LRLR versus ORLR for superficial wound infection.

Figure 4. Forest plot of the comparison of LRLR versus ORLR for deep wound infection.

Figure 5. Forest plot of the comparison of LRLR versus ORLR for organ/space wound infection.

Table 1. Characteristics of included studies.
ANALYSIS OF MICROSTRUCTURE AND HARDNESS OF WELDED JOINTS OF DISSIMILAR STEEL OF AISI 1018 - AISI 304
This research studies the microstructure and hardness of shielded metal arc welding (SMAW) joints of dissimilar metals, austenitic stainless steel (SS) AISI 304 and low carbon steel (LCS) AISI 1018, using E308 filler metal. Two procedures were used: the LCS-to-LCS procedure, carried out without post weld heat treatment (PWHT), and the SS-to-SS procedure, followed by PWHT at 1000 °C with a holding time of 12 minutes and subsequent shock cooling in water. The difference in the PWHT stage between the two procedures is expected to affect the microstructure and hardness of the welds. This was done to establish a more precise SMAW procedure for welding dissimilar metals such as AISI 304 to AISI 1018, so that the risk of chromium carbide precipitation and low joint hardness can be reduced. The results showed chromium carbide precipitates in the heat-affected zone (HAZ) of AISI 304, grain enlargement in the HAZ of both steels, and formation of the delta ferrite phase in the weld area with the LCS-to-LCS procedure. While the hardness in the HAZ of AISI 304 decreased, it increased in the HAZ of AISI 1018 under all welding conditions. In addition, PWHT increased the hardness on the AISI 1018 side due to the formation of martensite, decreased the hardness on the AISI 304 side, and reduced the delta ferrite phase and the number of chromium carbide precipitates.
INTRODUCTION
In the industrial world, dissimilar welding between austenitic stainless steel (SS) and low carbon steel (LCS) is widely used in industrial components, such as pressure vessels, boilers, heat exchangers in power plants, nuclear reactors, petrochemicals, and the oil and gas industries [1,2]. The combination of the two types of steel results in changes in the physical and chemical properties of the joint area. The heat supplied during welding of SS to LCS can activate the formation of compounds of the two alloying elements; consequently, chromium depletion to below 12% occurs at the grain boundaries [3]. Chromium carbide precipitates and solidification cracking can form in these areas and may degrade corrosion resistance [2-5].
Several studies have sought to reduce these issues by using different welding procedures. Wu W. et al. used the laser beam welding (LBW) technique to join SS and LCS, comparing two welding speeds, 12 mm/s and 24 mm/s [6]. That study showed that the greater the welding speed, the smaller the heat-affected zone (HAZ) on the LCS, with the highest hardness in the weld area near the LCS edge of the weld metal (WM) (about 0.6 mm from the joint boundary on both sides); tensile strength was the same at both speeds, but elongation was greater at the higher speed. Unfortunately, in dissimilar welded joints, a large corrosion current density arises from galvanic effects [6]. Areas with corrosion potential can be avoided by reducing chromium carbide production. Prabakaran et al. pursued this by optimizing the laser power, welding speed, and focal distance in LBW of SS (AISI 316) and LCS (AISI 1018) [1]. That study found that post weld heat treatment affects the mechanical quality of the welded joints, with tensile strength reaching 475.112 MPa at 960 °C, and can reduce the grain size and chromium carbide production; as a result, at that temperature the sample exhibits high corrosion resistance. The parameters used were 2600 W laser power and 1.5 m/min welding speed with a focal distance of 20 mm [1]. This shows that, with the proper procedure and parameters, mechanical strength and corrosion resistance can be improved.
Welding with the LBW technique is still rarely used in industry due to its high cost. In general, the welding process most widely used is the shielded metal arc welding (SMAW) technique [2]. This technique has several advantages: it is more flexible, has lower production costs, and produces joints with strong metallic bonds [7]. However, a study of dissimilar welding between SS (AISI 304) and LCS (AISI 1020) by Wichan et al. showed that this technique produces lower mechanical strength and corrosion resistance than the gas tungsten arc welding (GTAW) technique [2]. As with LBW, the mechanical strength and corrosion resistance of dissimilar welds made with SMAW can be improved by using the proper welding procedure.
Based on the Welding Procedure Specification (WPS), LCS and SS welding with the SMAW technique involve different stages. SS welding with the SMAW technique includes a post weld heat treatment (PWHT) stage, whereas LCS welding does not. The problems of dissimilar SMAW welding mentioned earlier are expected to be overcome by one of these welding procedures. Therefore, both welding procedures are applied to dissimilar welding of SS and LCS using the SMAW technique in this study.
The microstructures and joints of the dissimilar metal welds made with the two procedures above are compared and analyzed to determine which welding procedure is better suited to dissimilar welding with the SMAW technique, so as to increase the strength of the welded joints and reduce the risk of corrosion. The steels used in this study are AISI 304 and AISI 1018, chosen because both are widely used. In addition, AISI 1018 is a low carbon steel that has good weldability and mechanical properties and is easy to obtain, although its welds crack easily and its toughness is reduced [5]. Meanwhile, AISI 304 is a stainless steel with good corrosion resistance that can be applied at low and high temperatures [4].
The difference in PWHT between the two welding procedures gives the welded joints different characteristics; this is confirmed through microstructure analysis and joint hardness testing. So far, there have been no scientific publications on the microstructure and hardness of welded joints of AISI 304 and AISI 1018 made with the SMAW technique, especially comparing the use of the LCS and SS welding procedures within SMAW.
Materials and Instruments
The materials used are LCS AISI 1018 (hardness 101.1 HRB) and SS AISI 304 (hardness 78.1 HRB), both with the same thickness of 10 mm. The electrode is AWS A5.4 E308 (diameter 3.2 mm, welding speed 2.5 mm/s, 23.5 V, DCEP polarity, and heat input of 45.12 J/mm) from the NIKKO Steel company; the chemical compositions of both steels and the electrode can be seen in Table 1 below.
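For orientation, the conventional arc heat-input relation is HI = ηVI/v. The sketch below is a minimal illustration of that formula: the 23.5 V and 2.5 mm/s values come from the paragraph above, while the welding current and arc efficiency are assumptions (neither is stated here), so the printed number is not expected to reproduce the quoted 45.12 J/mm, which may use a different definition or units.

```python
# Minimal sketch of the standard arc heat-input relation HI = (eta * V * I) / v.
def heat_input_j_per_mm(voltage_v: float, current_a: float,
                        travel_speed_mm_s: float, efficiency: float = 0.8) -> float:
    """Arc heat input per unit length of weld bead, in J/mm."""
    return efficiency * voltage_v * current_a / travel_speed_mm_s

# 23.5 V and 2.5 mm/s are from the text; 120 A and eta = 0.8 are assumed
# values typical for SMAW, used here only for illustration.
print(f"{heat_input_j_per_mm(23.5, 120.0, 2.5):.0f} J/mm")
```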
The equipment used in this study consists of a Derlikon AD 250 WR SMAW welding machine, a Nabertherm furnace for the post weld heat treatment (PWHT) process, a Struers Labpol-21 grinding machine, an E Kunzman (Belgium) milling machine, a Viebahn MS25 cutting machine, a Struers Labpol-22 polishing machine, an optical microscope equipped with Omnimet Image Analyzer software to image the test samples, a Rockwell hardness testing machine (Zwick/Roell ZHR) to test sample hardness, and others.
SMAW Welding Procedures
AISI 304 and AISI 1018 are cut to 120 mm x 40 mm x 10 mm and ground on each side. Two specimens of each steel are then prepared with a milling machine. Each side of the specimens is smoothed and cleaned with alcohol. Dissimilar welding of the two sets of specimens is carried out in accordance with the Welding Procedure Specification (WPS), using the parameters in Table 2 and the joint design scheme shown in Figure 1.
PWHT Procedures
In this process, for the SS-to-SS welding procedure, a set of welded specimens is heated at 1000 °C for 12 minutes at a furnace heating rate of ±9.8 °C/minute. After that, the specimens are removed and shock cooling is conducted in tap water.
Metallographic Test
Metallography was conducted to examine the microstructure of the AISI 1018 and AISI 304 sides and the weld metal. For preparation, the weld sample was cut, sanded using silicon carbide emery paper, and polished on a velvet cloth on a polishing machine using an alumina-containing paste mixed with water. The specimen was then etched with 2% nital to reveal the microstructure on the low carbon steel side, and with aqua regia to reveal the microstructure on the AISI 304 stainless steel side and the weld metal. After preparation, the microstructure was observed using an optical microscope.
Hardness Test
This test was conducted to determine the hardness of the as-received specimens and the distribution of hardness across the weld area, on both the low carbon steel and stainless steel sides. The instrument used is a Rockwell B-scale hardness testing machine operated according to the ASTM (American Society for Testing and Materials) E18 standard, which requires a minimum sample thickness of 0.56-1.02 mm for readings in this range [8].
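ASTM E18 ties the minimum specimen thickness to the indentation depth, commonly taken as at least ten times the depth; for the HRB scale the depth can be estimated from the definition HRB = 130 − h/0.002 mm. A sketch under those assumptions roughly reproduces the 0.56-1.02 mm range quoted above:

```python
# Sketch: estimated minimum specimen thickness for Rockwell B readings,
# assuming the common >= 10x-indentation-depth rule and the HRB definition
# HRB = 130 - h / 0.002 mm, i.e. h = (130 - HRB) * 0.002 mm.

def min_thickness_mm(hrb: float, factor: float = 10.0) -> float:
    depth_mm = (130.0 - hrb) * 0.002
    return factor * depth_mm

for hrb in (101.1, 78.1):  # the two base-metal hardness values in this study
    print(f"HRB {hrb}: >= {min_thickness_mm(hrb):.2f} mm")
# Roughly 0.58 mm and 1.04 mm, consistent with the 0.56-1.02 mm quoted above.
```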
RESULTS AND DISCUSSION
Macro photos of the results of dissimilar welding between AISI 304 and AISI 1018 with the two procedures can be seen in Figure 2. In Figure 2(a), the LCS-to-LCS weld shows a HAZ on the LCS AISI 1018 side, so the region does not look homogeneous; this is caused by the thermal cycle in that area. On the SS AISI 304 side, the HAZ is not visible because the etching was not done simultaneously (AISI 304 first, then AISI 1018), so the macro photo of the SS AISI 304 side does not reflect the actual etching result. In the specimen welded with the SS-to-SS procedure (Figure 2(b)), the HAZ is not visible on either side because the PWHT process made the microstructure more uniform. However, there are defects in the weld metal in the form of fine holes. These defects may form through the release of gas caused by the difference in solubility limit between liquid and solid metal at the freezing temperature, gas formation from chemical reactions in the weld metal, and/or gas infiltration into the arc atmosphere. In addition, electrodes that are not preheated can also cause this type of defect.
To understand the effect of the welding procedures on the microstructure of the two metals, the microstructure before welding must be known; it is shown in Figure 3. The microstructure in Figure 3(a) is that of stainless steel AISI 304, composed of an austenite (γ) matrix and twins. The twins are annealing twins, formed by deformation followed by annealing during manufacture. In Figure 3(b), the microstructure of the AISI 1018 low carbon steel base metal consists of two phases: the ferrite phase (α), which is light in colour (white), and the pearlite phase (α+Fe3C), which tends to be dark. This microstructure is typical of LCS.
The microstructure of the specimens after each welding process, LCS to LCS and SS to SS, can be seen in Figure 4. On the BM AISI 304 side of the LCS-to-LCS weld (Figure 4(a)), BM AISI 304/WM - LCS to LCS, there are differences in grain size: the larger grains are the HAZ while the smaller grains are the base metal, BM AISI 304. As-welded stainless steel is usually in a solution-annealed or hot-rolled condition, so the heat from welding results in recrystallization and grain growth, which may soften the heat-affected zone (HAZ). On the other hand, the formation of the ferrite phase along the grain boundaries of the HAZ will prevent grain growth and cracking in the HAZ. In addition, the ferrite phase was seen at the weld or dilution (fusion line, FL) boundary. This occurs because diffusion between the elements of the weld metal (WM) and the base metal (BM) is imperfect, resulting in an accumulation of the delta ferrite phase at the welding boundary. In the SS-to-SS weld (Figure 4(b)), BM AISI 304/WM - SS to SS, the microstructure formed after PWHT is fully austenitic and contains twins. In the WM area, the amount of delta ferrite phase decreases compared with the LCS-to-LCS welding process; this occurs because the delta ferrite phase dissolves back into the austenite phase during the solidification process [9].
Solidification in the LCS-to-LCS procedure results in a WM microstructure, WM - LCS to LCS, that is black in colour and contains a ferrite phase (Figure 4(a)). This phase arises because the filler metal, the E 308 electrode, has a high Creq/Nieq ratio, so solidification generally forms the delta ferrite phase; during solidification, chromium acts as the ferrite-forming element [10-12]. Conversely, in the WM - SS to SS area (Figure 4(b)), the amount of delta ferrite phase is reduced. As mentioned earlier, this occurs because the delta ferrite phase dissolves back into the austenite phase as a result of the PWHT process.
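The Creq/Nieq argument can be made concrete with Schaeffler-type chromium and nickel equivalents. A minimal sketch follows; the exact equivalent formulas used in [10-12] are not given in the text, and the E308 composition below is a nominal illustration rather than the values in Table 1:

```python
# Sketch of the Creq/Nieq rationale using Schaeffler-type equivalents.
# Both the equivalent formulas and the E308 composition used here are
# assumptions for illustration; the actual compositions are in Table 1.

def cr_eq(cr: float, mo: float = 0.0, si: float = 0.0, nb: float = 0.0) -> float:
    return cr + mo + 1.5 * si + 0.5 * nb

def ni_eq(ni: float, c: float = 0.0, mn: float = 0.0) -> float:
    return ni + 30.0 * c + 0.5 * mn

# Nominal (illustrative) E308 weld-metal composition, wt%:
creq = cr_eq(cr=19.5, mo=0.3, si=0.9)
nieq = ni_eq(ni=10.0, c=0.05, mn=1.5)

# Ratios above roughly 1.5 favor primary (delta) ferrite solidification.
print(f"Creq={creq:.2f}, Nieq={nieq:.2f}, Creq/Nieq={creq / nieq:.2f}")
```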
The microstructure on the AISI 1018 side of the LCS-to-LCS weld, BM AISI 1018/WM - LCS to LCS, shows the opposite phenomenon to the BM AISI 304 side. The picture shows grain enlargement in the HAZ, due to heating during the welding process. In the SS-to-SS weld, on the same side, fine martensite is formed. This phase forms because the LCS is heated into the austenite region and then cooled at high speed [1].
The susceptibility of a specimen to corrosion can be assessed from the formation of chromium carbide in the weld. The difference between the two procedures, LCS to LCS and SS to SS, can be seen in Figure 5.
In Figure 5, at the same magnification, differences between the LCS-to-LCS and SS-to-SS welds are visible. Welding with the LCS-to-LCS procedure forms more chromium carbide precipitation than SS-to-SS on the same specimen side. In BM 304/WM - LCS to LCS, black interconnected precipitates are visible occupying the grain boundaries on the AISI 304 side. The type of precipitate formed depends on the composition and the type of heat treatment, but in general carbide precipitates form on austenitic stainless steels because chromium is present in large quantities and is a strong carbide former [12]. With the SS-to-SS procedure, in AISI 304, the chromium carbide precipitate is reduced, showing that it has dissolved back into the austenite phase [1]. In addition, the high carbon and Ni content in the BM and weld metal can stabilize the austenite phase on cooling from 1100 °C to room temperature without martensite formation [13].
Similar to the AISI 304 side, the AISI 1018 side also differs between the two welding procedures. In the LCS-to-LCS welding process a multiphase structure of ferrite and pearlite is formed, whereas in the SS-to-SS process fine martensite is formed.
The hardness of the welded specimens was tested on the Rockwell machine at the indentation points shown in Figure 6. Before welding, the hardness of the AISI 1018 samples was 78.1 ± 0.35 HRB and that of AISI 304 was 101.1 ± 1.08 HRB. The hardness test results for the two specimens after the LCS-to-LCS and SS-to-SS welding processes are shown in Figure 7.
From the hardness distribution graph for the specimens (Figure 7), BM AISI 304/WM - LCS to LCS, it appears that welding causes a decrease in hardness, especially in the BM/HAZ area (BM approaching the HAZ) and the HAZ itself; this occurs because the grains in these areas are enlarged and the ferrite phase has formed [9,10]. In the BM area of AISI 304, the hardness distribution is not much different from the initial hardness value, because the thermal cycle of the welding process does not sufficiently affect the microstructure of that area.
For the WM area in the LCS-to-LCS procedure, the hardness is in the range of 81-83 HRB, almost the same as the weld metal hardness of the E 308 electrode, which is 80-82 HRB. On the BM side of AISI 1018, the hardness increases in the HAZ and BM/HAZ areas. The increase in hardness in this area is caused by grain coarsening and the appearance of a small amount of martensitic phase [1].
The hardness distribution for the SS-to-SS procedure (Figure 7) shows that on the AISI 304 side, in the BM/HAZ and HAZ, the hardness decreases compared to the initial hardness. Linked to the microstructure, this happens because the austenite grains are larger than in the LCS-to-LCS process. In addition, precipitates dissolve back into the matrix, lowering the hardness. Ghorbani et al. suggested that precipitate dissolution can increase toughness, and toughness is known to be inversely related to hardness [12]. The heat treatment of AISI 304 relieves distortion and residual stress, allowing the atoms to diffuse into their equilibrium positions.
For the other areas of the specimens welded with the SS-to-SS procedure, the hardness in the WM area is relatively stable as it still contains a ferrite phase. On the AISI 1018 low carbon steel side, the hardness increases in the HAZ and adjacent BM. The formation of a fine martensite phase during shock cooling is the cause of the increased hardness in this area [6,14]; martensite is known to be the hardest phase in carbon steel.
The parameters in this study are deliberately simple, following the standard SMAW WPS and comparing only the effect of PWHT on each specimen. Further research could vary other parameters such as welding current, electrode type or welding speed. Energy-dispersive X-ray spectroscopy (EDS) would also be useful to identify the precipitates, along with other mechanical tests such as tensile tests to determine the strength of the welded joints and impact tests to determine toughness. Studies relating chromium carbide precipitates to corrosion resistance in welded specimens should also be carried out.
CONCLUSION
Dissimilar welding between austenitic stainless steel AISI 304 and low carbon steel AISI 1018 using the LCS-to-LCS welding procedure produces chromium carbide precipitates in the HAZ of AISI 304, and the ferrite phase forms in the WM area. With the SS-to-SS procedure, few precipitates are formed. This shows the influence of PWHT on precipitation, so that with this procedure the possibility of corrosion can be avoided. The phases formed through this procedure are austenite and twins, and in the WM area the delta ferrite phase decreases compared with the LCS-to-LCS welding process. On the AISI 1018 side, the LCS-to-LCS process produces grain enlargement in the HAZ, while the SS-to-SS process forms fine martensite.
Both welding procedures decrease the hardness in the HAZ of AISI 304, and in the WM area both procedures give a hardness lower than that of the AISI 304 BM, with the SS-to-SS procedure being more stable. Meanwhile, the PWHT process increased the hardness in the HAZ/BM area of AISI 1018.
Figure 2. Macro photos of specimens from the SMAW welding technique: (a) LCS to LCS, (b) SS to SS.
Figure 4. Microstructure of the results of the welding process: (a) LCS to LCS, (b) SS to SS.
Table 2. Welding parameters with SMAW technique (according to WPS).
Atorvastatin and rosuvastatin do not prevent thioacetamide induced liver cirrhosis in rats
AIM: To examine whether the administration of atorvastatin and rosuvastatin would prevent experimentally-induced hepatic cirrhosis in rats. METHODS: Liver cirrhosis was induced by injections of thioacetamide (TAA). Rats were treated concurrently with TAA alone or TAA and either atorvastatin (1, 10 and 20 mg/kg) or rosuvastatin (1, 2.5, 5, 10 and 20 mg/kg) given daily by nasogastric gavage. RESULTS: Liver fibrosis and hepatic hydroxyproline content in the TAA-treated group were significantly higher than in the controls [11.5 ± 3.2 vs 2.6 ± 0.6 mg/g protein (P = 0.02)]. There were no differences in serum aminotransferase levels between the TAA controls and any of the groups treated concomitantly with statins. Neither statin used in our study prevented liver fibrosis or reduced portal hypertension, and neither had an effect on hepatic oxidative stress; accordingly, the hepatic level of malondialdehyde was not lower in the groups treated with TAA + statins compared to TAA only. In vitro studies using the BrdU method showed that atorvastatin had no effect on hepatic stellate cell proliferation. Nevertheless, statin treatment was not associated with worsening of liver damage, portal hypertension or survival rate. CONCLUSION: Atorvastatin and rosuvastatin did not inhibit TAA-induced liver cirrhosis or oxidative stress in rats. Whether statins may have therapeutic applications in hepatic fibrosis of other etiologies deserves further investigation. Statins are 3-hydroxy-3-methylglutaryl-coenzyme A reductase inhibitors used to reduce serum cholesterol and cardiovascular risk; TAA undergoes extensive metabolism to acetamide shortly after administration and to the hepatotoxic reactive metabolite thioacetamide-S-oxide by the mixed function oxidase system.
INTRODUCTION
Liver cirrhosis is one of the leading causes of morbidity and mortality worldwide. Hepatic stellate cells (HSC) play a major role in the pathogenesis of hepatic fibrosis [1] . Injury to hepatocytes results in generation of lipid peroxides, which may have a direct stimulatory effect on matrix production by activated HSC [2] . It has been suggested that aldehyde-protein adducts, including products of lipid peroxidation, modulate collagen gene expression in human fibroblasts [3,4] and may be a link between tissue injury and hepatic fibrosis [2,5] . Several studies demonstrated that activation of HSC in culture is provoked by generation of free radicals and is blocked by anti-oxidants. This activation may involve the transcription factor c-myb and nuclear factor kappa B (NF-κB) [6,7] . Accordingly, antioxidants have been suggested as therapeutic modalities in experimental models [8][9][10] , and in patients with chronic liver injury [11] .
3-Hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors (statins) are used extensively to reduce serum cholesterol in an effort to reduce atherosclerotic cardiovascular morbidity and mortality. In addition to their cholesterol-lowering effect, statins demonstrate other biological effects (pleiotropic effects), some of which may lead to clinical benefits. These include anti-inflammatory [12] and antioxidant effects [13], inhibition of PDGF-stimulated proliferation, and upregulation of tumor growth factor (TGF)-β signaling in murine mesangial cells and cultured heart cells [14-16].
Over the past decade the potential effect of statins as anti-fibrotic agents has received increasing attention. The rationale for this anti-fibrotic efficacy is based on the ability of statin compounds to: (1) decrease the growth of human Ito cells in vitro, independently of their effect on cholesterol synthesis [17]; (2) inhibit HSC proliferation and reduce steady-state collagen protein levels, as shown for both simvastatin and lovastatin [18]; (3) inhibit steatosis, hepatic fibrosis and carcinogenesis in a rat model of non-alcoholic steatohepatitis (NASH) [19]; and (4) enhance hepatic nitric oxide production and decrease hepatic vascular resistance in patients with cirrhosis and portal hypertension [20]. In addition to inhibiting stellate cell activation, the anti-oxidative activity and the attenuation of inflammation by specific statin derivatives may also contribute to the inhibition of fibrosis. Despite the existing data suggesting that statin derivatives can inhibit fibrosis by various mechanisms, treatment with simvastatin or pravastatin did not decrease fibrosis induced by bile duct ligation (BDL) or carbon tetrachloride (CCl4) in rats [21,22]. However, one recent study showed that very early atorvastatin treatment inhibits HSC activation and fibrosis in the BDL model in vivo [23].
To further elucidate the anti-fibrotic activity of statins in the liver, we examined the effects of both atorvastatin and rosuvastatin in a well-characterized model of chronic thioacetamide (TAA) administration, in which cirrhosis is produced mainly via the formation of reactive oxygen species. TAA undergoes extensive metabolism to acetamide shortly after administration, and to the hepatotoxic reactive metabolite thioacetamide-S-oxide by the mixed function oxidase system [24-26]. We hypothesized that inhibition of HSC activity, in addition to the anti-inflammatory and anti-oxidative effects induced by statins, might prevent the hepatic damage induced by TAA in rats.
Our results indicate that atorvastatin and rosuvastatin diminished neither oxidative stress nor the development of TAA-induced cirrhosis in rats, and had no effect on the proliferation of cultured HSC.
Materials and animals
TAA, atorvastatin and rosuvastatin were purchased from Sigma (Sigma Chemical Co., St. Louis, MO). Male Wistar rats (250-300 g), obtained from Tel-Aviv University Animal Breeding Center, were kept in the animal breeding house of the E. Wolfson Medical Center and fed Purina chow ad libitum. Animals were kept on a 12-h light-dark cycle at constant temperature and humidity and had free access to tap water during the study period. Use of animals was in accordance with the National Institutes of Health Policy on the care and use of laboratory animals and was approved by the Animal Use and Care Committee of the E. Wolfson Medical Center.
Induction of liver cirrhosis
For induction of liver cirrhosis, rats were given intraperitoneal injections of thioacetamide, 200 mg/kg, twice a week for 12 wk, as previously described [27] . Control rats were treated with intraperitoneal injections of NaCl 0.9%.
Analysis of liver histopathology
The rats were sacrificed at the completion of the treatment protocols, their livers were removed, and midsections of the left lobes of the livers were processed for light microscopy. This processing consisted of fixing the specimens in a 5% neutral formol solution, embedding the specimens in paraffin, making sections of 5 µm thickness, and staining the sections with hematoxylin and eosin and Masson Trichrome. The tissue slices were scanned and scored blindly by two expert pathologists. The degree of fibrosis was expressed as the mean of 10 different fields in each slide, which had been classified on a scale of 0-4 according to Batts and Ludwig [28] .
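A minimal sketch of the scoring step described above, i.e., averaging ten field scores graded 0-4 per slide; the field scores below are illustrative only:

```python
# Minimal sketch: per-slide fibrosis score as the mean of 10 field scores,
# each graded 0-4 (Batts and Ludwig). The field scores here are illustrative.

def slide_score(field_scores: list) -> float:
    assert len(field_scores) == 10 and all(0 <= s <= 4 for s in field_scores)
    return sum(field_scores) / len(field_scores)

print(slide_score([3, 4, 3, 3, 4, 2, 3, 4, 3, 3]))  # -> 3.2
```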
Measurement of hepatic hydroxyproline levels
Quantitative determination of hepatic hydroxyproline content was performed as previously described [29] .
Measurement of hepatic malondialdehyde
For the determination of the hepatic content of malondialdehyde, liver tissue (5 g) was cut into small pieces using a razor blade and homogenized after dilution in H2O 1:10 w/v. The liver homogenate was centrifuged at 900 g for 5 min; the supernatant was then collected and centrifuged at 20 000 rpm in a Sorvall centrifuge for 30 min in plastic tubes. The clear supernatant was obtained and malondialdehyde was measured, expressed as nmole/g wet tissue, using the thiobarbituric acid method [30]. Briefly, to 1 mL of 10% liver homogenate in 1.15% KCl were added 2 mL of a freshly prepared solution of 15% w/v TCA, 0.375% w/v thiobarbituric acid and 0.25 mol/L HCl. The mixture was heated at 95 ℃ for 15 min, cooled to room temperature with tap water, and centrifuged at 300 rpm for 10 min. Absorption of the pink supernatant was determined spectrophotometrically at 532 nm.
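TBARS assays typically convert the A532 reading to MDA via Beer-Lambert using the extinction coefficient of the MDA-TBA adduct (≈1.56 × 10^5 M^-1 cm^-1). The paper does not state its calibration, so the conversion and numbers below are assumptions for illustration:

```python
# Sketch: converting A532 to hepatic MDA (nmol/g wet tissue) via Beer-Lambert,
# assuming the MDA-TBA adduct extinction coefficient 1.56e5 M^-1 cm^-1 and a
# 1 cm light path. The calibration is not stated in the paper; values are
# illustrative only.

EPSILON_M = 1.56e5   # M^-1 cm^-1, MDA-TBA2 adduct (common literature value)
PATH_CM = 1.0

def mda_nmol_per_g(a532: float, assay_volume_ml: float,
                   tissue_g_in_assay: float) -> float:
    molar = a532 / (EPSILON_M * PATH_CM)            # mol/L in the cuvette
    nmol = molar * 1e9 * (assay_volume_ml / 1000)   # nmol in the assay volume
    return nmol / tissue_g_in_assay

# e.g., 1 mL of 10% homogenate (0.1 g tissue) in ~3 mL final assay volume:
print(f"{mda_nmol_per_g(0.25, 3.0, 0.1):.1f} nmol/g")
```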
Effect of atorvastatin on proliferation of primary hepatic stellate cells
Hepatic stellate cells were isolated from male rats using sequential pronase-collagenase digestion followed by Nycodenz (Sigma-Aldrich, Inc., St. Louis, MO, United States) density gradient centrifugation, essentially as described previously with minor modifications [1]. Briefly, the liver of a male Wistar rat (300-400 g) was minced with scalpels and incubated with 100 mL of freshly prepared, filtered GBSS with 65 mg pronase (Roche Molecular Biochemicals, Indianapolis, IN, United States), 50 mg collagenase (Worthington Biochemical Corporation, Lakewood, NJ, United States), and 0.5 mL of 2.7% CaCl2 for 15 min at 37 ℃ with 200 rpm shaking. Then 50 mL of freshly prepared, filtered GBSS containing 12.5 mg pronase, 12.5 mg collagenase, 20 μg/mL DNAse I (Sigma-Aldrich, Inc., St. Louis, MO, United States) and 0.25 mL of 2.7% CaCl2 was added for 30 min at 37 ℃ with 200 rpm shaking. The digested tissue was filtered through a sterile 150-µm metal mesh, and the cells were centrifuged at 2000 rpm for 7 min. The digested liver hepatic stellate cells were isolated on a 17.5% Nycodenz gradient centrifuged at 2700 rpm for 20 min. A stellate cell-enriched fraction was present in the upper layer. Cells were washed twice by centrifugation (1200 rpm, 4 ℃, 5 min) in DMEM with 10% fetal calf serum (FCS), 100 µg/mL penicillin and 100 µg/mL streptomycin.
Reagents
Atorvastatin (Sigma-Aldrich, Inc., St. Louis, MO, United States) stock solution was dissolved in DDW to a concentration of 5 × 10−4 mol/L. PDGF-BB (Peprotech Inc., NJ, United States) was dissolved in DDW to a concentration of 1 μg/mL. All reagents were aliquoted and stored at −20 ℃ until use.
Proliferation assays
The proliferation of HSC was examined by the BrdU method (Exalpha Biologicals, Inc., Watertown, MA, United States). Primary HSC were cultured for 14 d, after which they were trypsinized and plated at a density of 20 000 cells/well in 96-well plates in DMEM containing 10% FCS. The cells were incubated for 24 h and then serum starved in DMEM + 0.5% FCS overnight. On the following day, the various stimuli were added in medium containing 0.5% FCS. HSC were exposed to 30 ng/mL of PDGF (Peprotech, Inc., NJ, United States), different concentrations of atorvastatin (5 × 10−8 mol/L, 10−7 mol/L, 5 × 10−7 mol/L) alone, or a combination of the two. After 24 h the cells were tested for proliferation according to the manufacturer's instructions.
Western blotting
HSCs were plated on 100 mm plates at a density of 2 × 10^6 cells/plate. After 24 h, the medium was changed to starvation medium (DMEM + 0.5% FCS) overnight. The following day, cells were incubated for 24 h with the different treatments according to the experiments being performed. Total proteins were extracted by incubating the cells for 30 min on ice in RIPA buffer containing a 1:100 dilution of a protease inhibitor cocktail (Sigma-Aldrich, Inc., St. Louis, MO, United States). After 20 min centrifugation at 14 000 rpm at 4 ℃, extracts were normalized to total protein content, determined using the BCA Reagent (Sigma-Aldrich, Inc., St. Louis, MO, United States). Equal amounts of total protein were separated on 4%-12% bis-tris (BT) gels (NuPAGE, Gibco-BRL Life Technologies, Grand Island, NY), blotted onto Hybond C extra membranes, blocked overnight in 5% milk, and incubated with antibodies against α smooth muscle actin (αSMA) and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (Santa Cruz Biotechnology, Santa Cruz, CA, United States), then incubated with horseradish peroxidase-conjugated secondary antibody. Signals were detected by chemiluminescence. Expression of proteins was normalized to the expression of GAPDH. Each subgroup included 6-7 rats. The statins were given per os, starting concurrently with TAA and continuing throughout the study. When the treatments were completed after 12 wk, the rats were sacrificed, their livers were removed and their spleens weighed.
Statistical analysis
The data are presented as median (range). The significance of differences between groups was determined by Student's t-test.
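The analysis was run in SPSS; a minimal Python equivalent of the unpaired comparison, with illustrative (not study) values, might look like this:

```python
# Minimal sketch of the unpaired comparison in Python (the study used SPSS).
# Hydroxyproline values below are illustrative only, not the study data.
from scipy import stats

taa      = [11.2, 9.8, 14.1, 12.5, 8.9, 13.0]   # mg/g protein, TAA group
controls = [2.1, 2.8, 3.1, 2.4, 2.6, 2.9]       # mg/g protein, controls

t, p = stats.ttest_ind(taa, controls)
print(f"t = {t:.2f}, P = {p:.4f}")
```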
Induction of liver cirrhosis by thioacetamide
Intraperitoneal administration of TAA for 12 wk resulted in a uniform coarse granulation of the surface of the rats' livers. Microscopic analysis revealed liver cirrhosis, characterized by mixed-size fibrotic nodules, in these TAA-treated rats. Neither atorvastatin nor rosuvastatin had any effect on liver enzymes when given alone or in addition to TAA.
Inhibition of thioacetamide-induced liver cirrhosis by statins
Compared to the rats which received TAA only, neither atorvastatin nor rosuvastatin had a protective effect on the histopathologic score after 12 wk, and TAA-induced liver cirrhosis was not inhibited at any of the doses examined. Hepatic fibrosis was also quantitated by measurement of hepatic hydroxyproline levels. The mean hydroxyproline levels of the TAA-treated group were similar to those of the TAA plus atorvastatin or rosuvastatin groups at all doses used.

Spleen weights
An indirect measure of portal hypertension was obtained by measuring the weights of the rats' spleens at the end of the experiment. Characteristic hemodynamic changes have previously been shown after 3 mo of TAA administration, i.e., upon TAA-induced liver cirrhosis [27]. These changes include portal hypertension and hyperdynamic circulation, which are accompanied by a significant increase in spleen weight. After 12 wk, the mean spleen weight of rats receiving TAA was about 30% higher than that of rats receiving injections of 0.9% NaCl or statins only. The mean spleen weight of rats that received statins in addition to TAA was not lower than that of TAA alone (Table 1).

Hepatic levels of malondialdehyde and lipid peroxides
The hepatic levels of malondialdehyde measured after 12 wk were not significantly different in the rats treated with TAA and statins compared to TAA only. Table 1 summarizes the effects of rosuvastatin and atorvastatin treatment on oxidative stress and liver fibrosis (ALT, hepatic MDA, hydroxyproline, fibrosis score and spleen weight) in TAA-treated rats [median (range)], n = 6-7.

Effect of atorvastatin on hepatic stellate cell proliferation and smooth muscle actin expression
Atorvastatin at different concentrations had no effect on HSC proliferation as examined by the BrdU method (Figure 1) and no effect on the expression of smooth muscle actin as determined by western blot analysis (Figure 2).

DISCUSSION
Our major finding in the present study is that atorvastatin and rosuvastatin do not have therapeutic value as potential anti-oxidant or anti-fibrotic agents targeting the increased oxidative stress or liver fibrosis induced by TAA in rats.

There are various experimental observations regarding the direct anti-fibrotic activity of the statins through inhibition of stellate cell proliferation: lovastatin inhibits pancreatic stellate cell activation and alpha-smooth muscle actin expression [31], and both simvastatin and lovastatin interfere with HSC activation in vitro [20,21]. Fluvastatin reduces renal fibroblast proliferation and collagen type III production [32] and suppresses oxidative stress and kidney fibrosis after ureteral obstruction [33]. It is also possible that the anti-fibrotic effects of statins are mediated through mechanisms that stimulate fibroblast apoptosis in vitro and in vivo, as shown in lung and renal fibroblasts [32,34]. Alternatively, the antifibrotic effects of statins may be mediated through their newly recognized anti-inflammatory [12] and antioxidant mechanisms [35-37]. These include the inhibition of myeloperoxidase-derived and nitric oxide-derived oxidants [36], S-nitrosylation and activation of thioredoxin in endothelial cells [37], and decreased expression of essential NAD(P)H oxidase subunits and upregulation of catalase expression in vascular smooth muscle cells [35]. Statins also induce the expression of heme oxygenase-1 (HO-1), a protein with antioxidant and anti-inflammatory functions, in vitro and in vivo [33,38]. However, this effect is tissue specific, with a significant increase in liver HO-1 seen with simvastatin and lovastatin but not atorvastatin and rosuvastatin. The involvement of hydroxyl radicals and oxidative stress in TAA-induced cirrhosis [39], the antioxidative effects of different statins, including atorvastatin, and the antifibrotic effect of atorvastatin in a rat model of BDL, as reported recently by Trebicka et al [23], provided a good rationale for the assumption that statins may reduce or prevent fibrogenesis in this specific animal model. The explanation for the unexpected failure of atorvastatin and rosuvastatin to inhibit fibrosis in this model of cirrhosis is not clear. Two studies demonstrated anti-fibrogenic effects of atorvastatin in CCl4-induced fibrosis [40,41]. Gardner et al [41] have shown that atorvastatin exhibited statistically significant, although modest, suppression of CCl4-induced fibrogenesis after 3 wk of treatment. This was shown only using a novel technique for measuring hepatic collagen synthesis in vivo through metabolic labeling with heavy water (2H2O). Histopathology of the same tissues revealed no significant differences in fibrosis scores among groups that received cotreatment with atorvastatin, emphasizing the importance of the fibrosis measuring method.
Nevertheless, the evidence that fibrosis was not inhibited by atorvastatin, nor by a more potent inhibitor of the cholesterol metabolism pathway, rosuvastatin, suggests that this effect is not selective and occurs independently of HMG-CoA inhibition. Since we used much higher doses than those used clinically (up to 20 mg/kg), under-dosing does not seem to be the explanation. Appropriate timing of statin administration may also be critical: later therapy in the BDL model, for example, lacked significant effects on fibrosis, with no change in hepatic inflammation [23]. The statin treatment in our study began in parallel with the administration of TAA. Although pretreatment with statins might be more effective, the fact that there was no improvement in either MDA levels or fibrosis after three months does not support this hypothesis. Another possible explanation for these negative results might be the selection of the wrong statins. Indeed, the two statins that inhibited HSC proliferation in vitro were lovastatin and simvastatin [18], and atorvastatin in vivo; in addition, pitavastatin was the one that inhibited hepatic fibrosis in a choline-deficient, L-amino acid-defined diet model of liver fibrosis [19]. However, despite these effects, even the latter mechanisms were not sufficient to prevent or inhibit liver fibrosis, as demonstrated with simvastatin or pravastatin in two different animal models of cirrhosis (bile duct ligation and CCl4) in rats [21,22]. Moreover, antioxidant therapy in human clinical trials also lacks, or has minimal, effect in chronic liver disease [42,43]. In addition, statin-induced protein kinase C (PKC) activation in activated HSCs may interrupt statin-induced HSC apoptosis, thereby reducing antifibrotic efficacy; indeed, Yang et al [44] recently demonstrated that simultaneous treatment with pravastatin and a PKC inhibitor may synergistically enhance antifibrotic efficacy in hepatic fibrosis induced by intraperitoneal injection of carbon tetrachloride or thioacetamide in mice. Finally, we also have to consider the possibility that the negative results may be due to the small numbers (type 2 error).
To further explore the effect of statins on liver fibrosis, we examined the effect of several atorvastatin concentrations on the proliferation of primary HSC. We observed that atorvastatin had no inhibitory effect on HSC proliferation either in the presence or in the absence of PDGF (Figure 1). This finding is in discordance with several previous studies that demonstrated decreased HSC activation by statins [17,18]. Moreover, using western blot analysis we found that atorvastatin (5 × 10−8-10−9 mol/L) had no effect on the expression of α smooth muscle actin, further supporting the lack of effect shown in the proliferation assay.
It is of interest that treatment with atorvastatin or rosuvastatin at all doses did not reduce spleen weights, a parameter of portal hypertension. This contrasts with the recently described therapeutic effects of simvastatin, which enhanced hepatic nitric oxide production and decreased hepatic resistance in patients with cirrhosis and portal hypertension [20]. Similarly, in cirrhotic rats induced by BDL, atorvastatin has been shown to lower intrahepatic resistance and decrease portal hypertension [45].
Finally, oral atorvastatin and rosuvastatin were both well tolerated, with no side effects, toxicity or mortality. Furthermore, transaminase levels, hepatic MDA and hydroxyproline, and the liver histopathology score did not worsen. These data suggest that the use of atorvastatin and rosuvastatin is not associated with an increased risk of hepatotoxicity in the damaged liver.
In summary, our results show that atorvastatin and rosuvastatin have no effect on TAA-induced liver cirrhosis. Further studies are required to evaluate whether statins may have therapeutic applications in hepatic fibrosis induced by other etiologies.
Background
Liver cirrhosis is one of the leading causes of morbidity and mortality worldwide. Hepatic stellate cells (HSC) play a major role in the pathogenesis of hepatic fibrosis. Several studies demonstrated that activation of HSC in culture is provoked by generation of free radicals. Accordingly, antioxidants have been suggested as therapeutic modalities in experimental models, and in patients with chronic liver injury.
Research frontiers
Previous studies mainly focused on the potential effect of statins as anti-fibrotic agents. However, whether the anti-oxidative activity and the attenuation of inflammation by specific statin derivatives may also contribute to the inhibition of stellate cell activation and fibrosis remains unclear. The authors hypothesized that inhibition of HSC activity, in addition to the anti-inflammatory and anti-oxidative effects induced by statins, might prevent the hepatic damage induced by TAA in rats.
Innovations and breakthroughs
To further elucidate the anti-fibrotic activity of statins in the liver, the authors examined the effects of both atorvastatin and rosuvastatin in thioacetamide (TAA)-induced liver fibrosis in which cirrhosis is mainly produced via the formation of reactive oxygen species. The major finding was that both statins do not have a therapeutic value as a potential anti-oxidant or anti-fibrotic agents targeting increased oxidative stress or liver fibrosis induced by TAA.
Applications
The results show that atorvastatin and rosuvastatin have no effect on cirrhosis induced mainly by oxidative stress. Further studies are required to clarify whether statins may have therapeutic applications in hepatic fibrosis induced by other etiologies.
Terminology
3-Hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors (statins) are used extensively to reduce serum cholesterol in an effort to reduce atherosclerotic cardiovascular morbidity and mortality. In addition to their cholesterol-lowering effect, statins demonstrate also anti-inflammatory and antioxidant effects. TAA-induced cirrhosis is mainly produced via the formation of reactive oxygen species. TAA undergoes an extensive metabolism to acetamide shortly after administration, and to the hepatotoxic reactive metabolite thioacetamide-S-oxide by the mixed function oxidase system.
Peer review
A well written manuscript showing that atorvastatin and rosuvastatin did not inhibit the formation of TAA-induced liver cirrhosis in rats; further, no effect on oxidative stress or HSC proliferation was noted. The model is well presented and the experiments appear to be correct. Even though the results are negative, and in partial or total contrast with previous studies, the study appears well conducted and the results are discussed with honesty and caution.
Visual short-term memory binding deficit with age-related hearing loss in cognitively normal older adults
Age-related hearing loss (ARHL) has been posited as a possible modifiable risk factor for neurocognitive impairment and dementia. Measures sensitive to early neurocognitive changes associated with ARHL would help to elucidate the mechanisms underpinning this relationship. We hypothesized that ARHL might be associated with decline in visual short-term memory binding (VSTMB), a potential biomarker for preclinical dementia due to Alzheimer’s disease (AD). We examined differences in accuracy between older adults with hearing loss and a control group on the VSTMB task from a single feature (shapes) condition to a feature binding (shapes-colors) condition. Hearing loss was associated with a weaker capacity to process bound features which appeared to be accounted for by a weaker sensitivity for change detection (A’). Our findings give insight into the neural mechanisms underpinning neurocognitive decline with ARHL and its temporal sequence.
Participants were excluded if they were taking certain medications for a psychiatric condition, if they had possible cognitive impairment (based on a global cognitive z-score of <−1.5 SD on the neuropsychological assessment tests), or if they had a congenital/pre-lingual hearing loss or loss due to injury or disease. The Faculty of Health Sciences Research Ethics Committee of Trinity College Dublin approved all study protocols. The study was conducted in accordance with the 1964 Declaration of Helsinki and its later amendments. Written informed consent was obtained from all participants. Testing with the VSTMB task took place between October 2016 and January 2017.
Background assessment. Demographic data collected included age, sex, and education (both years and highest attainment). Self-rated measures were included of physical and mental health, alcohol consumption and smoking. Sleep quality was assessed using the Pittsburgh Sleep Quality Index (PSQI) S1 ; pre-morbid IQ using the National Adult Reading Test (NART) S2 ; frailty with the Survey of Health, Ageing and Retirement in Europe (SHARE) Frailty Instrument S3 ; depression with the 10 item Center for Epidemiologic Studies Depression Scale (CESD-10) S4 ; anxiety using the Hospital Anxiety and Depression Scale-Anxiety subscale (HADS-A) S5 ; apathy with the Apathy Evaluation Scale -Self-rated (AES-S) S6 ; social network with the Lubben Social Network Scale (LSNS) S7 ; loneliness with the 6-item De Jong Gierveld Loneliness Scale (DJGLS) S8 ; boredom proneness using a self-report question with a four-point scale S9 ; perceived stress with the Perceived Stress Scale-4 item (PSS-4) S10 . The Hearing Handicap Inventory for the Elderly Screening Version (HHIE-S) assessed self-reported hearing loss S11 .
Audiometric assessment. Pure-tone audiometry was used to assess peripheral ear function. The assessment was conducted by audiologists and followed the standards of the British Society of Audiology and of the American National Standards Institute. Participants' ears were checked by otoscope. Pure-tone air conduction decibel thresholds were obtained in each ear at frequencies 0.5, 1, 2, 3, 4, 6, and 8 kilohertz with calibrated audiometers (Grason-Stadler GSI 61 or Interacoustics Callisto) and TDH 39 supra-aural earphones (Telephonics, Huntington, New York). The World Health Organization (WHO) criteria for hearing loss were used: pure-tone average (PTA) ≥ 26 dB for 0.5, 1, 2 & 4 kHz in the better ear 13 . Participants meeting these criteria were allocated to HLG and those below this threshold were allocated to CG. We also calculated the PTA of these frequencies for the worse ear. The PTA for low (0.25, 0.5 & 1 kHz) and high frequencies (3, 4 & 6 kHz) for both ears were included to provide an estimate of low and high frequency loss.
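A minimal sketch of the allocation rule described above (PTA over 0.5, 1, 2 and 4 kHz; hearing loss if the better-ear PTA is ≥ 26 dB); the thresholds below are illustrative, not participant data:

```python
# Sketch of the WHO allocation rule used above: PTA over 0.5, 1, 2 and 4 kHz,
# hearing loss if the better-ear PTA is >= 26 dB. Thresholds are illustrative.

def pta(thresholds_db: dict) -> float:
    """Mean air-conduction threshold over 0.5, 1, 2 and 4 kHz (dB HL)."""
    return sum(thresholds_db[f] for f in (0.5, 1, 2, 4)) / 4

left  = {0.5: 30, 1: 35, 2: 40, 4: 55}
right = {0.5: 25, 1: 30, 2: 35, 4: 50}

better_ear_pta = min(pta(left), pta(right))
group = "HLG" if better_ear_pta >= 26 else "CG"
print(f"better-ear PTA = {better_ear_pta:.1f} dB -> {group}")
```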
Neuropsychological assessment. We conducted a neuropsychological assessment of the main cognitive domains. General cognitive function was assessed using the Montreal Cognitive Assessment (MoCA) S12 and a composite z-score was calculated from tests of the following domains: episodic memory was assessed using the Free and Cued Selective Reminding Test (FCSRT) S13 , with immediate and delayed recall (after 30 minutes) subsets, and the Wechsler Memory Scale-III (WMS-III) spatial span forward subset S14 ; executive function was assessed using the Visual Reasoning subtest of the Cambridge Mental Disorders of the Elderly Examination (CAMDEX) battery S15 , the Sustained Attention to Response Task (SART) S16 , the phonological fluency test from the MoCA S12 and the WMS-III spatial span backward subset S14 ; processing speed was assessed using a computer-based choice-reaction time test (CRT), which included motor and cognitive components S17 , and mean response time (RT) from the SART S16 ; language was assessed using the Boston Naming Test 60-item version S18 and semantic (animals) fluency S19 ; and visuospatial ability was assessed using the Medical College of Georgia (MCG) Complex Figure test (copy only) S20 . None of the tests used auditory stimuli except the MoCA (we used scores both including and excluding audiological items) S21 .

VSTMB test. Using a computer, participants were administered a screening test (to ensure capacity to form bindings in perception) and the VSTMB test, which was the same as that used by Parra et al. (2010) 9 . Participants were asked to remember two study visual arrays (2000 ms) and, after a brief pause (900 ms), to detect whether a change had occurred when visually prompted with a test array (Fig. 1). The first condition consisted of two shapes-only arrays. The second condition consisted of two colored-shapes arrays. In both conditions, participants were instructed to state verbally whether the stimulus in the test display was the 'same' as or 'different' from the stimulus in the study display. Participants were allowed to respond in their own time. At the beginning of each trial, a fixation screen appeared for 250 ms. Changes in the test arrays consisted of new features replacing studied features (shape-only) or features swapping across items (shape-color binding). For the first condition, the two arrays were randomly selected from a set of eight six-sided random polygon shapes. For the second condition, the two arrays were selected from the same selection of shapes and from a set of eight colors. Both the shapes and binding conditions consisted of 15 practice trials followed by 32 test trials. Of these 32 trials, 16 were 'same' trials and 16 were 'different' trials. Stimuli were presented at 1° of visual angle and fell within an area of 10°. Participants were instructed to ignore the location of the stimuli on the screen, which varied randomly across trials and between study and test displays. The test took approximately 16 minutes to complete.
We used a linear mixed model to conduct the primary analysis to assess difference between groups across VSTMB conditions (shapes to binding). As fixed effects in the model, we entered condition, group and a condition by group interaction term. Subject was entered as a random effect. Age, sex and years of education were entered as covariates. Residual plots were inspected for deviations from homoscedasticity or normality. We constructed another model with the slope added as a random factor. Models were fitted and compared based on the −2 Restricted Log Likelihood and Akaike's Information Criterion. The first model was deemed the better fit. We selected a diagonal structure as the covariance structure for the error terms based on the above criteria.
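The analysis was run in SPSS; a rough statsmodels analogue of the model structure (random intercept per subject) is sketched below on synthetic long-format data. The diagonal residual covariance structure selected in SPSS is not reproduced here, so this is an approximation rather than the exact model:

```python
# Rough statsmodels analogue of the mixed model described above, fitted on
# synthetic data in long format (one row per subject per condition). The
# study used SPSS; its diagonal residual covariance is not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 24  # synthetic subjects (12 HLG, 12 CG)
condition = np.tile(["shapes", "binding"], n)
group = np.repeat(np.where(np.arange(n) < 12, "HLG", "CG"), 2)
acc = rng.normal(0.9, 0.04, 2 * n) - 0.05 * ((condition == "binding") & (group == "HLG"))

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "condition": condition,
    "group": group,
    "age": np.repeat(rng.integers(65, 80, n), 2),
    "sex": np.repeat(rng.choice(["M", "F"], n), 2),
    "years_education": np.repeat(rng.integers(10, 18, n), 2),
    "accuracy": acc,
})

model = smf.mixedlm("accuracy ~ condition * group + age + sex + years_education",
                    data=df, groups=df["subject"])
print(model.fit(reml=True).summary())
```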
As a secondary analysis, we assessed the differences between groups on all VSTMB outcomes using ANCOVA with the same covariates. We conducted an additional analysis assessing sensitivity for change detection 9 following Signal Detection Theory measures 14 . A' was selected as the sensitivity measure 15 and was calculated according to the formulas provided by Xu 16 which do not have indeterminacy when a participant does not make false alarms. Poor performance accounted for by low sensitivity would suggest difficulties in keeping the signal separate from the noise in working memory 9 .
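For illustration, below is a sketch of a common A' formula (after Pollack and Norman). The study used Xu's formulas, which behave well when a participant makes no false alarms, so this version is close to, but not identical with, the computation actually used:

```python
# Sketch of a common A' (non-parametric sensitivity) formula. The study used
# Xu's variant, which avoids indeterminacy at zero false alarms; hit and
# false-alarm counts below are illustrative only.

def a_prime(hit_rate: float, fa_rate: float) -> float:
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5  # chance-level sensitivity
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# 16 'different' trials and 16 'same' trials per condition:
hits, false_alarms = 13, 3
print(f"A' = {a_prime(hits / 16, false_alarms / 16):.3f}")
```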
Using Pearson's r or Spearman's correlation coefficient, we explored associations between shapes and binding accuracy with hearing loss (WHO PTA for entire sample) along with age and other variables recognized as potential modifiable dementia risk factors (depression, level of education, physical inactivity, smoking, and social engagement) 3 . We explored associations between shapes and binding accuracy with outcomes on several tests recommended for AD assessment (FCSRT delayed free recall, phonemic/semantic fluency, BNT and MoCA) 4 across groups 17 . We made adjustments for false discovery rates. We also compared VSTMB high and low HLG performers and the CG on background and neuropsychological data.
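The false-discovery-rate step can be illustrated with the Benjamini-Hochberg procedure as implemented in statsmodels; whether this exact procedure was used is an assumption, and the p-values below are illustrative:

```python
# Sketch of a false-discovery-rate adjustment (Benjamini-Hochberg, as
# implemented in statsmodels). The p-values below are illustrative only.
from statsmodels.stats.multitest import multipletests

pvals = [0.01, 0.04, 0.03, 0.20, 0.45, 0.08]
reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(list(zip(pvals, p_adjusted.round(3), reject)))
```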
Results
Group characteristics. Groups were well matched on background factors (Table 1). A significant difference existed between groups on all audiological outcomes (P < 0.001). Seventeen (68%) of the participants in the HLG and none in the CG wore hearing aids. Thirteen (52%) participants in the HLG and thirteen (72%) in the CG reported having previously experienced tinnitus. No participants reported difficulty with vision. All participants passed the perceptual binding screening assessment. No significant difference was observed between groups on any traditional neuropsychological test except for visuospatial ability, where the HLG performed more poorly (mean [SD], 24.22 [4.38] vs 27.06 [4.5]; P = 0.045) (Table 2).

VSTMB results. Prior to adding the interaction term, there was no significant effect for any variable except condition (Table 3). When the interaction term was added to the model, it was the only significant variable, with the HLG demonstrating a greater drop in accuracy from the shapes to the binding condition (β = −0.064, 95% CI = −0.125 to −0.003; P = 0.04).
Results of the secondary (ANCOVA) analyses for each VSTMB outcome (Table 3) showed no significant difference between groups on the shapes-only condition outcomes. For the binding condition, we found no significant difference in reaction time. The HLG demonstrated poorer performance compared to CG on binding accuracy (0.86 [0.11] vs 0.93 [0.06]; P = 0.03). We found no significant difference for the sensitivity measure (A') on shapes-only condition; however, a lower sensitivity for the HLG approached significance on the binding condition (0.8 [0.23] vs 0.92 [0.08]; P = 0.06).
VSTMB associations with dementia risk factors and assessment tools. Compared to age and other modifiable dementia risk factors, only hearing loss was associated with binding accuracy, whereas only social engagement was significantly associated with shapes accuracy (Supplementary Table S1). When compared with other AD assessment tools, only phonemic fluency was significantly correlated with binding accuracy in the HLG (Supplementary Table S2). These findings remained after removal of low performers from the CG. None of the above associations remained significant after adjustment for false discovery rate 17 . We included correlations between shapes/binding accuracy and all background and neuropsychological variables in Supplementary Tables S3 and S4. We also compared VSTMB high and low HLG performers with the CG (Supplementary Tables S5 and S6). Outcomes for the three groups on background measures were the same (P > 0.10) except NART scores, which trended toward significance (HLG-low = 110.88 [6.25], HLG-high = 115.39 [6.48], CG = 115.17 [5.38]; P = 0.09). For neuropsychological tests, outcomes were the same across groups (P > 0.10) with the exceptions of phonemic fluency (HLG-low = 13.29 [4.2], HLG-high = 17.27 [4.74], CG = 14.22 [4.17]; P = 0.07) and the MCG complex figure copy task (HLG-low = 23.5 [4.26], HLG-high = 25.14 [4.57], CG = 27.06 [4.5]; P = 0.09), which also trended toward significance. These findings remained unchanged with removal of low performers from the CG (N = 4). When only HLG-high and HLG-low performers were compared, there were no differences (P > 0.10) except that the HLG-low had greater low-frequency hearing loss (both P < 0.10), poorer NART scores (P = 0.09) and poorer phonemic fluency (P = 0.04).
Discussion
Compared to controls, the HLG showed a poorer capacity to process bound features in visual short-term memory. We found no difference in accuracy between groups on the shapes-only condition. The two groups were otherwise matched for background characteristics and neuropsychological performance (with the exception of the MCG complex figure copy task). All participants passed the perceptual binding screening assessment. Therefore, the decline in processing bound features was more likely due to a weaker capacity to maintain a strong signal-to-noise ratio in working memory than to perceptual difficulties. This pattern has been observed previously only in asymptomatic carriers of the E280A presenilin-1 mutation, which leads in 100% of cases to autosomal dominant familial AD 9 . In this study, Parra and colleagues 9 also reported poorer (but not significantly poorer) performance for the asymptomatic carriers compared to controls on an identical complex figure copy task. AD and stroke studies indicate that performance on drawing tasks is modulated by several frontal and temporal-parietal cortex regions, including the right temporal and parahippocampal gyri [18-22], in which atrophy has been observed with ARHL 23,24 .
A meta-analysis of epidemiological studies reported that ARHL was associated with decline in multiple domains of cognition, including working memory and visuospatial ability 2 . However, there is limited research into what initial changes may occur in neurocognitive function with ARHL prior to a stage where decline may be observed in multiple domains of cognition. The results of this study suggest that altered VSTMB may be a feature of such early changes in neurocognitive function with ARHL. Our findings are consistent with previous research. It is known that in ARHL the brain undergoes functional reorganization and that this might negatively impact the ability to retain information in memory (i.e. maladaptive plasticity) 23,25-27 . A small number of neuro-imaging studies have reported atrophy in neural regions that are important for memory with ARHL 23,24,26,27 .

Table 2. Neuropsychological data for the two groups of participants. Means (M) and standard deviations (SD) for the control group (CG) and the hearing loss group (HLG) on the neuropsychological data. + Composite global z-score calculated from the mean of the composite scores for episodic memory (except FCSRT total scores), executive functions (except SART and WMS-III spatial span total scores) and language, and from processing speed (CRT total MRT) and visuospatial ability. BNT, Boston Naming Test 60-item version.
Two studies that examined data from the Baltimore Longitudinal Study of Aging reported a faster decline in the temporal lobes in regions that are critical for memory 23,26 . One of these studies reported that ARHL was associated with accelerated atrophy (comparable to those developing mild cognitive impairment) in the parahippocampal gyrus 23 which is part of the ventral stream and contributes to the encoding and maintenance of bound information in working memory [28][29][30] . The other study reported that poorer midlife hearing was associated with atrophy in the right hippocampus and in the entorhinal cortex 26 . Another recent study using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database also reported that ARHL was associated with elevated cerebrospinal fluid tau levels and atrophy of the hippocampus and entorhinal cortex 27 . The entorhinal cortex is affected in the early stages of AD 31 but cortical thickness of this region has also been linked with memory scores independent of the level of β-amyloidosis and tauopathy 32 .
A limited number of studies have been conducted examining the link between neural changes with ARHL and changes in cognitive function in humans [33][34][35] . One such study reported a correlation of poorer function in several cognitive domains including episodic memory and visuoconstructive ability with atrophy in the cingulate cortex 35 , a neural region important for maintenance in working memory 36,37 . Support for a causal relationship between ARHL and neurocognitive decline comes from several mouse studies which report brain atrophy, impaired neurogenesis (including in the hippocampus) and increased expression of phosphorylated tau following hearing loss along with impaired learning and memory [38][39][40][41][42] . If we consider that VSTMB relies on a network which involves regions known to be functionally disrupted in ARHL individuals and in prodromal AD 7 , then the selective VSTMB deficits observed in this study may be indexing such a negative functional reorganization which is thought to be a potential mechanism linking ARHL to dementia. Such a hypothesis will need investigation.
Multiple hypotheses exist as to how ARHL and dementia may be connected. There may be a common causal mechanism such as vascular determinants, a mechanistic pathway such as neural reorganization due to hearing loss, or a mediating factor such as social isolation following ARHL 6 . Neuro-imaging evidence suggests that this functional reorganization may be driven by an impoverished auditory input or by the attentional load associated with difficulties in perceiving speech following ARHL 25,43 . Findings from our exploratory analyses are consistent with this. Those in the HLG who performed poorly on the VSTMB task had greater hearing loss in the lower frequencies (crucial for speech), indicating further advancement of the ARHL pathophysiological process. Additionally, they had lower phonemic fluency scores, possibly reflecting the decline in phonological abilities previously observed in ARHL 5 .

Table 3. VSTMB task outcomes for the two groups of participants. The visual short-term memory binding (VSTMB) task outcomes for the control group (CG) and the hearing loss group (HLG). Linear mixed models were used to examine the primary outcome of change in accuracy from shapes to binding conditions between groups. The ANCOVA models were used to assess all the outcomes of the VSTMB test as a secondary analysis. Age, sex and years of education were included in the linear mixed and ANCOVA models as covariates. Mean reaction time (MRT) for shapes was transformed to the inverse of the square root to account for non-normality. Binding A' data were transformed to a squared scale. *Assessed using rank analysis of covariance.
Higher cognitive load in auditory working memory when processing speech may draw resources from ventral stream regions 44 which maintain feature binding 45 . Also, altered visual attention to assist speech perception following early stage ARHL may drive cross-modal reorganization along the ventral visual stream in temporal regions associated with auditory processing 25,46 . Interestingly, mild AD patients present altered visual attention when processing bound (but not unbound) features, possibly reflecting inefficient cortical mechanisms responsible for encoding bindings 47 .
Alternatively, a common pathophysiological mechanism may affect both the inner ear and the neural regions sub-serving feature binding. While the primary risk factor for both ARHL and AD dementia is age 48, the VSTMB task has been demonstrated to be insensitive to ageing 49. Additionally, pathophysiologic features of AD have been observed in central auditory neural regions but not in the peripheral auditory structures 50. Genetic risk factors may account for such an association. For example, ApoE e4 (apolipoprotein E-epsilon4) is strongly linked in an isoform-dependent manner with sporadic AD 51,52 and ARHL 53,54, possibly through changes in cholesterol homeostasis 55 or hypercholesterolemia in the main vasculature and associated atherosclerosis 56,57. Other possible common mechanisms include the metabotropic glutamate receptor gene, which is linked to both ARHL and AD via the glutamatergic pathway, or mitochondrial dysfunction via the SIRT3 pathway 48.

Limitations.

The primary limitations of our study are the small sample sizes and the small number of VSTMB trials, which may have resulted in an underestimation of the difference between groups. Additionally, while we found a weaker capacity to form visual bindings with ARHL, we cannot deduce from these findings how ARHL and impaired VSTMB are connected. Our findings provide some support for the hypothesis that ARHL mechanistically affects cognitive function, based on the prior literature reported here. Limited research has been conducted on changes in cognitive processing with ARHL prior to the decline in performance on more general cognitive tests such as the MoCA observed in epidemiological studies. Further research is warranted to examine whether altered visual short-term memory processing is a feature of early cognitive decline following ARHL. Neuroimaging studies examining the neural correlates of binding in an ARHL sample compared to control and AD samples would be informative. Any differences or similarities in the neural correlates of binding across ARHL and AD groups matched in behavioral performance would help to elucidate the underlying pathophysiological processes linking ARHL with dementia. Genetic markers for both ARHL and AD could also be assessed. Furthermore, longitudinal studies are required to assess the validity of impaired VSTMB in predicting future risk of dementia with ARHL.
The VSTMB test is purely visual, making it appropriate for use with ARHL patients. In our sample, maintained executive resources could not compensate for a weaker binding capacity. Also, the VSTMB test does not have any linguistic components, meaning that it can be used globally and in developing countries, which are preferentially affected by both ARHL and dementia. It is insensitive to normal cognitive ageing, education and cultural background 45. Furthermore, VSTMB is not impaired in other age-related clinical conditions including depression, vascular dementia, dementia with Parkinson's disease, dementia with Lewy bodies and frontal lobe dementia 45.
Clinical trials aimed at maintaining or rehabilitating cognitive function in ARHL could include VSTMB as a target for therapeutic success or as a preclinical marker to identify potential participants. Hearing aids can reduce attentional costs, particularly when equipped with algorithms to improve speech-in-noise perception 5 . Also, benefits for visuospatial working memory have been noted 58 . However, the majority of the HLG reported wearing hearing aids suggesting that additional interventions may be required.
Conclusions
In conclusion, we found a decline in VSTMB with hearing loss which has only previously been reported in AD samples. To the best of our knowledge this is the first study to link ARHL with a potential preclinical cognitive test for AD. Further research is warranted to examine the mechanism underpinning the relationship of ARHL with VSTMB and examine it as a potential biomarker for future dementia.
Data Availability
Following publication, anonymized data will be shared by request from any qualified investigator.
Growth and Physiological Responses of Six Species of Crassulaceae in Green Roof to Consecutive Water Deficit Conditions
At present, the plant species used for green roofs are limited. In order to further expand the range of species used for green roofs, it is particularly important to explore the adaptability of different plant species. In this study, 6 species of Crassulaceae (Kalanchoe blossfeldiana (K. blossfeldiana), Sedum lineare (S. lineare), Hylotelephium erythrostictum (H. erythrostictum), Phedimus aizoon (P. aizoon), Rhodiola rosea (R. rosea) and Sedum sarmentosum (S. sarmentosum)) were selected and planted on a stainless-steel planting rack on a roof with a substrate thickness of 15 cm. After normal maintenance and management for one month, continuous water deficit treatment was carried out, and physiological properties were measured at 0, 10, 20, 30 and 40 days. The results showed that: (1) The 6 species of Crassulaceae grew normally and well within 0-20 days. At 30 days, K. blossfeldiana had the highest wilting rate (53%) and S. lineare the lowest (14%), but the plants still maintained basic ornamental value. After 40 days of water deficit treatment, all the plants had lost their ornamental value. (2) Under the different water deficit treatment times, S. lineare and P. aizoon ranked between 1 and 3 during days 0-20, when the water deficit was mild, and still ranked first and second during days 30-40, when the water deficit was severe. They showed the best performance throughout the whole process, can adapt to extensive green roofs, and do not need much management. S. sarmentosum and K. blossfeldiana ranked 4-6 in the membership function under 40-day water deficit stress, with the worst performance; they are not suitable for extensive green roofs and need careful management. In addition, considering the ornamental effect and ecological benefits of green roofs, all the plant species except K. blossfeldiana and S. sarmentosum are suitable for extensive green roof application.
Introduction
In the process of rapid urbanization, many significant achievements have been made in urban construction [1], but a series of urban problems have also emerged, among which ecological deterioration and environmental destruction are the most typical [2]. In recent years, ecological protection has received unprecedented attention, and green development has become the general trend [3]. As an important part of ecological civilization construction in China, urban greening has received more and more attention [4,5]. On the other hand, with the growing desire for a better life, people have begun to pay more attention to the restoration of the ecological environment and the improvement of the living environment [6]. With the acceleration of urbanization and the increase in population, per capita urban green space is becoming smaller and smaller [7]. It is not realistic to increase the green area in limited urban space only by expanding ground-level green space [8,9]. In high-density cities, the decrease in ground green space promotes the extension of greening into three-dimensional space [10]. Green roofs have become one of the effective measures to increase urban green coverage and alleviate urban energy and ecological environment problems [11,12].
Green roofs add many ecological benefits to the city, including stormwater management [13], energy conservation [14], mitigation of urban heat island effects [15], noise and air pollution reduction [16], carbon sequestration [17], and increased biodiversity [18]. Under the special conditions of a green roof, plants are isolated from the ground and lack groundwater supply [19]. At the same time, light is strong, summer temperatures are high, wind speed is high, and water evaporates quickly [20]. Therefore, extremely high water deficit resistance is demanded of green roof plants. In urban landscape greening, Crassulaceae is known for its vigorous growth, short stature, dense branches and leaves, long green period, strong stress resistance, wide range of suitable conditions and extensive (low-input) management [21,22]. It has therefore been gradually promoted and applied, especially on green roofs, in three-dimensional greening and in other harsh environments [23], where these traits give it an irreplaceable position. As Crassulaceae species are commonly used on green roofs, research on their drought resistance and comparative screening is of great significance for green roof application and for increasing the ecological benefits of green roofs [24].
Crassulaceae has good adaptability to the harsh conditions of green roofs [25]. Oztan and Arslan studied the drought resistance of different short, mat-forming evergreen succulents; by observing external morphology and carrying out physiological tests, 27 species of Sedum, Silene, Euphorbia and Sempervivum were screened out [26], and 11 species of Sedum and several Sempervivum plants were selected for cold and drought resistance in thin soil layers. The growth of 18 native tree species and 9 species of Crassulaceae in a roof simulator during winter was investigated by measuring chlorophyll fluorescence and related physiological indicators; all the Crassulaceae plants survived the winter and grew well, and 4 of the species could adapt to extensive green roofs without irrigation in Michigan [27]. Li screened plants suitable for green roofs in Fuzhou, China, using physiological indexes and showed that Rhodiola rosea, Sedum lineare and Sedum sarmentosum have strong resistance to high temperature, while Sedum alfredii, Brunfelsia latifolia, and Gardenia jasminoides Ellis var. grandiflora have weak heat tolerance [28].
Green roofs are one of the important ways to increase urban green area and improve the ecological environment [29,30]. The selection of plant species directly determines the management cost, degree of popularization and ecological benefits of a green roof [31]. As plants commonly used on green roofs, the Crassulaceae have been a focus of attention [32]. Previous studies have examined the growth status and stress resistance of some Crassulaceae plants in some areas under different conditions [33], but the effects of continuous water deficit on Crassulaceae plants for green roofs have not been reported. Prior to this study, other research work was carried out at the same site and the related results have been published. Interestingly, after that experiment was over, we found that the Crassulaceae plants left over from the experiment performed the best over the long term. Therefore, in this study, to more fully verify the performance of Crassulaceae, we expanded the set of species: 6 species of Crassulaceae commonly used on green roofs were selected and subjected to continuous water deficit treatment for 40 days. Growth and physiological parameters were analyzed to provide a scientific and reasonable reference for the selection of green roof plants. By comparing the performance of the six species on green roofs, we expected to select suitable plants for application and, finally, to help improve the urban ecological environment and promote the sustainable development of the city.
Overview of the Experimental Area
The experiment was conducted from March 11, 2022 to May 29, 2022; the daily temperature and relative humidity over this period are shown in Fig. 1a) and b), respectively. The data were obtained with the software WheatA. The average daily maximum temperature was 22.63ºC, the average daily minimum temperature was 13.32ºC, and the average temperature was 18.02ºC. The average relative humidity was 56.25%. Rainfall was excluded as part of the water deficit treatment.
Experimental Methods
In previous experiments, we found that some Crassulaceae plants left over from the experiment performed the best over the long term. We therefore selected six species of Crassulaceae that had performed well: Kalanchoe blossfeldiana (K. blossfeldiana), Sedum lineare (S. lineare), Hylotelephium erythrostictum (H. erythrostictum), Phedimus aizoon (P. aizoon), Rhodiola rosea (R. rosea) and Sedum sarmentosum (S. sarmentosum). The six groups were planted on 60*60*20 cm stainless steel planting shelves on the roof of the Jianyi Hall of Shandong Jianzhu University, with a treatment and a control set up in each group. A 15 cm thick substrate (perlite:vermiculite:turfy soil = 1:1:1) was used, as shown in Fig. 2. The plants were watered once every two days, 10 liters each time, for one month, after which the water deficit treatment was carried out; the control group of each plant species continued to be managed normally. Furthermore, a canopy (polyethylene (PE) rain-proof cloth suspended by a rope) was erected on rainy days to ensure water deficit conditions. Malondialdehyde (MDA), catalase (CAT), peroxidase (POD) and superoxide dismutase (SOD) of each plant were measured at 0, 10, 20, 30 and 40 days, with three biological replicates per index. For each determination, 2 g of fresh leaf samples were taken, flash-frozen with liquid nitrogen after removing the large leaf veins, placed into labelled ziplock bags, kept fresh on dry ice and sent to Nanjing Cavence Testing Technology Co., Ltd. The MDA, CAT, POD and SOD indexes were measured with kits from Nanjing Mufan Biotechnology Co., Ltd. The relative chlorophyll content (RCC) was measured with an RN-YL01 handheld chlorophyll meter.
The experimental principles of index determination are as follows: CAT decomposes H2O2, and the reaction is quickly terminated by adding excess ammonium molybdate. The remaining H2O2 reacts with ammonium molybdate to form a light-yellow complex, and CAT activity can be calculated by measuring the change in its absorbance at 405 nm.
Under the catalysis of POD, hydrogen peroxide oxidizes guaiacol to a brown substance with maximum light absorption at 470 nm, so peroxidase activity can be measured by the change in absorbance at 470 nm. For SOD, the xanthine/xanthine oxidase reaction system was used to generate superoxide anion (O2·−), which reduces nitroblue tetrazolium to a blue formazan with absorption at 560 nm. SOD scavenges superoxide anion and thereby inhibits formazan formation: the deeper the blue of the reaction solution, the lower the SOD activity, and the lighter the blue, the higher the activity.
Under acidic, high-temperature conditions, MDA reacts with thiobarbituric acid (TBA) to produce red-brown products with a maximum absorption wavelength of 532 nm. In addition, the color reaction products of sucrose and TBA in plant tissues have a maximum absorption wavelength of 450 nm but also absorb at 532 nm; this interference should be removed during measurement. In this way, the malondialdehyde content of the sample can be accurately calculated.
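As an illustration only (the study used a commercial kit, whose own formula should be followed), one widely cited empirical formula for plant tissue combines readings at 532, 600 and 450 nm to correct for the interferences described above; the readings below are hypothetical.

```python
# Illustrative MDA calculation; NOT the kit protocol used in this study.
# A commonly cited empirical formula for plant tissue:
#   C_MDA (umol/L) = 6.45 * (A532 - A600) - 0.56 * A450
# where A600 corrects for turbidity and A450 for the sucrose/TBA interference.
def mda_umol_per_l(a532: float, a600: float, a450: float) -> float:
    return 6.45 * (a532 - a600) - 0.56 * a450

print(round(mda_umol_per_l(a532=0.412, a600=0.031, a450=0.205), 3))  # hypothetical readings
```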
Statistical Analysis

SPSS Statistics 26.0 and Microsoft Excel 2010 were used for statistical analysis. One-way analysis of variance and Duncan's multiple comparison method were used for data analysis with P≤0.05. As there was almost no significant difference within the control group of each plant species during the experiment, the day 0 value was taken as the reference for the control group. In addition, GraphPad Prism 9.4.1 was used to draw the charts.
The determined indexes were comprehensively evaluated at each time point under the different water deficit treatment durations by the fuzzy-mathematics subordinate function value method (SFV) or the anti-subordinate function value method (ASFV), following the method of Li et al. [34]. The subordinate function values are calculated as follows:

$X(u) = (X - X_{min}) / (X_{max} - X_{min})$  (1)

$X(u) = 1 - (X - X_{min}) / (X_{max} - X_{min})$  (2)

where $X$ is the measured value of an index and $X_{max}$ and $X_{min}$ are the maximum and minimum values of that index across the tested species; Eq. (1) is used for indexes positively correlated with drought resistance and Eq. (2) for indexes negatively correlated with it.
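As a sketch of how Eqs. (1) and (2) combine into the comprehensive evaluation, the following Python fragment computes total SFV scores; the measurement matrix is hypothetical and only illustrates the calculation.

```python
# Minimal sketch of the SFV/ASFV evaluation in Eqs. (1) and (2); the matrix of
# measurements below is hypothetical (rows = the six species, columns = indexes).
import numpy as np

def sfv(x):
    """Subordinate function value, Eq. (1)."""
    return (x - x.min()) / (x.max() - x.min())

def asfv(x):
    """Anti-subordinate function value, Eq. (2), for indexes where lower is better."""
    return 1.0 - sfv(x)

# Hypothetical columns: CAT, POD, SOD (higher is better) and MDA (lower is better).
data = np.array([
    [52.1, 310.0, 180.5, 21.3],   # K. blossfeldiana
    [61.7, 405.2, 240.8, 12.4],   # S. lineare
    [48.9, 280.4, 205.1, 18.9],   # H. erythrostictum
    [58.3, 390.7, 233.6, 10.2],   # P. aizoon
    [55.0, 350.1, 199.9, 25.6],   # R. rosea
    [44.2, 265.3, 170.2, 23.7],   # S. sarmentosum
])
scores = np.column_stack([sfv(data[:, 0]), sfv(data[:, 1]),
                          sfv(data[:, 2]), asfv(data[:, 3])])
total_sfv = scores.mean(axis=1)   # higher total SFV = better overall performance
print(np.argsort(-total_sfv))    # ranking of the six species (best first)
```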
Effects of Continuous Water Deficit on the Growth States of Six Plants
After water deficit treatment, the growth status of the plants in each treatment period was photographed and recorded (as shown in Fig. 3). On the 10th and 20th days, all six plant species grew well, and on the 10th day the growth was better than on day 0. K. blossfeldiana had fully blossomed by the 10th day, and the growth density of the other plants was obviously better than on day 0. On the 30th day, wilting appeared to different degrees in each plant: K. blossfeldiana's flowers and leaves partially withered, the color of S. lineare changed from green to yellow, and the leaves of H. erythrostictum, P. aizoon, R. rosea and S. sarmentosum were partly dehydrated and withered, but these changes did not affect the overall ornamental appearance. On the 40th day, all the plant species were almost severely wilted and had lost their ornamental and ecological value.
Effects of Continuous Water Deficit on Physiological Characters of Six Plant Species
Under water deficit stress, the MDA content of each plant showed significant differences at different time points (P≤0.05). According to Fig. 4a), with increasing water deficit time, the MDA contents of K. blossfeldiana, H. erythrostictum, S. lineare, P. aizoon and S. sarmentosum decreased first and then increased, whereas R. rosea already showed an increasing trend on the 10th day. Comparing the MDA content of the six plants on day 0 and day 40, R. rosea increased the most, by 56.19 nmol g-1 fresh weight, and P. aizoon showed the minimum increase, 0.66 nmol g-1 fresh weight.
Under the treatment conditions, the POD activity of each plant differed significantly at different time points, except for P. aizoon (P≤0.05). According to Fig. 4b), with increasing water deficit time, the POD activities of K. blossfeldiana, H. erythrostictum and S. lineare first decreased and then increased; K. blossfeldiana and S. lineare declined on day 10, whereas H. erythrostictum declined on day 20. In addition, the POD activity of R. rosea and S. sarmentosum rose throughout. Comparing day 0 with day 40 (P. aizoon excluded, as it showed no significant difference), the POD activity of R. rosea increased the most, by 216.88%, while that of K. blossfeldiana increased the least, by only 28.10%.
In the process of water deficit stress, the CAT activity of each plant also showed significant differences at different time points (P≤0.05). According to Fig. 4c), with increasing water deficit time, the CAT activity of each plant decreased on the 10th day and increased afterwards. As can be seen from Fig. 4d), there were also significant differences in the SOD activity of each plant at different time points (P≤0.05). The SOD activity of K. blossfeldiana, S. lineare, P. aizoon, R. rosea and S. sarmentosum decreased on the 10th day and ascended afterwards, showing a trend of first decreasing and then increasing. The SOD activity of H. erythrostictum showed an increasing trend at first and, although it decreased on the 40th day, it still remained higher than the initial value. Comparing SOD activity on day 0 and day 40, S. sarmentosum increased the most, by 284.39%; on the contrary, the SOD activity of P. aizoon decreased by 13.71%.
The growth of the plant species was observed, and the wilting rate of each plant in different periods was recorded (as shown in Table 1). No plants showed wilting from day 0 to day 10. On the 20th day, K. blossfeldiana showed mild wilting, with a wilting rate of 10%. On the 30th day, K. blossfeldiana had the highest wilting rate (53%) and S. lineare the lowest (about 14%). On day 40, K. blossfeldiana had the highest wilting rate, 90%, and P. aizoon the lowest, 49%.
Green roofs play a direct role in improving the urban ecological environment and are a bridge between buildings and the natural ecological environment [35]. As an indispensable part of a green roof, plants' stress resistance is an important condition for whether they can be selected as green roof plants [36]. At present, most green roofs in Jinan, China are extensive, lack management and rely on natural rainfall for irrigation; in this respect, drought resistance of the plants is essential. Therefore, the effects of prolonged water deficit stress on six species of Crassulaceae on a green roof were discussed and compared in this study.
In order to explore the growth status of Crassulaceae plants under drought stress, the growth of Crassulaceae plants under several substrate thicknesses was previously investigated [37]. The results show that irrigation must be carried out at least every 14 days when the substrate thickness is 2 cm and at least once every 28 days when the substrate thickness is 6 cm, whereas when the substrate thickness was 10 cm the experimental plants could still grow normally without watering for up to 88 days. In this experiment, the growth of the plants was observed after each period of water deficit treatment, and growth status remained visibly normal at first: all plant species grew well from 0 to 20 days. However, the plants showed different degrees of wilting at 30 days, although they could still maintain basic ornamental and ecological value. As the water deficit continued to increase, by the 40th day all the plants had basically lost their ornamental value and were severely wilted. Compared with Vanwoert's research results, the normal growth time of plants under water deficit stress was relatively short, which may be caused by differences in temperature, humidity, latitude and the selection of Crassulaceae plants between the two studies.
As can be seen from Fig. 5, there were significant differences in the RCC of the six species at different time points (P≤0.05). The RCC of H. erythrostictum and S. lineare decreased from 0 to 10 days, increased on the 20th day, and decreased thereafter, whereas K. blossfeldiana, P. aizoon, R. rosea and S. sarmentosum showed an overall downward trend. Comparing the RCC of the six species on day 0 and day 40, S. sarmentosum decreased the most (by up to 34.67 SPAD) and R. rosea the least (only 8.2 SPAD).
In this experiment, the MDA content and the CAT, POD and SOD activities of the six species of Crassulaceae generally showed an upward trend with increasing water deficit, while RCC generally showed a downward trend, which is basically consistent with the conclusions of Lu et al. [38]. Moreover, Lu et al. showed that water shortage in the early planting stage of Sedum can lead to thinner roots and a larger root-to-stem ratio; these changes in root traits may have a positive impact on root activity and the drought tolerance of green roof plants under water deficit stress, which matches the trend of first decreasing and then increasing observed in this experiment. These results indicate that a certain degree of water deficit stress can be beneficial to the growth of Crassulaceae. In later experiments, the optimal growth state of each plant could be explored by controlling gradients of irrigation amount and water deficit degree [39].
Dean and Claire showed that plants with "slow" trait values (lower relative growth rate, biomass and leaf area) need less water and have better drought tolerance than plants with "fast" traits [40]. By setting up moderate drought stress, severe drought stress and well-watered control treatments, it was found that Cosmos bipinnata and Meehania urticifolia cannot survive moderate or severe drought stress [41]. Lu et al. studied the effect of substrate depth on the growth morphology and drought tolerance of Sedum lineare and found that a shallower substrate leads to poorer plant growth and drought tolerance under continuous drought stress [38]. Although these works investigated the response of plants to drought stress from different perspectives, they did not address how plant drought resistance changes with continuous drought treatment time in the green roof environment. In this experiment, the changes in MDA content, CAT, POD and SOD activities and RCC of six species of Crassulaceae were determined over different water deficit treatment times. It was found that, as the water deficit deepened, there was no significant difference in the POD activity of P. aizoon; furthermore, its changes in MDA and SOD were also the smallest among the six plant species. Its total SFV ranked 1, 2, 3, 2 and 2 from 0 to 40 days, consistently near the top, and its observed growth state was also among the best. There may be three reasons for this: (1) its own regulatory mechanisms enable it to adapt to an increasing degree of water deficit; (2) compared with other plants, P. aizoon may benefit more from increased ambient air humidity on the roof, having a stronger ability to absorb water vapor from the air; (3) the sample size of the experiment was not large enough, and future experiments should increase the sample size and control the relative air humidity to further verify the adaptability of P. aizoon.
Comprehensive Evaluation of Six Plant Species under Consecutive Water Deficit Conditions
On day 0 of the water deficit treatment (i.e., just after watering), the SFV indicated that S. lineare was the best performer and the least affected by the roof environment, while K. blossfeldiana was the worst. Among the six plant species, the total SFV of S. lineare and P. aizoon was much higher than that of the other four plants, among which there was no significant difference. On day 0, the growth states of the six plants ranked: P. aizoon>S. lineare>H. erythrostictum>S. sarmentosum>K. blossfeldiana>R. rosea (as shown in Table 2).
The SFV showed that on the 10th day of water deficit treatment, S. lineare performed best and was least affected by the water deficit, whereas K. blossfeldiana showed the worst performance, with a total SFV much lower than the other five plants. The growth status of the six plants on the 10th day ranked: H. erythrostictum>P. aizoon>S. lineare>R. rosea>K. blossfeldiana>S. sarmentosum (Table 3).
The SFV showed that on the 20th day of water deficit treatment, S. lineare performed best and was least affected by the water deficit, while H. erythrostictum fared worst and was most affected. There was no significant difference in SFV between S. lineare and R. rosea (P≤0.05), both being much higher than the other four plants, among which there was likewise no significant difference.
The growth status of the six plants on the 20th day of water deficit treatment was as follows: R. rosea>S. lineare>P. aizoon>H. erythrostictum>K. blossfeldiana>S. sarmentosum (see Table 4).
On the 30th day of water deficit treatment, the SFV showed that S. lineare performed best and was least affected by the water deficit, while K. blossfeldiana performed worst and was most affected. The SFV of S. lineare was much higher than that of the other five plants. The growth status of the six plants on the 30th day was: S. lineare>P. aizoon>H. erythrostictum>R. rosea>K. blossfeldiana>S. sarmentosum (Table 5).
The SFV showed that on the 40th day of water deficit treatment, S. lineare performed best and was least affected by the water deficit, while K. blossfeldiana performed worst and was most affected. In addition, the SFVs of S. lineare and R. rosea were much higher than those of the other four plants. On the 40th day, the growth states of the six plants were: S. lineare>P. aizoon>R. rosea>H. erythrostictum>K. blossfeldiana>S. sarmentosum (see Table 6).
It can be seen from Fig. 6 that the SFV of S. lineare and P. aizoon ranked between 1 and 3 when the water deficit was light, from day 0 to day 20, and still ranked first and second when the water deficit stress was severe, from day 30 to day 40. They showed the best adaptability throughout the whole process and the best ability to adapt to extensive green roofs without excessive management. S. sarmentosum and K. blossfeldiana ranked 4-6 in SFV under 40-day water deficit stress, with the worst performance; both are unsuitable for extensive green roofs and would need further careful management. R. rosea ranked first in SFV on day 20 and performed best then, but demonstrated only moderate performance at the other time points. In addition, H. erythrostictum performed best only on the 10th day.

Table 2. Comprehensive evaluation of six plant species on day 0 based on the SFV method.

Table 3. Comprehensive evaluation of six plant species on the 10th day based on the SFV method.

In this experiment, considering the ornamental effect and ecological benefits of green roofs, it is recommended that the irrigation interval be within 10-30 days. From 0 to 30 days, all plants except K. blossfeldiana and S. sarmentosum were observed to be in good condition, with certain ornamental and ecological value, so these four plant species are suitable for extensive green roof application.
Drought resistance of plants is a characteristic of the interaction between plant heredity and the external environment, which needs to be reflected comprehensively by measuring multiple indicators [42,43]. The application of SFV removes the one-sidedness caused by any single index. The SFV ranking of the six species demonstrated that K. blossfeldiana and S. sarmentosum ranked 4-6 with poor performance, while S. lineare and P. aizoon ranked 1-2 for most of the time with good performance; the other two plants were resistant at some time points and not at others, consistent with the observed results. It can be seen that the external morphological characteristics of plants can directly show the strength of drought resistance of the tested plants. Water use is closely related to plant biomass, total leaf area and leaf traits, whereas water deficit response is independent of morphological traits. The natural distribution of species does not correspond to their water requirements or water deficit responses, suggesting that plants from less arid climates may also be suitable for use on green roofs [44]. Selection of species based on traits rather than climate of origin can improve green roof performance and biodiversity by expanding the current plant palette [45]. Through experiments, the external morphological characteristics of plants can be further quantified, their growth indicators measured, and a mathematical model relating drought resistance to external morphology established, so that drought resistance can be estimated from external morphological characteristics and a database built for green roof plant selection in various regions [46]. Under water shortage conditions in summer, some Crassulaceae plants can increase the stress resistance of neighboring plants and reduce peak soil temperature by 5-7ºC [47]. These results suggest that Crassulaceae can be used to increase the diversity of roof plants. In the future, Crassulaceae plants with better performance can be screened out and rationally alternated with common plants to finally improve the quality of the urban environment.

Table 5. Comprehensive evaluation of six plant species on the 30th day based on the SFV method.
Conclusions
Among the six species of Crassulaceae used for green roofs, S. lineare and P. aizoon showed the best performance throughout the whole process and can be considered for popularization and application on extensive green roofs. K. blossfeldiana and S. sarmentosum were the worst performers; both plant species would need certain management measures if applied to extensive green roofs and, in general, are not suitable for planting there. To increase urban green area, optimize the quality of the urban environment and improve residents' living conditions, promoting the development of green roofs is urgent. In particular, further screening of highly resistant plants for the green roof environment is necessary. On the basis of reducing management costs, further exploration is needed to obtain more data to promote the development of green roofs in the future.
Secure UAV-Based System to Detect Small Boats Using Neural Networks
This work presents a system to detect small boats (pateras) to help tackle the problem of this type of perilous immigration. The proposal makes extensive use of emerging technologies like Unmanned Aerial Vehicles (UAV) combined with a top-performing algorithm from the field of artificial intelligence known as Deep Learning through Convolutional Neural Networks. The use of this algorithm improves on current detection systems based on filter-driven image processing, because the network learns to distinguish the aforementioned objects through patterns regardless of where they are located. The main result of the proposal is a classifier that works in real time, allowing the detection of pateras and people (who may need to be rescued) kilometres away from the coast. This could be very useful for Search and Rescue teams in order to plan a rescue before an emergency occurs. Given the high sensitivity of the managed information, the proposed system includes cryptographic protocols to protect the security of communications.
Introduction
According to research in the area of political geography, EU governments are immersed in a difficult battle against irregular migration [1]. This phenomenon was fuelled by the 9/11 attacks and is becoming identified as a "vector of insecurity," so some countries are using it to justify drastic immigration measures [2]. On the other hand, the so-called Transnational Clandestine Actors [3] operate across national borders, evading state laws, becoming rich at the cost of the despair suffered by many people living in "poor countries" and violating their basic human rights. This scenario most often leads to catastrophic consequences, with innumerable loss of human lives, mainly because of the vulnerability of the means used to travel [4]. Data from the European External Borders Agency FRONTEX [5] indicate that between 2015 and 2016 more than 800,000 people irregularly crossed the Mediterranean to Europe seeking refuge [6]. The number of irregular immigrants who cross the sea increases every year compared to the number who do it on foot [7]. To face this situation, the EU has been investing more and more resources in the detection of these flows [5].
Various research advances and works dealing with the problem of irregular immigration have recently been presented [8][9][10][11][12]. These works make use of various image processing techniques for the detection of people and boats at sea, demonstrating that it is feasible to use such technology in combination with UAV systems to face these problems.
This work presents a system to cope with the aforementioned problem, making use of a UAV-based system that captures sequences of images with a smartphone mounted on it. The UAV uses an optimal route planning system such as the one presented in [13], adapted to marine and coastal environments. These images are sent in real time through antennas using LTE/4G coverage to a remote cloud server, where they are processed by a Convolutional Neural Network (CNN) that has been previously trained to detect three types of objects: ships, pateras (or cayucos), and people (on land or at sea). These images may be used in a system for the detection of, and alerts about, various security and emergency situations. For this purpose, an Automatic Identification System (AIS) is used to compare each image of a detected ship, according to its GPS position, with a marine traffic database in order to find out whether it is a registered ship or not. The security of the application against manipulation or attacks is structured at different levels depending on the technology used. For transmission via LTE (4G) in coastal areas with coverage, the SNOW 3G [14] algorithm is used for integrity protection and stream encryption [15]. Furthermore, in order to prevent image manipulation by inserting watermarks that may disable the ability to identify images [16], an algorithm has been designed that first adds white noise to the image and then compresses it using JPEG compression [17]. This proposal not only prevents the attack but, according to the performed tests, also increases the accuracy of the network.

Figure 1: 3-step process on 5x5 kernels for noise removal.
Furthermore, to protect the data transmission system, Attribute-Based Encryption (ABE) is used. In the bibliography, several proposals can be found that use ABE as a lightweight cryptographic technique to deal with problems different from the one described in this work. On the one hand, in [18], ABE is used to access scalable media, where the complete outsourcing process returns plaintext to smartphone users. On the other hand, in [19], ABE is proposed to access health care records from a mobile phone, with the decryption process outsourced to cloud servers.
The present document is structured as follows: Section 2 discusses the use of neural networks, particularly convolutional ones. Section 3 defines the different stages of the proposed system and some experiments during data collection, training, and obtaining results. Section 4 describes the security layer, with emphasis on possible attacks and countermeasures applied to this type of system. Finally, Section 5 closes the paper with some conclusions.
Image Processing
Image processing is the first essential step of the proposed solution to the aforementioned problem. Image processing is a methodology that has been widely applied in research for the identification of objects, the tracking of objects, the detection of diseases, etc. For many years, neural networks have been gaining strength in the field of artificial intelligence and, in image processing, CNNs have been used.
CNNs are a type of network created specifically for image and video processing. The relationship between CNNs and ordinary neural networks is quite simple because both have the same elements (neurons, weights, and biases). Mainly, these networks operate by taking the inputs, encoding properties through the CNN architecture, and passing the results from layer to layer in order to obtain classification data.
The special thing about CNNs is the mathematical convolution that is applied. A convolution is a mathematical operation on two functions $f$ and $g$ that produces a new function representing the magnitude in which $f$ is superimposed with a shifted and inverted version of $g$. For example, the convolution of $f$ and $g$ is denoted by $f * g$ and is defined as the integral of the product of both functions after one of them is shifted by a distance $\tau$. Thus, the convolution is defined as $s(t) = (f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau$. In CNNs, the first argument of the convolution usually refers to the input and the second argument refers to the kernel (a fixed-size matrix with positive or negative numerical coefficients, with an anchor point within the matrix that, as a general rule, is located in the middle of the matrix). The common output of applying a convolution with a kernel is treated as a new feature map such that $S(i, j) = \sum_{m}\sum_{n} I(i + m - 1,\, j + n - 1)\, K(m, n)$. Figure 1 illustrates an example of the evolution when applying a 3-step process with a 5x5 kernel whose values are 0.04 in order to remove residual noise. Although at first glance there are no significant differences, the image becomes blurrier as the successive steps (from 1 to 3) are applied (this can be seen best on the edges of the pateras).
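A minimal sketch of the smoothing illustrated in Figure 1, assuming a grayscale image and the 5x5 kernel of 0.04 coefficients mentioned above (this is not the system's actual preprocessing code):

```python
# Repeatedly convolve an image with a 5x5 kernel whose entries are all 0.04
# (i.e., 1/25, a box blur), as in the 3-step process of Figure 1.
import numpy as np
from scipy.signal import convolve2d

kernel = np.full((5, 5), 0.04)  # 5x5 kernel, coefficients 0.04 as in the text

def denoise(image: np.ndarray, steps: int = 3) -> np.ndarray:
    """Apply the box-blur convolution `steps` times."""
    out = image.astype(float)
    for _ in range(steps):
        out = convolve2d(out, kernel, mode="same", boundary="symm")
    return out

noisy = np.random.rand(64, 64)       # stand-in for a grayscale frame
smoothed = denoise(noisy, steps=3)   # progressively blurrier, as in Figure 1
```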
The compression or pooling layers are applied along the neural network to reduce the size of the representation, the number of parameters and the computation of the network. This process is applied independently at each depth step within the network, taking the inputs as reference, and is also used to reduce overfitting. There are different types of pooling, although among the best known is Max-Pooling. In the Max-Pooling process, an NxN input is partitioned into MxM non-overlapping grids and the maximum of each grid is kept; the number of horizontal and vertical steps determines which values are discarded in the new layers. To improve performance, the model used in this work is a modern CNN-based object detector known as Faster R-CNN [20,21]. This model descends in part from an external region proposal method used for selective search [22]. The Faster R-CNN model has a design similar to that of Fast R-CNN [23], so it jointly optimises the classification and bounding box regression tasks. Moreover, the external region proposal is replaced by a deep learning network and the Region of Interest (RoI) pooling operates on feature maps. Thus, the new Region Proposal Network (RPN) is more efficient for the generation of RoIs because, for every window location, multiple possible regions are generated based on bounding box ratios. In other words, each location in the feature map is examined, considering a number of different boxes centred on it (a taller one, a wider one, a larger one, etc.). This is shown in the example of Figure 2, where a softmax classifier composes a Fully Connected (FC) layer.
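As a concrete illustration of the pooling operation described above (independent of the Faster R-CNN implementation itself), the following sketch performs non-overlapping max-pooling on a toy feature map:

```python
# Minimal NumPy sketch of non-overlapping max-pooling on a feature map.
import numpy as np

def max_pool(feature_map: np.ndarray, size: int = 2) -> np.ndarray:
    """Keep the maximum of each size x size window."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size            # crop to a multiple of the window
    blocks = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))               # maximum of each window

fm = np.arange(16, dtype=float).reshape(4, 4)    # toy 4x4 feature map
print(max_pool(fm))                              # -> 2x2 map of block maxima
```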
Proposed System
This section describes the procedures performed after data acquisition, explaining the processing of the images as well as the training process in detail, providing information on each of the obtained results and discussing why one result or another was chosen to continue the experiments. Finally, some concluding results are presented through a demonstration image with correct classification ratios.
Data Collection and Classification.
For the collection of images of pateras, due to the lack of massive access to boats of the patera or cayuco type, we opted to gather image information using the Google Earth software, always looking for a height of 100 meters above sea level (the height at which the drone would fly in the experiment) and maintaining a totally perpendicular view. After obtaining a dataset of 3,347 images corresponding to the three classes of the problem, we classified each of the objects in the various images, taking into consideration that an image can contain one or several objects. According to the applied dataset, its complexity and capabilities, and based on the available documentation, the majority of authors refer to the Pareto Principle [24] as the most convenient; thus, the 80/20 ratio, which is the most used, was considered an adequate train/test proportion for the neural network.
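A minimal sketch of the 80/20 Pareto split applied to the 3,347 images follows; the file names are hypothetical:

```python
# Random 80/20 train/test split of the image dataset (illustrative only).
import random

random.seed(42)                      # change the seed to re-permute the dataset
images = [f"img_{i:04d}.jpg" for i in range(3347)]  # hypothetical file names
random.shuffle(images)

cut = int(0.8 * len(images))         # 80% train / 20% test
train, test = images[:cut], images[cut:]
print(len(train), len(test))         # -> 2677 670
```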
For the classification of each image, we used the software known as LabelImg (see Figure 3), written in Python, for supervised training. This software creates a layer that separates each image into different objects delimited by a bounding box defined by its position (x, y), width and height. This process was performed for the three types of objects considered in this work. As a result of this classification, XML files corresponding to all the objects within the images are obtained; these XML files are used later in the training.
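Since LabelImg writes its annotations in the Pascal VOC XML format, reading them back for training can be sketched as follows (the element names follow the VOC format; the function is illustrative, not the project's actual code):

```python
# Read the bounding boxes of one LabelImg (Pascal VOC) annotation file.
import xml.etree.ElementTree as ET

def read_boxes(xml_path: str):
    """Return (class_name, xmin, ymin, xmax, ymax) for every object in the file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text          # e.g. "ship", "patera" or "person"
        bb = obj.find("bndbox")
        boxes.append((name,
                      int(bb.find("xmin").text), int(bb.find("ymin").text),
                      int(bb.find("xmax").text), int(bb.find("ymax").text)))
    return boxes
```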
Training.

The total time for each training stage of the neural network under the conditions described above was an average of 16 hours using an Nvidia 1050 GPU. The following guidelines were followed in order to obtain the best possible network for detection: (i) Train the same neural network three times with the same learning coefficients, regulation coefficients and activation function. The reason for doing only three trainings instead of 5, 10, 30, or more is that there is no rule of thumb that shows an exact trend in the result of the network. Therefore, to rule out strange behaviours, each network was trained 3 times to verify empirically that the three results (even starting from a random vector in the direction of the gradient descent) were similar. With this, false positives can be discarded when compared with other networks.
(ii) In all training sessions, a total of 200,000 (200.0K) iterations was established for each of the networks.
(iii) The plotted values are smoothed with a factor of 0.95, which means that they are not raw values but the trend at each instant calculated from previous values (so the raw value can be higher or lower).
(iv) In each training, the dataset was re-partitioned, on the basis that it has a total of 3347 images divided at an approximate ratio of 80% for training and 20% for testing, resulting in the distribution shown in Table 1. For each training of each neural network, the initial training and test sets were altered to demonstrate the efficiency of the neural network on different datasets. This randomness was applied to demonstrate the functionality and efficiency of the system in methods based on stochastic decisions. Given that the applied methodology is stochastic and random, performing permutations on the dataset yields different results, which is used to obtain better datasets as a basis for further trainings.
(v) The programming language used was Python 3 for the machine learning, together with the TensorFlow software library for the neural network environment [25].
The use of tools such as TensorFlow (among other frameworks for analysis with convolutional networks) has become widespread in recent years for the detection of patterns in images. One of the most notable current works is its use in medical environments to face deadly diseases such as cancer [26], slightly improving the performance obtained by specialists in dermatology. Among others, neural networks have been used in the marine environment [27] to identify marine fouling using the same framework. Although, according to several studies [28,29], the use of unbalanced datasets in neural networks is detrimental, we opted for an approach to a real problem where a balanced dataset is not available. At the end of this section, a comparison is made with a balanced network; it can be appreciated that, although the results of the balanced network are better, they do not differ too much.
Once the three neural networks finished training (see Figure 4), the final results shown in Table 2 were obtained based on the Classification Loss (CL). The CL equation [21] is optimised as a multitask loss function combining the classification loss and the regression loss of the bounding boxes where an object is found. When interpreting Figure 4, it must be taken into consideration that the parameter used was the CL; this value is better the closer it gets to zero. Initially, the graph starts with discrete values between 0 and 1, where 1 is a total loss and 0 is no loss.
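For reference, the multi-task loss defined in the Faster R-CNN paper [21], on which the CL values reported here are based, is

$$ L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^{*}) \;+\; \lambda\, \frac{1}{N_{reg}} \sum_i p_i^{*}\, L_{reg}(t_i, t_i^{*}) $$

where $p_i$ is the predicted objectness probability of anchor $i$, $p_i^{*}$ its ground-truth label, $t_i$ and $t_i^{*}$ the predicted and ground-truth bounding box coordinates, $L_{cls}$ a log loss, $L_{reg}$ a smooth L1 loss, and $\lambda$ a balancing weight.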
Considering these results, the next step to obtain an improved network was to copy the data from the best network (number 2) and perform a sensitivity analysis with 4 new trainings, varying the training coefficients. In Figure 5 we can see the behaviour of the different networks (including the original network number 2). The variation of the coefficients was multiplicative, altering the different learning rate coefficients according to the distribution shown in Table 3.
The learning rate is a measure that represents the size of the step taken in the direction of the gradient descent when applying the partial derivatives. On the one hand, if the learning rate is very large, the steps will be larger and will approach a solution faster; however, this can be a mistake because the optimisation could jump around without reaching a good approximation to the solution. On the other hand, if it is very small, training will take longer but will arrive at a solution. That is why the study was carried out with different learning rates, to check which ones come closest to a good solution in less time. Using these results, the best final configuration (see Table 4) with respect to the classification loss was Network 2-04 (grey line in Figure 5), with a coefficient lower than 0.02310, which means that in 97.8% of cases it produces correct classifications. Afterwards, the best parameters of the best network (2-04) were exported to the initial sets of networks 1 and 3 to see if a better result could be obtained by applying the coefficients of the best network so far (see Figure 6). In that figure we can see how, albeit by a narrow margin, the best network is still Network 2 with the fourth training (2-04). However, with these parameters, Networks 1 and 3 (1-0204 and 3-0204) improve slightly with respect to the initial Network 1 and 3 values (see Table 5 and Figure 7).
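The learning-rate trade-off described above can be illustrated with a toy one-dimensional gradient descent (unrelated to the actual network training code):

```python
# Gradient descent on f(w) = w**2 with different step sizes.
def descend(lr: float, w: float = 5.0, steps: int = 50) -> float:
    for _ in range(steps):
        w -= lr * 2.0 * w        # gradient of w**2 is 2w
    return w

for lr in (0.003, 0.03, 0.3, 1.1):
    print(f"lr={lr}: w after 50 steps = {descend(lr):.4f}")
# Small rates converge slowly; moderate rates converge quickly; rates that are
# too large overshoot and diverge (here lr=1.1 makes |w| grow every step).
```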
After having obtained a feasible result in terms of experiments, we decided to perform an analysis on a neural network with balanced data (80% training and 20% test), this time with the random distribution of images shown in Table 6. After hours of training with the balanced network, we obtained a better result than with what had been the best detection network until then (see Figure 8). The results shown in Table 7 mean that the network trains well with these training coefficients and even improves the results with a balanced network (although such balance is difficult to find in a real environment).
Results.
In order to check the efficiency of the best neural network obtained in the previous section, different random frames were extracted from a video showing different scenarios where pateras and people are seen from a real drone (see Figure 9). It should be noted that none of these frames had been previously seen by the neural network (not even in the testing stage); they are completely new to it. This set of frames is known as the validation dataset.

Figure 9: Image detection test from the validation dataset.
On the one hand, as a main result, the proposal produces a correct classification of boats and pateras between 94 and 96% (although these ratios can vary from 92 to 99% depending on the frame). On the other hand, a correct classification rate of around 98-99% has been obtained for people, although in certain frames (a video has thousands of frames) this ratio can drop to 73% due to interference with other objects in the video and the environment. From the obtained results, we conclude that the defined procedure based on the proposed Faster R-CNN training can be successfully used to detect boats, people and pateras.
Security
In a system like the one defined above, whose results can make the difference between saving a human being or not, it is essential to have appropriate mechanisms to ensure that the information is not modified or accessed by illegitimate parties. For this reason, a study of possible attack vectors related to neural networks for image detection and to problems in wireless communications has been performed, paying special attention to adversarial and Man-in-the-Middle attacks.
Adversarial Attacks.
Neural networks are among the most powerful algorithms in the field of artificial intelligence. Among the various networks, some are specifically oriented to image detection (as seen throughout this work). Sometimes, the simple behaviour of a network fed with inputs (image pixels) whose output is a classification can be led into error, so that it can be inferred that the network does not act correctly. An adversarial attack [30] is a type of attack within the rising field of artificial intelligence that consists in introducing an imperceptible perturbation that leads to an increased probability of taking the worst possible action.
In the case analysed in this work, this attack involves supplying the network with images that, though they represent a certain type of object (for example, a ship), mean something else to the network (like a dog or a toaster).
In environments with thousands or millions of classes and classifications, this could be a problem. That is the case, for example, of Google's Inception V3 [31]: such an attack could be used to alter the driving of an autonomous vehicle that uses this type of network, by modifying the images of its environment through stickers applied [16] to traffic signs for the purpose of changing the perceived maximum speed on a road.
The way in which this type of attack acts is through the excitation of the neural network inputs via the inclusion of new figures or noise (generally not perceptible to the human eye), making modifications to the input image (with gradient descent and backpropagation techniques) so that the network suffers something similar to an optical illusion.
What this type of attack seeks is to maximise the error that can be achieved by feeding in erroneous information; that is to say, to do the opposite of what the neural network is trained to do, which is to minimise the error with respect to the input parameters, all while applying a formula to minimise the difference, to the human eye, between the added perturbation and the original image. In the neural network presented in this work, the number of classes has been limited to a total of 3 types of objects (ships, pateras, and people), so the margin of error within the possible classifications could act as a sort of safeguard against this type of attack. Because of this, it can be said that, in a controlled environment, this type of attack would have no effect on the proposed system.
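For concreteness, one standard instance of such a gradient-based perturbation is the well-known Fast Gradient Sign Method (FGSM); the sketch below assumes a trained Keras classifier and is illustrative rather than the attack actually built in the proof of concept that follows:

```python
# FGSM sketch: perturb a batch x (pixel values in [0, 1]) with integer labels y,
# assuming `model` is a trained Keras classifier. Illustrative only.
import tensorflow as tf

def fgsm(model, x, y, eps=0.01):
    """Add an eps-bounded perturbation in the direction that maximises the loss."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(y, model(x)))
    grad = tape.gradient(loss, x)
    adv = x + eps * tf.sign(grad)            # step that increases the loss
    return tf.clip_by_value(adv, 0.0, 1.0)   # keep pixel values in a valid range
```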
However, as a proof of concept, an adversarial attack has been created that could modify the behaviour of our network. To do this, we took a random frame from a video sequence where a whole patera, part of another one and people (who could be castaways) can be seen on the sand. In Figure 10 it is possible to appreciate the two main cases.

Case 1. (1) Starting from the frame extracted from the video, it was processed directly by our neural network.
(2) As a result, we obtained a detection of the patera with an index of 0.96 and of the people with indices varying between 0.98 and 0.99.
Case 2.
(1) Based on the same starting image seen in Case 1, training was carried out with a neural network different from the original one. With this, we demonstrate that adversarial attacks also exhibit a transferability property that can affect other networks. The result of this step is the generation of a noise image. The noise shown in the figure has been modified by increasing the brightness in 10 steps because the original was a black image with little visible noise.
(2) By applying the original noise to the initial image, a new resulting image is obtained that, to the naked eye, as can be seen in image 2a of Figure 10, has some pixels different from the original image. (3) To soften the effect noted in point 2, a series of mathematical operations is applied to each pixel to smooth the textures and obtain a finished image.
(4) The image generated in step 3 is sent to the neural network for the detection of pateras.
(5) Finally, it can be seen that with this new image, which at first sight is identical to the original, the system does not detect the patera.
Attack Vectors.
In a possible scenario where an attacker wants to bypass the security measures that have been implemented, he/she could follow one of two paths (see Figure 11).
(1) As discussed in the previous section, there is a type of attack, the adversarial attack, designed to confuse the neural network. The sticker technique [16], already used in real environments, could be applied to the pateras in order to evade the drone's control, making the patera look like an unrecognised object or a different object. Among the possible countermeasures to mitigate the attack we have the following (a sketch of both appears after this list). (i) JPEG compression: this method is based on the hypothesis that the input image (i.e., the one taken by the drone) may have been manipulated by the aforementioned attack so that it carries noise that confuses the network. To remove this malicious noise it is possible to apply 85% JPEG compression [17], which blurs the embedded noise while maintaining the basic shape characteristics of the image. (ii) Noise inclusion: the drone could have a simple internal image-manipulation system that applies Gaussian random noise, imperceptible in the image, before it is sent to a server for processing. To do this we use a noise image previously generated (or created at the moment) and then apply the blending method known as "screen", described by the formula 1 − (1 − a) · (1 − b), where a and b are the normalised pixel values of the image and the noise. The advantage over the previous method is that image quality is not degraded by compression (depending on the weight and size of the noise), although it may introduce slightly visible noise (see Figure 12).
(2) A Man-in-the-Middle (MITM) attack is a sort of attack where the attacker is placed between sender and receiver. In this case the sender would be the drone and the receiver the server that performs the image processing through the neural network. The communication medium can vary depending on the coverage in the area of emission, but it is always a wireless connection such as 2G, 3G, 4G, or even 5G. Here the attacker can intercept the signal carrying the image in order to modify it on the fly, including the noise necessary to make the image undetectable. To deal with such attacks, the system protects the communication channel through the cryptographic scheme described in the following section.
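As a rough illustration of the two countermeasures listed in point (1), the following sketch implements both with Pillow and NumPy. The interpretation of "85% compression" as JPEG quality 85, and the noise level sigma, are assumptions chosen for illustration.

```python
import io
import numpy as np
from PIL import Image

def jpeg_defense(img, quality=85):
    """Re-encode as JPEG; lossy compression blurs fine adversarial
    noise while keeping the basic shape information."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()

def screen_noise_defense(img, sigma=0.05, seed=None):
    """Overlay Gaussian noise with the 'screen' blend:
    result = 1 - (1 - a) * (1 - b), with a (image) and b (noise) in [0, 1]."""
    rng = np.random.default_rng(seed)
    a = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    b = np.clip(rng.normal(0.0, sigma, a.shape), 0.0, 1.0)
    out = 1.0 - (1.0 - a) * (1.0 - b)
    return Image.fromarray(np.uint8(out * 255.0))
```

Because the screen blend only brightens pixels, it never destroys structural information in the image, which is why it preserves quality better than recompression.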
Attribute-Based Encryption.
In the proposal described in this paper, encryption is used to protect, from unauthorised attackers, the confidentiality of the database of images captured from a smartphone on a UAV; each image is labelled with the date it was taken and the GPS location of the photograph, along with other selected metadata. Smartphones are less powerful than other systems for operations such as image transmission, key generation, information storage, and encryption. In order to reduce the overhead of the security protocol, we propose the use of a lightweight cryptographic technique. In addition, to offer the remote server the ability to securely examine all the images captured by UAVs in a region, Attribute-Based Encryption (ABE) is proposed. This is a type of public-key encryption in which private keys and ciphertexts depend on certain attributes, and decryption of a ciphertext is only possible for users with a satisfactory attribute configuration. In the proposal described in this document, the attributes used are related to date/time, geopositioning, the linked UAV, etc., so that the private key used in the remote server is restricted to deciphering ciphertexts whose attributes coincide with the attribute policy linked to the UAVs it controls. This private key can be used to decrypt any ciphertext whose attributes match this policy, but it has no value in deciphering others. This means that each operator in a remote server has a set of UAVs assigned to him/her, so the images captured by any UAV cannot be decrypted either by an unauthorised attacker or by a server operator unrelated to that UAV. Since the scheme is public-key encryption, its security is based on a mathematically hard problem, and it holds even if an attacker manages to corrupt the storage and obtain the ciphertexts. The operations associated with the proposal involve the following phases (a schematic sketch follows the list).
(1) Setup phase: the algorithm takes the implicit security parameter and generates the Public Key (PuK) and the Master Key (MaK).
(2) KeyGen phase: in this phase, a trusted party generates a Transformation Key (TrK) and a Private Key (PrK) linked to the smartphone, which are used to decrypt the information sent from it.
(3) Encrypt phase: in this phase, the smartphone encrypts the image under PuK and the attributes describing the capture before sending it to the remote server.
(4) Transformation phase: this phase is where the remote server performs a partial decryption of the encrypted data using TrK, transforming the ciphertext into a simpler (partially decrypted) ciphertext before sending it to the operators. If the operator's attributes satisfy the access structure associated with the ciphertext, he/she can use the decryption phase to retrieve the plaintext from the transformed ciphertext.
(5) Decryption phase: since the transformation phase reduces the ciphertext to a simple encryption, the server operator finally uses this phase to retrieve the plaintext of the transformed ciphertext, using PrK.
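The following toy sketch shows how the five phases fit together. It is emphatically not real attribute-based encryption: the keystream XOR is a non-cryptographic stand-in for the pairing-based mathematics, every function name is hypothetical rather than a real library API, and the policy check is reduced to string equality. A deployment would use an established CP-ABE implementation.

```python
import hashlib
import os

def _toy_cipher(data: bytes, key: bytes) -> bytes:
    """Reversible keystream XOR (encryption and decryption are identical).
    NOT secure; stands in for real CP-ABE math."""
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def setup():
    """(1) Setup: security parameter -> Public Key (PuK), Master Key (MaK)."""
    mak = os.urandom(32)
    puk = hashlib.sha256(b"public|" + mak).digest()
    return puk, mak

def keygen(mak: bytes, policy: str):
    """(2) KeyGen: a trusted party derives TrK and PrK bound to a policy."""
    trk = hashlib.sha256(mak + b"|trk|" + policy.encode()).digest()
    prk = hashlib.sha256(mak + b"|prk|" + policy.encode()).digest()
    return trk, prk

def encrypt(puk: bytes, image: bytes, attributes: str) -> dict:
    """(3) Encrypt: the smartphone encrypts under PuK and the capture's
    attributes (date/time, GPS location, linked UAV)."""
    k = hashlib.sha256(puk + attributes.encode()).digest()
    return {"attrs": attributes, "body": _toy_cipher(image, k)}

def transform(trk: bytes, ct: dict) -> dict:
    """(4) Transformation: the server partially processes the ciphertext,
    tagging it for operators whose key matches the attribute policy."""
    tag = hashlib.sha256(trk + ct["attrs"].encode()).hexdigest()
    return dict(ct, tag=tag)

def decrypt(prk: bytes, puk: bytes, ct: dict, policy: str) -> bytes:
    """(5) Decryption: the operator recovers the plaintext only when the
    ciphertext attributes satisfy the policy (plain equality here; a real
    scheme evaluates a boolean formula over attributes)."""
    if ct["attrs"] != policy:
        raise PermissionError("attributes do not satisfy the policy")
    k = hashlib.sha256(puk + ct["attrs"].encode()).digest()
    return _toy_cipher(ct["body"], k)

puk, mak = setup()
trk, prk = keygen(mak, "uav=42;region=strait")
ct = transform(trk, encrypt(puk, b"<jpeg bytes>", "uav=42;region=strait"))
assert decrypt(prk, puk, ct, "uav=42;region=strait") == b"<jpeg bytes>"
```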
Conclusions
In this work, a novel proposal has been presented to address the problem of detecting small boats, which are often used for irregular immigration. For this purpose, a Convolutional Neural Network has been created and specifically trained for the detection of three types of objects: boats, people, and pateras. This system works in coordination with a UAV that sends the signal via a wireless connection (LTE) to a server responsible for processing the image in the neural network and detecting whether the situation is anomalous. This work describes and includes several security mechanisms that guarantee the integrity of the data, so that they cannot be altered either before or after being sent. As a complement to protecting the data transmission with the ABE scheme, a novel mechanism has been implemented to mitigate adversarial attacks by overlapping Gaussian noise on the possible attacking image noise. In addition, to discard false positives, the GPS coordinates of the UAV are cross-checked against an AIS system of geolocalised ships. The main contribution is a light neural network with a high object-detection rate (reaching up to 99% accuracy), which would be a great help for Search and Rescue or border-patrol teams in case a rescue has to be performed. A study with thousands of frames could be done to measure the detection ratio and the accuracy for each object class, to determine which object is detected best.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
A Ratio-dependent Eco-epidemiological Model Incorporating a Prey Refuge
The present paper deals with a ratio-dependent predator-prey model incorporating a prey refuge, with disease in the prey population. We assume that the predator prefers only the infected population for its diet, as those individuals are more vulnerable. Dynamical behaviours such as boundedness and local and global stability are addressed. We have also studied the effect of a discrete time delay on the model. Computer simulations are carried out to illustrate our analytical findings.
of prey and predator. Various forms of functional response have become the focus of considerable attention from time to time in the ecological literature. Much early work was concerned with the way in which this function varies with prey density (e.g., the so-called Lotka-Volterra [25,29] and Holling Type-II [15] functional responses). A predator-dependent functional response that is a function of the ratio of prey to predator is known as a ratio-dependent functional response. Predator-prey models with such a ratio-dependent functional response are strongly supported by numerous field and laboratory experiments [3,7,28,31].
The hiding behaviour of prey has been incorporated as a new ingredient of prey-predator models, and its consequences for the dynamics of prey-predator interactions can be recognized as one of the major issues in both applied mathematics and theoretical ecology. In nature, prey populations often have access to areas where they are safe from their predators. Such refugia usually play two significant roles, serving both to reduce the chance of extinction due to predation and to damp prey-predator oscillations. They are therefore a potentially important means of increasing species richness in natural communities and of stabilizing population sizes, biomass, and productivity. Much attention has been paid to the effects of a prey refuge in predator-prey systems. Predator-prey interactions often exhibit spatial refuges which afford the prey some degree of protection from predation and reduce the chance of extinction due to predation [16,19,26,30]. Hassel [12] showed that adding a large refuge to a model which exhibited divergent oscillations in the absence of refuge replaced the oscillatory behaviour with a stable equilibrium. These mathematical models and a number of experiments indicate that refuges have a stabilizing effect on predator-prey interactions.
Time delays of one type or another have been incorporated into epidemiological models by many authors [8,10,21,27]. In general, delay-differential equations exhibit much more complicated dynamics than ordinary differential equations since a time-delay could cause a stable equilibrium to become unstable and cause the populations to fluctuate.
In this paper, we have investigated the dynamical behaviour of a ratio-dependent predator-prey system with infection in the prey population, and the effect of a refuge for the infected prey. We have studied the boundedness and the local and global stability of the equilibrium points of this system. We have also considered a discrete time-delay in the interaction term of the predator equation.
The rest of the paper is structured as follows. In Section 2, we present a brief sketch of the construction of the model, which may indicate its epidemiological relevance. In Section 3, the boundedness of the basic deterministic model (2.2) is discussed. Section 4 deals with the boundary equilibrium points and their stability. In Section 5, we find the necessary and sufficient condition for the existence of the interior equilibrium point E*(s*, i*, y*) and study its local and global stability; computer simulations of some solutions of the system (2.2) are also presented in this section. The occurrence of a Hopf bifurcation is shown in Section 6. The effect of discrete time-delay on the system (2.2) is studied in Section 7. In Section 8, computer simulations of a variety of numerical solutions of the system with delay are presented. Section 9 contains the general discussion of the paper.
The Basic Mathematical Model
Before we introduce the model, we present a brief sketch of its construction, which may indicate its biological relevance.
1. We have two populations. Let N denote the population density of the prey and P the population density of the predator at time T.
2. We assume that, in the absence of infection, the prey population density grows according to a logistic curve with carrying capacity K (K > 0) and intrinsic growth rate R (R > 0).
3. When the prey population is infected, we assume that the total prey population N divides into two classes: the class of susceptible prey, denoted by S, and the class of infected prey, denoted by I. Therefore, at any time T, the total prey population is N(T) = S(T) + I(T).
4. We assume that the disease spreads among the prey population only, that it is not genetically inherited, and that infected individuals do not recover or become immune. We assume that the infection mechanism follows the response function φ(S, I) = ASI/(S + I), with A as the transmission rate.
5. The infected prey I(T) are removed by death (say, at a positive constant death rate D1) or by predation before having the possibility of reproducing. However, the infected prey population I(T) still contributes with S(T) towards the carrying capacity of the system.
6. The infected prey are more vulnerable than susceptible prey. We assume that the predator population consumes only infected prey, with a ratio-dependent Michaelis-Menten functional response. It is assumed that the predator has a constant death rate D2 (D2 > 0) and predation coefficient C (C > 0). The coefficient for converting prey into predator is e.
7. To protect the prey population, we construct our model by incorporating a refuge protecting a fraction mI of the infected prey, where m ∈ [0, 1) is constant. This leaves (1 − m)I of the infected prey available to the predator.
The above considerations motivate us to introduce an eco-epidemiological model under the framework of a set of nonlinear ordinary differential equations, referred to as system (2.1) (a plausible form is sketched below). The model just specified has ten parameters, which makes the analysis difficult. To reduce the number of parameters and to determine which combinations of parameters control the behaviour of the system, we nondimensionalize system (2.1) with a suitable choice of dimensionless variables; system (2.1) then takes (after some simplification) the nondimensionalized form (2.2).
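The display equation for system (2.1) did not survive in this copy. The following reconstruction is one plausible form consistent with assumptions 1-7 above, where P is the predator density and H is an assumed half-saturation constant of the ratio-dependent Michaelis-Menten response; the authors' exact formulation may differ.

```latex
\begin{aligned}
\frac{dS}{dT} &= RS\Bigl(1-\frac{S+I}{K}\Bigr)-\frac{ASI}{S+I},\\
\frac{dI}{dT} &= \frac{ASI}{S+I}-\frac{C(1-m)IP}{HP+(1-m)I}-D_1 I,\\
\frac{dP}{dT} &= \frac{eC(1-m)IP}{HP+(1-m)I}-D_2 P.
\end{aligned}
```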
Boundedness
In theoretical eco-epidemiology, boundedness of a system implies that the system is biologically well behaved. The following theorem ensures the boundedness of the system (2.2): applying a theorem on differential inequalities, one obtains bounds showing that all solutions of (2.2) eventually enter a bounded region. Hence the theorem.
Boundary Equilibria and their Stability
In this section, we study the stability of the boundary equilibrium points of the system (2.2). In the following lemma we state the boundary equilibria of the system (2.2) and the conditions for their existence.
The system (2.2) always has two boundary equilibrium points, namely the trivial equilibrium E0(0, 0, 0) and the axial equilibrium E1(1, 0, 0); the predator-free equilibrium E2(ŝ, î, 0) exists if and only if a > 1 together with a further parameter condition, and when this condition is satisfied, ŝ and î are given explicitly. In terms of the original parameters of the system, the condition a > 1 becomes A > R. This implies that if the intrinsic growth rate is less than the transmission rate and the death rate of the infected prey is less than the transmission rate, then the predator becomes extinct, and conversely.
Remark 4.1: When a = b, the necessary and sufficient condition for the existence of the predator-free equilibrium is that the force of infection lies in the interval (d1, 1 + d1).
The system (2.2) cannot be linearized at E0(0, 0, 0) or E1(1, 0, 0), and therefore the local stability of E0 and E1 cannot be studied [8]. We are therefore only interested in the stability of the predator-free equilibrium point E2(ŝ, î, 0).
The variational matrix V(E2) at the equilibrium point E2 has eigenvalues implying that E2 is locally asymptotically stable in the si-plane if and only if M > 0. Moreover, E2 is asymptotically stable or unstable in the y-direction according as l < d2 or l > d2.
Remark 4.2: When a = b, the above result indicates that if the predator mortality is so high that the conversion factor cannot compensate for it, then the system stabilizes to the predator-free equilibrium. This is quite natural from a practical point of view.
The Interior Equilibrium Point: Its Existence and Stability
First we consider the existence and uniqueness of the interior equilibrium point E*(s*, i*, y*).
Lemma 5.1 gives necessary and sufficient conditions (i) and (ii) for its existence, together with explicit expressions for s*, i*, y*. From Lemma 5.1 we observe that the interior equilibrium point E*(s*, i*, y*) exists if and only if both conditions (i) and (ii) are satisfied; if either condition is violated, E* does not exist. Moreover, from condition (i) of Lemma 5.1 it follows that, for the interior equilibrium point E*(s*, i*, y*) to exist, the refuge constant m must lie in a certain interval. The variational matrix of (2.2) at E* yields a characteristic equation, and by the Routh-Hurwitz criterion all eigenvalues of the characteristic equation have negative real parts if and only if A1 > 0 and ∆ > 0. Theorem 5.1. E* is locally asymptotically stable if and only if A1 > 0 and ∆ > 0.
Global Stability Analysis of E*
Now we study the global dynamics of the system (2.2) around the positive equilibrium E*(s*, i*, y*). We use a Lyapunov function to prove the global result.
Proof: Consider a positive definite function L about E*, where M, N are positive constants to be specified later. Differentiating L with respect to t along the solutions of (2.2), a little algebraic manipulation, together with a suitable choice of M and N, shows that the condition a(A − lb) + lb > 0 implies that dL/dt is negative definite. Consequently L is a Lyapunov function with respect to all solutions in the positive octant of ℜ³. Hence the theorem.
Model with Discrete Delay
It has already been mentioned that time-delay is an important factor in biological systems. It is also reasonable to assume that the effect of the infected prey on the predator population is not instantaneous, but mediated by some discrete time lag τ required for incubation. As a starting point of this section, we consider the generalization of the model (2.2) involving a discrete time-delay in the predator equation, with nonnegative initial conditions and y(0) ≥ 0; we call this system (6.1). All parameters are the same as in system (2.2), except that the positive constant τ represents the reaction time or gestation period of the predator y.
The system (6.1) has the same equilibria as in the previous case. The main purpose of this section is to study the stability behaviour of E*(s*, i*, y*) in the presence of the discrete delay (τ ≠ 0). Before going deeper, we state the following theorem, which guarantees the boundedness of the time-delayed system (6.1). In the proof, one shows that for t → ∞, 0 ≤ bs + ai ≤ 2b/α, so that it is possible to find two positive numbers κ and T such that i(t) < κ for t > T; the corresponding bound then follows for t > T + τ. Hence the theorem.
We now study the stability behaviour of E*(s*, i*, y*) for the system (6.1). We linearize system (6.1) by a standard transformation about E*, obtaining a linear system and its characteristic equation (6.3). It is well known that the signs of the real parts of the solutions of (6.3) characterize the stability behaviour of E*. Therefore, substituting λ = ξ + iη in (6.3), we obtain its real and imaginary parts as equations (6.4) and (6.5), respectively. A necessary condition for a stability change of E* is that the characteristic equation (6.3) has purely imaginary solutions. Hence, to obtain the stability criterion, we set ξ = 0 in (6.4) and (6.5), giving equations (6.6) and (6.7). Eliminating τ by squaring and adding (6.6) and (6.7), we get equation (6.8) for determining η. Substituting η² = σ in (6.8), we obtain a cubic equation (6.9). By Descartes' rule of signs, the cubic (6.9) always has at least one positive root. Consequently, the stability criteria of the system for τ = 0 (i.e., of the system (2.2)) will not necessarily ensure the stability of the system for τ ≠ 0. In the following theorem, we give a criterion for a switch in the stability behaviour of E*.
Let σ0 = η0² be a positive root of (6.9). Then there exists a τ = τ* such that E* is locally asymptotically stable for 0 ≤ τ < τ* and unstable for τ > τ*, provided a transversality condition (6.10) holds, where the minimum is taken over all positive η0 such that η0² is a solution of equation (6.9). In other words, the system (6.1) exhibits a Hopf bifurcation near E* at τ = τ*. (Expressions for A1, A3, ∆ are given in Section 5.)
Proof: Note that η0 is a solution of (6.8). Solving (6.6) for cos τη0 and substituting in (6.7), we find that for τ = τ0 the characteristic equation (6.3) has purely imaginary roots ±iη0, where τ0 = g(η0). Again, if ±iη0 is a solution of (6.6) and (6.7), then η0² is a solution of (6.8). The smallest such τ0 gives the required τ*. The theorem will be proved if we can show that the transversality condition holds. To show this, we differentiate (6.6) and (6.7) with respect to τ and then set ξ = 0 to obtain (6.11) and (6.12). Solving (6.11) and (6.12) with τ = τ* and η = η0, we find that the relevant derivative is positive under condition (6.10), and thus the theorem is established.
Numerical Simulation
Analytical studies can never be complete without numerical verification of the results. In this section we present computer simulations of some solutions of the systems (2.2) and (6.1). Besides verifying our analytical findings, these numerical solutions are very important from a practical point of view.
We first choose a set of parameters of system (2.2) for which E* is locally asymptotically stable; the corresponding phase portrait is shown in Fig. 1a. The si-plane and iy-plane projections of the solution are shown in Figs. 1b and 1c, respectively. Clearly the solution is a stable spiral converging to E*. Fig. 1d shows that the s, i, and y populations approach their steady-state values s*, i*, and y*, respectively, in finite time. As mentioned before, the stability criteria in the absence of delay (τ = 0) do not necessarily guarantee stability in the presence of delay (τ ≠ 0). Let us choose the parameters of the system as a = 1.2, b = 0.75, d1 = 0.2, c = 1.0, d2 = 0.8, l = 2.0, m = 0.6 and (s(0), i(0), y(0)) = (0.5, 0.5, 0.5). For these choices, E* = (0.2957, 0.2083, 0.1250) is locally asymptotically stable in the absence of delay. It is then seen that there is a unique positive root of (6.9), given by σ0 = η0² = 0.2526, for which f(η0) = 0.0330 > 0 and τ = τ* = 2.7379. Therefore, by Theorem 7.2, E* = (s*, i*, y*) loses its stability as τ passes through the critical value τ*. We verify that for τ = 2.47 < τ*, E* is locally asymptotically stable, the phase portrait of the solution (presented in Fig. 2a) being a stable spiral. Fig. 2b shows that for these parameter choices the s, i, y populations converge to their equilibrium values s*, i*, y*, respectively. Keeping the other parameters fixed, if we take τ = 3.0 > τ*, E* is unstable and there is a bifurcating periodic solution near E* (see Fig. 3a). Figs. 3b, 3c, and 3d depict the oscillations of the populations s, i, y over time.
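As a template for reproducing simulations of this kind, the following sketch integrates the reconstructed dimensional system from Section 2 with SciPy. The right-hand sides carry the same assumption about the half-saturation constant H as the reconstruction above, and the parameter values are illustrative placeholders, since the exact nondimensionalized form of (2.2) was not preserved in this copy; the paper's own figures were produced in MATLAB.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only (not the paper's exact choices).
R, K, A, D1 = 1.0, 1.0, 1.2, 0.2   # prey growth, capacity, infection, infected death
C, H, e, D2 = 1.0, 1.0, 0.5, 0.3   # predation, half-saturation, conversion, predator death
m = 0.6                             # refuge fraction of infected prey

def rhs(t, x):
    s, i, y = x
    # Small epsilons guard the ratio-dependent terms near the origin.
    infection = A * s * i / (s + i + 1e-12)
    predation = C * (1 - m) * i * y / (H * y + (1 - m) * i + 1e-12)
    return [R * s * (1 - (s + i) / K) - infection,
            infection - D1 * i - predation,
            e * predation - D2 * y]

sol = solve_ivp(rhs, (0, 500), [0.5, 0.5, 0.5], dense_output=True)
print(sol.y[:, -1])  # approximate steady state, if the trajectory converges
```

The delayed system (6.1) can be handled analogously with a dedicated delay-differential-equation integrator, stepping the history of i(t − τ) into the predator equation.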
Discussion
In this paper, we have studied an eco-epidemiological model incorporating a prey refuge, with disease in the prey population governed by a modified logistic equation [32]. Incorporating a refuge into system (2.1) provides a more realistic model: a refuge can be important for the biological control of a pest, although increasing the amount of refuge can increase prey densities and lead to population outbreaks. It is shown (in Theorem 3.1) that the nondimensionalized system (2.2) is uniformly bounded, which in turn implies that the system is biologically well behaved. In deterministic settings, theoretical epidemiologists are usually guided by an implicit assumption that most epidemic situations we observe in nature correspond to stable equilibria of the models. From this viewpoint, we have presented the most important equilibrium point, E*(s*, i*, y*). The stability criteria given in Lemma 5.1 and Theorem 5.1 are the conditions for stable coexistence of the susceptible prey, infected prey, and predator populations. It has been noted by several researchers that the effect of time-delay must be taken into account to have an epidemiologically useful mathematical model [8,10,21,27]. From this viewpoint, we have formulated system (6.1), where the delay may be looked upon as the gestation period or reaction time of the predator. A rigorous analysis then leads to Theorem 7.2, which shows that the stability criteria in the absence of delay are no longer enough to guarantee stability in the presence of delay; rather, there is a value τ* of the delay τ such that the system is stable for τ < τ* and becomes unstable for τ > τ*.
All our important mathematical findings, with and without time-delay, are numerically verified, and graphical representations of a variety of solutions of systems (2.2) and (6.1) are produced using MATLAB. Our analytical and numerical studies show that, using the delay τ as a control, it is possible to break the stable (spiral) behaviour of the system and drive it to an unstable (cyclic) state. It is also possible to keep the levels of the prey populations (susceptible and infected) and the predator at a required state using this control. Finally, our model can be generalized in obvious ways to food chains and competitive systems.
A perceptual field test in object experts using gaze-contingent eye tracking
A hallmark of expert object recognition is rapid and accurate subordinate-category recognition of visually homogeneous objects. However, the perceptual strategies by which expert recognition is achieved are less well known. The current study investigated whether visual expertise changes observers' perceptual field (i.e., their ability to use information away from fixation for recognition) for objects in their domain of expertise, using a gaze-contingent eye-tracking paradigm. Bird experts and novices were presented with two bird images sequentially, and their task was to determine whether the two images were of the same species (e.g., two different song sparrows) or different species (e.g., song sparrow and chipping sparrow). The first (study) bird image was presented in full view. The second (test) bird image was presented fully visible (full-view), restricted to a circular window centered on gaze position (central-view), or restricted to image regions beyond a circular mask centered on gaze position (peripheral-view). While experts and novices did not differ in their eye-movement behavior, experts' performance on the discrimination task for the fastest responses was less impaired than novices' in the peripheral-view condition. Thus, the experts used peripheral information to a greater extent than novices, indicating that they have a wider perceptual field supporting their speeded subordinate recognition.
"orange" breast) 4. Hagen et al. 27 found that experts' recognition of birds at the subordinate level is disproportionately impaired when color information is removed or altered, compared to bird novices. In a follow-up study, bird novices underwent species-level training with naturally colored birds 28. Following training, the trained novices showed increased sensitivity to bird color, which was also reflected in the N250 ERP component at occipitotemporal channels associated with higher-level visual processes. Experts also have knowledge of bird shape and parts at a finer grain of detail than novices; for example, bird experts typically name beak shape as a diagnostic feature. The granularity of visual detail in an image can be represented by the spatial frequency (cycles per image [cpi]) in different frequency bands. Whereas low spatial frequencies (in cpi) generally convey coarse-grain information about the global shape of the object, higher spatial frequencies contain information about finer detail, such as internal part structure 29. Hagen et al. 30 masked the external contour of birds and filtered them at different spatial-frequency bands to examine whether experts show higher sensitivity to internal parts than novices. They found that both novices and experts were disproportionately more accurate at categorizing birds displayed in a middle range of spatial frequencies (8-32 cpi). However, only the experts were also faster at categorizing the birds displayed in this range, indicating an increased sensitivity to the information contained in the middle range of spatial frequencies in experts relative to novices 30; see also 31,32. These mid-range spatial-frequency bands are also critical for face recognition 33,34, a form of naturally acquired expertise 35, indicating that the shape and part information captured by these frequencies is important for other forms of expert subordinate recognition. Overall, these findings indicate that expert recognition is achieved by an increased sensitivity to visual dimensions containing the cues useful for discriminating the subordinate bird categories 4.
It has been claimed that whereas novices perceive objects in terms of their individual parts, experts see objects in their domain of expertise as unified wholes e.g., 23. Holistic expert perception has been measured in the composite paradigm, where participants are instructed to focus on the top (or bottom) half of an object and to ignore information in the bottom (or top) half. The difficulty of selectively attending to the task-relevant top (or bottom) half of the object, while ignoring the task-irrelevant opposite object half, is interpreted as evidence of a holistic representation that makes it difficult to decouple a whole object into its constituent halves 36. A composite effect has been shown to depend on real-world expertise, including car experts recognizing car halves 37, chess experts recognizing chess-board configurations 38, and laboratory-trained experts recognizing artificial objects 7,39,40. The holistic percept is thought to be specific to the canonical orientation of the objects. Consistent with the holistic view, the expert recognition of animal experts (dog show judges 41; Budgerigar experts 42), expert radiologists 43, and car experts 44 is disproportionately impaired when objects in their domain of expertise are turned upside-down. Thus, standard assessments of holistic processing (i.e., composite task, inversion task) indicate that experts recognize their objects of expertise more holistically than novices.
Overall, studies indicate that fast and accurate subordinate expert recognition is facilitated by increased sensitivity to diagnostic visual dimensions (e.g., color or spatial frequencies) and by holistic perception, as defined by an inability to selectively inhibit peripheral object parts in a task-irrelevant object half. However, it is unknown whether this inability reflects a difference in the ability to perceive information in the periphery away from fixation, or an impairment in the ability to selectively disengage from diagnostic object parts.
Perceptual fields and object expertise. The field of view where the observer encodes task-relevant visual cues has been referred to as the "perceptual field" 45,46 . Gaze-contingent masking is a technique used to directly test the observer's perceptual field by systematically manipulating the visual information that is available for any single glance. For example, to assess the perceptual field in face recognition, Van Belle and colleagues 47 presented faces across three different conditions. First, faces presented in the central-view condition restricted the view to one fixated feature (e.g., mouth) using an oval window centered on the gaze position. Second, in the peripheral-view condition the oval gaze-contingent window was masked while image regions outside the window were visible (i.e., the non-fixated face features). Finally, in an unrestricted full-view control condition, participants viewed the whole image. They found that for recognition of upright faces, accuracy was good and roughly equivalent in the full-view and peripheral-view conditions and recognition in the central-view condition was poor. In contrast, for inverted faces, accuracy was the worst in the peripheral-view condition, but comparable in the full-and central-view conditions. A similar pattern was found for reaction times. Thus, the "non-expert" inverted orientation constricted the perceptual field, consistent with the notion that upright faces are perceived holistically while inverted faces are processed in a feature-by-feature fashion.
Perceptual fields can be influenced by learning and experience. Employing gaze-contingent eye tracking, studies have shown that expert chess players make better use of peripheral vision to encode a larger span of the chess board than novices 48,49. Moreover, radiology experts exhibit decreased search times with increasing expansion of the peripheral view (for review, see 50). Increased reading skill is associated with a larger perceptual field 51-54, and more densely packed languages are associated with a smaller perceptual window 55-59. Some studies report an asymmetry around fixation that depends on the reading direction of the language; for example, readers of left-to-right languages (e.g., English) show a right-biased asymmetry, with a larger field to the right than to the left of fixation 59-62 (for review, see 63). Finally, brain injury causing impairments of face recognition (i.e., acquired prosopagnosia) also constricts the perceptual field of face recognition to single face features 64-66. Across a range of domains with very different visual task requirements, previous work indicates that the size of the observer's perceptual field expands with learning, experience, and expertise.
In the current study, a gaze-contingent paradigm 47,64 was used to test whether the speeded subordinate-level recognition of the expert is influenced by the visual information that is available in their perceptual field. We selected bird experts because expert bird recognition requires quick, accurate subordinate-level recognition 4,67. Bird experts and novices were presented with two bird images sequentially, and their task was to determine whether the two images were of the same species (e.g., two different song sparrows) or different species (e.g., song sparrow and chipping sparrow). All images were shown in grayscale to target shape-based expertise processes 30 and to prevent the sequential discrimination task from being completed by memorizing local color (e.g., red ring around the eye) or global color (e.g., yellow patches around the body and wings) properties. The first (study) bird image was presented in full view. As shown in Fig. 1, the second (test) bird image was presented randomly in either the full-view, central-view, or peripheral-view condition. If experts have a wider perceptual field than novices, then the peripheral-view condition would impair experts less than novices. Moreover, if expert recognition depends critically on the peripheral parts, then the central-view condition would impair experts more than novices.
Methods
Participants. Fifteen expert participants, ranging in age from 26 to 68 years (7 females, M = 46.20 years, SD = 16.52 years), were selected based on nominations from their bird-watching peers or from bird-watching forums. Fifteen additional age- and education-matched participants with no prior bird-watching experience, ranging in age from 28 to 66 years (7 females; M = 44.40 years, SD = 13.22 years), served as the novice control group. A power analysis indicated 80% power to detect a between-groups effect of at least Cohen's d = 1.06. Nine of the 15 experts had previously participated in our studies on bird recognition 27,30. Informed consent was obtained from all participants. The study was approved by the University of Victoria Human Research Ethics Office, and all methods were carried out in accordance with their guidelines and regulations.

Figure 1 (legend): after fixating an "obligatory fixation point" that appeared left, right, above, or below a black ellipse, the study bird replaced the ellipse and remained on screen for 3000 ms; the participant then fixated another obligatory fixation trigger next to the ellipse to display the second (test) image, which appeared randomly in one of the three viewing conditions. The birds always faced the same direction, allowing participants to prepare saccades to a specific region. The example shows a "same" trial, where both images display the same bird species.

Bird recognition skill level was assessed with an independent bird recognition test 11,27,30,68 in which participants judged whether two sequentially presented bird images belonged to the same or different species. In this test, data from one expert was lost due to technical issues, yielding data from 14 experts and 15 novices (this expert had been nominated by bird-watching peers and was therefore included in the main analysis). Two (self-nominated) experts recruited from an online forum who performed poorly on this test (d′ < 0.66, SE < 0.43) were removed and replaced by two experts recommended by peers; thus, while the expert sample size was 15 for the main study, a total of 17 experts were tested. Applying Welch's two-sample t-test to adjust for unequal sample sizes and variances, the experts obtained a significantly higher discrimination score (d′ = 1.86, SE = 0.14) than the novices (d′ = 0.87, SE = 0.09), t(22.42) = 5.95, p < 0.001.
Apparatus. Using a custom MATLAB script (https://github.com/simenhagen/gazeContingent_eyeTracking), stimuli were presented on a 21″ Viewsonic Graphic Series G225f monitor at a viewing distance of 82 cm, with a spatial resolution of 1024 × 768 pixels and a refresh rate of 85 Hz. The birds subtended a visual angle of approximately 13.75° horizontally from head to tail. Eye movements were recorded with an SR Research EyeLink 1000 system (SR Research, Osgoode, ON) at a sampling rate of 1000 Hz, using a 35 mm lens and a 940 nm infrared illuminator. A chin rest was used to constrain head movements, and gaze-position accuracy was between 0.25° and 0.50°. Fixations were defined as the period between a saccade offset and the next saccade onset, using the following parameters for event detection: a motion threshold of 0.0 deg, a velocity threshold of 30 deg/s, and an acceleration threshold of 8000 deg/s².
Stimuli. The stimuli consisted of different bird species from the Warbler (n = 8), Finch (n = 8), Sparrow (n = 4), and Woodpecker (n = 4) families, with each species represented by 12 exemplars, for a total of 288 bird images. The stimuli were in part collected from previous studies with experts 11,27,30 and supplemented with images collected from the Internet. No bird image was repeated in the experiment; each condition therefore consisted of a unique set of bird images. All images were grayscale, cropped and scaled to fit within a frame of 450 × 450 pixels, and pasted on a gray background using Adobe Photoshop CS4. All stimuli are available on GitHub (https://github.com/simenhagen/gazeContingent_eyeTracking/tree/main/gc_eyetrack_exp/stimuli_birds_gray). All images were shown in grayscale to target shape-based expertise processes (Hagen et al. 30) and to prevent the sequential discrimination task from being completed by memorizing local color (e.g., red ring around the eye) or global color (e.g., yellow patches around the body and wings) differences.

Design. As illustrated in Fig. 1A, a gaze-contingent paradigm was used to create three different viewing conditions for the second (test) bird image. In the full-view condition, the bird image was fully visible (Fig. 1A, left). In the central-view condition, a gaze-contingent circular window was centered on the participant's gaze position, restricting the view to the central region of the visual field while masking the peripheral region (Fig. 1A, middle). In the peripheral-view condition, a gaze-contingent circular mask was centered on the participant's gaze position, masking the central region while leaving the peripheral region of the visual field visible (Fig. 1A, right). The window and mask subtended 5.81° horizontally and 5.17° vertically of visual angle (pixel diameter = 190).
Unlike previous studies 47,64, the size of the window and mask was determined in a pilot study with a different group of novice participants, to find the size that yielded approximately equal performance in the full-view and central-view conditions and a substantial impairment in the peripheral-view condition. The rationale was that this size would approximate the spatial range from which cues are perceived by novices and against which experts could be compared. This approach was taken because bird parts are challenging to define and have different sizes (e.g., a small beak versus a large wing pattern), preventing a window size containing single object parts (as is possible for facial parts).
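For illustration, a gaze-contingent window or mask of the kind described above can be rendered as follows. In the actual experiment the display is updated on every gaze sample via the eye-tracker API (not shown); the function below is only a schematic sketch.

```python
import numpy as np

def gaze_contingent_view(image, gaze_xy, diameter=190, mode="central", bg=128):
    """Return the image with a circular region centered on gaze either
    kept visible ('central' view) or masked ('peripheral' view).
    image: HxW grayscale numpy array; gaze_xy: (x, y) in pixels."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - gaze_xy[0]) ** 2 + (yy - gaze_xy[1]) ** 2 <= (diameter / 2) ** 2
    out = np.full_like(image, bg)            # gray background everywhere
    keep = inside if mode == "central" else ~inside
    out[keep] = image[keep]                  # reveal only the chosen region
    return out
```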
Procedure. Participants were tested in a sequential same-different matching task while their gaze positions were monitored. They were shown a sequence of two bird images and instructed to respond "same" ("c" on the keyboard) if the bird images were of the same species or respond "different" ("m" on the keyboard) if the bird images were of different species. For the same trials, the birds were different images of the same species (e.g., two field sparrows), and for the different trials, the birds were images of different species from the same family (e.g., field sparrow versus a song sparrow). The participants were instructed to respond as quickly and accurately as possible.
As illustrated in Fig. 1B, each trial began with a red fixation dot at the center of the screen that served as a drift check, measuring deviations relative to calibration. Large deviations (i.e., > 2.0°) prompted recalibration. Acceptable drift checks were followed by a new red fixation dot that appeared to the left, right, above, or below a centered black oval shape (16.16° horizontally from the center of the screen). The location of this red dot was randomly determined on each trial. The oval shape served as a cue to where the bird would appear. Once participants fixated the red dot (i.e., a fixation was registered in a small window surrounding the dot), the first (study) bird image was presented in full view and remained on the screen for 3000 ms. It was then replaced by another black oval shape paired with a red fixation dot that appeared randomly on either side of it, or above or below. Again, once participants fixated the red dot, the second (test) bird image was presented randomly in one of the three viewing conditions until a manual (button) response was made. This procedure ensured that every participant fixated off the bird before it appeared on the screen. Participants were also informed that the three viewing conditions would appear at random with equal probability, and that the birds would always be presented with the head facing the same leftward direction.

There were 48 trials (24 same, 24 different) for each of the full-view, central-view, and peripheral-view conditions, for a total of 144 trials. Trials from the two trial types and three viewing conditions were presented in random order, to prevent participants from adopting strategies for the different viewing conditions. In addition, participants completed 6 practice trials with images not used during the experimental phase.

Data analysis. Our primary analysis of interest for the gaze-contingent paradigm was the effect of expertise and viewing condition on recognition performance when participants were presented with the test bird image. The performance measures included sensitivity (d′) and correct response times (RTs). Following our previous work 27,30, we also analyzed sensitivity in different RT bins to test whether the viewing conditions differentially affected experts' and novices' fastest and slowest responses.
We also conducted secondary analyses of the eye-tracking data during the presentation of the study bird image. Eye-tracking data from one expert was lost due to a technical error, yielding eye-tracking data for 14 experts and 15 novices (in contrast to behavioral data for 15 experts and 15 novices). In the results, we present the viewing patterns first, followed by our primary analyses of interest. In the SI, we present additional analyses for the test image (fixation count, fixation duration, etc.) for completeness.
Transparency and openness. The study was not preregistered. The experimental code and stimuli can be found on GitHub (link provided above).
Eye movements during the study bird. Defining bird regions of interest (ROIs). Five regions of interest (ROIs) were manually drawn on each bird image, corresponding to the bird's head, wings, body, tail, and feet. Figure 2A illustrates these ROIs for an exemplar bird image. Any fixations outside the bird (i.e., not in any ROI) were excluded from further analyses. Proportion looking time was computed for each ROI as the time fixated in that ROI divided by the total fixation duration across all five ROIs (i.e., the whole bird). Figure 2B presents mean proportion fixation duration as a function of group (experts, novices) and ROI (head, wings, body, tail, feet), computed separately for each participant. The fixation data were analyzed in a 2 × 5 mixed-design ANOVA with group as a between-subjects factor and ROI as a within-subjects factor. The main effect of group was not significant.

Time course of viewing times by ROI. Figure 2C shows the temporal unfolding of fixations across ROIs separately for experts and novices, obtained by extracting 100 ms time windows relative to stimulus onset and computing within each time window the proportion of viewing time in each ROI (ROI fixation duration / total fixation duration within the bird in that time window). There was a strong correlation between the experts' and novices' temporal unfolding of viewing time for each ROI (e.g., the head-ROI temporal trajectory for experts correlated strongly with that of novices) (all ROIs, rs > 0.86, all ps < 0.001). For illustrative purposes, we also plotted the time course corresponding to the obligatory fixation point that "triggered" the bird image.
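The proportion-looking-time measure used in the two analyses above can be sketched as follows; the (roi, duration) input format is a hypothetical data layout, not the EyeLink output format.

```python
from collections import defaultdict

ROIS = ("head", "wings", "body", "tail", "feet")

def proportion_looking_time(fixations):
    """fixations: iterable of (roi_label, duration_ms) pairs; fixations
    outside the bird (label not in ROIS) are excluded, per the analysis above."""
    totals = defaultdict(float)
    for roi, dur in fixations:
        if roi in ROIS:
            totals[roi] += dur
    whole_bird = sum(totals.values()) or 1.0   # guard against division by zero
    return {roi: totals[roi] / whole_bird for roi in ROIS}
```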
Manual responses to the test bird. Next, we analyzed the manual response data, and the corresponding eye-tracking data, for the test image (the second bird image), which was subject to the gaze-contingent manipulation. Presentation was response-contingent, with eye tracking terminated upon the manual response. The main aim was to examine recognition performance as a function of viewing condition (full-view, central-view, peripheral-view) and group (expert, novice). Note that the size of the window/mask applied in the central-view and peripheral-view conditions was calibrated through pilot testing to approximate the perceptual window of novices; the rationale was that if experts perceived the birds holistically, then their recognition should be less impaired by masking the central view.
Sensitivity analysis for manual responses. Trials with RTs more than 3 SD above each participant's grand mean (1.92% of total trials) were excluded from this and all subsequent analyses. Figure 3A (left) presents mean d′ scores as a function of viewing condition (full-view, central-view, peripheral-view) and group (experts, novices) (see SI for accuracy data). Hits were defined as responding "same" on same trials, and false alarms as responding "same" on different trials. The sensitivity measure was computed as d′ = Z(hit rate) − Z(false-alarm rate), with hit rate calculated as (hits + 0.5) / (hits + misses + 1) and false-alarm rate as (false alarms + 0.5) / (false alarms + correct rejections + 1) 69. Response times for correct manual responses are also presented in Figure 3A.
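The stated sensitivity computation translates directly into code; this sketch uses SciPy's inverse normal CDF for the Z transform.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity with the correction used above:
    hit rate = (hits + 0.5) / (hits + misses + 1),
    FA  rate = (false alarms + 0.5) / (false alarms + correct rejections + 1)."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hr) - norm.ppf(far)   # Z(hit rate) - Z(FA rate)
```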
Response time distribution analysis. Next, we examined how viewing condition affected expert and novice recognition at their fastest and slowest reaction times. This analysis was motivated by the reasoning that faster trials reflect automatic responses to a larger degree than slower trials, and that a hallmark of expertise is rapid and automatic recognition e.g., 22,23,71. Indeed, we previously showed that experts and novices differed in their sensitivity to color and spatial-frequency information during their fastest responses 27,30. We analyzed d′ scores as a function of response speed. Specifically, each participant's trials were sorted from fastest to slowest, separately for each viewing condition and trial type. The trials were then grouped into five bins containing the fastest 20% of responses from same trials together with the fastest 20% from different trials (quintile bin 1), the next 20% of responses from both trial types (quintile bin 2), and so on. Within each bin, mean d′ scores were computed for each condition and participant. Figure 3B presents mean d′ as a function of group (experts, novices), viewing condition (full-view, central-view, peripheral-view), and quintile bin (1-5). The data were first analyzed in a mixed-design ANOVA with viewing condition and bin as within-subjects factors and group as a between-subjects factor. Given the three-way interaction between group, viewing condition, and bin, we examined the effect of viewing condition on group separately for each bin. In Bins 2 and 3, the two-way interaction between group and viewing condition was significant, F(2, 56) = 3.29, 3.35, p = 0.005, 0.042, generalized eta² = 0.07, 0.06, respectively. This interaction was marginally significant in Bin 1, F(2, 56) = 2.58, p = 0.085, generalized eta² = 0.04; we accepted it at the one-tailed level given that our previous research indicated a general pattern of differences between experts and novices for fast responses (Hagen et al. 27,30; see also the SI for the group × viewing condition interaction for these bins in the accuracy data). Separate ANOVAs per group within Bins 1 and 2 revealed a significant effect of viewing condition for the novices but not the experts (Novices: all Fs > 6.79, ps < 0.004, all generalized eta² > 0.14; Experts: all Fs < 2.34, ps > 0.115). Post-hoc paired t-tests showed that the novices had higher d′ in the full-view and central-view conditions than in the peripheral-view condition (Bins 1 and 2: uncorrected ps < 0.018), while full-view did not differ from central-view (Bins 1 and 2: uncorrected ps > 0.193). In contrast, separate ANOVAs per group within Bin 3 revealed a significant effect of viewing condition for the experts but not the novices (Experts: F(2, 28) = 7.0, p = 0.003, generalized eta² = 0.22; Novices: F(2, 28) = 0.94, p = 0.403). Post-hoc tests showed higher d′ for the experts in the full-view than in the central-view (uncorrected p = 0.022) and peripheral-view (uncorrected p = 0.003) conditions, while recognition did not differ between the central-view and peripheral-view conditions (uncorrected p = 0.199). Finally, in Bins 4 and 5, the two-way interaction between group and viewing condition was not significant (all Fs < 1.0, ps > 0.526). Separate analyses presented in the SI confirmed that the expert peripheral-view advantage was not explained by a speed-accuracy trade-off, nor did novices' accuracy in the peripheral-view condition increase with longer RTs (e.g., by strategically shifting attention to the periphery). Moreover, the advantage was not explained by differences in average fixation duration (e.g., longer fixations to divert attention away from fixation; see SI). Finally, the viewing condition did not differentially impair recognition in experts and novices in terms of average fixation durations or fixation rate (see SI).
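The quintile binning procedure can be sketched with pandas as follows; the column names are hypothetical placeholders for however the trial data are stored.

```python
import pandas as pd

def add_rt_quintile(trials: pd.DataFrame) -> pd.DataFrame:
    """Assign each trial to an RT quintile (1 = fastest 20%), separately
    per participant, viewing condition, and trial type, as described above.
    Expected (hypothetical) columns: participant, condition, trial_type, rt."""
    group_cols = ["participant", "condition", "trial_type"]
    trials = trials.copy()
    trials["bin"] = (
        trials.groupby(group_cols)["rt"]
        .transform(lambda rt: pd.qcut(rt, 5, labels=False, duplicates="drop"))
        + 1
    )
    return trials
```

Mean d′ per bin and condition can then be computed by grouping on the resulting bin column and applying the d_prime function shown earlier.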
In summary, the gaze patterns during free viewing (study image) of the experts and novices were strikingly similar (see SI for a Bayes factor analysis). However, while the gaze-contingent central-view did not differentially impair the recognition of experts and novices, the gaze-contingent peripheral-view impaired the recognition of experts less than that of novices for fast responses. Thus, while the novices relied largely on central-view information, the experts used both central- and peripheral-view information for speeded recognition.
Discussion
The aim of this study was to examine whether real-world expert object recognition changes the perceptual field for objects in the domain of expertise. Using gaze-contingent eye tracking and a discrimination task, bird experts and age-matched novice participants made "same/different" within-species (i.e., subordinate-category) judgements to sequentially presented pairs of bird images. The first (study) image was always presented in full view, and the second (test) image was presented randomly in either a full-view, central-view, or peripheral-view condition. If experts have a larger perceptual field, or process information differently within the field, than novices, then the bird experts' discrimination performance should be less impaired than the novices' performance in the peripheral-view condition. Moreover, the degree to which peripheral information is critical to their recognition should be reflected in the interference caused by the central-view condition.
Overall, the results showed that the experts discriminated the birds more quickly and accurately than novices, consistent with previous work 4,27,30. While the overall analysis showed no difference between experts and novices as a function of viewing condition, group differences emerged in the quintile distribution analyses, in which gaze-contingent effects were examined as a function of recognition speed. These analyses showed that the peripheral-view condition disrupted recognition relative to the full- and central-view conditions for the novices but not for the experts in the fast trials (Bins 1 and 2). Moreover, the central-view condition generally showed sensitivity performance comparable to the full-view condition for both groups in most quintile bins. Thus, during speeded recognition, the experts recognized the birds using peripheral information better than the novices, yet their recognition did not decline when the view was limited to central information only. We used a one-tailed significance level for the fastest responses (Bin 1), as the current findings are in line with our previous work using similar distribution analyses 27,30. Furthermore, control analyses ruled out alternative explanations, including speed-accuracy trade-offs and differences in single fixation durations (see SI for details).
These findings are consistent with studies reporting that expertise influences the width of the perceptual field in other domains, including chess, radiology, reading, and face recognition (as discussed in the introduction). Within all of these domains, expertise is associated with better use of peripheral vision to perceive task-relevant information. The current results, combined with previous work, suggest that widening of the perceptual field is a general visual learning phenomenon that cuts across a range of domains with different task demands (e.g., visual search in radiology vs. object categorization in bird watching). The development of a wider perceptual field could result from the need to rapidly and accurately detect and recognize complex task-relevant cues within a visual domain. With regard to object expertise, future work using in-lab training paradigms could test how subordinate discrimination experience with a homogeneous object domain influences the perceptual field size, or how visual information is processed within the perceptual field.
The expert peripheral advantage in the fast responses suggests that the experts utilize a wide perceptual field, whereby both central and peripheral information is available, specifically for birds that are rapidly recognized. In contrast, the lack of an expert peripheral advantage in the relatively slower responses indicates that the experts use a more focused strategy, in which local cues are attended to a larger degree, for birds that are recognized more slowly. Previous studies analyzing response time distributions also show expert-novice differences during fast responses. For example, bird experts use object color for family-level recognition in both fast and slow responses, while novices use it only for slower responses 27 . Moreover, bird experts use a middle range of spatial frequencies in fast and slow family-level recognition, while novices show no spatial-frequency advantage in fast or slow trials 30 . Collectively, these studies suggest that experts employ different perceptual strategies depending on whether recognition is fast or slow, with fast recognition instances deviating the most from novice recognition. One possibility is that fast expert recognition reflects the subcategories for which the expert has the most refined knowledge of diagnostic object parts and colors (beak, wings, breast of a bird), allowing the retinal input to activate the object memory despite the blocking of a subset of the diagnostic information in the central-view condition in the current study.
How do the current results relate to previous reports of holistic expert recognition? While the composite effect for experts shows that they find it difficult to ignore irrelevant object parts 37 , such an effect could also reflect stronger part binding for experts than novices within an equally sized perceptual field. In other words, the experts could automatically select multiple features, while novices selectively focus on single/fewer features, within an equally sized perceptual field. Our design allowed us to test whether experts and novices have a different perceptual field size independent of being tasked to suppress task-irrelevant object cues. Thus, the observation that experts use peripheral cues for rapid recognition to a larger extent than novices adds to the previous reports of holistic recognition using the composite effect: experts show both holistic recognition (previous studies) and a wider perceptual field (current study), while novices show non/less-holistic recognition (previous studies) and a narrower perceptual field (current study). Future studies on real-world object recognition can compare composite and inversion paradigms with gaze-contingent eye-tracking to examine if similar processes underlie holistic perception and changes to perceptual fields.

In contrast to the expert and novice differences we report for the viewing condition, we found no differences between the groups when examining their fixations to different bird regions during the presentation of the study image in full view. Specifically, both groups fixated the same bird regions, with most of their fixations in the head, wing and chest regions, respectively. Moreover, the temporal unfolding of their fixations did not differ, with the initial fixation mostly in the head region. Similar analyses of the test image showed identical patterns. However, supplementary analysis of the fixation behavior to the test image revealed that experts and novices differed to some extent in the last fixation point before making a response (see SI). Thus, while the overall gaze behavior is strikingly similar, there can be subtle differences that can be investigated in future work.
The lack of substantial difference in eye movements between experts and novices is consistent with studies of face recognition that report no differences for conditions that preserve expertise versus those that do not. For example, for naturally acquired expertise 35,72 , upright and inverted faces show similar eye-movements 47,73 , as do prosopagnosics and controls 65,66 , but see 74 . In contrast, for studies on chess expertise, expert chess players display fewer fixations overall and more fixations between pieces than less experienced players during recognition of chess configurations 48,49,75,76 . Similarly, expert radiologists have longer saccades and fewer fixations than less experienced observers while searching for tissue abnormalities in x-rays [77][78][79][80] . A recent study also showed that naïve participants who learn to categorize novel objects at a subordinate level exhibit an increase in average fixation duration and saccadic amplitude pre- to post-training 20 . It is possible that in our current task, perceptually salient object regions overlap with regions that are diagnostic for recognition, thereby masking eye-movement differences between experts and novices. Moreover, eye-movement differences would likely be observed between bird experts and novices if they were asked to search for birds in a visual scene, consistent with findings showing that car detection in visual scenes correlates strongly with car expertise 81 , although this may depend on the distractor category used [82][83][84] . Importantly, the current study shows that the gaze-contingent effect appears despite highly similar overall eye-movement behavior.
In summary, we found that bird experts can recognize birds using visual information relatively far away from central fixation compared to non-experts. This is consistent with findings from other visual expertise domains, where expertise is associated with a relatively wide perceptual field (as discussed in the introduction). While the lack of substantial differences in eye movements suggests that domain expertise depends on how a retinal input is processed, such null results should be interpreted with caution, as a more sensitive paradigm and analysis could perhaps reveal differences between experts and novices. We focused on shape processing in the current study. Future work can investigate whether surface color modulates how experts process peripheral information, given past reports of experts' sensitivity to color information 27 . Moreover, future work can examine how expert recognition relates to spatial processing in the human ventral-occipito-temporal cortex 85 , neural sensitivity to different object parts and color patches 86 , and sensitivity to whole birds presented beyond central vision 87 .
Data availability
The data can be requested by emailing the corresponding author.
Experimental studies of benthos resistance to mechanical burying under the dredging material
This study examines the influence of sedimentary suspension from dredging material on benthic hydrobionts. Six series of laboratory aquarium experiments, each with triple replication, tested the ability of different benthic organisms to dig out through soil strata after periodic burial under marine sand. The experiments were oriented toward the biota of the Eastern Gulf of Finland and the Neva Bay, where a number of deposit sites for dredging material are currently located. Representatives of three main groups of hydrobionts of these areas were selected for the experiments: Chironomidae, Gastropoda and Oligochaeta. The ability of different hydrobiont species to overcome the stress of burial under dredged material was estimated quantitatively. The results show that the resistance of the studied species to mechanical burying decreases in the order Chironomus plumosus > Melanoides tuberculata > Tubifex tubifex, and depends on the thickness of the layer of dredged material.
Introduction
Burial under a sedimentary suspension of dredging material is the main type of impact on benthic communities in dumping areas and at sites of dredged material deposits [1][2][3][4][5][6]. It is of great scientific and practical importance to estimate quantitatively the ability of individual benthic species, and of the bottom community as a whole, to overcome this kind of anthropogenic stress. Laboratory experiments on covering benthos with dredging material make it possible to quantify the endurance limits of individual hydrobiont species and of bottom communities in general to this type of anthropogenic impact. A series of laboratory experiments provided quantitative data on the resistance of benthos to dumping. The resulting data can be used in planning the placement of dredging material and in calculating the maximum allowable load on the natural environment under the conditions of the Eastern Gulf of Finland (EGoF) and the Neva Bay.
Laboratory experiments were conducted on the ability of benthic organisms to dig out through soil strata after periodic burial during dumping. The three most abundant groups of benthic communities of the EGoF and the Neva Bay were selected [1][2][3][4][5][6][7]: the Chironomidae "bloodworm" Chironomus plumosus, the gastropod mollusk (marine snail) Melanoides tuberculata, and the oligochaete Tubifex tubifex. All selected organisms can be regarded as a model of the freshwater component of the EGoF fauna [7][8][9][10]12]. The fauna of the Neva Bay is of the freshwater type [9,10].
The sand was brought from the northern coast of the EGoF. Before the experiments, the sand was ignited and passed through a set of sieves with different mesh sizes to obtain a grain size below 0.25 mm.
Clean fresh water was used to fill the aquariums housing the hydrobionts. Before each experiment, the water was left standing in a tank for two days to allow chlorine to dissipate.
The aquarium (35 × 10 × 25 cm) was filled with a 2 cm thick layer of sand, after which clean water was added to a depth of 10 cm.
There were six series of experiments with triple replication. Each series tested one type of organism with one sand dose:
1) covering of Ch. plumosus every 3 hours with a 1 cm sand layer;
2) covering of Ch. plumosus every 4 hours with a 2 cm sand layer;
3) covering of M. tuberculata every 3 hours with a 1 cm sand layer;
4) covering of M. tuberculata every 4 hours with a 2 cm sand layer;
5) covering of T. tubifex every 3 hours with a 1 cm sand layer;
6) covering of T. tubifex every 4 hours with a 2 cm sand layer.
Each replication lasted one day. The methodology was based on the assumption that a hopper dredger unloads dredged material at an underwater deposit site three times per day.
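For clarity, the full design can be restated compactly; the sketch below is purely illustrative (the variable names are ours, not from the study) and simply encodes the six series and the replication scheme.

```python
# Illustrative restatement of the experimental design: six series,
# each pairing one species with one burial regime, run in triplicate
# (one replicate per day, matching the assumed three dumpings per day).

SERIES = [
    {"species": "Chironomus plumosus",    "layer_cm": 1, "interval_h": 3},
    {"species": "Chironomus plumosus",    "layer_cm": 2, "interval_h": 4},
    {"species": "Melanoides tuberculata", "layer_cm": 1, "interval_h": 3},
    {"species": "Melanoides tuberculata", "layer_cm": 2, "interval_h": 4},
    {"species": "Tubifex tubifex",        "layer_cm": 1, "interval_h": 3},
    {"species": "Tubifex tubifex",        "layer_cm": 2, "interval_h": 4},
]
REPLICATES = 3  # triple replication

for s in SERIES:
    print(f"{s['species']}: {s['layer_cm']} cm layer every "
          f"{s['interval_h']} h, x{REPLICATES} replicates")
```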
Results
The problem of predicting the influence of dredging and dumping on benthic communities is important for the sustainable development of deposit-site regions and adjacent waters [5,[11][12][13][14]. Laboratory tests with different types of organisms were carried out, and the resulting data were used to estimate the influence of dredging works on benthic communities. Three types of organisms were used as test objects: Chironomidae (Ch. plumosus), gastropod mollusks (M. tuberculata) and oligochaetes (T. tubifex). The results of the experiments are shown in Figures 1-8. The x axis shows the elapsed time of the experiment; the y axis shows the number of organisms on the sand surface. The number of tested individuals of each species varied between series from 10 to 30.

Ten individuals of bloodworms were put in a container with water and covered with 1 cm of sand. The sand covered 100% of the bloodworm individuals, so at the beginning of the experiment the number of individuals on the sand surface was zero, and the graph starts at the origin. Every hour, the presence of bloodworms on the surface was noted, as well as the presence of active bloodworms in the upper 0.5 cm surface layer, because individuals of this species can burrow into sand or other soft substrate. Three hours after the beginning of the experiment, the total number of bloodworms on the surface of the sand and in the upper 0.5 cm was counted. After that, 1 cm of sand was poured into the container, which is reflected in the graph as a drop of the surface count to zero.
The sand was added a third time 3 hours after the second covering, i.e., 6 hours after the start of the experiment. Observation of the number of bloodworms on the surface and in the upper 0.5 cm layer of sand then continued for 3 more hours, so the total duration of the experiment was 9 hours.
Bloodworms reappeared on the surface unevenly between successive sand coverings, with counts decreasing or increasing both over time and across repetitions of the experiment.
As can be seen from Fig. 1, within 1 hour after the beginning of the experiment at least 5 individuals out of 10 appeared on the surface of the sand (50% of organisms survived), and over the next two hours the remaining individuals appeared (100% survival).

Fig. 2 shows the survival of bloodworms when covered with a 2 cm thick sand layer every 4 hours; the x axis indicates the start time and duration of the experiment. With a 2 cm sand layer, the number of bloodworms appearing on the surface in the first hour of the experiment averaged 2-3 individuals (20-30% of the total number) in each of the three repetitions, and likewise one hour after repeated covering with sand. No more than 90% of individuals ultimately appeared on the surface of the sand.

Fig. 3 shows the number of M. tuberculata snails appearing on the surface when covered with a 1 cm layer of sand every 3 hours; the overall thickness of the sand layer increased to 3 cm by the end of the experiment. The conditions of the experiment were otherwise similar to those in Figs. 1 and 2. The number of snails appearing on the surface within 30 minutes of the first covering was at least 6 individuals, with a maximum of 8 individuals across the three repetitions. The number of individuals appearing on the surface within 30 minutes of the second covering ranged from 3 to 6, reaching 7-9 individuals by the end of the interval. After the third covering, the number of individuals reaching the surface varied significantly between repetitions, but by the end of the experiment 100% of the snails had appeared on the surface.

Fig. 4 shows the relationship between the survival of M. tuberculata and the amount of sand covering the bottom of the aquarium. In this experiment, the sand covered the aquarium bottom with a layer 2 cm thick every 4 hours, and the total duration of the experiment was 12 hours. The differences between the experiments presented in Fig. 3 and Fig. 4 are explained by the different thickness of the sand layer (1 or 2 cm) and the filling frequency (every 3 or 4 hours, respectively).
The number of M. tuberculata individuals appearing on the surface in the first hour of the experiment was at least 5 across the three repetitions (50% of organisms survived). At the end of each four-hour interval before refilling, and at the end of the experiment, the number of individuals on the surface was no more than 9, with a minimum of 5. The average number of individuals appearing on the surface before each re-covering and at the end of the experiment was 8 specimens (80% of the total number).

Fig. 5 shows the corresponding results for T. tubifex covered with a 1 cm sand layer every 3 hours; the total duration of the experiment was 9 hours. At the beginning of the experiment, 30 individuals of T. tubifex in the aquarium were covered by a 1 cm sand layer. After 1 hour, in all three repetitions, the number of individuals on the surface was 16-19. After the second covering with sand, the maximum number of individuals was no more than 15. At the end of the experiment, in all three repetitions, no more than 9 individuals remained on the surface.

Fig. 6 shows the number of surviving oligochaetes when covered with a 2 cm sand layer every 4 hours; the total duration of the experiment was 12 hours. The differences between the experiments presented in Fig. 5 and Fig. 6 are explained by the different thickness of the sand layer covering the bottom of the aquarium (1 or 2 cm) and the filling frequency (every 3 or 4 hours, respectively). In the first hour of the experiment, the number of individuals on the surface was no more than 8 (of 30) in all series. The number of individuals on the surface before refilling and by the end of the experiment reached more than 14 (of 30); however, the number of surviving individuals usually did not exceed 10, averaging 9 specimens (of 30).

Fig. 7 shows the percentage of survivors for each type of test object in each repetition when covered with a 1 cm sand layer once every 3 hours. As can be seen from the graph, the largest numbers of survivors were observed for the bloodworm and M. tuberculata, amounting to 90-100%, while the survival of the oligochaetes was no more than 30%.

Fig. 8 shows the percentage of survivors for each type of test object in each repetition when covered with a 2 cm sand layer once every 4 hours. The highest percentage of survivors was found for the bloodworm, amounting to 90%. The lowest rates were observed for the oligochaetes, amounting to no more than 20%.
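To make the survival metric in Figs. 7-8 explicit: survival here is the share of initially buried individuals found on the surface at the end of the observation. The sketch below uses a hypothetical data layout (it is not the authors' code) to show the arithmetic for one replicate.

```python
# Sketch of the survival calculation: hourly surface counts for one
# replicate, with survival taken as the final surface count relative
# to the number of individuals initially buried.

def survival_percent(surface_counts, n_buried):
    """Percent of buried individuals on the surface at the end."""
    return 100.0 * surface_counts[-1] / n_buried

# Example: a T. tubifex replicate with 30 buried individuals and
# hourly counts over the final observation interval.
counts = [0, 5, 7, 9]
print(survival_percent(counts, 30))  # 30.0, matching the ~30% in Fig. 7
```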
Conclusions
To assess the impact of covering benthic communities with dredged material, a series of laboratory tests was carried out. The studies found that in the areas of deposit sites the status of benthic ecosystems strongly depends on the thickness of the layer covering the organisms and on the time intervals between dumpings from the hopper dredger. When the sand layer grows by a few centimeters per day, some types of organisms, such as chironomids, are better able to overcome this layer than other species of hydrobionts.
Thus, the species most resistant to burial under dumped material, and most capable of digging out, is the bloodworm Ch. plumosus. The resulting survival ranking is: Ch. plumosus > M. tuberculata > T. tubifex.
Lateral septum as a melanocortin downstream site in obesity development
SUMMARY The melanocortin pathway is well established to be critical for body-weight regulation in both rodents and humans. Despite extensive studies focusing on this pathway, the downstream brain sites that mediate its action are not clear. Here, we found that, among the known paraventricular hypothalamic (PVH) neuron groups, those expressing melanocortin 4 receptors (PVH Mc4R neurons) preferentially project to the ventral part of the lateral septum (LSv), a brain region known to be involved in emotional behaviors. Photostimulation of PVH Mc4R neuron terminals in the LSv reduces feeding and causes aversion, whereas deletion of Mc4Rs or disruption of glutamate release from LSv-projecting PVH neurons causes obesity. In addition, disruption of AMPA receptor function in PVH-projected LSv neurons causes obesity. Importantly, chronic inhibition of PVH- or PVH Mc4R -projected LSv neurons causes obesity associated with reduced energy expenditure. Thus, the LSv functions as an important node in mediating melanocortin action on body-weight regulation.
INTRODUCTION
Extensive studies in the last decades have identified the hypothalamus as a key regulator of feeding and energy expenditure. In particular, the melanocortin pathway has been well established to play a critical role in body-weight homeostasis. 1,2 This pathway involves proopiomelanocortin (POMC)-expressing and agouti-related protein (AgRP)-expressing neurons in the arcuate nucleus (Arc) and their downstream neurons that express melanocortin receptors (mainly melanocortin receptor 4 [Mc4R]). Within this pathway, POMC and AgRP neurons release α-melanocyte-stimulating hormone (α-MSH, a POMC-derived peptide) and AgRP, respectively, which function as agonist and antagonist (or inverse agonist) of Mc4Rs to inhibit and promote feeding. 1 Mutations in just a few key genes involved in the melanocortin pathway account for a sizable human population with obesity, with Mc4R representing the most common monogenic human obesity gene, [3][4][5][6][7] suggesting that the melanocortin pathway is a conserved pathway in feeding and body-weight regulation shared between rodents and humans.
Despite the diffuse projection patterns of both POMC and AgRP neurons, as well as the broad expression pattern of the Mc4R gene, the paraventricular hypothalamus (PVH) represents a major site that mediates melanocortin action on body weight. 8 Targeted deletion of Mc4Rs from the PVH leads to obesity, and selective restoration of Mc4Rs in the PVH of Mc4R null mice reverses the major part of Mc4R obesity. 9,10 Consistently, activation or inhibition of Mc4R-expressing PVH neurons (denoted as PVH Mc4R neurons) leads to a potent effect on inhibiting or promoting feeding, respectively. 9 While inhibition of the whole PVH projections to the periaqueductal gray (PAG) promotes feeding more potently than those to the parabrachial nucleus (PBN) or the nucleus of the solitary tract (NTS), inhibition of PVH Mc4R neuron projections to the PBN produces stronger effects on feeding than those to the PAG, 9,11 suggesting a unique role of PVH Mc4R neuron projections in feeding regulation. These studies based on optogenetic or chemogenetic manipulation of the activity of PVH Mc4R neuron projections clearly demonstrate a potent effect of PVH Mc4R neurons on acute feeding behavior. However, as shown previously for both hindbrain POMC neurons 12 and arcuate AgRP neurons, 13,14 acute feeding alterations may not necessarily translate into body-weight changes. In particular, the downstream sites that mediate the action of PVH Mc4R neurons on body weight remain to be established. It is important to mention that, in addition to feeding regulation, Mc4Rs are also involved in anxiety-related behaviors. 15,16 Notably, double knockout of Mc4R and SAP90/PSD95-associated protein 3 (SAPAP3) completely rescues both the obesity phenotype of the Mc4R single knockout and the anxiety-related self-grooming phenotype of SAPAP3 single-knockout mice, 17 suggesting a shared pathway underlying both Mc4R-regulated obesity and SAPAP3-regulated anxiety-related behaviors. Thus, we reasoned that a PVH Mc4R neuron downstream brain region known to regulate anxiety-related behaviors might also mediate the effect of PVH Mc4R neurons on feeding and body weight. Given our preliminary studies suggesting that PVH Mc4R neurons send abundant projections to the ventral part of the lateral septum (LSv), we hypothesized that the PVH Mc4R -to-LSv projection co-regulates body weight and anxiety-related behaviors.
The lateral septum represents a basal forebrain structure involved in a wide variety of functions, including emotional, motivational, and spatial navigation behaviors. 18 Activation and inhibition of lateral septum (LS) corticotropin-releasing hormone receptor 2 (CRFR2)-expressing neurons promote and reduce anxiety-like behaviors, respectively. 19 LS lesions cause rage behaviors, and activation of LS CRFR2 neuron terminals in the anterior hypothalamic area promotes social aggression. 19 Recent results also implicate LS neurons in feeding and emotionally related feeding behaviors, especially with respect to pathways involving known feeding-regulating hormones, including glucagon-like peptide 1 (GLP1), neurotensin, and ghrelin. [20][21][22][23][24] In addition, the LS has emerged as a site that mediates the interaction between stress-related emotions and feeding control via connections with the hippocampus and/or the hypothalamus. 25 However, the relevance of LS neurons to obesity development is not clear. In addition, the LS is generally divided into dorsal, intermediate, and ventral subdivisions, 26 and the relevance of the LSv to feeding and body-weight regulation remains largely unknown.
Here we show that the LSv represents a major downstream projection site of PVH Mc4R neurons. Activation of PVH Mc4R →LSv projections promoted stress-related self-grooming and inhibited feeding. Importantly, mouse models with loss of function in this pathway, including disruption of glutamate release from or deletion of Mc4Rs in LSv-projecting PVH neurons, or disruption of glutamate-AMPA receptors in PVH Mc4R -projected LSv neurons, all develop obesity. Furthermore, chronic inhibition of PVH- or PVH Mc4R -projected LSv neurons results in severe obesity associated with reduced anxiety-like behaviors. Together, these results reveal the LSv as an important downstream mediator of the melanocortin pathway in co-regulating obesity and stress-related behaviors.
RESULTS
Stimulation of PVH Mc4R →LSv projections reduces feeding and promotes aversion PVH MC4R projections have been studied in the hindbrain area including the PAG and PBN. 11,27 To examine whether these neurons also project to forebrain sites, we stereotactically injected an AAV-Flex-ChR2-EYFP virus to the PVH of Mc4R-Cre mice ( Figures 1A and 1B, left). In addition to the known projections found in the hindbrain regions ( Figure S1), we identified strong ChR2-positive fibers in the LSv ( Figure 1B, middle and right) but not in other subregions of the LS ( Figure S1). The body-weight-regulating neurons in the PVH can be broadly divided into PVH Mc4R and PVH Pdyn neurons (neurons expressing prodynorphin). 28 To examine whether PVH Pdyn neurons also project to the LSv, we performed similar viral injections in Pdyn-Cre mice and found only minor projections to the LSv. Consistent with the minor projection pattern, photostimulation of PVH Pdyn →LSv projections caused no obvious changes in feeding or place preference behaviors ( Figure S2). These results, combined with the previous data showing that PVH neurons expressing oxytocin or arginine vasopressin (AVP) do not contribute to the projections to the LSv, 29 suggest that PVH MC4R neurons represent the major contributing subset of PVH neurons that send projections to the LSv.
Photostimulation of PVH Mc4R terminals in the LSv induced c-Fos expression in numerous LSv neurons ( Figure 1B, middle) compared with controls ( Figure 1B, right), suggesting an excitatory effect of the projection. This observation is consistent with the fact that PVH Mc4R neurons are glutamatergic. 30 Given the strong implication of Mc4R neurons in feeding, we first examined the effect of targeted stimulation on feeding. In overnight fasted animals, photostimulation of PVH Mc4R terminals in the LSv induced a strong inhibitory effect on fasting-induced refeeding ( Figure 1C). Interestingly, however, in fed mice the stimulation elicited repetitive self-grooming behaviors ( Figure 1D). To probe the emotional valence associated with the observed self-grooming behavior, we next subjected these mice to real-time place preference (RTPP) tests. The same photostimulation protocol (5 Hz, 100 ms) caused an obvious avoidance to the stimulation ( Figures 1E-1G), which led to a stronger avoidance phenotype ( Figure 1G) than that induced by a lower-strength photostimulation protocol (5 Hz, 20 ms) ( Figure 1F), suggesting a scalable effect of stimulating PVH Mc4R neuron terminals in the LSv toward eliciting an avoidance behavior. To further assess the effect on natural avoidance behavior, we paired photostimulation with mouse stay in the periphery of the open-field arena ( Figure 1H). Interestingly, compared with the control non-stimulation condition, photostimulation resulted in a longer time spent in the non-paired center zone in all mice ( Figures 1H and 1I), suggesting that photostimulation caused an avoidance that dominates over the innate anxiety associated with the center stay in an open field.
To more precisely examine the avoidance behavior elicited by photostimulation on feeding behavior, we fasted mice overnight and placed food in the corner of the testing arena that was paired with or without photostimulation ( Figure 1J). Under this condition, mice could choose to stay in the unstimulated side with hunger or in the stimulated side with food consumption. While mice spent the majority of time in the food zone when not paired with photostimulation, they spent much less time in the food zone when paired ( Figures 1J and 1K). Associated with this, mice ate much less food with photostimulation ( Figure 1K). These results demonstrate that stimulation of PVH MC4R terminals in the LSv causes a strong avoidance that effectively reduces hunger-driven feeding.
To determine whether LSv-projecting PVH Mc4R neurons also send collateral projections to other downstream brain targets, we next performed experiments in Mc4R-Cre mice by delivering a retrogradely trafficked AAVrg-FlpO-mCherry virus into the LSv followed by injecting AAV-con/fon-EYFP viral vectors into the PVH. We observed EYFP-positive fibers in a few other brain regions, including median eminence, PAG area, and PBN ( Figure S3), suggesting that LSv-projecting PVH Mc4R neurons indeed send collaterals to these brain regions.
LSv-projecting PVH neurons in obesity development
Given the effect on feeding by photostimulation of PVH Mc4R terminals in the LSv, we next examined the role of LSv-projecting PVH neurons in body-weight regulation. PVH neurons express vesicular glutamate transporter 2 (Vglut2, also named Slc17a6), a marker for glutamatergic neurons that is required for presynaptic glutamate release in PVH neurons. 30 To selectively target LSv-projecting PVH neurons, we stereotactically delivered a retrograde AAVrg-FlpO-mCherry virus into the LSv, which is able to trace LSv-projecting PVH neurons. 31 We then delivered AAV-fDIO-Cre mixed with AAV-DIO-GFP (to report the Cre expression) vectors into the PVH of either Vglut2 flox/flox or Mc4R flox/flox mice to facilitate FlpO-mediated Cre expression and subsequent deletion of Vglut2 or Mc4R expression exclusively in LSv-projecting PVH neurons (Figure 2A). The specific delivery of AAVrg-FlpO-mCherry to the LSv was confirmed post hoc via viral expression ( Figure 2B, left). As expected, we observed AAV-fDIO-Cre expression in a subset of PVH neurons ( Figure 2B, right), suggesting selective targeting of LSv-projecting PVH neurons. Of note, we also observed GFP-positive fibers in the LSv ( Figure 2B, left), confirming the projection from GFP-expressing PVH neurons. To confirm Cre-mediated deletion of Vglut2, we performed in situ hybridization (ISH) experiments. Compared with control mice injected with AAV-fDIO-GFP virus in the PVH, there was an obvious reduction of Vglut2 mRNA expression in the PVH of Cre-injected mice ( Figure 2C). Of note, since the viral GFP signal will be completely quenched during the pretreatment steps of the ISH procedure, the ISH signal of Vglut2 can be reliably visualized with green fluorescence. These results suggest effective deletion of Vglut2 by FlpO-mediated Cre expression. Cre-mediated deletion of Vglut2 and the resultant loss of glutamate release using the same floxed allele have also been documented previously. 29,30 Interestingly, compared with controls, mice with specific Vglut2 deletion in LSv-projecting PVH neurons develop obesity when fed a chow diet ( Figure 2D). The body-weight gain was about 15 g at 8 weeks after the viral injection, suggesting rapid obesity development.
We also performed ISH and immunohistochemical experiments to validate Cre-mediated deletion of Mc4R. While no difference in Vglut2 ISH signal was observed between groups ( Figure 2E, left), the Mc4R signal was clearly reduced in Cre-injected mice compared with controls ( Figure 2E, right; Figure S4). Altogether, these results suggest effective deletion of Mc4R expression by FlpO-mediated Cre expression. In these mice, we measured weekly body weight on chow for 8 weeks followed by another 8 weeks on a high-fat diet (HFD). Although Mc4R knockout (KO) mice showed only a trend toward obesity development on normal chow, they exhibited sensitivity to HFD-induced obesity: after 8 weeks on HFD, they gained up to 10 g of body weight ( Figure 2F). The milder effect toward obesity development on chow in Mc4R KO mice relative to Vglut2 KO mice may reflect a smaller number of neurons targeted in the Mc4R KO models, since Mc4R-expressing neurons account for only a subset of PVH neurons, 32 or may reflect the Mc4R being only one of many upstream signals that affect body weight through these neurons. Nevertheless, these results strongly support that glutamatergic PVH Mc4R projections to the LSv are required for body-weight regulation.
Glutamate receptors in PVH-projected LSv neurons in body-weight regulation
Although previous studies have implicated LS neurons, especially dorsal LS neurons, in feeding regulation, [20][21][22][23][24]33 the role of LS neurons in body-weight regulation is unknown. Given that LSv-projecting PVH Mc4R neurons also send collaterals to other brain sites, it is imperative to determine the role of LSv neurons in mediating the observed effects. To this end, we aimed to selectively target those LSv neurons that receive inputs from PVH glutamatergic neurons. We first examined the role of glutamate receptors in LSv neurons that receive direct PVH inputs (hereafter defined as PVH-projected LSv neurons). α-Amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-D-aspartic acid (NMDA) receptors represent the major classes of postsynaptic receptors that mediate glutamate action. To selectively manipulate AMPA receptors, we took advantage of a C-terminal fragment of the GluR1 subunit (hereafter defined as ctGluR1) that can be used to disrupt receptor trafficking from the cytosol to postsynaptic membranes and the consequent glutamatergic neurotransmission. 34 We generated a conditional AAV-DIO-ctGluR1-mCherry vector, which expresses the ctGluR1 fragment in a Cre-dependent manner. We delivered anterograde AAV1-Cre-GFP viral particles to the PVH to trace direct downstream LSv neurons, and at the same time we delivered the AAV-DIO-ctGluR1-mCherry or control AAV-DIO-mCherry viral particles to the LSv ( Figure 3A). Targeted delivery of AAV1-Cre-GFP was confirmed in the PVH ( Figure 3B, left), as was Cre-mediated viral expression of mCherry in PVH-projected LSv neurons ( Figure 3B, middle and right). To validate ctGluR1 expression, we performed immunostaining with an antibody raised against ctGluR1. Compared with control mice ( Figure 3C, top), PVH-projected LSv neurons exhibited a higher level of ctGluR1 signal in ctGluR1-injected mice ( Figure 3C, arrows in bottom panels), confirming ctGluR1 overexpression in these neurons. Interestingly, we also noted that mice with ctGluR1 overexpression in PVH-projected LSv neurons rapidly developed obesity on the chow diet ( Figure 3D). Given the known role of LS neurons in emotional modalities, we also tested these mice for anxiety-like behaviors. Indeed, these mice spent more time in the center of open-field tests (OFTs) ( Figures 3E and 3F), suggesting a reduced level of anxiety. These findings support that normal function of AMPA receptors in PVH-projected LSv neurons is required for body-weight regulation. NMDA receptors have previously been shown to be involved in feeding and body-weight regulation. 34 To directly examine the role of NMDA receptors in PVH-projected LSv neurons, we bilaterally injected AAV1-FlpO-mCherry vectors into the PVH of Grin1 flox/flox mice (the Grin1 gene encodes the NR1 subunit of NMDA receptors) while also delivering AAV-fDIO-Cre or control AAV-fDIO-GFP vectors to the LSv, which allowed us to express Cre or GFP in FlpO-expressing LSv neurons ( Figure S5A). In Cre-injected Grin1 flox/flox mice ( Figure S5B, right), compared with the control GFP group ( Figure S5B, top), NR1 expression was reduced in the LSv as demonstrated by NR1 immunostaining ( Figure S5B, bottom), and electrically evoked currents showed a significant decrease in the NMDA receptor/AMPA receptor ratio ( Figures S5C and S5D), suggesting effective Cre-mediated deletion of the NR1 subunit.
We next monitored weekly body weight in these mice on chow for 8 weeks after viral injections, and somewhat surprisingly found no differences in body weight between NR1 KO and control mice in both males ( Figure S5E) and females ( Figure S5F). Together, these data collectively suggest that LSv AMPA receptors but not NMDA receptors mediate the PVH glutamatergic action on body-weight regulation.
Chronic inhibition of PVH-projected LSv neurons leads to massive obesity
One of the consequences of disrupting glutamatergic inputs is reduced activity of the targeted neurons. Thus, we next examined the consequence of chronic inhibition of PVH-projected LSv neurons. To this end, we delivered a conditional AAV-Flex-Kir2.1-dTomato virus to the LSv of wild-type mice that also received injections of anterograde AAV1-Cre-GFP vectors in the PVH ( Figure 4A). The Kir2.1 virus expresses a mutated K + channel that has been shown to achieve chronic inhibition of neurons. 14 As evidenced by Cre-dependent expression of Kir2.1-dTomato ( Figure 4B, right), AAV1-Cre-GFP injection in the PVH ( Figure 4B, left) resulted in the expression of Cre-GFP in a subset of LSv neurons. To functionally evaluate Kir2.1 expression, given that PVH neurons are known to be predominantly glutamatergic and LSv neurons mainly GABAergic, 18,29,30 we also delivered AAV-DIO-hM3Gq-mCherry vectors to the PVH, followed by injection of a mixture of AAV-Flex-Kir2.1-dTomato and AAV-fDIO-Cre vectors, or control AAV-fDIO-mCherry, into the LSv of compound Vglut2-Ires-Cre::Vgat-FlpO mice ( Figure S6A). The expression of hM3Gq in the PVH and the expression of control mCherry or Kir2.1-dTomato in the LSv were confirmed in both groups ( Figures S6B and S6C, left). We then examined c-Fos expression in LSv neurons to evaluate their overall activity upon activation of upstream PVH neurons with clozapine N-oxide (CNO). While control mice showed intense c-Fos expression in the LSv, Kir2.1-injected mice exhibited much less c-Fos expression, which did not co-localize with Kir2.1-dTomato expression ( Figures S6B and S6C, right), confirming the effect of Kir2.1 expression in reducing PVH-projected LSv neuron activity.
Interestingly, chronic inhibition of PVH-projected LSv neurons led to massive obesity when fed normal chow ( Figure 4C), and the average body-weight gain was up to 15 g by 12 weeks after Kir2.1 expression ( Figure 4D). The observed obesity development was associated with both reduced energy expenditure ( Figures 4E and 4G) and increased food intake ( Figures 4F and 4H). Notably, the differences in feeding between controls and Kir2.1-injected mice were more obvious during the night period compared with the daytime (Figure 4E), suggesting a potential effect of the PVH→LSv projection in diurnal pattern regulation. However, we did not observe any difference in locomotion between control and Kir2.1 groups ( Figure 4I). Of note, there was no body-weight difference between groups when metabolic parameters were measured, so any observed phenotype was not due to a secondary effect of data normalization. Collectively, these results demonstrate the importance of PVH-projected LSv neurons in body-weight regulation.
Chronic inhibition of PVH-projected LSv neurons induces behavioral signs of reduced anxiety level
Given the known role of LS neurons in emotional control, we also subjected mice with chronic inhibition of PVH-projected LSv neurons to various tests for behavioral signs of anxiety, including light/dark box tests (LDT), elevated-plus maze (EPM) tests, and OFTs. In all assays, we performed the behavioral tests prior to body-weight divergence to eliminate potential secondary effects of obesity. Consistently, mice with chronic inhibition of PVH-projected LSv neurons exhibited behavioral signs of reduced anxiety across these assays.
GABA release from PVH-projected LSv neurons in body-weight regulation
The LS mainly consists of GABAergic neurons and expresses abundant vesicular GABA transporter (Vgat, also named Slc32a1). 18 To examine the role of GABA release from PVH-projected LSv neurons, we delivered AAV1-FlpO-mCherry to the PVH, followed by injection of either AAV-fDIO-Cre mixed with AAV-DIO-GFP vectors or control AAV-fDIO-GFP virus into the LSv of Vgat flox/flox mice ( Figure 6A). The expression of AAV1-FlpO-mCherry was confirmed in the PVH ( Figure 6B, left) and, as expected, FlpO-mediated expression of Cre recombinase (reported by co-injected Cre-dependent GFP) was also found in the LSv ( Figure 6B, right). We then performed dual ISH of Grin1 and Vgat to validate gene deletion. Despite comparable expression levels of Grin1 ( Figure 6C, left), the ISH signal of Vgat mRNA in the Cre group was obviously reduced compared with the GFP vector-injected controls ( Figure 6C, right). Consistent with this Vgat mRNA reduction, our electrophysiological recording data further showed that laser-evoked inhibitory postsynaptic currents in LSv neurons were significantly lower in Cre-injected Vgat flox/flox mice compared with the GFP group following photostimulation of LSv-projecting PVH neurons ( Figures S7A-S7C). These results collectively suggest effective Cre-mediated deletion of Vgat in PVH-projected LSv neurons. We next subjected these mice to normal chow diet feeding for 12 weeks followed by another 8 weeks of HFD feeding. Although mice with Vgat deletion in PVH-projected LSv neurons exhibited a trend toward altered feeding with no significant changes in body weight on chow, they rapidly developed obesity on HFD ( Figure 6D). When Comprehensive Lab Animal Monitoring System (CLAMS; Columbus Instruments, Columbus, OH) measurements were performed prior to body-weight divergence and during the chow-HFD transition, we found that mice with Vgat deletion exhibited a reduction in energy expenditure ( Figures 6E and 6H) and locomotion ( Figures 6F and 6I) in response to HFD. However, food intake measurements at this early time point showed no significant difference between groups ( Figure 6G). These results collectively suggest an obligatory role for GABA release from PVH-projected LSv neurons in diet-induced obesity.
In addition, a subset of LSv neurons also express vesicular monoamine transporter 2 (VMAT2, also named Slc18a2) ( Figure S7D), a gene known to be required for presynaptic release of monoamines, although the role of its expression in this area has never been explored. We therefore adopted a comparable viral injection strategy in VMAT2 flox/flox mice to facilitate conditional deletion of VMAT2 in PVH-projected LSv neurons ( Figure S7E). While VMAT2 deletion was validated in the LSv ( Figure S7F), mice lacking VMAT2 in PVH-projected LSv neurons exhibited no difference in body weight compared with control animals ( Figure S7G), suggesting that VMAT2 expression in PVH-projected LSv neurons is not required for body-weight regulation.
Chronic inhibition of PVH Mc4R -projected LSv neurons results in obesity
Our anatomic data suggest that PVH Mc4R neurons provide the majority of PVH inputs to the LSv. To specifically examine the role of PVH Mc4R neuron projections to the LSv, we employed the previously established trans-synaptic tracer WGA-GFP, which is effectively transported to immediate downstream neurons but can also be transported in a retrograde fashion to presynaptic neurons. 35,36 Since the LSv does not project to the PVH ( Figure S8), we reasoned that WGA-GFP can be used to directly target LSv neurons that receive direct PVH Mc4R neuron inputs (i.e., PVH Mc4R -projected LSv neurons). To this end, we bilaterally injected AAV-Flex-WGA-GFP viral vectors into the PVH of Mc4R-Cre mice, while at the same time a mixture of AAV-FlpO-DOG-NW, AAV-fDIO-Cre, and AAV-Flex-Kir2.1-dTomato particles was injected into the LSv ( Figure 7A). In this design, the AAV-FlpO-DOG-NW virus expresses FlpO only in the presence of GFP, 37 which then facilitates the conditional expression of Cre recombinase; thus, Kir2.1 will only be expressed in PVH Mc4R -projected LSv neurons in the presence of Cre. As shown in Figure 7B, the expression of GFP and dTomato was confirmed in a subset of LSv neurons, indicating specific expression of Kir2.1 in PVH Mc4R -projected LSv neurons. Interestingly, compared with the control group (LSv injected with a mixture of AAV-FlpO-DOG-NW and AAV-fDIO-mCherry), these mice developed obesity when fed normal chow ( Figure 7C). When measured with CLAMS prior to body-weight divergence, these mice showed a clear trend toward reduced energy expenditure, especially during the dark period ( Figures 7D and 7G). However, we did not observe any obvious difference in locomotion ( Figure 7E) or food intake ( Figure 7F). Interestingly, compared with controls, these mice spent more time exploring the lit room of the LDT ( Figures 7H and 7I). These results collectively demonstrate that the PVH Mc4R →LSv circuit co-regulates both body weight and anxiety.
DISCUSSION
Despite extensive studies on the Mc4R function in obesity development in both rodents and humans, the neural pathway that mediates its action remains largely unexplored. Previous studies mostly focus on brain neurons that express Mc4Rs, but little is known regarding the function of their downstream neurons. 8 In addition, the function of Mc4Rs has been primarily investigated in the context of homeostatic energy balance, yet their functional relevance to behavioral adaptation is unclear. An appropriate decision of whether or not to engage in feeding in a threatening environment is an important adaptive behavior for survival. Emerging studies have suggested that hypothalamic neurons that regulate feeding are also implicated in orchestrating feeding in response to environmental cues. 25 Activation of AgRP neurons promotes feeding, suppresses feeding inhibition induced by anxiogenic cues, 38-40 and prioritizes feeding behaviors over social interactions such as mating. 41 These observations support the notion that the melanocortin pathway has the capacity to orchestrate feeding behaviors in response to ever-changing environmental cues.
Our results presented here reveal that PVH Mc4R neurons send abundant projections to the LSv, the activation of which reduced feeding and was associated with behavioral signs of increased anxiety. Importantly, disruption of this pathway increases body weight and is associated with behavioral signs of reduced anxiety. Thus, the PVH Mc4R →LSv projection bridges a key feeding brain area and an important emotion-related brain area, and co-regulates feeding and stress-related behaviors. Our results support that an increased activity level of this projection reduces feeding and promotes stress-related behaviors, whereas a reduced level causes the opposite effect. Although stress-induced hypophagia has been well studied, 42 the underlying neural pathways remain unclear. Our previous study demonstrated that the activity of PVH and LS neurons increases in response to stress stimuli and decreases during feeding. 29 Along these lines, the PVH Mc4R →LSv projections may be involved in stress-induced hypophagia. Previous studies have implicated a role for AgRP neurons and Mc4R action in stress responses. 16,38,39 Our current results on the PVH Mc4R →LSv projection suggest that the LSv may be part of the downstream pathway mediating this effect. Given the well-documented correlation between eating disorders and psychiatric illnesses, 43,44 our current results suggest a common PVH Mc4R →LSv pathway as a potential neural basis for the correlation between psychiatric illness and eating disorders. Given the stress-related self-grooming induced by activating PVH Mc4R →LSv projections, this pathway may also contribute to the previously described reciprocal rescuing effects observed between obesity from Mc4R deficiency and uncontrolled self-grooming from SAPAP3 KO in Mc4R and SAPAP3 double-KO models. 17

One striking finding in the current study is the implication of LSv neurons in obesity development. Although previous studies have shown that LS neurons, especially those located in the dorsal part of the LS, play a role in feeding, and more specifically in coordinating stress-induced hypophagia, 21,24 our results provide evidence that, beyond stress-induced hypophagia, the PVH→LSv projection also represents a bona fide body-weight-regulating pathway. Moreover, our data showed that defects in the glutamate-AMPA action within the pathway led to obesity development. In particular, chronic inhibition of PVH-projected LSv neurons caused a profound impact on obesity development. Thus, we reveal that LSv neurons represent an important downstream site that mediates PVH function in body-weight regulation.
Deletion of Vglut2 from LSv-projecting PVH neurons led to obesity in mice fed a chow diet, which is comparable with the obese phenotype we have previously observed with Vglut2 deletion directly from PVH neurons, 30 suggesting a major role for the LSv as a downstream node in mediating PVH action on body weight. Previous studies on PVH were mostly focused on projections in the midbrain, brainstem, and spinal cord. 11,27 Given the fact that LSv-projecting PVH neurons also send collateral projections to those brain sites, it is possible that a common subset of PVH neurons regulates body weight through a divergent projection to several downstream brain sites. However, given that our anatomic data show partial Vglut2 loss in the PVH as a result of targeted Vglut2 deletion from LSv-projecting PVH neurons and that PVH oxytocin-and AVP-expressing neurons do not project to the LSv, 29 there should be a significant number of PVH neurons that do not project to the LSv. Notably, previous results suggest that PVH neurons that regulate body weight can be largely divided into two groups: Mc4R-expressing and prodynorphin-expressing neurons. 28 Our current results demonstrate that PVH MC4R but not PVH Pdyn neurons provide major projections to the LSv. Thus, the LSv may represent an important downstream site mediating the role of PVH MC4R action in feeding and body-weight regulation.
Altogether, our data show that disruption of glutamate release from LSv-projecting PVH neurons produces a much more robust obesity than Mc4R deletion from the same group of neurons, indicating that a significant number of non-Mc4R-expressing neurons contribute to the PVH→LSv projection. Consistent with this, direct photostimulation of PVH neuron projections in the LSv produces a much stronger behavioral effect that includes jumping and self-grooming, 29 while stimulation of PVH Mc4R projections in the current study produces only relatively mild self-grooming. Moreover, chronic inhibition of PVH-projected LSv neurons with Kir2.1 expression causes greater obesity development than that of PVH Mc4R -projected LSv neurons, indicating that a smaller number of downstream LSv neurons are targeted from PVH Mc4R neurons. Thus, it appears that the PVH Mc4R →LSv projection represents only a component of the PVH→LSv projection. An alternative explanation for the more robust obesity from disruption of glutamate release than from Mc4R deletion is that Mc4R-mediated signaling represents only one of many upstream signals for body-weight regulation, whereas glutamate is the major mediating neurotransmitter of these neurons. 30

Notably, we compared food intake and energy expenditure before body-weight divergence with the aim of identifying the initiating factor for the observed obesity. For this reason the measured differences, especially in food intake, which is known to be highly variable according to the measurement method, can be small. In this sense, caution should be exercised in interpreting the lack of difference in food intake in some of the obesity models examined, as small differences in feeding may be masked by measurement variation. In line with this, previous studies on PVH Mc4R neurons also showed no difference in food intake when feeding was measured prior to obesity development. 30,45,46 Nevertheless, we observed a consistent reduction in energy expenditure associated with the observed obesity, arguing that the initiating factor in obesity associated with the PVH Mc4R →LSv projection is reduced energy expenditure, consistent with the previous observations of a primary role of PVH Mc4R neurons in energy expenditure. 30,45

To date, the identity of PVH-projected LSv neurons remains unknown. Mice with disruption of GABA release from PVH-projected LSv neurons exhibit diet-induced obesity, suggesting a role of GABA release in mediating the body-weight effect, which is consistent with the fact that LS neurons are GABAergic. 18 We also found that some of these LSv neurons express VMAT2, a marker for monoaminergic neurons. However, VMAT2 deletion in these neurons caused no obvious changes in body weight, arguing against a potential role for VMAT2-mediated neurotransmission. A subset of LS neurons has been demonstrated to express other markers, including CRFR2, GLP1 receptors, and neurotensin. [20][21][22]24,47,48 However, since these neurons are not limited to the LSv, 21,22,48 it is less likely that they contribute significantly to the observed effects. Further single-cell RNA sequencing could be used to identify novel markers that specifically label this subset of neurons. LS neurons have been involved in various behaviors, including stress, defense, aggression, and memory.
18 Our results on the susceptibility to diet-induced obesity in mice, either with Mc4R deletion in LSv-projecting PVH neurons or with disruption of GABA release from PVH-projected LSv neurons, suggest a potential role of PVH-projected LSv neurons in hedonic feeding. 49 It will be interesting for future studies to investigate the underlying mechanism and pathway for the observed susceptibility to diet-induced obesity.
Limitations of the study
There are some limitations in the present study. The activities of both PVH and LSv neurons are known to be sensitive to external stressors and respond to feeding, 29 and mutations in the Mc4R gene are associated with obesity and emotional changes, 15 suggesting relevance to human physiology. Since extra-physiological optogenetics and mouse genetics were used to achieve gain or loss of functions of neurons and circuits in this study, one main limitation is the relevance to normal physiology. Further physiological studies are required to ascertain the level of involvement of the PVH Mc4R →LSv projection in normal physiological feeding and anxiety-related behaviors. Technically, direct functional documentation of the effect of the ctGluR1 peptide on AMPA receptor trafficking is not presented, which, however, given the compelling data on the expression of the peptide and associated phenotypes, is unlikely to affect the main conclusion drawn from this study.
• All data that support the findings of this study are available from the lead contact upon request.
• This paper does not report original code.
• Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Mice were housed at 21°C-22°C with a 12 h light/12 h dark cycle, with standard pellet chow and water provided ad libitum unless otherwise noted for fasting experiments. Mc4R-Cre, Pdyn-Cre, Vglut2 flox/flox, Mc4R flox/flox, Grin1 flox/flox, Vgat flox/flox, VMAT2 flox/flox, Vglut2-Ires-Cre and Vgat-FlpO mice were described previously 28,30,34,[52][53][54] and were all obtained from the Jax lab. Both male and female mice were used as study subjects, except where otherwise noted. All mice used for stereotaxic injections were at least 8-10 weeks old.
METHOD DETAILS
Surgeries and viral constructs-All mice used for stereotaxic injections were at least 8-10 weeks old. Briefly, stereotaxic surgeries were performed as described previously. 55 For specific blocking of the AMPA receptor signaling pathway in PVH-projected LSv neurons, the AAV-Flex-CtGluR1-P2A-mCherry vector was cloned by inserting the coding sequence of the C-terminal fragment of the AMPA receptor GluR1 subunit, which was previously used to block GluR1 subunit trafficking, 57 into AAV-Flex-GFP in place of the GFP coding sequence (Vectorbuilder Inc., Chicago, IL, USA). In wild-type mice, the AAV1-CMV-Cre-eGFP virus (Addgene, Plasmid #105545) was injected into the PVH (bilateral, 150 nL each) to achieve anterograde tracing. 58

Real-time place preference (RTPP) assays-For RTPP assays, a commutator (rotary joint; Doric, QC, Canada) was attached to a patch cable via an FC/PC adaptor. The patch cable was then attached to the optic fiber cannula ferrule end via a ceramic mating sleeve. Another patch cable containing FC/PC connections at both ends allowed connection between the commutator and the 473 nm laser, which was controlled by the Master-8 pulse stimulator. Mc4R-Cre mice injected with Cre-dependent ChR2 or GFP viruses, respectively, and implanted with optic fibers above the LSv, were placed in a clean 45 cm × 45 cm × 50 cm chamber equipped with a camera mounted on top of the chamber and an optical fiber patch cable attached to the commutator. The testing chamber was wiped down with 70% isopropyl alcohol between tests. Prior to starting experiments, the patch cable was attached to the optic fiber ferrule end of the mouse's cannula. At the start of the experiment, mice were placed in the light-off zone, in which no light was applied. Then, for 20 min, the mice were allowed to freely roam the enclosure, which was divided into two equal zones: the light-off zone and a light-on zone, in which 5 Hz, 20 or 100 ms (473 nm, ~5 mW/mm 2 ) light pulses were delivered. The side paired with photostimulation was counterbalanced between mice.
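As a side note on the stimulation parameters: a 5 Hz train of 20 ms or 100 ms pulses corresponds to a 10% or 50% duty cycle, respectively. The sketch below only illustrates this timing arithmetic; the actual pulses were generated by the Master-8 stimulator, and the function name is ours.

```python
# Illustrative timing of the RTPP photostimulation trains (5 Hz,
# 20 or 100 ms pulse width). Returns pulse (onset, offset) times in s.

def pulse_train(rate_hz, width_ms, duration_s):
    period = 1.0 / rate_hz
    n = int(duration_s * rate_hz)
    return [(round(i * period, 6), round(i * period + width_ms / 1e3, 6))
            for i in range(n)]

print(pulse_train(rate_hz=5, width_ms=100, duration_s=1))
# [(0.0, 0.1), (0.2, 0.3), (0.4, 0.5), (0.6, 0.7), (0.8, 0.9)] -> 50% duty
print(pulse_train(rate_hz=5, width_ms=20, duration_s=1))
# [(0.0, 0.02), (0.2, 0.22), ...] -> 10% duty cycle
```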
For some experiments, instead of pairing photostimulation with one side, it was paired with the peripheral zone (with light off in the central zone of the arena).

Behavioral assays-All behavioral assays were performed before body weight diverged between groups fed on chow, and experimenters were blinded to animal ID when behavior tests were conducted.
Open field test (OFT):
The apparatus consists of a brightly illuminated (120 lx) white Plexiglas box. Each mouse was placed in a corner of the apparatus to initiate a 20-min test session. A camera (Noldus, Leesburg, VA, USA) mounted above the apparatus monitored the mice, and movement tracks were recorded and analyzed with EthoVision XT software. Time spent in the center was recorded and subsequently analyzed; longer times spent exploring the center of the arena were interpreted as less anxiety-like behavior.
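Where the center-time measure described above is computed from tracked coordinates, the logic reduces to a per-frame zone test. Below is a minimal Python sketch, assuming (x, y) positions exported from the tracking software at a fixed frame rate; the arena dimensions and the half-width center-zone definition are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def center_time_fraction(xy, arena_size=45.0, center_frac=0.5):
    """Fraction of frames spent in the central zone of a square arena.

    xy          : (n_frames, 2) array of tracked positions in cm, with the
                  arena spanning [0, arena_size] on each axis.
    center_frac : side length of the center zone relative to the arena
                  (an illustrative choice; zone definitions vary by lab).
    """
    xy = np.asarray(xy, dtype=float)
    margin = arena_size * (1.0 - center_frac) / 2.0
    # A frame counts as "center" when both coordinates lie inside the margin.
    in_center = np.all((xy > margin) & (xy < arena_size - margin), axis=1)
    return in_center.mean()

# Example: 20 min at 25 frames/s of random positions (placeholder data)
rng = np.random.default_rng(0)
track = rng.uniform(0, 45, size=(20 * 60 * 25, 2))
print(f"center time: {center_time_fraction(track):.1%}")
```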
Light/Dark box test (LDT):
The light/dark box apparatus consists of two compartments of equal size, a light room and a dark room (Kinder Scientific, Poway, CA, USA). Mice were placed in the center of the apparatus at the opening between the two sides to initiate a 10-min test session. The time spent in the light room was recorded and analyzed with Kinder Scientific Motor Monitor software.
Brain slices preparation and electrophysiology recordings-Electrophysiological recordings were performed as in our previous work. 60 Briefly, mice were anesthetized with a mixture of ketamine (100 mg/kg) and xylazine (10 mg/kg). For AMPAR- and NMDAR-mediated currents, 20 μM bicuculline, a GABA-A receptor antagonist, was included in the perfusate. To evoke currents, a concentric bipolar microelectrode (FHC) was placed lateral to the recording site. The intensity (0.05-0.3 mA) of the stimulus (200 μs, 0.05 Hz) was adjusted to obtain 50-70% of maximal responses. Neurons were voltage-clamped at −70 mV to record AMPAR-mediated currents, and at +40 mV to record AMPAR- and NMDAR-mediated currents. Results were averaged from at least 6 sweeps. NMDAR/AMPAR ratios were calculated by dividing the value of the NMDAR eEPSC 50 ms post stimulation at +40 mV by the peak of the AMPAR eEPSC at −70 mV. 57 To activate ChR2 in the brain slices, an optic fiber was placed close to the recording electrode above the slice to deliver laser pulses (473 nm, 1 ms). Neurons were voltage-clamped at −70 mV to record glutamatergic receptor-mediated currents, and at +40 mV to record GABA receptor-mediated currents. 29

Brain tissue preparation, imaging, and post-hoc analysis-After all behavioral experiments were completed, study subjects were anesthetized with a ketamine/xylazine cocktail (100 mg/kg and 10 mg/kg, respectively) and subjected to transcardial perfusion. During perfusion, animals were flushed with 20 mL of saline prior to fixation with 20 mL of 10% buffered formalin. Freshly fixed brains were then extracted and placed in 10% buffered formalin at 4°C overnight for post-fixation. The next day, brains were transferred to 30% sucrose solution and allowed to rock at room temperature for 24 h prior to sectioning. Brains were frozen and sectioned into 30 μm slices with a sliding microtome and mounted onto slides for post-hoc visualization of injection sites and cannula placements. Mice with missed injections to the PVH or misplaced optic fibers over the LSv were excluded from behavioral analysis. Representative pictures of LSv and PVH injection sites and cannula placements were visualized with confocal microscopy (Leica TCS SP5; Leica Microsystems, Wetzlar, Germany).
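As a concrete rendering of the NMDAR/AMPAR ratio calculation described in the electrophysiology subsection above, here is a minimal Python sketch. It assumes baseline-subtracted, sweep-averaged traces with the stimulus at index 0; the sampling rate and variable names are illustrative assumptions, not the original analysis pipeline.

```python
import numpy as np

def nmdar_ampar_ratio(trace_p40, trace_m70, fs=10_000.0, t_nmdar_ms=50.0):
    """NMDAR/AMPAR ratio from sweep-averaged eEPSC traces.

    trace_p40  : averaged trace at +40 mV (mixed AMPAR + NMDAR current),
                 baseline-subtracted, stimulus at index 0.
    trace_m70  : averaged trace at -70 mV (AMPAR-mediated current).
    fs         : sampling rate in Hz (illustrative value).
    t_nmdar_ms : latency at which the +40 mV current is taken as
                 NMDAR-dominated (50 ms, as in the ratio definition above).
    """
    # Amplitude 50 ms post-stimulus at +40 mV (NMDAR component)
    i_nmdar = trace_p40[int(round(t_nmdar_ms / 1000.0 * fs))]
    # Peak of the inward (negative-going) AMPAR eEPSC at -70 mV
    i_ampar = np.min(trace_m70)
    return abs(i_nmdar) / abs(i_ampar)

# Example with synthetic traces (placeholder data, 0.2 s at 10 kHz)
t = np.arange(0, 0.2, 1 / 10_000.0)
trace_m70 = -100.0 * np.exp(-t / 0.005)  # fast inward AMPAR-like decay
trace_p40 = 60.0 * np.exp(-t / 0.05)     # slower outward mixed current
print(f"NMDAR/AMPAR ratio: {nmdar_ampar_ratio(trace_p40, trace_m70):.2f}")
```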
In situ hybridization (ISH) assays-The RNAscope Fluorescent Multiplex ISH Kit (Advanced Cell Diagnostics, Inc., Newark, CA, USA) was used to detect the transcription signals of Grin1, Vglut2, Vgat, and Mc4r. ISH was performed as we previously described, with modifications, following the manufacturer's protocol for fixed-frozen tissues. 30 Briefly, freshly prepared brain tissues perfused with 10% formalin were sectioned at 15 μm thickness and mounted onto Super Plus Gold glass slides. The slides were then immersed in chilled 10% buffered formalin for 15 min at 4°C. Thereafter, the sections were dehydrated by immersion in 50% EtOH, 70% EtOH, and 100% EtOH for 5 min each at room temperature. After incubation with RNAscope Hydrogen Peroxide, target retrieval was performed on all sections for 15 min at 100°C. Then, after allowing the slides to air dry, a hydrophobic barrier was drawn around the tissue with a hydrophobic barrier pen. The slides were then placed on a hybridization humidifying rack and treated with protease pretreatment solution for 30 min at room temperature. After pretreatment, slides were washed twice with fresh 1X PBS in a slide rack. PBS was gently tapped away from the slides prior to applying the hybridization probes for Grin1 (Mm-Grin1-C1 probe), Vglut2 (Mm-Slc17a6-E1-E3 probe), Vgat (Mm-Slc32a1-C2 probe), and Mc4r (Mm-Mc4r-C3 probe). The slides were placed in the humidifying rack and allowed to incubate for 2 h at 40°C in a hybridization incubator. After hybridization, slides went through a series of washes with 1X RNAscope wash buffer, followed by 4 amplification steps of the hybridization signal. After the wash and amplification steps, slides were counterstained with DAPI and cover-slipped with Prolong Gold Anti-fade mounting medium (Life Technologies). Vglut2, Vgat, and Mc4r signals in matched sections of control and knockout groups were visualized with confocal microscopy. Of note, viral GFP reporter fluorescence was completely bleached out during the ISH procedures, which allowed us to visualize in situ signals only from the target genes.
QUANTIFICATION AND STATISTICAL ANALYSIS
All data are presented as mean ± SEM. Statistical analyses were performed using GraphPad Prism 9 (GraphPad Software, La Jolla, CA). Two-tailed unpaired Student's t tests and one-way or two-way ANOVA followed by Tukey's multiple comparison post hoc tests were used. Statistical significance was set at *p < 0.05, **p < 0.01, and ***p < 0.001. Statistical details are given in the respective figure legends.
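For illustration, the comparisons described above can be reproduced outside Prism with standard Python libraries. This is a minimal sketch on placeholder data, not the study's analysis code; scipy and statsmodels stand in for GraphPad Prism here.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Three groups of n = 8 each (placeholder data, not study results)
ctrl, grp_a, grp_b = (rng.normal(m, 1.0, 8) for m in (0.0, 0.5, 1.5))

# Two-tailed unpaired Student's t test between two groups
t, p = stats.ttest_ind(ctrl, grp_a)
print(f"t test: t = {t:.2f}, p = {p:.3f}")

# One-way ANOVA followed by Tukey's multiple comparison post hoc test
f, p = stats.f_oneway(ctrl, grp_a, grp_b)
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")
values = np.concatenate([ctrl, grp_a, grp_b])
groups = np.repeat(["ctrl", "a", "b"], 8)
print(pairwise_tukeyhsd(values, groups))
```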
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

[Figure legend fragments from the source: scale bars, 100 μm, 50 μm, and 25 μm as indicated; see also Figure S5. (E and F) Traces of energy expenditure (E) and food intake (F) measured by CLAMS at 1 week after viral delivery, when body weight had not yet diverged between groups (n = 6 mice/group). (E and F) Representative movement traces of OFT in control and Kir2.1 mice (E) and statistical comparison between the two groups (F; paired t test, **p < 0.01). All data presented as mean ± SEM.]
The muddled governance of state-imposed forced labour: multinational corporations, states, and cotton from China and Uzbekistan
ABSTRACT Forced labour persists in our global economy despite dedicated attention and eradication efforts from both the public and private sectors. Given the bounded reach and lack of enforcement by states and international organisations, the private sector has been the linchpin for eradication of forced labour globally. Utilising the brand-to-state boomerang model, this paper examines state-imposed forced labour in cotton production in China and Uzbekistan, and grapples with how interests – those of the states and the multinational corporations involved in forced labour – shape private governance outcomes. By investigating state-imposed forced labour in China (specifically, the Xinjiang Uyghur Autonomous Region) and Uzbekistan, we find that multinational corporations are reluctant to work towards eradicating forced labour when their net sales and profit are threatened by doing so. Building on the stream of international political economy research regarding how interests complicate governance effectiveness, we expose a gap in the literature on the impact of state-imposed forced labour on governance outcomes and illuminate global ramifications.
Introduction
Forced labour is prohibited by the International Labour Organization (ILO) 1998 Declaration on Fundamental Principles and Rights at Work, the United Nations Guiding Principles, the United Nations Global Compact, the Organisation for Economic Co-operation and Development (OECD) Guidelines for Multinational Enterprises, numerous domestic laws, and voluntary standards. Yet, forced labour persists in our modern global value chains (GVCs). As defined in the 1930 ILO Convention, forced labour is 'all work or service which is exacted from any person under the menace of any penalty and for which the said person has not offered himself voluntarily' (Art. 2.1). The ILO identifies 11 indicators of forced labour: abuse of vulnerability, abusive working and living conditions, physical or sexual violence, restriction of movement, deception, debt bondage, withholding of wages, excessive overtime, retention of identity documents, intimidation and threats, and isolation (ILO 2012). State-imposed forced labour regimes may involve any of these indicators and serve as a form of intimidation to workers on their own (Better Cotton Initiative 2021, ETI 2019).
Forced labour is widespread. According to the ILO, 27.6 million people are in forced labour, of which 17.3 million people are exploited in the private sector, 6.3 million persons in forced sexual exploitation, and 3.9 million persons in forced labour imposed by state authorities (ILO 2022). Governments, academics, and third-party organisations are devoting more and more attention to eradicating forced labour globally and the role of multinational corporations (MNCs) therein (Jones and Tiffen 2020, LeBaron and Lister 2021, Phillips and Mieres 2015). In this paper, we concern ourselves with private governance efforts: efforts by MNCs to eradicate forced labour through Corporate Social Responsibility (CSR) policies and multistakeholder initiatives. There is now emerging evidence that forced labour falls within the spectrum of labour exploitation within GVCs, and this paper evaluates forced labour and governance according to that school of thought (Crane 2013, LeBaron 2018, McGuire and Laaser 2018, Phillips and Mieres 2015). Our study specifically examines state-imposed forced labour in cotton production that implicates MNCs. Because cotton comprises a large share of global forced labour incidences (Bureau of International Labor Affairs 2021), studying forced labour eradication efforts in cotton is particularly critical.
In our study, we grapple with how the interests and actions of the states and MNCs involved in forced labour shape governance outcomes. We examine cases of forced labour imposed by two states: China and Uzbekistan. Exploring the unique contexts of these two cases of state-imposed forced labour and the variation between them illuminates unique influences on MNC efforts to eradicate forced labour. Although MNCs have made efforts to end the use of forced labour in their GVCs through CSR policies and multistakeholder initiatives, forced labour remains ongoing in China while it has been eradicated in Uzbekistan. An important gap in the international political economy literature surrounds the uniqueness of state-imposed forced labour and its impact on private governance outcomes, as the emphasis has been on how forced labour arises in and is perpetuated by the market (Crane 2013, LeBaron 2020, Phillips and Mieres 2015). This paper focuses on the particularly novel phenomenon of how state-imposed forced labour can influence the willingness of MNCs to act socially responsible. By examining state-imposed forced labour and MNCs' efforts to eradicate the practice, we apply the brand-to-state boomerang model. Boomerang politics characterises the impact of activism and the resulting private governance efforts on desired outcomes, usually improvements in human rights or environmental standards (Keck and Sikkink 1998). By testing the brand-to-state boomerang model in two cases, we find that the willingness of MNCs to act socially responsible to eradicate forced labour is affected by states' varying levels of ability and determination to contest the existence of forced labour. When a state with a massive consumer market strongly disputes the presence of forced labour, MNCs are disincentivised to implement CSR measures as their sales and profit in the disputing state are threatened. Thus, private governance outcomes often depend on the degree to which forced labour is contested and the threat that contestation poses to the profit of MNCs.
The remainder of our paper is divided into six sections. The first three sections are the theoretical building blocks of the paper. The first situates our paper in a broader context of forced labour in the global economy; we map out the importance of studying forced labour in cotton GVCs and discuss the role of both the private and public sector in eradicating forced labour or allowing the continuation of forced labour. It will become especially clear that MNCs play an important role in this process, and that their actions to improve labour standards are hugely dependent on how they perceive their profit to be affected, for example by state behaviour. The second theoretical section introduces the model that we use to depict and understand how different public and private actors shape the governance of forced labour, and the third theoretical section outlines the process tracing methods used to examine the two cases. Following these three theoretical sections, we turn to our case-study investigations of China (the Xinjiang Uyghur Autonomous Region) and Uzbekistan. The very last section of our paper is a conclusion, in which we summarise and discuss our main findings.
Forced labour and private governance in the modern global economy
Forced labour in cotton GVCs
According to data on forced labour from the U.S. Department of Labor, cotton is the product produced with the highest incidence of forced labour; following cotton are gold, sugar, cocoa, coffee, and bricks, among others (Bureau of International Labor Affairs 2021). Because it makes up a significant share of global incidences of forced labour and because it flows through complex GVCs from raw material to final product, cotton is highly relevant for study on efforts to eradicate forced labour. Most contemporary trade in the global cotton industry takes place between lead firms and suppliers in GVCs (Anner 2018). In 2015, somewhere between 40 and 50 per cent of global trade flowed through GVCs (World Bank 2020).
GVC analysis is central to understanding the different actors and their interaction, value capture, and governance in a given sector. Specifically, GVC analysis of governance depicts the level of control an MNC, or lead firm, has over the production processes of its suppliers toward the beginning of the GVC (Gereffi 1994, Gereffi and Fernandez-Stark 2016). GVC analysis is not the only framework offering a useful way to think about economic networks in the global economy. For example, the global production network (GPN) approach (Coe and Yeung 2015, Henderson 2002, Hess and Yeung 2006) has emerged as a notable branch of the GVC literature, defining networks to encompass more actors, such as geographical territories and states, compared to the firm-centric focus common to the GVC approach (Behuria 2020, Castañeda-Navarrete et al. 2021). However, our analysis is mostly grounded in GVC analysis as this literature has emerged as tremendously rich and useful for understanding governance structures in the global economy, the power of firms in dictating standards in their GVCs, and the business interests of MNCs. MNCs' business interests are defined on performance and image dimensions, such as minimising risk to profitability and reputation. GVC analysis is especially relevant for cotton, as the framework provides useful illustrations of how cotton production is organised globally from farmer to final product (yarn, fabric, or garments) in a complex web of intermediaries. See Figure 1 for a visualisation of a typical cotton GVC. As depicted in Figure 1, a typical cotton GVC has many different stages from farm to garment, and in our contemporary global economy, each stage can take place in a different state. With regard to cotton produced with forced labour in China and Uzbekistan, the presence of forced labour is at the beginning of the GVC, particularly at the farming, aggregating, and ginning stages (Lehr and Bechrakis 2019, Schweisfurth 2020). For this study, GVC analysis shows how MNCs selling garments are implicated in forced labour and illustrates the sizable control lead firms have over suppliers. In particular, garments have been widely studied and have been shown to fall into the 'captive' governance type, meaning lead firms wield much power and control over their suppliers and inputs (Mayer and Gereffi 2010). However, this power and control manifests in conflicting pressures, for efficiency and for better labour standards; these conflicting pressures constitute a limitation of private governance in achieving better labour standards.
Forced labour and business pressures
Much of the literature on labour rights and standards in GVCs details the lead firm leverage over suppliers, which in turn leads to labour exploitation (Crane 2013, LeBaron 2021a, McGuire and Laaser 2018, Selwyn 2019, Sturgeon 2001). Examples of competitive business pressures that yield to lead firm leverage and harm to the workers are cost-minimisation, value capture, short delivery time, and subcontracting, among others (ILO 2019, LeBaron 2020, Milberg and Winkler 2011, Sturgeon 2001). Recent econometric research has found that changing the efficiency-seeking strategies of cost minimisation in the GVC in emerging markets may help improve labour standards (McGahan and Distelhorst 2019). Further, multistakeholder initiatives, comprised of any coalition of businesses, non-governmental organisations (NGOs), and/or investors, are found to be susceptible to the same pressures as MNCs, namely cost-minimisation, profitability, and financial conflicts-of-interest (Lund-Thomsen 2021).
Because this paper considers forced labour on the spectrum of labour exploitation, these business pressures can lead to conditions sustaining forced labour (Crane 2013). Meanwhile, linking up to GVCs can be an important feature of economic development and growth, so many states are working to make their business environments attractive for these linkages (Chang and Andreoni 2020, Gereffi and Fernandez-Stark 2016, Graham and Woods 2006, Hauge 2020). Considering both these business pressures and economic development goals is important when thinking about global cotton production, as they showcase some limitations of private governance. GVCs contain contradictory elements. They offer useful avenues for development through growth in trade and investment among countries that take part in them, as highlighted by the World Bank (2020). Conversely, the expansion of GVCs has enabled MNCs to strengthen their power in the global economy, and often involves labour exploitation and immiseration at the bottom rung of GVCs through downward pressure on prices and value capture (Selwyn and Leyden 2022). In this paper, we do not dismiss the value of GVC participation for development, but certainly try to highlight the labour exploitation that characterises many GVCs.
Public and private governance of forced labour
Labour governance is the public and private regulations, standards, norms, rules, and actions that influence labour standards in the global economy (LeBaron 2020). As noted above, international organisations (such as the OECD, United Nations, and ILO) and states prohibit forced labour. However, because international organisations focus on decentralisation and state regulation of labour, definitions about how liable lead firms are for labour violations in their GVCs are indeterminate (Černič 2018). For example, the ILO delegates power to states to regulate their labour markets, while the structure of the global economy has changed to largely consist of dispersed production through GVCs. Further, the ILO largely constructs its remediation programmes in cooperation with the state involved (Phillips and Mieres 2015). Thus, states can prevent ILO programmes from operating, if desired. In other words, the reach of public labour governance ends at the boundaries of the domestic market, which is out of line with the modern global economy in which capital moves freely and the production of a single good takes place in many different countries through GVCs (Alexander 2019, Phillips 2013, Ponte and Sturgeon 2014). Meanwhile, states have mostly acted through due diligence legislation, or legislation that codifies self-regulation by MNCs to ensure goods imported into the domestic market are not tainted by forced labour (Feasley 2015, Greer and Purvis 2016). There are increasing varieties of due diligence legislation with notable examples in the UK, US, EU, and Brazil (Phillips et al. 2018). The general consensus in the literature is that non-mandatory due diligence legislation does not help to eradicate forced labour, as there are no clear guidelines for what disclosures MNCs must provide regarding efforts to eradicate forced labour, no enforcement or penalties for non-compliance, and no inclusion of rules for labour recruitment intermediaries in GVCs (LeBaron 2020, Phillips and Mieres 2015, Smit 2021). Thus, MNCs have strategic control over the design and implementation of their due diligence measures (LeBaron 2020). So, within a broader institutional environment of soft international and national policies, rules, and regulations, private governance through CSR is the linchpin of labour governance through the coordination and self-regulation of the GVC by the lead firm (perhaps pressured by consumer or human rights activism) (Stringer and Michailova 2018).
Thus, the focus of this paper is on private governance in the form of CSR, as mobilised alone and/or through multistakeholder initiatives and third-party programmes, and particularly on MNCs' structural power to shape outcomes, befitting our use of the brand-to-state boomerang model. Structural power confers 'the power to decide how things shall be done, the power to shape frameworks within which states relate to each other and to people' (Strange 1994, p. 24). While an institutional view suggests the eradication of forced labour occurs because of regulation and normative values, forced labour has structural inertia that deflects these institutional forces (Crane 2013). MNCs' structural power tends to promote soft law, rather than binding international agreements (LeBaron 2020). In sum, MNCs not only exercise instrumental power and agenda-setting power to bolster their own business practices (Fuchs 2005), but also act as governing institutions themselves, both alone and collectively (May 2015). This is particularly evident in the area of private governance of GVCs by MNCs, multistakeholder initiatives, and third-party programmes, and has ramifications for the eradication of forced labour globally.
Private governance effectiveness
There is mixed evidence regarding whether MNCs governing their GVC through self-regulation actually improve labour standards, given the conflicting business pressure for efficiency (Alexander 2019, LeBaron and Lister 2021) and the distance from lead firm to supplier in a GVC (Locke 2013). Prior work has developed a framework to illuminate the process and conditions under which MNCs pressure a state towards better labour standards through private governance. In this study, the focus is less on how better labour standards are achieved and more on how interests shape private governance, how private governance is influenced by states, and the corresponding influence on the outcome. Broadly, the question becomes: how does state-imposed forced labour impact CSR efforts and outcomes? Through case studies of state-imposed forced labour in China and Uzbekistan, a political dilemma facing MNCs regarding their net sales and profit (occurring when a state has the ability and determination to contest the presence of forced labour) is shown to be particularly influential on the outcome of eradication of forced labour.
State-imposed forced labour and its influence on eradication efforts by MNCs
The brand-to-state boomerang model

A boomerang model depicts the influence public, private, and social actors have on governance outcomes. The original boomerang model developed by Keck and Sikkink (1998) specifically considers how governance actors, particularly civil society and MNCs, influence state-to-state negotiation for certain outcomes. A modified form, the brand-to-state boomerang, explains the influence brands or MNCs can have on a state to attain certain outcomes (den Hond and de Bakker 2012, Seidman 2007). We apply and build upon this boomerang to study MNCs' willingness to eradicate forced labour from their cotton GVCs in the Xinjiang Uyghur Autonomous Region (XUAR), China and Uzbekistan. 1 The brand-to-state boomerang can be conceptualised through a three-stage model. In the framework, issue salience is the negative impact on brand reputation and sales (because consumers will shop elsewhere) that the labour standards violation brings to an MNC. However, in our application, the contestation by the host state of the labour violation adds nuance to issue salience because of conflicting pressures from home and host states, as will be elaborated in the next subsection. The issue salience then informs the deliberateness of the MNC response. The subsequent collectiveness of the private governance action is enabled through the mobilisation structure, i.e. the number of MNCs, multistakeholder initiatives, and third-party organisations taking action. Collective action is necessary for influence on the outcome: because of the large number of MNCs with cotton GVCs, MNCs taking on the higher costs of remediation alone harm their own competitiveness, i.e. a collective action problem. As necessary for boomerang politics, MNCs must have GVC operations in the state and take on a governance role: 'Brand advocacy requires the presence of brands willing to assume a political role vis-à-vis government' (Oka 2018, p. 99). The political context of the state and the resource dependence 2 of the host state on cotton production determine the power relationship between the MNCs and the host state. Overall, that political context mediates the MNC response to the labour violation and the collective action to remediate it. These factors influence the private governance outcome, as visualised in Figure 2. Through our case studies, we test and tease out nuance with regard to the level of state contestation in the three-stage brand-to-state boomerang model and specify the conditions for MNC leverage and influence on outcomes. This three-stage model informs the structure of our case studies regarding governance efforts by MNCs in the XUAR and Uzbekistan. First, we discuss the political context and resource dependence, as this provides necessary background on the state-imposed forced labour regimes and their contestation of the presence of forced labour. Second, we discuss the issue salience and MNC response, and third, the mobilisation structures and collective action. Given that forced labour has been eradicated in Uzbekistan and remains ongoing in the XUAR, utilising and building upon the boomerang model expands the literature on the circumstances under which private governance is effective, as well as illuminates the significant impact of state contestation.
State contestation, net sales, and the boomerang model
The applicability of the brand-to-state boomerang stems from the fact that the MNCs implicated in the forced labour in cotton in each case are large garment retailers, private governance efforts have been taken, and there is variation in the outcome. Additionally, garment MNCs are known to be particularly sensitive to negative press because their business, in large part, relies on brand reputation (Barney 1991, Dunning 2001). In our application of the brand-to-state boomerang, we add the contextual nuance of disputed state-imposed forced labour. The contestation of the presence of forced labour by a state that organises the forced labour impacts the issue salience component and changes the political context informing the MNC-state relationship. Note that issue salience is high when brand reputation and sales may be negatively impacted by a labour standards violation, and thus MNCs are motivated to act in a deliberate manner. Therefore, MNCs are most likely to act collectively when the issue is salient (in terms of a benefit to both reputation and sales), and strong influence for improved labour standards is most likely when MNCs act collectively. However, when forced labour is contested by the imposing state, we argue the benefit to brand reputation and sales for remediating labour violations in one state can be negated by the impact to reputation and sales in the disputing state. Namely, MNCs face a trade-off in securing profits in one state versus another state, and MNCs must make a political decision to remain silent or exert their authority in GVC governance. This argument is shortened to 'net sales.' We argue this net sales juxtaposition (the political dilemma an MNC faces) likely arises when the market of a state contesting the existence of forced labour is important to the MNC's interests, i.e. its profit.
Thus, in instances of state-imposed forced labour, the private governance efforts and corresponding outcomes likely depend on the ability and determination of the state to contest the presence of forced labour and how important that disputing state is to an MNC's net sales. While this argument does not require a reconfiguration of the three-stage model, it does warrant additional empirical examination of the level of contestation and issue salience in each case. In the cases of China and Uzbekistan, as we will show through a rigorous process tracing methodology outlined below, the opposite governance outcomes display the significant ramifications of state-imposed forced labour on MNCs' willingness to exert their governance authority.
Methodology
Forced labour in cotton production in China and Uzbekistan implicates MNCs through their GVCs. These case studies exhibit the nuances of state-imposed forced labour and its influence on eradication outcomes. Case studies are essential empirical work in investigating and disentangling a phenomenon and its real-life context when those boundaries are opaque (Yin 2009). China and Uzbekistan, as the second and sixth largest producers of cotton in the world, 3 respectively, provide a useful comparison. These two cases' suitability for the empirical examination of private governance effectiveness is emphasised through the following characteristics: (1) both involve instances of state-imposed forced labour, (2) both implicate MNCs through GVCs, (3) both were addressed by international activist campaigns and face regulatory action in MNC home/buyer states, and (4) both involve cotton production (as forced labour characteristics can differ by sector). 4 Thus, given the similarities of the cases and the different governance outcomes with regard to continuation and eradication of forced labour, studying the contextual variation allows inference beyond the immediate, observable information (King et al. 1994). A systematic, structured procedure for data collection and analysis is described below.
First, in examining the particularities of state-imposed forced labour in each case, we analyse the political context influencing private governance constraints and opportunities, as well as explain resource dependence on cotton production to characterise the power relationship between the MNC and the state. The political context (including geographic isolation, domain maintenance, and moral legitimisation) shapes and sustains forced labour, and examining the variation in state-imposed forced labour can 'refine the relevant contextual specificities and boundary conditions' (Crane 2013, p. 63). Threats of retaliation against MNCs for private governance efforts and the size of consumer markets shape the power dynamic between the state and MNCs. This is mainly carried out through analysis of descriptive statistics regarding cotton production as a percentage of each state's Gross Domestic Product (GDP). Further, relevant documentary evidence from governments and investigative reporting is analysed for information on the MNCs implicated and the level of state contestation.
Second, we examine the private (MNC and multistakeholder initiative) governance activity to define and analyse the issue salience and MNC response in each case. We research whether there is sizable news coverage, and the number and weight of activist campaigns heightening MNC reputational damage in the home and host states. Further, we analyse a few examples of prominent MNCs and their governance activity in their CSR policies or codes of conduct to analyse the reach and oversight of their GVCs. The sampling criteria used for choosing certain MNCs (so that they are representative of the larger population of MNCs implicated in forced labour) are their prominence in cotton GVCs and their global brand recognition, as showcased by their prominence in the media, their internationalisation in more than 30 markets, and their membership in garment retailer associations. It should be noted that researching company responses to forced labour is challenging because voluntary reporting is unreliable and unverified, and much of the MNC-supplier relationship is proprietary (Rühmkorf 2018). Nevertheless, important evidence is found to draw out the impact of varying state contestation of the presence of forced labour on the willingness of the MNC to remediate.
Third, we analyse the mobilisation structure and collective action, including the number of MNCs active and the collective effort made, and the limitations of these governance initiatives in each case. As third-party programmes, such as ILO programmes, contribute to the mobilisation structure, their activity is described in each case. Conducting simple text analysis on two garment retailer association public statements emphasises the difference in wording on forced labour governance efforts and illustrates the variation in collective action in each case (see the sketch after this paragraph). We draw information directly from the websites of campaigns and multistakeholder initiatives that are particularly active in cotton and forced labour eradication: retail associations, the Ethical Trading Initiative, the Better Cotton Initiative, the Responsible Sourcing Network, and the Cotton Campaign.
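To illustrate the kind of simple text analysis referred to above, the sketch below compares word frequencies across two statements. This is a minimal, hypothetical example: the snippets, function names, and chosen keywords are illustrative placeholders, not the actual association texts or the analysis pipeline used in this study.

```python
from collections import Counter
import re

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return re.findall(r"[a-z']+", text.lower())

# Placeholder snippets standing in for the two association statements
xuar_stmt = "We are concerned by reports and urge government engagement."
uzbek_stmt = "We urge the government to end forced labour or we will withdraw."

freq_xuar, freq_uzbek = Counter(tokens(xuar_stmt)), Counter(tokens(uzbek_stmt))
# Compare how often action-oriented terms appear in each statement
for word in ["urge", "end", "withdraw", "concerned"]:
    print(f"{word:10s} XUAR: {freq_xuar[word]}  Uzbekistan: {freq_uzbek[word]}")
```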
To supplement the document-based research and empirical analysis, which includes accounts from local stakeholders, we also conducted semi-structured interviews with experts on forced labour and cotton GVCs, including actors involved in governance efforts in Uzbekistan, to solicit perspectives on the ability of private governance efforts to eradicate forced labour. Topics of discussion were governance initiatives in the XUAR and Uzbekistan, CSR effectiveness, and unique characteristics of state-imposed forced labour that can lead to political dilemmas for different governance actors.
Political context and resource dependence
The case of forced labour in the XUAR offers perspective into state-imposed forced labour and the unique context of CSR efforts in China, and also highlights the dilemma an MNC faces when a state contests the presence of forced labour. The forced labour of Uyghurs and other ethnic groups living in the XUAR by the regional government under the control of the Chinese Communist Party has been widely documented by governments, NGOs, human rights organisations, and thinktanks (CECC 2020, Foreign & Commonwealth Office 2020, Lehr and Bechrakis 2019, Xu 2020).
The political context mediates the influence a social campaign can exert to achieve an outcome (Meyer and Minkoff 2004). After initial denial of the existence of detention facilities where forced labour occurs (Maizland 2021), Chinese government officials began to allege that the facilities in the XUAR are vocational training centres, with the aim of poverty alleviation and de-radicalisation of terrorist, separatist, and extremist views (Zeng 2020). The Chinese government maintains strict control of any foreign journalists entering the region, censors these visits, and rebuts any negative reporting (Kang 2021). Therein, the state disputing the presence of forced labour marks a phenomenon worth studying with regard to private governance. Moral legitimisation sustains and shapes capabilities for forced labour to continue. Specifically, forced labour is accepted through practices such as 'storytelling and other forms of communication as well as broader forms of socialization and culture management' (Crane 2013, p. 61). In the Chinese government's direct management of the situation, forced labour becomes politically sensitive, and this likely alters the motivations for MNCs to act as they may face retaliation.
Recently, the Chinese government has retaliated; they have reacted to allegations of forced labour in the XUAR by limiting MNCs' sales, sanctioning Western government officials and private sector workers, and stirring a consumer boycott (Indvik 2021). On 23 March 2021, the US, UK, EU, and Canada, in their first coordinated effort, imposed sanctions on Chinese officials, calling the activities in the XUAR human rights abuses. On 24 March 2021, after the joint sanctions, H&M, Nike, Burberry, and others disappeared from product searches on major Chinese online retailer's websites and mapping applications (Indvik 2021). The Chinese Communist Youth League also established an online protest, 'Support Xinjiang Cotton' against H&M, Nike, and other brands. Numerous Chinese brands and celebrities came out in support (Cheng and Chan 2021). These instances show the state contestation and the resulting effect on MNCs' bottom lines in China and increasing politicisation of forced labour in the XUAR.
With regard to resource dependence, the importance of cotton as a commodity must be evaluated to assess the power asymmetry between the MNC and the state. Cotton is particularly important to the economic output of the XUAR, but less significant overall to China. Over 80 per cent of China's cotton is sourced from the XUAR (Lehr 2020), and 22 per cent of the world's cotton comes from China. 5 Compared to Uzbekistan, whose cotton production is a large share of GDP as will be detailed in the next subsection, China's cotton, yarn, textile, and apparel exports, not counting production and internal consumption, comprise only 1.9 per cent of China's GDP (Lehr and Bechrakis 2019). With many other important commodities contributing to China's GDP, China is relatively less resource dependent on the MNCs who source cotton from China, and the power dynamic further leans towards the state. This minimal resource dependence alters the political context, which according to the three-stage model mediates the MNC response and collective action.
The forced labour in the XUAR remains ongoing and is sustained through state imposition and control, in large part due to the power dynamic in favour of the state. Furthermore, the conflicting perspectives of the MNC home states (mostly Western states) and China heighten the net sales risk the MNC faces in deciding what governance efforts to take, as the MNC weighs which consumer market is more important to its bottom line if one market or the other is cut off.
Issue salience and MNC response
The issue salience of the labour violation (largely brought on by potential damage to brand reputation) informs MNCs' responses to instances of labour violations. Many MNCs have been implicated, and many respond saying it is impossible to know their GVC in its entirety (BEIS Committee 2021), while also saying they can effectively self-police their GVCs (Lowry 2021).
There is an inherent contradiction. The MNCs implicated in forced labour in the XUAR include Marks & Spencer, H&M, and Coca Cola, among 82 others (Xu 2020). News reports on the issue have increased dramatically since 2017, increasing the notoriety of the case. Since 2016, H&M has been named in 190 articles and Nike in 144 articles. 6 This media attention brings reputational damage to MNCs, as exemplified in a briefing paper released by the Ethical Trading Initiative, which specifically focuses on how businesses are negatively impacted by state-imposed forced labour and how they can mitigate that reputational damage (ETI 2019).
Additionally, the reputational damage to MNCs is very strong in the MNC home states, as many states are conducting oversight of MNCs' ability to carry out oversight of their GVCs. Many MNCs are being explicitly named in government documents, thinktank reports, and activist campaigns (BEIS Committee 2021, CECC 2020, End Uyghur Forced Labour 2020, Xu 2020). For example, in a 10 March 2021 report, the UK Parliament's Business, Energy and Industrial Strategy Committee found that many companies are complicit in forced labour in XUAR because they could not guarantee that raw cotton did not originate there (BEIS Committee 2021). Further, the U.S. House Ways and Means Committee has conducted multiple Congressional Hearings 7 on forced labour in GVCs, and even specifically on the XUAR.
Whereas MNCs face pressure from many states in the West and human rights organisations to audit their GVCs and end sourcing from the XUAR, they also face risk to their sales and profit in China, given China's contestation of the existence of forced labour and their economic retaliation. H&M serves as an illustrative example of the political dilemma an MNC faces with regard to net sales and its impact on issue salience. On 24 March 2021, H&M took down their 'Statement on Xinjiang' due to a spike in negative Chinese press coverage, store removal from large Chinese online retailers JD.com and T-Mall, and store removal from mapping apps Baidu and Gaode (Indvik 2021). Moreover, Hugo Boss and Asics posted on the Chinese social media website Weibo that they would continue to source cotton from the XUAR (Business and Human Rights Resource Centre 2021), but later deleted their posts (Indvik 2021). Note that China comprises 10 per cent of Hugo Boss's sales globally, and 12 per cent of Asics' sales (Business and Human Rights Resource Centre 2021). This emphasises the key importance of China's market to MNC sales and heightens the net sales risk an MNC faces when the evidence of forced labour is contested by the government of a key market.
As private governance through CSR is normally beneficial for both an MNC's reputation and sales, this set of circumstances requires particular attention. This dilemma concerning net sales illustrates the perils of CSR when political contestation exists; the MNC faces misaligned incentives with regard to acting socially responsible with respect to consumers in their home market: they must risk sales in one state or risk sales in another. In 2020, with a population of 1.4 billion 8 and an average income per capita of 10,610 USD, 9 China is an important market. In sum, issue salience is low for MNCs given that the massive consumer market in China greatly impacts the business interest of MNCs in terms of net sales. Thus, net sales are an important consideration when determining the issue salience and the resulting MNC response through the three-stage model. In this case, the issue salience has been shown to be considerably low due to the annulling effect of state contestation of forced labour.
Mobilisation structures and collective action
In the brand-to-state boomerang, issue salience informs the MNC response, and the MNC response influences the collectiveness of the private governance effort through mobilisation structures (see Figure 2). In this case study, the low issue salience leads to minimal mobilisation and collective action on the part of MNCs and multistakeholder initiatives as described below.
First, MNCs passed risk and liability onto multistakeholder initiatives and third-party programmes instead of enhancing the auditing of their own GVCs. Therein, the collective action is lacklustre because of the threat of sales and profit harm and lower issue salience. For instance, on 10 March 2020, a coalition of four trade associations wrote a joint statement that iterated their concern over reports of forced labour, that it poses a 'profound challenge' to the integrity of the GVC, and that they are considering all available approaches but ultimately urge state-to-state engagement on the issue. 10 This statement does not say member MNCs will end sourcing from the XUAR nor increase their oversight and transparency regarding their distant suppliers. Rather, they call for the situation to be handled diplomatically. This exemplifies the finding that MNCs' CSR policies on forced labour are 'more aspirational and less stringent' than other reputation-damaging behaviour, such as bribery (LeBaron et al. 2021, p. 11). Therein, we see the skirting of responsibility, the lack of political will for collective action led by MNCs when CSR is misaligned with profit interests, and the call for public governance when forced labour is disputed.
Furthermore, two multistakeholder initiatives, whose members are MNCs, human rights organisations, and suppliers, made more of an effort, but still faced blowback for making statements on the XUAR. The Ethical Trading Initiative released a statement only after negative media coverage in mid-March 2021, urging its members to withdraw from the XUAR entirely due to the lack of access for independent auditors (ETI 2021). Similarly, the Better Cotton Initiative suspended licensing cotton in the XUAR in March 2020, and suspended all field-level activities, such as assurance and capacity-building work, in October 2020 (Better Cotton Initiative 2021). In the West, the Better Cotton Initiative faced negative press after taking down their statement on the XUAR from their website for a period of time and, simultaneously, faced a consumer boycott in China because of that statement (China Daily 2021, Woo 2021). This depicts the dilemma facing MNCs and multistakeholder initiatives when the situation of forced labour is disputed, as well as shows why collective action was minimal in this case of state-imposed forced labour. Other multistakeholder initiatives and third-party programmes have also exhibited minimal collective action. The Responsible Sourcing Network, which includes investors, companies, and human rights advocates in their coalition, does not have a XUAR Cotton Pledge, while it does for Uzbekistan and Turkmenistan. 11 Instead, the Responsible Sourcing Network references the Uyghur Forced Labour Coalition, which is largely supported by human rights organisations, with only seven MNCs signing on to their strongly worded Call to Action that requires MNCs to cut ties with all suppliers and sourcing of cotton from the XUAR. 12 Further, the UK Parliament has reached out to the ILO in a letter to inquire why they have not commented on the situation in the XUAR. The ILO, with a membership of states, largely constructs its programmes with the cooperation of state governments (Phillips and Mieres 2015). The ILO responded to the UK Parliament saying it cannot make any public statements until its Committee of Experts have examined all evidence available from different actors (ILO 2021b). 13 In sum, few MNCs are acting, the mobilisation structure is comparatively stunted, and the collective action of MNCs is minimal. Given the size of the market in China, market access is essential for MNC sales and profit. Thus, it is clear that MNCs have not acted in a collective manner to end the state-imposed forced labour in the XUAR, likely because of net sales concerns. This case study highlights the nuance of state contestation, which lowers the issue salience and affects private governance efforts. Next, the state-imposed forced labour regime in Uzbekistan is examined.
Political context and resource dependence
The case of forced labour in Uzbekistan offers perspective into state-imposed forced labour, as well as the lesser political dilemma an MNC faces when implicated in cases where the forced labour is not severely disputed and CSR activity does not imperil access to a key market. Though the Constitution of Uzbekistan prohibits forced labour, there is widespread acknowledgement of and research into the state-imposed forced labour regime that existed there (Atayeva and Belomestnov 2010, Evans and Gill 2017, McGuire and Laaser 2018, Schweisfurth 2020). Reports of state-imposed forced labour began in 2004 with Uzbek journalists, and later the Environmental Justice Foundation published a report, White Gold, to document the use of state-imposed forced labour (EJF 2010). The Uzbek Forum for Human Rights (formerly Uzbek-German Forum for Human Rights, UGF) has monitored and reported annually on child and forced labour in Uzbekistan's cotton fields since 2009 (Schweisfurth 2020).
The power dynamic between the state and MNCs sourcing cotton from Uzbekistan is nuanced. It is important to note the vestiges of the Soviet legacy, especially in state-controlled production, in Uzbekistan. 14 Following the death of President Islam Karimov, who ruled from 1991 to 2016, the former Prime Minister Shavkat Mirziyoyev became President and continued the state-imposed forced labour for cotton harvests (MacFarquhar 2016, Schweisfurth 2020). In Uzbekistan, third-party monitoring and auditing were challenging, particularly because of the climate of fear and repression; monitors and auditors were met with arbitrary arrest, threats, travel restrictions, surveillance, and confiscation of research materials (Evans and Gill 2017). Similar to the case of state-imposed forced labour in the XUAR, the government asserted control over and management of the narrative around forced labour to legitimise it (Crane 2013). This shows that there was a level of contestation and unwillingness to acknowledge and work to eradicate forced labour, but this was not as significant as in the XUAR, wherein numerous government documents adamantly deny the existence of forced labour.
With regard to resource dependence, the state is relatively more dependent on cotton, as it is integral to Uzbekistan's economy. Uzbekistan is the sixth largest producer of cotton, comprising 3 per cent of the world's cotton production. 15 While cotton production is integral to both China and Uzbekistan, cotton makes up a significantly larger share of GDP in Uzbekistan, approximately 25 per cent of GDP in 2016 (Evans and Gill 2017). Further, the mere fact that the ILO was allowed to monitor harvests starting in 2013 shows that the state was relatively more pliant. The state's significant dependence on cotton mediates the influence of MNC efforts in private governance, as shown through the three-stage model.
As of January 2021, the ILO declared that forced labour had ended in Uzbekistan (ILO 2021a). Next, issue salience and MNC response are discussed to highlight the contextual differences of state-imposed forced labour in Uzbekistan and analyse how these differences enabled more robust and influential private governance.
Issue salience and MNC response
In the case of state-imposed forced labour in Uzbekistan, the issue salience to MNCs (brought on by brand reputational damage) is high, and no significant political dilemma faces MNCs. Though some contestation is present (in the fact that the state continued to orchestrate forced labour through the presidential transition), it is not significant enough to influence the governance outcome. For example, MNCs feel pressure from consumers to implement CSR policies and end sourcing in Uzbekistan due to media coverage of the issue. News reports on the forced labour in Uzbekistan increased steadily since 2003, and news peaked in 2020. 16 Most sources were government documents, human rights organisations, and international institutions. H&M was named in five articles and Nike was not mentioned. The news reports that were cited the most were in Uzbekistan Daily, Reuters, Central Asia News, and The New York Times. This media attention served to heighten awareness and reputational damage. Further, the fact that Uzbekistan's news sources were even able to discuss forced labour shows lesser state contestation. In the case of the XUAR, the US-China rivalry may have amplified media attention and policy conflict regarding GVC governance more so than with other states (Gereffi 2020). Nevertheless, the size of the market and the potential impact on net sales are central to understanding the issue salience fully.
MNC home state sales would be harmed if they continued to source cotton from Uzbekistan because garment MNCs rely on brand reputation (Barney 1991, Dunning 2001). For example, the Ethical Trading Initiative specifically mentions state-imposed forced labour in Uzbekistan as harmful to MNCs' brand reputation (ETI 2019). Meanwhile, decreased sales in Uzbekistan would likely not be significantly harmful because there are few consumers. With a population of just 35 million 17 and an average income per capita of just 1,670 USD, 18 Uzbekistan is not a large market for MNC garment retailers. Further, the government of Uzbekistan did not make threats to MNCs' profit like those in China (i.e. their stores were not banned from online platforms, nor did state media spur a consumer boycott). In other words, issue salience is high for MNCs given that CSR activity brings benefit to both reputation and sales, and the net sales interest is weighted heavily toward the home states of the MNCs (Western states) given the minimal counterbalancing force from Uzbekistan's consumers. In sum, the issue salience (the reputational damage forced labour in Uzbekistan brings to the MNC) is high, and MNCs responded concertedly.
Mobilisation structures and collective action
In contrast to the lack of collective action in the XUAR, MNCs, multistakeholder initiatives, and third-party programmes have made a collective and credible threat to end cotton sourcing from Uzbekistan because of state-imposed forced labour. From the beginning, coalitions mobilised to act collectively. On 15 August 2008, a coalition of four trade associations wrote a letter to President Karimov asking him to end the use of forced labour in Uzbekistan. In contrast to the hesitant industry association joint statement on the XUAR, this letter directly urged the government of Uzbekistan to end forced labour in cotton, and threatened to withdraw from sourcing cotton from Uzbekistan entirely if improvements were not made. 19 Clearly, given the mediating political context of relative power of MNCs over the state, the associations and their MNC members faced less of a political dilemma in this case and acted collectively.
Further, the mobilisation structures enabling collective action were more robust in Uzbekistan. For example, the Cotton Campaign, a multistakeholder initiative consisting of human rights, labour, investor, and business organisations including the Responsible Sourcing Network, Uzbek Forum for Human Rights, and others, has focused on building awareness and recruiting MNCs to use their collective power to end forced labour in Uzbekistan since 2012. Further, the Responsible Sourcing Network maintained the Uzbek Cotton Pledge, a pledge to not source cotton from Uzbekistan, and its membership of 328 brands from 2010 to 2022. Therein, the mobilisation structures were highly integrated. Additionally, the Better Cotton Initiative recently relaunched their programme in Uzbekistan and has been sharing their open-source method for sustainable cotton production with the ILO and the Deutsche Gesellschaft für Internationale Zusammenarbeit. They are developing vertically integrated farm-to-garment production systems in order to export textiles and garments rather than raw cotton and move Uzbekistan up the GVC (GIZ 2020). 20,21 This is remarkable evidence of the structural power of MNCs. Taken together, there were numerous campaigns, multistakeholder initiatives, and third-party programmes operating in Uzbekistan with the aim of forced labour eradication. This mobilisation structure enabled a strong and credible private governance effort. In sum, the minimal contestation that existed meant the state-imposed forced labour regime was more pliant (presenting an uncontested political opportunity for MNCs), and MNCs mobilised.
Conclusion
Forced labour persists in our contemporary global economy, and this study has shown the mixed effects of MNC self-regulation, multistakeholder initiatives, and third-party programmes on the eradication of state-imposed forced labour, largely due to the behaviour and context of the states in question. Essentially, private governance breaks down under the influence of state contestation due to the threat of lost business sales and profit for MNCs in the host state. This builds on the international political economy literature in that MNCs are a structural force that shapes the agenda around governance (in this case, the eradication of forced labour), and that their private authority has limits: MNCs will likely be socially responsible only when it is in their business interest.
In the XUAR case, state-imposed forced labour remains ongoing. Although some MNCs and multistakeholder initiatives have responded, they have not acted concertedly. This is largely due to the threat of economic blowback on their sales in China. Notable exceptions are the Ethical Trading Initiative and Better Cotton Initiative statements, but even these have been saddled with controversy. At the zenith of politicisation, some MNCs and multistakeholder initiatives took down their public statements on the XUAR following China's retaliation against their market access. As Shepherd (2021) aptly put it, 'Multinational companies have been forced to walk a tightrope to ensure they are not complicit in human rights abuses in Xinjiang while avoiding Beijing's ire as they seek to operate in the world's second-biggest economy.' As argued by Lehr and Bechrakis (2019), implementing a voluntary industry ban on cotton from the XUAR similar to the one on cotton from Uzbekistan 'would undoubtedly prove challenging for the industry, given the importance of China as a source country' (p. 18). As such, the political context of a major economy whose government vehemently disputes the existence of the state-imposed forced labour regime contravenes the issue salience, given net sales and profit concerns.
In the Uzbekistan case, state-imposed forced labour has been eradicated. A broad and concerted effort by many MNCs and multistakeholder initiatives to end the sourcing of cotton from Uzbekistan was shown to be particularly influential on the eradication of forced labour because the MNCs did not face a significant political dilemma with regard to their net sales. Further contributing to this influence was the political context of a resource-dependent state that was likely to succumb to outside pressures, granting relative structural power to MNCs. As argued, there needs to be a credible threat of withdrawal from the state for MNCs to influence the government, and this was clearly present in Uzbekistan. See Table 1 for a summary of the two cases.
Then how does state-imposed forced labour impact private governance? The impact is likely mediated by the degree to which the forced labour is contested by the imposing state and the threat that contestation poses to an MNC's net sales and profit. In the case of the XUAR, the state's ability and determination to contest the presence of forced labour lowers the issue salience (the negative impact on brand reputation and sales in the West) because of the credible threat of loss of sales and profit potential for MNCs selling to consumers in China. In the case of Uzbekistan, MNCs and multistakeholder initiatives acted collectively likely because they did not face a large enough threat to their sales, given that the consumer market in Uzbekistan is small and Uzbekistan did not retaliate. In sum, private entities can play a role in eradicating state-imposed forced labour, but the impact on profit and sales determines actors' willingness to act in a socially responsible manner. In other words, if a disputing state's market is important to the MNCs' sales and profitability, then there is a strong disincentive for MNCs to interfere. Broadly, private governance is not entirely ineffective in cases of state-imposed forced labour, but when the presence of forced labour is severely disputed and politicised, private governance and the conditions necessary for its effectiveness are muddled.
A limitation of this study is the different durations of the governance activities in each case, with the state-imposed forced labour in the XUAR being more recent and ongoing. However, as argued, the conditions are sufficiently varied that the length of time governance has been ongoing matters less, as it is already clear that the private governance activities in the XUAR are fewer and less concerted. Further qualitative and quantitative research on state-imposed forced labour is needed. Specifically, examining private governance in other cases such as Turkmenistan through the boomerang model would be insightful. Additionally, researchers should quantitatively determine the size of consumer market that would tilt the scales and pose a political dilemma in terms of the business interest in net sales over social responsibility. Finally, as described in the literature review, since forced labour falls on a spectrum of labour exploitation, state contestation and issue salience could also be tested in cases of poor labour conditions tacitly imposed (or not effectively deterred) by the state. This study has policy implications because it indicates that the structure and governance of GVCs need to be comprehensively examined and structurally changed to eradicate forced labour globally. Policy practitioners are increasingly discussing and implementing unilateral import bans to prevent the entry of goods produced with forced labour. 22 In cases of state-imposed forced labour, these public governance interventions might prove more effective, as they may alleviate the political dilemma facing MNCs in acting against the disputing state because MNCs are required to act. However, these interventions can be implemented inequitably across products and states and for geopolitical reasons, and this politicisation can be detrimental to the actual end goal of eradicating forced labour if an MNC decides to bifurcate production into 'clean' and 'dirty' GVCs. Recent studies on private governance of labour standards have discussed enforceable duties on lead firms through the Bangladesh Accord (Anner et al. 2013) and the potential for more effective private governance when multistakeholder initiatives are union-inclusive (Ashwin et al. 2020). However, policy practitioners may want to consider to what extent these more robust private governance initiatives will be effectively implemented by MNCs under conditions of state contestation.
In sum, this application and modification of the brand-to-state boomerang in China and Uzbekistan contributes to the international political economy literature on private governance, and particularly to the rather limited studies of private governance effectiveness in cases of state-imposed forced labour. When states deny the existence of forced labour and are willing to enforce that denial, it complicates MNCs' willingness to act socially responsibly, given the credible threat of harm to their bottom line. MNCs can take refuge in unclear and complicated GVC structures and the non-mandatory disclosure of the furthest suppliers, and call for the case to be settled diplomatically, rather than exercising their considerable structural power to eradicate forced labour. MNC shirking due to politicisation, mixed evidence regarding private governance efforts to eradicate forced labour globally, and the potential bifurcation of production because of conflicts of interest between one state and another all detail the muddled governance of forced labour.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Embedded C Programming Using FRDM to Enhance Engineering Students’ Learning Skill
Computer programming courses that utilize languages such as C/C++ are always packed with dreary syntax details that consume most of the students' learning time to obtain 'grammatically' correct source code. Consequently, it is difficult for most students to apply the theory they have learned in a real-life context. Thus, this project proposed a hardware-based learning approach for the C programming curriculum and reports the effectiveness of using a microcontroller board named FRDM-KL05Z to assist teaching and learning activities. The USB-powered microcontroller board is very easy to use and is programmable using the C programming language. Students have the opportunity to learn selection statements with real sensors, to use repetition statements to blink LEDs, and to utilize functions as well as structures to control actual input and output peripherals. In general, we evaluated the students' response against five criteria, namely students' attributes, lecturer's profile, implementation, facilities and students' understanding. From the survey, the results in the exit survey are higher compared to the entrance survey for all criteria. This shows that the students are satisfied with the implementation of the module, which has increased their understanding in learning C Programming.
Introduction
Many educators realize the urgency of education reform to respond to the current needs and future trends of society. Re-engineering the curriculum calls for new approaches to curriculum design and for active learning methods that encourage students to be more creative. Responding to these calls for education reform, this study was conducted as an extension of work originally presented at the IEEE 8th International Conference on Engineering Education (ICEED2016) [1]. Project-based learning (PBL) and the "learning by doing" approach have been recognized as effective means for students to obtain problem-solving capability [2][3].
Another learning approach for engineering students is embedded system education. Teaching embedded system design is a challenging undertaking because a teacher cannot assume that all students enrolled in a class have solid prerequisite knowledge across all the required areas. To speed up the learning process and motivate students to learn actively, the project-based learning approach [4][5][6][7] is applied in this embedded system design laboratory.
Introduction to C Programming is one of the basic elements of the Electrical Engineering course. The subject is currently offered in semester 2 [8] and contains 7 chapters. Each chapter introduces the student to basic C programming commands to execute specific tasks such as compiling, linking, control flow statements, functions and arrays [9]. The students are evaluated based on laboratory tasks (which are computer-based only), a test that covers the theoretical part of the syllabus, and a mini project. The main objective of the mini project is to gauge the students' understanding of the overall C programming course.
C programming is one of the most important subjects in the Electrical Engineering course. However, the process of teaching and learning a programming subject is not an easy task, as stated by many studies [10]. Students were reportedly having problems grasping this subject due to its complex nature. The development of a program involves steps similar to any other problem-solving task. The processes include definition of the problem, planning of the solution, coding the program, testing the program and documenting the program. Thus, the ability to execute the whole flow of the process is very critical for a student to master the subject well.
Various methods of teaching and learning programming courses have been reported. As an example, several researchers have developed game-based digital learning for teaching C programming courses [11][12]. Other methods include active learning by students, which may include in-class activities like peer learning, games and mnemonics [13][14]. A more advanced approach, discussed in [15], described how a LEGO Mindstorms robot was used as hands-on educational technology in teaching introductory programming courses. This proves that C programming is a complex subject and it requires a creative way of delivering it to students.
More recently, the Conceive-Design-Implement-Operate (CDIO) initiative has been gaining significant popularity among educators around the world [16]. The authors in [17][18][19][20] reformed their teaching approach in delivering the C/C++ curriculum based on the CDIO approach. The CDIO concept emphasizes knowledge development by integrating engineering skills such as team cooperation, problem solving, communication and the knowledge-application ability of students in a real-life context. Results indicated that such an initiative can effectively improve the students' ability to comprehend basic knowledge in programming and its application in engineering system analysis.
Throughout this course, the main drawback observed by the lecturer is that students have difficulty translating the theoretical part of the subject into practice. This may be caused by a lack of exposure to experimental design tasks. The majority of students do not have the skills to expand their learning of C programming after finishing this course. This situation can be seen in their final year project course (FYP), where most students use a microcontroller in their project, yet the majority of them lack the ability to apply their C programming knowledge to program the microcontroller to execute the desired task. Therefore, a module has been proposed to help the lecturer resolve this situation. The module is included in the students' learning process throughout the semester and does not change the main objective of the syllabus. This module is only used as a tool to help students gain more knowledge and skill in the C programming language.
In fact, microcontrollers are commonly utilized in numerous industrial applications such as remote control devices, office machines, electrical gadgets, automobiles, motor control systems, robots and other industrial fields. For that reason, they are taught as an applied course at universities, especially in departments such as electrical and electronics engineering. Freescale designed the FRDM-KL05Z in collaboration with MBED for prototyping all sorts of devices, especially those requiring the size and price point offered by the Cortex-M0+. It is packaged as a development board with connectors to break out to strip board and breadboard, and includes a built-in USB FLASH programmer.
The aim of this study is to measure the effectiveness of the proposed module, which is C programming based on an embedded system approach. The students' perception of the course is also evaluated so that more effective changes can be made to the course to enhance students' ability in real applications after they graduate.
Methodology
In order to enhance the students' understanding of and interest in the Introduction to C Programming (ECE126) subject, a one-day training module was conducted to expose them to the implementation of C programming, as shown in Figure 1. During the training, participants were given a set of training materials listed in Table 1 so as to boost students' engagement throughout the session, harness their interest and motivate them to use the knowledge learnt in the ECE126 curriculum in a real-life context. A laptop was used to write program code in the C language. Next, the code was transferred to the FRDM board, after which the user ran the program from the board. The execution of the code was monitored in real time using a smartphone application via a Bluetooth connection. Figure 2 depicts the overall programming process carried out during the training session. Participants were taught to write program code using the web-based MBED compiler. The software can be used to write and edit program code using high-level languages such as C and C++. Next, a linking process was carried out in which the written code was linked to the FRDM-KL05Z controller board or any hardware platform supported by the compiler. After that, the code was compiled and a binary machine code was generated, provided there were no syntax errors. Next, the compiled code was transferred to the board via the mass storage device (MSD) flash programming interface. Subsequently, a communication link between the FRDM board and a smartphone was established via an HC05 Bluetooth module so that students could observe their program output simply through their own smartphones; a minimal sketch of this workflow is shown below. This approach is slightly different from the current practice, where students are normally taught to create a console application that produces an executable file, which can only be used within the Windows environment [21].
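The minimal Mbed-style sketch below illustrates this workflow (module 1's "hello world!" over the HC05 link). The UART pin names PTB1/PTB2 and the 9600 baud rate are illustrative assumptions, not the exact training code; the actual wiring should be checked against the board pinout.

```cpp
#include "mbed.h"

// Hypothetical wiring: HC05 Bluetooth module on a KL05Z UART.
// PTB1 (TX) and PTB2 (RX) are placeholder pins -- verify against the pinout.
Serial bt(PTB1, PTB2);

int main() {
    bt.baud(9600);                   // HC05 modules commonly default to 9600 baud
    bt.printf("hello world!\r\n");   // text appears in the phone's BT terminal app
    while (1) {}                     // keep the program running
}
```

After compiling in the online MBED compiler, the generated binary is simply copied onto the board's MSD drive to flash it, matching the process in Figure 2.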
FRDM-KL05Z controller board
The FRDM-KL05Z microcontroller board depicted in Figure 3 was used in this project. It is one of the ARM Mbed supported platforms, based on a 32-bit ARM Cortex-M processor [21]. It is a powerful tool for building an embedded system from the prototyping stage to mass production. One can also write software on top of the board's operating system, Mbed OS, which offers an easy way of hardware control, interaction and integration with other tools. It offers convenient on-board sensors such as a touch sensor, an accelerometer and a Red-Green-Blue (RGB) Light-Emitting Diode (LED) that allow new users to practice and test their code without the need for an additional hardware interface.
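As an illustration of this "no extra hardware" point, the classic Mbed blink sketch below toggles an on-board LED; LED1 is the conventional Mbed alias for an on-board LED and wait() is the classic Mbed 2 delay call.

```cpp
#include "mbed.h"

DigitalOut led(LED1);   // on-board LED, no external wiring required

int main() {
    while (1) {
        led = !led;     // toggle the LED state
        wait(0.5);      // classic Mbed 2 delay, in seconds
    }
}
```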
Training Modules
There were seven topics related to the ECE126 subject covered during the training session. They include selection statements, repetition, functions, arrays, pointers and structures. These topics were delivered in seven different modules utilizing the on-board peripherals such as the LED, accelerometer and touch sensor. The modules used during the training are listed in Table 2. In the first module, the participants were taught to observe the execution of their program via smartphones using a Bluetooth serial application known as "BT Term" developed by Futaba Inc. A printf() statement was used to output a "hello world!" sentence via the Bluetooth application, as depicted in Figure 4. The second, third and fourth modules manipulate the behavior of the on-board RGB LED using several sets of program code. Students were taught to apply their knowledge of selection statements to decide on the activation of the LED. Repetition statements such as while, do-while and for were also employed to create repetitions of the LED action. Participants were also assigned to blink the LED, for which they needed to determine the suitable increment, decrement or assignment operators. They also learnt how to use the break and continue statements to stop the repetition or to skip some iterations within a loop structure. In addition, functions were used in module 4 so that the participants could create several sets of LED actions according to their preferences. Figure 5 illustrates the use of a function to activate the red LED upon receiving a value from the user: if the user enters an input value equal to 1, the red LED is activated, while the red LED is deactivated for any other input value (a minimal sketch of this exercise is given below). The response of the LED is shown in Figure 6. Besides this, one of the most important topics in C programming, arrays, was also given much attention during the training session. It is often useful to store information about certain things under a single variable name. For example, in module 5, students were taught to use this concept to store measured data obtained from the accelerometer sensor every second into an array variable. Next, they could manipulate the content of each array element to perform additional actions, such as activating the LED if a certain condition is met. This allows students to think, try and explore the concept further, as well as draw connections between what has been taught and real-world contexts.
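A hedged sketch of the module 4 exercise is shown below. The pin names, the active-low LED polarity, and the use of scanf() over the Bluetooth serial link are assumptions for illustration, not the exact training code.

```cpp
#include "mbed.h"

Serial bt(PTB1, PTB2);     // hypothetical HC05 pins, as in the earlier sketch
DigitalOut red(LED_RED);   // on-board red LED (assumed active-low, as on many FRDM boards)

// Module 4 idea: a function that activates the red LED when the input equals 1.
void setRedLed(int value) {
    if (value == 1)
        red = 0;           // selection statement: 0 lights an active-low LED
    else
        red = 1;           // any other value switches the LED off
}

int main() {
    int input;
    while (1) {                      // repetition statement: poll forever
        bt.printf("Enter a value: ");
        bt.scanf("%d", &input);      // read the user's choice over Bluetooth
        setRedLed(input);            // the selection logic lives in the function
    }
}
```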
Meanwhile, module 6 focuses on the workings of pointer variables, how to use them, and the common mistakes programmers face when dealing with them. In this exercise, a pointer was used to access a memory location containing user information. Participants were required to create a login interface that prompts the user to input their login ID and password. The program evaluates the input entered via the serial Bluetooth application before triggering the green LED if the correct input is given, and the red LED otherwise; a sketch of this exercise follows.
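Below is a minimal sketch of the module 6 login exercise. The credentials, buffer sizes and pin assignments are hypothetical; the point is that the pointer passed to readToken() hands scanf() the address of the buffer in memory.

```cpp
#include "mbed.h"
#include <string.h>

Serial bt(PTB1, PTB2);                      // hypothetical HC05 pins
DigitalOut green(LED_GREEN), red(LED_RED);  // assumed active-low on-board LEDs

// Hypothetical credentials for the exercise.
const char *VALID_ID = "student";
const char *VALID_PW = "ece126";

// Read one token into the memory that 'dest' points to.
void readToken(char *dest) {
    bt.scanf("%15s", dest);   // the pointer supplies scanf with the buffer address
}

int main() {
    char id[16], pw[16];
    while (1) {
        bt.printf("Login ID: ");
        readToken(id);
        bt.printf("Password: ");
        readToken(pw);
        if (strcmp(id, VALID_ID) == 0 && strcmp(pw, VALID_PW) == 0) {
            green = 0; red = 1;   // correct input: green LED on
        } else {
            green = 1; red = 0;   // wrong input: red LED on
        }
    }
}
```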
The final module attempted to enlighten the participants about the basic concept of structures in the C programming language and how a structure can be used to hold several pieces of information within the same category that share common attributes. Figure 7 demonstrates the use of the structure concept to control the brightness of the LED. A structure with a tag named LED was declared in this context, and three members named "bright", "medium" and "dim" were defined. The member attributes can be accessed using the "." operator, for example RED.bright, which assigns the value 1.0 to the variable LAMP (a sketch of this pattern is given below). In order to evaluate the approach's acceptability and its effect on learning C programming within the training period, statistical studies were carried out before and after the training program. The results of the survey are discussed in the following section.
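The sketch below mirrors the structure pattern described above. Driving the red LED pin with PwmOut to vary its brightness is an assumption for illustration; on some targets the on-board LED pins are not PWM-capable.

```cpp
#include "mbed.h"

// Assumption: LED_RED can be driven by PwmOut to vary its brightness.
PwmOut lamp(LED_RED);

// Module 7's structure: one tag, three brightness levels as members.
struct LED {
    float bright;
    float medium;
    float dim;
};

int main() {
    struct LED RED = {1.0f, 0.5f, 0.1f};  // initialise the three members
    float LAMP = RED.bright;              // "." operator, as described above
    lamp = 1.0f - LAMP;                   // invert the duty cycle for an active-low LED
    while (1) {}
}
```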
Results

Figure 8 shows the bar graphs for Criterion 1, which measures the students' attributes towards the C Programming subject. As shown in the figure, the mean scores for all questions in the exit survey are higher than in the entrance survey. The students responded that they had better programming skills and sufficient knowledge and understanding of C Programming after the module was implemented.

From the survey, we also measured the lecturer's ability in teaching the subject from the students' point of view; the statistics are depicted in Figure 9. The mean scores in the exit survey are higher than in the entrance survey, and the majority of the mean scores for all questions in this criterion are higher than 4.00. This indicates that the students felt the lecturer was capable of teaching this subject. We also evaluated how effectively the lecturer implemented the module, as shown in Figure 10. The mean scores for all questions are higher in the exit survey than in the entrance survey, all above 4.00. This shows that the students are satisfied with the implementation of the module.
Other than that, the students gave positive responses towards the facilities provided during the module session. Figure 11 shows the statistics for this criterion. The mean scores for equipment space (mean = 4.13) and functionality (mean = 4.10) are higher in the exit survey compared to the entrance survey. The survey also tested the students' understanding of the module, as can be seen in Figure 12. From the figure, most students gave a positive response that the module enhances their understanding of C Programming. This research study was further extended to evaluate the students' responses reflecting the module execution and implementation. Figures 13 to 16 depict the students' responses regarding the overall module assessment. Figure 13 shows the bar graph for the criterion of motivation and students' interest towards C Programming. The highest score for this criterion was 'Agree' for all three questions, which shows that the students are motivated and interested in learning C Programming. Figure 14 describes the difficulty of the course from the students' responses. The majority of the students found the course quite challenging, and they struggled to understand and complete the given tasks. Figure 15 displays the third criterion, which measured the expectancy for success among the students. The results show positive feedback from the students, who believe this course will improve their academic performance. Figure 16 depicts the students' responses on the implementation of the course. They found the course enjoyable, and they highly recommended making the course more fun and interesting by integrating interactive games and interfacing with real-life applications.
Conclusions
This paper reports the effectiveness of the implementation of the C Programming module. In general, we evaluated the students' response against five criteria, namely students' attributes, lecturer's profile, implementation, facilities and students' understanding. From the survey, the results in the exit survey are higher compared to the entrance survey for all criteria. This shows that the students are satisfied with the implementation of the module, which has increased their understanding in learning C Programming.
Explosive behaviors on coupled fractional-order system
Fractional derivatives provide a prominent platform for various chemical and physical systems with memory and hereditary properties, while most of the previous differential systems used to describe dynamic phenomena, including oscillation quenching, are of integer order. Here, the effects of the fractional derivative on the transition process from the oscillatory state to the stationary state are illustrated for the first time on mean-field coupled oscillators. It is found that the fractional derivative can induce the emergence of a first-order discontinuous transition with hysteresis between the oscillatory and stationary states. However, if the fractional derivative is smaller than a critical value, the transition becomes reversible. Besides, the theoretical conditions for the steady state are calculated via the Lyapunov indirect method, which prove that the backward transition point is unrelated to the mean-field density. Our result is a step forward in elucidating the control mechanism of explosive phenomena, which is of great importance in highlighting the role of the fractional-order derivative in the emergence of collective behaviors in coupled nonlinear models.
Introduction
Spontaneous symmetry breaking in ensembles of interacting dynamic subsystems is an evergreen and challenging problem in nonlinear science, with wide applicability in diverse fields like physics, biology, and engineering [18,28,33,38]. In reality, the spontaneous symmetry breaking of amplitude and phase in coupled identical units, based on the breaking of the general translational and permutational symmetry of coupled oscillators, has induced several emergent collective phenomena such as chimera states [1,16,19], synchronization [12,30], and extreme events [14,29] in natural and dynamic systems. Recently, an ever-increasing interest in symmetry breaking phenomena has been witnessed through the enormous burst of studies on explosive synchronization (ES) and fractional differential equations.
The emergence of explosive synchronization (ES) in nonlinear dynamic systems, characterized by a sudden (discontinuous) and irreversible transition between the incoherent and coherent states as the coupling strength varies, injects fresh blood into the research community's investigation of spontaneous phase symmetry breaking in coupled units. This kind of sharp and irreversible transition was first reported in a phase model [13], where a scale-free network and a positive correlation between the frequencies of the oscillators and their degrees were identified explicitly as the two key factors of ES. Later, an important work [46] established a more general framework, frequency-weighted coupling, in which the two aforementioned ingredients are included as specific cases. It has made ES a topic of great interest among researchers in nonlinear science and has motivated plenty of significant studies [5,8,20,45] and experimental evidence [10,15,26,44]. Besides, a new phenomenon of explosive oscillation quenching in a frequency-weighted networked system [7], named explosive death (ED), has grabbed the attention of researchers owing to its rich background in complex systems [4,6,9]. In contrast to the classical continuous oscillation quenching transitions [32,48], ED implies an abrupt and irreversible process from the oscillatory state to the suppression of oscillation. Specifically, based on the properties of the steady state, ED can be classified into explosive amplitude death and explosive oscillation death [17]. In fact, amplitude death (AD) appears when the coupled oscillators stabilize at a common homogeneous steady state (HSS), and when the stable HSS is associated with the coupling strength, nontrivial amplitude death (NAD) is observed. Oscillation death (OD), in contrast, refers to the system staying at a coupling-related inhomogeneous steady state (IHSS), produced via symmetry breaking, where the oscillators occupy diverse stable branches of the IHSS. Hence, when the system undergoes a first-order discontinuous transition from oscillation to AD, explosive amplitude death occurs, while explosive oscillation death emerges when the stable state is OD. It is worth mentioning that the rapid development of explosive behaviors has benefited a lot from practical applications on many complex networked systems, and a great deal of research has been conducted to provide effective control methods for ED with different coupling schemes [11,23,34,[40][41][42]47].
As we all know, fractional differential equations play a vital role in many fields of science and engineering owing to the fact that the dynamics of many systems are rendered better by fractional derivatives [2,35,37]. In detail, most processes in the real world involving special materials and chemical properties are likely to show fractional behaviors, such as memory, non-locality and history dependence. In the past few decades, an increasing number of efforts have been devoted to fractional-order systems, especially in viscoelastic materials [3], fractional control problems [22,31], relaxation processes in sensory adaptation [36] and so on. In fact, the extensive studies focusing on explosive phenomena have been done on integer-order coupled oscillators, while the effect of the fractional derivative on spontaneous symmetry breaking, including ED, is of fundamental importance to the analysis and control of coupled systems. So far, the contribution of the fractional derivative to explosive phenomena has not been revealed. Thus, in order to give a mechanistic insight into the explosive behavior, we elucidate a strategy of a fractional-order mean-field coupled system in this paper, which is outlined as follows. In Sect. 2, the fractional differential equations with the Caputo derivative on the mean-field coupled system are elaborated in detail. In Sect. 3, the transition processes from the oscillatory to the stable steady state are presented numerically and theoretically. Furthermore, the roles of the fractional derivative and the mean-field density in explosive behaviors are revealed. Finally, the conclusion is summarized in Sect. 4.
Mathematical description
Without loss of generality, we analyze the van der Pol oscillator equation that was first proposed by Balthasar van der Pol [39] for vacuum tube circuits. In fact, the classical van der Pol oscillator, described by the second-order differential equation $\ddot{x}(t) - b\left(1 - x^2(t)\right)\dot{x}(t) + x(t) = 0$, serves as a basic model of a self-sustained oscillatory system, also known as the unforced van der Pol equation. Recently, inspired by the development of fractional calculus, fractional derivatives have been applied to the van der Pol model in various forms. In particular, by introducing the capacitance via a fractance in the nonlinear RLC circuit, Pereira et al. [27] considered the fractional-order van der Pol (FVDP) equation as

$\frac{d^{\alpha}x(t)}{dt^{\alpha}} - b\left(1 - x^2(t)\right)\frac{dx(t)}{dt} + x(t) = 0$,

where $1 < \alpha < 2$ represents the fractional order. Meanwhile, another version of the FVDP with a fractional damping term was obtained by Xie and Lin [43] as

$\ddot{x}(t) - b\left(1 - x^2(t)\right) D^{\alpha}_t x(t) + x(t) = 0$.

Subsequently, Mishra et al. [25] presented a general version of the FVDP, namely the sequential fractional van der Pol equation, described as

$D^{\alpha}_t\!\left(D^{\alpha}_t x(t)\right) - b\left(1 - x^2(t)\right) D^{\alpha}_t x(t) + x(t) = 0$.

Further, the state-space formulation is generated as

$D^{\alpha}_t x_i = y_i, \qquad D^{\alpha}_t y_i = b\left(1 - x_i^2\right) y_i - x_i, \qquad (1)$

where $D^{\alpha}_t x_i$ and $D^{\alpha}_t y_i$ are the Caputo derivatives of $x_i$ and $y_i$, and $0 < \alpha < 1$ indicates the $\alpha$-order time derivative. In fact, Caputo's definition of the fractional derivative [21] is suitable for use in this paper on account of its superiority for practical problems in engineering; it is defined as

$D^{\alpha}_t f(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^t \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha-n+1}}\, d\tau, \qquad (2)$

where $\Gamma(\cdot)$ is the Gamma function and $n - 1 < \alpha < n \in \mathbb{Z}^+$. As the investigation of fractional differential equations has continued, the fractional-order counterpart of the van der Pol oscillator has drawn more and more attention, and many researchers in different branches have studied its dynamical properties, including chaos and other complicated emergent collective phenomena.
Here, we intend to deal with the FVDP model in the form of Eq. (1). Since our main concern is whether explosive death can appear in a fractional-order dynamical system via a first-order discontinuous transition, we consider a nonlinear model of N identical FVDP oscillators coupled mutually through a mean-field interaction. The dynamics are governed as follows:

$D^{\alpha}_t x_i = y_i + k\left(Q\bar{x} - x_i\right), \qquad D^{\alpha}_t y_i = b\left(1 - x_i^2\right) y_i - x_i. \qquad (3)$

Herein, $i = 1, 2, \ldots, N$ indexes the coupled oscillators. The value $Q$ is the intensity of the mean field with $0 \le Q < 1$, and $k$ ($k > 0$) indicates the coupling strength. The system parameter $b$ ($b > 0$) sets the nonlinearity and damping strength of the single van der Pol oscillator, which exhibits a limit cycle. $\bar{x} = \sum_{i=1}^{N} x_i / N$ denotes the mean field of the state variable $x$. Apparently, in the limit case $\alpha \to 1^-$, the first-order derivative $\dot{x} = \lim_{\alpha \to 1^-} D^{\alpha}_t x$ is recovered from Eq. (2), and Eq. (3) reduces to the classical mean-field coupled van der Pol system, where oscillation quenching behaviors appear as the coupling intensity varies.
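As a concrete illustration of how such a history-dependent system can be integrated numerically, the sketch below implements an explicit Grunwald-Letnikov (GL) discretization of Eq. (3) for N = 2. The authors report using a Runge-Kutta scheme; the GL method is a common alternative for Caputo-type systems with zero initial history and is shown here only because it makes the memory property explicit. The parameter values are illustrative assumptions, not the paper's exact settings.

```cpp
#include <stdio.h>
#include <math.h>

#define N 2
#define STEPS 5000

int main(void) {
    /* Illustrative parameters (not the authors' exact values). */
    const double h = 0.01;      /* time step             */
    const double alpha = 0.8;   /* fractional order      */
    const double b = 3.0;       /* damping/nonlinearity  */
    const double k = 1.0;       /* coupling strength     */
    const double Q = 0.5;       /* mean-field intensity  */

    static double X[STEPS + 1][N], Y[STEPS + 1][N], c[STEPS + 1];
    const double ha = pow(h, alpha);

    /* GL binomial weights: c_0 = 1, c_j = c_{j-1} * (1 - (1 + alpha) / j). */
    c[0] = 1.0;
    for (int j = 1; j <= STEPS; j++)
        c[j] = c[j - 1] * (1.0 - (1.0 + alpha) / j);

    X[0][0] = 0.1;  X[0][1] = -0.2;   /* arbitrary initial conditions */
    Y[0][0] = 0.0;  Y[0][1] = 0.1;

    for (int n = 1; n <= STEPS; n++) {
        double xbar = (X[n - 1][0] + X[n - 1][1]) / N;   /* mean field */
        for (int i = 0; i < N; i++) {
            /* Right-hand sides of Eq. (3) evaluated at the previous step. */
            double fx = Y[n - 1][i] + k * (Q * xbar - X[n - 1][i]);
            double fy = b * (1.0 - X[n - 1][i] * X[n - 1][i]) * Y[n - 1][i]
                        - X[n - 1][i];
            /* The history sums carry the memory of the fractional derivative. */
            double sx = 0.0, sy = 0.0;
            for (int j = 1; j <= n; j++) {
                sx += c[j] * X[n - j][i];
                sy += c[j] * Y[n - j][i];
            }
            X[n][i] = ha * fx - sx;
            Y[n][i] = ha * fy - sy;
        }
    }
    printf("x1(T) = %f, x2(T) = %f\n", X[STEPS][0], X[STEPS][1]);
    return 0;
}
```

Sweeping k up and down with this integrator, and recording the long-time amplitudes, reproduces the kind of forward/backward continuation described in the next section.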
Discontinuous transitions on coupled fractional-order system
We are interested here in the transition process from the oscillatory state to the stationary state. To quantify the state at each coupling strength, i.e., to distinguish the oscillatory state from the steady state, we compute a quantitative indicator $A(k) \in [0, 1]$, evaluated at a sampled coupling strength $k$, as the order parameter.
The variable $a(k)$ indicates the difference between the global maximum and minimum long-time average amplitudes for a given coupling strength $k$ after the system has undergone a sufficiently long transient. The scaled average amplitude is given by $A(k) = a(k)/a(0)$, i.e., $a(k)$ normalized by its value for the uncoupled system. Thus, $A(k) = 0$ corresponds to the 'death' state and $0 < A(k) \le 1$ represents the oscillatory state. The order parameter $A(k)$ is obtained numerically by the Runge-Kutta scheme. Since we are concentrating on investigating ED in the fractional-order system, we simulate the system adiabatically in both the forward and backward directions through a continuous increase or decrease of the coupling strength with step $\delta k = 0.02$. It has been reported [41] that in the limit case $\alpha \to 1^-$ for N = 100 coupled van der Pol oscillators, the order parameter $A(k)$ shows a sudden fall to zero with the increase of coupling strength, while, as the coupling strength decreases, $A(k)$ performs an abrupt jump from zero to a finite value; this is replotted in Fig. 1a for the case N = 2, indicating that the first-order transition appearing in the integer-order mean-field coupled system is independent of the number of oscillators. On further observation of the time series near the critical transition points in Fig. 1b, this phenomenon is identified as explosive NAD. We first consider the numerical results for a system of N = 2 coupled fractional-order oscillators with fractional derivative $\alpha = 0.8$ in Fig. 2. As expected, we observe that the order parameter $A(k)$ still exhibits a sharp, irreversible transition whenever we increase the coupling strength $k$ from zero to the maximum value $k_{max}$ or decrease $k$ from $k_{max}$ to zero in Fig. 2a. These two sudden jumps emerge at different values of the coupling strength $k$, forming a hysteresis area (HA) enclosed by the forward and backward transition points $k_f$ and $k_b$. However, when we reduce the coupling strength from the maximum $k$ reached in the forward direction, the transition path in the backward direction (blue lines) does not overlap the curve of the forward continuation (orange lines) after passing through the backward transition point. That is, the oscillators cannot be restored to the initial state once the fractional derivative $\alpha = 0.8$ is introduced, which is a far cry from the previous research. Note that both the critical forward (yellow star) and backward (green cross) transition points move forward compared to the integer-order system in Fig. 1a. To better characterize the properties of the emerging explosive death, we portray the corresponding time transients of the coupled system near the critical forward and backward transition points in Fig. 2b1 and b2. We can infer that the oscillators synchronize for smaller values of $k$. Then, as the coupling strength becomes larger, the synchronous behavior disappears immediately after the critical forward transition point $k_f = 1.0$; interestingly, at the same time, the oscillators stabilize on two inhomogeneous steady branches. Namely, once the fractional derivative exists, the system preferentially stabilizes at the OD state rather than the homogeneous equilibrium. In fact, as the coupling strength $k$ becomes smaller with each decrement, Fig. 2b2 reveals that the OD state is lost as soon as the backward transition point $k_b = 0.72$ (less than $k_f$) is reached.
It can be clearly seen that the system oscillates with two opposite amplitudes below this critical coupling strength, indicating that the system reaches an anti-synchronous state rather than the initial synchronization. This is not surprising: since we have determined that the oscillators are stable on two branches in the forward transition, and we choose the inhomogeneous steady state as the initial state at $k = k_{max}$ in the backward direction rather than the random selection used in the forward process, the anti-synchronized state is easier to obtain. Thus, not only is a hysteresis area of OD and the synchronized state observed, but a fresh region of synchronization and anti-synchronization also appears. Moreover, the results above are independent of the number of coupled oscillators. In fact, for N = 100, the explosive OD appears in both systems and the forward transition points are the same as in the case N = 2. Meanwhile, for N = 2, the two oscillators stabilize on two branches that are symmetric about the origin, whereas for N = 100 the oscillators distribute over upper and lower branches that are asymmetric about the origin. Hence, the initial states of N = 2 and N = 100 in the backward direction differ, inducing different backward transition points. Besides, for large N the system experiences a sudden and irreversible transition from the oscillatory state to a chimera death state, which differs greatly from the case N = 2 (see details in Sect. 3.3).
Amplification of the hysteresis area
When an explosive discontinuous transition happens, there must exist divergent critical transition points in the two directions owing to the adiabatic method. The region (nondimensional area) enclosed by these two points is the so-called hysteresis area, identified in the previous section as the coexistence of OD and synchronization. To further characterize the HA in the fractional-order system, we track the variation of the forward transition point $k_f$, the backward transition point $k_b$ and the HA width as the fractional derivative $\alpha$ decreases from 1 to 0 with decrement $\delta\alpha = 0.01$ in Fig. 3, for fixed $Q = 0.5$, and some interesting conclusions are drawn. Specifically, a tall hysteresis area clearly pops up for $\alpha = 0.99$ in Fig. 3a, meaning that explosive OD happens even for $\alpha = 0.99$. In other words, the system is bound to exhibit explosive OD, not explosive NAD, once the fractional derivative is introduced. As we decrease $\alpha$, we observe a uniform reduction of the hysteresis region, until the threshold value $\alpha_{min} = 0.72$ is reached, at which the forward and backward transition points coincide, indicating that the appearance of the hysteresis loop made up of OD and the oscillatory state is closely connected with the fractional derivative. In fact, the critical forward and backward transition points in the current paper are defined as the points after the state variation; thus, when $k_b - k_f = \delta k = 0.02$ holds, we consider the two critical points to overlap. It is tentatively suggested that the hysteresis region shrinks with the reduction of the fractional derivative. More to the point, a fractional derivative $\alpha > 0.72$ is what we regard as the qualifying condition to ensure the occurrence of explosive OD. Besides, we note that the hysteresis area width enlarges linearly with increasing $\alpha$; the variation of HA width with the fractional derivative is well fitted by the linear function $HA_{width} = 3.2699\,\alpha - 2.3277$ (dotted line in Fig. 3b).
Encouraged by this interesting observation on the occurrence condition of explosive OD in the fractional-order system, we now proceed to vary the intensity of the mean field $Q$ and investigate the peculiarities of the explosive transition. We start with $Q = 0$ and increase it with increment $\delta Q = 0.1$ up to $Q = 0.9$, as described in Fig. 4. We note that the threshold value $\alpha_{min}$ does not grow with the enlargement of the mean-field intensity $Q$. Further, the intensity of the mean field can act as a regulator to revive the explosive behavior. In other words, to take a small example, in the case $Q = 0.1$, the explosive OD vanishes if the fractional derivative satisfies $\alpha < 0.8$; if we then increase $Q$ to $Q = 0.4$, a fractional derivative $\alpha$ in the interval $0.72 < \alpha \le 0.8$ can still induce explosive behavior. In this way, the value of $Q$ can adjust the occurrence of the first-order transition. Thus, it is verified that the mean-field intensity plays a vital role in controlling the qualifying condition for explosive OD.
Moreover, for a detailed investigation, we compute the critical transition points along the forward and backward continuations as the mean-field intensity increases, for different fractional derivatives $\alpha$, in Fig. 5. In fact, it is not surprising that the HA width in Fig. 5c shows results similar to the forward transition points, since it is obtained by subtracting the backward transition point from the forward transition point. This means that the fractional derivative and the mean-field intensity can influence the appearance of the discontinuous transition from synchronization to the stationary state in the forward direction. Besides, the fractional derivative also plays an important role in adjusting the backward transition from OD to the anti-synchronized state. However, the backward transition point is entirely unrelated to the mean-field intensity, in good agreement with the theoretical results (see details in Sect. 3.2), which indicates that the backward transition point is independent of the mean-field intensity $Q$.
Analytical analysis of critical transition point
To further enunciate the transition in the explosive behavior, we move to some theoretical analysis via the Lyapunov indirect method in the next step of our study. Specifically, the fixed points of the two coupled fractional oscillators can be divided into three categories; among these, the trivial steady state is $E_0 = (0, 0, 0, 0)$, while the inhomogeneous steady state $E^*$, which appears due to symmetry breaking, satisfies $x^* = \pm\sqrt{1 - \frac{1}{bk}}$ and $y^* = kx^*$. Note that the IHSS exists under the condition $k > 1/b$. Moreover, the IHSS is independent of the mean-field density, as is clear from the form of the inhomogeneous fixed point. From the results presented above, it is clear that the stationary state of the mean-field coupled fractional-order system is OD. Thus, we mainly apply the Lyapunov indirect method to the coupling-related IHSS $E^*$, performing the linear stability analysis at the inhomogeneous equilibrium $E^*$ to obtain the stability boundary, in other words, the critical transition point. Linearizing around $E^*$ yields a characteristic equation whose four eigenvalues $\lambda^*_m$ ($m = 1, 2, 3, 4$) determine stability. According to the stability theorem for fractional systems proposed by Matignon [24], the fixed point $E^*$ is asymptotically stable if all eigenvalues of the linearization satisfy the criterion $|\arg(\lambda^*_m)| > \alpha\pi/2$. That is, if the absolute values of the arguments of all the eigenvalues of the linear matrix are greater than $\alpha\pi/2$, the nonlinear system is asymptotically stable. Hence, the stable inhomogeneous equilibrium of the coupled fractional-order oscillators, i.e., the OD state, can be determined accordingly, and an explicit illustration of the backward critical transition point is exhibited. In fact, when the damping coefficient $b$ is smaller than $\sqrt{1-Q}$, the backward transition point is $k_b = 1/b$. Otherwise, if the damping coefficient $b$ and the mean-field density $Q$ satisfy $b > \sqrt{1-Q}$, the relationship between the fractional derivative $\alpha$ and the critical backward transition point $k_b$ is given by the expression in Eq. (8); that is, the critical points in the backward direction must satisfy the corresponding condition on $\alpha$. One final thing to note here is that, under the conditions of Figs. 3 and 5, we can verify that if the backward critical transition point satisfies the criterion marked as the black solid line in Fig. 3a, the oscillators are stable. In addition, we observe that the stability condition is independent of the mean-field density, although the backward transition points can be divided into two types via the relationship between $b$ and $\sqrt{1-Q}$. Besides, one finds that the criterion has no relationship to the mean-field density $Q$, which fits the numerical results in Fig. 5b. Therefore, it is verified that the backward transition point is independent of the mean-field intensity in the current paper.
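For completeness, a short derivation of the IHSS reported above is sketched below, assuming the mean field vanishes on the two symmetric branches ($\bar{x} = 0$), consistent with Eq. (3); it reproduces the fixed point and the existence condition $k > 1/b$.

```latex
\begin{align*}
D^{\alpha}_t x_i = 0 &\;\Longrightarrow\; y^{*} + k\,(Q\bar{x} - x^{*}) = 0
  \;\xrightarrow{\;\bar{x}=0\;}\; y^{*} = k x^{*},\\
D^{\alpha}_t y_i = 0 &\;\Longrightarrow\; b\,(1 - x^{*2})\,y^{*} - x^{*} = 0
  \;\Longrightarrow\; bk\,(1 - x^{*2}) = 1 \quad (x^{*}\neq 0),\\
&\;\Longrightarrow\; x^{*} = \pm\sqrt{1 - \tfrac{1}{bk}},\qquad
  \text{real only for } k > \tfrac{1}{b}.
\end{align*}
```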
Emergence of explosive chimera death
Based on the above observations, the explosive NAD that exists in the integer-order system does not appear in the fractional mean-field coupled oscillators; instead, a sudden and irreversible transition from the oscillatory to the OD state occurs once the fractional derivative is introduced. However, the system reported in the previous sections is only a special case for N = 2, and it is valuable to study how the system changes with the degrees of freedom of the coupled oscillators. For simplicity, we divide the network size into odd and even cases and consider the transition processes in Fig. 6, taking N = 100 and N = 95 as examples. By observing the variation of the order parameter and the time transients near the transition points, we find that the sudden and irreversible transition from the synchronous to the OD state happens in both the odd and even cases in the forward process, and that in the backward direction, with the decrease of the coupling strength, the system undergoes a sharp jump from OD to synchronization. Meanwhile, a hysteresis area of coexisting synchronization and OD appears. Besides, the critical forward transition points are the same in both cases, while the backward critical transition points show a fine distinction between N = 100 and N = 95. It is interesting to note that, in the backward direction, the oscillators go back to synchronous oscillation rather than the anti-synchronization shown in Fig. 2, which differs greatly from the transition for N = 2. Therefore, the coexistence of synchronization and anti-synchronization appearing in Sect. 3.1 does not exist for the high-dimensional system.
To provide more insight into the discontinuous first-order transition appearing in the high-dimensional fractional system, we examine the space-time plots and the snapshots of the variable $X_i$ near the forward and backward transition points in Figs. 7 and 8. The in-phase synchronized state is again well pronounced in the oscillatory regime for both the odd and even cases. Interestingly, by further analyzing the space-time plots and snapshots at the stable inhomogeneous steady state, we observe that the N identical oscillators split into two subpopulations: in one domain, neighboring oscillators populate the same branch of the inhomogeneous steady state, while in the other subpopulation, neighboring elements populate the two branches of the inhomogeneous steady state at random. (Fig. 7: (a, b) space-time plots of the variable $X_i(t)$ near the forward critical transition points for network sizes N = 100 (a1, b1) and N = 95 (a2, b2); (c) snapshots at t = 1000 in the forward direction showing the explosive chimera death.) Hence, the stationary state is characterized by the coexistence of spatially coherent oscillation death and spatially incoherent oscillation death, which is the typical signature of chimera states. In other words, the stationary state is further identified as chimera death (CD) via Figs. 7 and 8. Therefore, the oscillators undergo a sudden jump between the synchronized state and CD, while a hysteresis area of CD and synchronization exists; that is, explosive chimera death emerges in the N coupled fractional oscillators. In more detail, for the high-dimensional cases N = 100 and N = 95, the fractional system experiences a first-order transition from the in-phase synchronized state to the chimera death state, which differs greatly from the case N = 2. Moreover, although explosive CD exists in both the even and odd situations, the distribution of nodes that stabilize at the coherent and incoherent oscillation death is completely random. When we reduce the coupling strength from $k_{max}$ to zero, we choose the final state of the forward transition, rather than the uniformly symmetric state of Sect. 3.1, as the initial state; thus, there are subtle differences in the backward critical transition points for N = 100 and N = 95.
For a detailed investigation of how the explosive transition evolves with the number of network nodes N, we examine the variation of the forward and backward transition points as well as the hysteresis area width with increasing network size in Fig. 9. Interestingly, the crucial transition points are, to some extent, independent of the parity of the number of oscillators. In particular, the forward transition points remain unchanged for every network size we examined. In contrast, as for the stationary state in the forward direction, the distribution of coherent OD and incoherent OD for diverse network sizes is random, and this distribution is the initial state of the backward process; thus, the backward transition points fluctuate slightly within a range. Meanwhile, the width of the hysteresis area, composed of the synchronized state and the chimera death state, also fluctuates within a small interval, as shown in Fig. 9b.
Conclusion
The present study has dealt with the problem of explosive phenomena in a system of mean-field coupled fractional-order nonlinear oscillators. We demonstrate the setting of a sudden and irreversible transition from oscillatory to stationary states, which is observable even when the fractional derivative is introduced to describe the chemical properties and special materials of the real world. It is uncovered that, once the fractional derivative is introduced, the system prefers to stabilize at the inhomogeneous steady state rather than the homogeneous equilibrium. Meanwhile, the fractional derivative exhibits an amplification of the hysteresis area enclosed by the critical forward and backward transition points of the explosive transition. Indeed, our result is not limited to the case of two coupled oscillators and is fully robust with respect to the number of oscillators. In more detail, for the high-dimensional system, the oscillators undergo an abrupt transition from the synchronized state to the chimera death state, resulting in a coexistence region of synchronization and the CD state, which is independent of the parity of the number of oscillators. Moreover, regarding the role of the fractional derivative, it is revealed that there is a minimum fractional derivative for inducing explosive behavior when the mean-field density is fixed. In other words, the appearance of the first-order discontinuous transition can be adjusted via the fractional derivative. Interestingly, the threshold value $\alpha_{min}$ does not increase with the enhancement of the mean-field density $Q$. Thus, an appropriate mean-field density and a sufficiently large fractional derivative are liable to induce the emergence of explosive behavior.
Besides, it is further verified, numerically and theoretically, that the backward transition point has no relationship with the mean-field density; the mean-field density can only regulate the forward direction to control the coexistence area of the oscillatory and stationary states. Notably, our findings represent a new attempt in the investigation and application of the explosive transition in natural systems showing fractional behaviors. Our results on the role of the fractional derivative in collective behaviors, especially in oscillation quenching, can be constructive in uncovering the underlying mechanisms of many realistic chemical and physical systems with the properties of memory, non-locality and history dependence. The authors would like to thank the anonymous referees for their efforts and valuable comments.
Funding: National Natural Science Foundation of China (Grant Nos. 11772254, 11972288) and Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (Grant No. CX2021035).

Data availability: Data will be made available on reasonable request.
Conflict of interest
The authors declare that they have no conflict of interest with respect to the research, authorship and publication.
Electromagnetic Field Drives the Bioelectrocatalysis of γ-Fe2O3-Coated Shewanella putrefaciens CN32 to Boost Extracellular Electron Transfer
Microbial hybrid systems modified with magnetic nanomaterials can enhance interfacial electron transfer and energy conversion under the stimulation of a magnetic field. However, the bioelectrocatalytic performance of such hybrid systems still needs to be improved, and the mechanism of magnetic field-induced bioelectrocatalytic enhancement remains unclear. In this work, γ-Fe2O3 magnetic nanoparticles were coated on the surface of Shewanella putrefaciens CN32 cells, which were then placed in an electromagnetic field. The results showed that the electromagnetic field can greatly boost extracellular electron transfer: the oxidation peak current of CN32@γ-Fe2O3 increased 2.24-fold under the electromagnetic field. The enhancement is mainly due to the fact that the surface-modified microorganism provides an elevated contact area for the highly catalytically active cytochromes of the outer cell membrane, while the magnetic nanoparticles provide a networked interface between the cytoplasm and the outer membrane that supports a fast, multidimensional electron transport path in the magnetic field. This work sheds fresh scientific light on the rational design of magnetic-field-coupled electroactive microorganisms and on the fundamentals of an optimal interfacial structure for a fast electron transfer process toward efficient bioenergy conversion.
Introduction
Microbial fuel cells (MFCs) are bioelectrochemical systems that convert chemical energy into electrical energy using electroactive microbes (EAMs) as catalysts [1], representing a clean energy technology [2]. Moreover, they can be used to treat waste/wastewater and have been widely investigated due to their dual efficacy [3,4]. EAMs can utilize an electrode as the terminal electron acceptor for reduction, with the electrode serving as either the electron donor or acceptor depending on whether it functions as the anode or cathode [5,6]. They possess several extracellular electron transfer (EET) strategies for anaerobic respiration, including direct electron transfer (DET) mediated by outer membrane C-type cytochromes (OM c-Cyts) [7] and nanoconductors [8,9], and indirect electron transfer (IET) mediated by exogenous or endogenous electron mediators. However, limited by the inefficient EET process and the slow transmembrane process, the power density of MFCs is far from reaching the levels required for industrial application [10,11]. Hence, there is a pressing need to construct a simple and highly efficient approach that expedites the EET process.
With the burgeoning advancement of nanoscience, the use of nanomaterials for MFC modification has received widespread attention from researchers and has achieved significant results [12]. Numerous studies have proven that the implementation of functional nanomaterials can significantly reduce charge transfer resistance, leading to a notable enhancement in microbial colonization and biofilm growth [13-15]. Additionally, this innovative approach has demonstrated considerable potential for improving the efficiency of electron transfer to extracellular receptors [16]. Furthermore, highly conductive nanomaterials can serve as electron shuttle channels, which can greatly improve the efficiency of EET [17]. In this context, researchers have applied various advanced nano-functional materials, including metal oxides, carbon-based nanomaterials, metal-based nanomaterials, conductive polymers, and their composites, to the study of MFCs [18]. Initially, researchers used single or composite nanomaterials to modify the anode surface to increase the surface area for bacterial growth and optimize the electrode surface properties [19]. Nonetheless, a noteworthy observation has emerged: within the natural biofilm formed on the anode surface, the majority of bacteria are located at a considerable distance from the functional nanomaterial. Consequently, only the bacteria inhabiting the innermost layer of the biofilm maintain direct contact with the nanomaterials positioned on the electrode surface [20]. The efficiency of EET is improved only to a limited extent by relying on slow electron jumps between redox centers for electron transfer between bacteria [21]. To address these issues, researchers then proposed the strategy of hybridizing nanomaterials with biofilms [22]. Through the disordered mixed contact of functional nanomaterials with bacteria inside biofilms, the nanomaterials can help facilitate long-distance electron transfer, further improving the efficiency of EET [23]. However, the establishment of a tightly coupled and efficient pathway for electron transfer between EAMs and conductive non-biological surfaces remains elusive. To exploit the active sites on the bacterial outer membrane, researchers have proposed the use of nanomaterials to modify the surface or interior of bacteria [24-27]. The efficient construction of the microbial-nanomaterial interface can optimize the EET efficiency and enhance the power generation performance of MFCs. As an example, the strategic modification of bacterial surfaces with carbon dots has demonstrated an ability to enhance bacterial adhesion and facilitate the formation of biofilms; owing to the presence of surface carbon dots, the maximum current and power output are each increased by 7.34 times, and the EET efficiency is improved [28]. Silver nanoparticles successfully introduced into the transmembrane space and outer membrane dramatically enhance the charge extraction rate of MFCs, achieving a maximum current density of 5 mA/cm2 and a power density of 0.66 mW/cm2 [29]. The concept of a single-cell electron collector has also been proposed: one team utilized in situ dopamine polymerization on the surface of S. oneidensis MR-1 cells to form a primary electron collector, followed by the further assembly of a more efficient electron collector through FeS NPs biomineralization. The remarkable electron transfer rate and electron recovery efficiency led to record-breaking bioelectricity generation in S. oneidensis MR-1, achieving an impressive power output of 3.21 W/m2 [30].
Magnetic nanoparticles (MNPs), regarded as a pivotal class of functional nanomaterials, have garnered extensive attention owing to their remarkable nanoscale characteristics and distinctive magnetic properties, triggering substantial research endeavors. Among the MNPs, maghemite (γ-Fe2O3) is considered one of the most ideal materials for various applications due to its inherent biocompatibility, oxidative stability, high surface area, and good magnetism [31]. Also, Shewanella belongs to the group of dissimilatory metal-reducing bacteria with a unique EET behavior, using iron oxides as terminal electron acceptors to complete metabolic and electron transfer processes. Moreover, the trivalent iron ions in maghemite are reducible, an important electron carrier property for OM c-Cyts and iron oxide proteins. However, there are currently few reports on the application of γ-Fe2O3 for the cell surface modification of MFC power-generating bacteria.
At the same time, recent investigations have demonstrated that the application of a magnetic field (MF) can facilitate the rapid proliferation of biofilms, enhance the electrochemical activity of power-producing microorganisms, shorten startup time, improve open-circuit voltage, reduce reactor resistance, and enrich power-producing bacteria [32-34]. Therefore, the application of an MF in MFCs has been extensively studied.
A reduced startup time and an enhanced biofilm electrocatalytic activity were observed when a 100 mT magnetic field was applied to a single-chamber microbial fuel cell fed with mixed wastewater [35]. A range of MFCs, including single-chamber, double-chamber, and three-electrode cell designs, have been fabricated utilizing pure cultures of Shewanella. Remarkably, under the influence of an MF, high voltage outputs were obtained consistently across all configurations. The study revealed that MF stimulation resulted in an enhanced secretion of mediators and improved catalytic activity. As a consequence, the electron exchange efficiency was markedly improved [36]. In addition, research has indicated that an appropriate magnetic field intensity can increase power generation and reduce internal resistance, while a stronger magnetic field can suppress MFC performance [37].
In addition to the static magnetic fields mentioned above, researchers have also introduced electromagnetic fields (EMFs). An EMF can serve as a driving force for controlling the metabolic kinetics of EAMs, whereas a constant high-intensity magnetic field can inhibit microbial metabolism and normal growth [38]. Therefore, the short-term intermittent application of a magnetic field can have a good regulatory effect on bioelectrocatalysis. Pulsed electromagnetic fields (PEMFs) can enhance the enrichment of exoelectrogenic bacteria and accelerate extracellular electron transfer, thereby improving power generation efficiency. A PEMF causes changes in microbial community composition and uniformity, leading to a decrease in the microbial diversity of the biofilm [39]. Applying a 2 mT solenoid magnetic field (SOMF) in an osmotic microbial fuel cell (OMFC) increased the coulombic efficiency in one study by 20-30%, producing a power density of 26.58 ± 12 mW m−2 and a current density of 266.29 mA m−2, and shortening startup time by 1-2 days. However, performance was reduced when the electromagnetic flux of the coil was increased to 3 mT [40]. In particular, the synergistic application of MNPs and an MF exhibits tremendous potential in enhancing bioelectrochemical electricity generation, facilitating the production of high-value byproducts and efficiently removing pollutants from wastewater sludge [41-43]. However, there are still few reports on bioelectrochemical systems coupling controllable electromagnetic fields with magnetic nanoparticles.
In this study, the magnetic nanomaterial γ-Fe2O3 and S. putrefaciens CN32 were self-assembled to form the hybrid bacteria CN32@γ-Fe2O3 (Figure 1). The magnetic nanoparticles in the hybrid bacteria can serve as electron shuttle media and be coupled with electromagnetic fields to construct an MNPs-hybrid-bacteria-coupled electromagnetic field system that accelerates the bioelectrocatalytic process at the interface, opening up new channels for electron transfer and improving EET efficiency and power generation performance. The experimental findings unequivocally demonstrated that the magnetic nanomaterials on the surface of the bacteria enhanced the DET mediated by OM c-Cyts and significantly improved the efficiency of the bacterial and interfacial EET processes. Under magnetic field stimulation conditions, electrochemical results showed that the MFC system constructed with CN32@γ-Fe2O3 had a smaller peak spacing, a more negative anodic peak potential, a larger oxidation-reduction area, and a lower charge transfer resistance compared to controls. This experiment provides an uncomplicated, efficient, and low-cost technique for modifying S. putrefaciens CN32 using magnetic nanomaterials, which can promote the EET process and has potential value for bioelectrochemical systems (BES).
Cultivation of Heterotrophic Bacteria
The LB plates were prepared and sterilized by autoclaving at a high temperature (121 °C, 20 min). They were then quickly poured into clean culture dishes before completely cooling down. Each dish contained about 20 mL of solution and was stored in a refrigerator at 4 °C after cooling down. The preserved S. putrefaciens CN32 strain was retrieved from the ultra-low temperature freezer at −80 °C, activated on a solid medium, and cultivated in a constant-temperature incubator at 30 °C for 14 h. It was then taken out and stored in the refrigerator at 4 °C for later use. In a super-clean workbench, a single colony of the S. putrefaciens CN32 strain grown on a solid medium was picked and put into liquid medium after high-temperature sterilization. The conical flask was placed on a shaker (220 rpm, 30 °C, 16.5 h) for incubation. The culture was then distributed into 6 centrifuge tubes and centrifuged (6 min, 6000 rpm) to remove the supernatant. The resulting bacteria were resuspended in 18 mM lactic acid and 80 mL of M9 buffer as the anolyte and sparged with nitrogen for 30 min to ensure that the experiment was performed under strictly anaerobic conditions. The obtained bacteria were named CN32. CN32@γ-Fe2O3 hybrid bacteria were prepared using a similar method, wherein 7.5 mM of γ-Fe2O3 MNPs was added at the same time as the single colony. A mixture of 2 mg of γ-Fe2O3 MNPs and 20 µL of polytetrafluoroethylene (PTFE) was applied evenly on the surface of a carbon cloth (CC, 1 × 1 cm) and dried in a vacuum drying oven (110 °C, 3 h) for use as a functionalized electrode, denoted as CC@γ-Fe2O3. When used with S. putrefaciens CN32 in the BES system, it was named CN32 CC@γ-Fe2O3.
Application of Magnetic Fields and Data Acquisition
The schematic diagram of the electromagnetic field-coupled MFC device is shown in Figure S1. The MFC half-cell device was placed in the coil of an integrated (SPG-03) high-frequency induction heating device. The industrial chiller was started and the magnetic field intensity was adjusted to 1-2 mT. For the i-t curve (IT) test, intermittent magnetic stimulation was used with a time interval of 1 h, and the magnetic stimulation was repeated 3 times. Continuous magnetic field application was used for electrochemical impedance spectroscopy (EIS), differential pulse voltammetry (DPV), and cyclic voltammetry (CV) testing until the test was completed. The three-electrode system of the single-chamber MFC was connected to an electrochemical workstation (CHI660 or CHI1040), with 1 × 1 cm carbon cloth (CC), a saturated calomel electrode (SCE), and 2 × 2 cm CC serving as the working electrode, reference electrode, and counter electrode, respectively. The other end was connected to a computer to collect electrochemical data. CN32, CN32@γ-Fe2O3, and CC@γ-Fe2O3 were used as bioanodes for MFC systems coupled with a magnetic field, denoted as CN32 + MF, CN32@γ-Fe2O3 + MF, and CN32 CC@γ-Fe2O3 + MF, respectively.
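For clarity, the intermittent stimulation timeline described above can be expressed as a small scheduling sketch. The 1 h on-window and the helper names below are illustrative assumptions; the text specifies only the 1 h interval and the three repetitions.

# Minimal sketch of the intermittent EMF schedule used during the IT test:
# three stimulation windows separated by 1 h rest intervals. The 1 h "on"
# duration is an illustrative assumption.
ON_S = 3600        # assumed EMF-on window (s)
OFF_S = 3600       # rest interval between stimulations (s)
N_PULSES = 3       # magnetic stimulation repeated 3 times

def emf_is_on(t_s: float) -> bool:
    """Return True if the EMF is applied at elapsed time t_s (seconds)."""
    cycle = ON_S + OFF_S
    k, phase = divmod(t_s, cycle)
    return k < N_PULSES and phase < ON_S

# Example: tag each minute of a 6 h i-t trace with the EMF state.
schedule = [(t, emf_is_on(t)) for t in range(0, 6 * 3600, 60)]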
Bacterial Characterization and Pretreatment
The cells were fixed in a 4% paraformaldehyde solution for 12 h and then dehydrated with a gradient of anhydrous ethanol, ranging from 30% to 100%, to preserve the complete morphology of the bacteria. The soaking time was 30 min; after the bacterial liquid was centrifuged, the upper layer of the liquid was removed, and the sample was subjected to freeze-drying. The bacterial morphology was meticulously examined using cutting-edge techniques, including Field Emission Scanning Electron Microscopy (FESEM) and Transmission Electron Microscopy (TEM). FESEM was performed at an operating voltage of 10 kV, while TEM was conducted at an operating voltage of 200 kV. The properties and content of the surface elements of the material were analyzed using Energy Dispersive Spectroscopy (EDS).
Electrochemical Testing of the S. putrefaciens CN32-Magnetic Field Coupling System
CV was conducted in the voltage range of −0.8 to 0.6 V relative to SCE, with scan rates of 1-100 mV s−1. DPV was performed in the voltage range of −0.9 to 0.7 V with a potential increment of 0.004 V. EIS was carried out over a wide frequency range spanning from 0.1 Hz to 100 kHz; a voltage of −0.45 V was applied, accompanied by a perturbation signal of 10 mV, ensuring precise and accurate measurements. IT was measured under a constant voltage of 0.2 V to analyze the electrochemical oxidation-reduction reaction occurring at the electrode interface.
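As a concrete illustration of the voltammetry settings above, the sketch below builds the triangular potential program for one CV cycle between −0.8 and 0.6 V vs. SCE. The sampling interval and function names are assumptions for illustration; the CHI workstation generates this program internally.

import numpy as np

def cv_waveform(e_low=-0.8, e_high=0.6, scan_rate=0.005, dt=0.01):
    """One triangular CV cycle (V vs. SCE).

    e_low/e_high : vertex potentials (V); here -0.8 to 0.6 V as in the text.
    scan_rate    : V/s; 0.005 V/s = 5 mV/s, within the 1-100 mV/s range used.
    dt           : sampling interval (s), an assumed value for illustration.
    """
    t_half = (e_high - e_low) / scan_rate          # duration of one sweep
    t = np.arange(0.0, 2 * t_half, dt)
    up = e_low + scan_rate * t                     # forward (anodic) sweep
    down = e_high - scan_rate * (t - t_half)       # reverse (cathodic) sweep
    return t, np.where(t < t_half, up, down)

t, e = cv_waveform()
print(f"cycle length: {t[-1]:.0f} s, range: {e.min():.2f} to {e.max():.2f} V")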
Assembly of CN32@γ-Fe2O3
To demonstrate the feasibility of hybridizing the magnetic nanomaterial γ-Fe2O3 with microorganisms, a morphology analysis of CN32 and CN32@γ-Fe2O3 was performed using SEM and TEM. Figure S2 presents the X-ray diffraction pattern of the γ-Fe2O3 nanoparticles used for modification. The pattern reveals characteristic diffraction peaks at 2θ = 30.2°, 35.5°, 43.2°, 53.7°, 57.3°, and 62.8°, corresponding to the (220), (311), (400), (422), (511), and (440) crystal planes. These planes align with the standard JCPDS No. 00-039-1346 crystal structure [44], confirming the material's identity as γ-Fe2O3. By comparing Figure 2a,b, it can be seen that the unmodified S. putrefaciens CN32 was rod-shaped and had a smooth surface. After γ-Fe2O3 MNPs were added for hybrid cultivation, the bacterial surface became rough due to particle encapsulation, indicating the successful hybridization of γ-Fe2O3 MNPs on the surface of the bacteria. As shown in Figure 2c, the elemental analysis results clearly indicate that the elements C, P, Fe, and O were uniformly distributed on the surface of the bacteria, further confirming the successful preparation of CN32@γ-Fe2O3. Corresponding data on the elemental composition of the surfaces are presented in Figure S3. The surface elemental analysis data reveal the presence of carbon (C), oxygen (O), iron (Fe), and phosphorus (P), with C and P attributed to bacterial surface constituents and Fe and O indicative of γ-Fe2O3 modification of the surface.
Exploration of the Optimal Coating Amount of γ-Fe2O3 MNPs
To further investigate the effect of γ-Fe2O3 MNPs surface modification on bacterial bioelectrocatalysis and to identify the optimal coating amount, we assembled half-cells with different CN32@γ-Fe2O3 bioanodes and conducted CV, DPV, EIS, and IT electrochemical tests. Figure 3a reveals the lack of a discernible redox peak in the CV curve of the naturally occurring S. putrefaciens CN32 strain; only a weak redox peak at approximately −0.45 V (vs. SCE) is observed, which can be attributed to the presence of endogenous electron mediators. It is worth noting that an obvious reversible redox peak pair appears at around −0.4 V and 0 V (vs. SCE) for the hybrid bacteria at all concentrations, corresponding to the endogenous electron mediators and the outer membrane C-type cytochrome protein response, respectively. Interestingly, from the CV analysis, it is evident that the oxidation peak current value exhibits a notable upward trend with an increase in the coating amount. This trend reaches its zenith at a coating amount of 7.5 mM γ-Fe2O3 MNPs, after which it gradually diminishes. At the same time, it can be observed that the cathodic and anodic peak separation is minimal for CN32@γ-Fe2O3 + 7.5 mM, indicating a faster electrochemical reaction. γ-Fe2O3 MNPs may serve as efficient electron conduits, helping to bridge the gap for electron transfer both intra- and inter-cellularly, thus overcoming the limitations associated with long-distance electron transmission and enhancing extracellular electron transfer efficiency [45]. However, high concentrations of MNPs decrease the electrocatalytic activity of electricity-producing bacteria, inhibiting their growth and metabolic activity.
In addition, the DPV of CN32@γ-Fe2O3 had a clear peak around −0.1 V (Figure 3b), which can be attributed to the OM c-Cyts, and the peak current density was higher than that of the unmodified bacteria, consistent with the trend of the CV curves. The hybrid bacteria with 7.5 mM γ-Fe2O3 MNPs had the highest electrocatalytic response. However, there was little change in the oxidation peak attributed to self-secreted electron mediators around −0.45 V, which suggests that the magnetic nanomaterials mainly enhanced DET mediated by OM c-Cyts, probably by improving conductivity through the encapsulation of γ-Fe2O3 MNPs on bacterial surfaces, constructing a long-distance electron transport channel, and creating an efficient conductive network both inside and outside the biofilm. As expected, 7.5 mM was confirmed in the experiment to be the optimal coating concentration. From Figure 3c, it can be seen that CN32@γ-Fe2O3 7.5 mM had the highest stable current, owing to the magnetic nanoparticles promoting the growth and enrichment of EAMs, expediting the growth of a biofilm on the electrode surface, and improving the output current.
EIS was used to evaluate the conductivity of the bioanodes [46]; all the electrodes had similar impedance spectra, composed of a distinct semicircle and a straight line (Figure 3d). At the electrode-electrolyte interface, the diameter of the semicircle in the impedance spectrum represents the electron transfer resistance (Rct). A smaller Rct value indicates greater efficiency in electron transfer, corresponding to a faster rate of electron transfer at the interface. The wild-type S. putrefaciens CN32 had the highest charge-transfer resistance, indicating the poor electrical conductivity of the native bacterial cells. After modification with γ-Fe2O3 MNPs, Rct was significantly reduced because of the improved conductivity of the EAMs, which was beneficial for enhancing the power generation performance of MFCs. Therefore, we chose 7.5 mM γ-Fe2O3 MNPs as the optimal coating concentration to enhance the CN32 electrocatalytic activity and improve the EET efficiency. The following experiments used CN32@γ-Fe2O3 prepared by 7.5 mM γ-Fe2O3 MNPs hybridization.
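To make the Rct read-out concrete, the sketch below fits a simple Randles-type model, Z(ω) = Rs + Rct/(1 + jωRctCdl), to Nyquist data by least squares; the semicircle diameter then gives Rct. The circuit model and the synthetic data are illustrative assumptions, since the equivalent-circuit details of this study are not given in the excerpt.

import numpy as np
from scipy.optimize import least_squares

def z_randles(p, w):
    """Simplified Randles impedance: Rs + Rct / (1 + j*w*Rct*Cdl)."""
    rs, rct, cdl = p
    return rs + rct / (1 + 1j * w * rct * cdl)

# Synthetic Nyquist data over 0.1 Hz - 100 kHz (the range used in the text).
w = 2 * np.pi * np.logspace(-1, 5, 60)
z_meas = z_randles([20.0, 350.0, 1e-4], w)           # assumed "true" values

def residuals(p):
    z = z_randles(p, w)
    return np.concatenate([(z - z_meas).real, (z - z_meas).imag])

fit = least_squares(residuals, x0=[10.0, 100.0, 1e-5])
rs, rct, cdl = fit.x
print(f"Rs = {rs:.1f} ohm, Rct = {rct:.1f} ohm (semicircle diameter)")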
Effect of EMF on Functionalized CN32@γ-Fe2O3
Further, to explore the impact of the EMF on bioelectrocatalysis, we applied an EMF of the same field strength to CN32@γ-Fe2O3 doped at different concentrations and carried out CV, EIS, and IT electrochemical tests. As seen in Figure 4a, the CN32@γ-Fe2O3 + MF redox peak currents were substantially increased compared to the natural S. putrefaciens CN32 and showed reversible redox curves. The electrocatalytic activity of CN32 CC@γ-Fe2O3 7.5 mM + MF was the highest. The electrocatalytic activity of the electricity-producing bacteria may have been enhanced due to the stimulated expression of OM c-Cyts and changes in oxidoreductase activity, which could be attributed to the presence of γ-Fe2O3/EMF. Figure 4b demonstrates that the EMF has the capability to decrease the charge transfer resistance, thus enhancing the extracellular electron transfer capacity of electroactive bacteria. Notably, the current density exhibited a progressive increase over time, which corresponds to the process of bacterial enrichment on the electrode surface leading to the formation of a biofilm (Figure 4c). Meanwhile, CN32 CC@γ-Fe2O3 7.5 mM + MF had the largest power production, indicating that the EMF can increase the specific enrichment of electricity-producing bacteria at the anode. Interestingly, we also found that the effect of the applied EMF on the MFC was transient and reversible, with the current rising immediately when the EMF was switched on, gradually decreasing when the EMF was switched off, and gradually increasing relative to the previous cycle over the three EMF stimulation cycles. This suggests that EMF stimulation has a superimposed effect on MFCs, and that the effect on the interior of the electricity-producing bacteria is sustained, ultimately promoting electron transfer and current generation, which is of great significance in revealing the mechanism of EET. The observed trend in the aforementioned test results aligns closely with the findings depicted in Figure 2, demonstrating overall consistency.
The Mechanism by Which MNPs and EMF Synergistically Enhance the DET Process
Further investigation was conducted to explore the mechanism by which hybrid bacteria coupled with an EMF synergistically promote EET. As shown in Figure 5a, CN32@γ-Fe2O3 exhibited a significant improvement in the oxidation-reduction peak current compared to CN32 or CN32 CC@γ-Fe2O3. The main reason is that the γ-Fe2O3 MNPs modified on the bacterial surface had a strong interaction with the electrogenic bacteria, thereby enhancing microbial activity, in contrast to anode electrode modification [47]. It is worth noting that the current density of the CN32 and CN32 CC@γ-Fe2O3 coupled electromagnetic field systems was only slightly increased. However, the oxidation peak current of CN32@γ-Fe2O3 + MF was 2.24 times higher than that of CN32@γ-Fe2O3, at 0.451 mA cm−2 and 0.201 mA cm−2, respectively. This not only indicated that the surface modification of the bacteria was beneficial to the improvement of electrocatalysis but also suggested that the EMF played a synergistic role in promoting extracellular electron transfer mediated by the γ-Fe2O3 MNPs coated on the bacterial surface. On the other hand, the oxidation-reduction peak potentials of CN32@γ-Fe2O3 were −0.421 V and 0.053 V (vs. SCE), while those of CN32@γ-Fe2O3 + MF were −0.394 V and −0.07 V. The peak-to-peak distance of CN32@γ-Fe2O3 + MF was reduced by 0.15 V (0.324 V vs. 0.474 V). The reduction in peak-to-peak distance also demonstrated that the MNPs and EMF synergistically accelerated EET. To further explore the interfacial redox kinetics, we tested the CV curves of the different bioanodes at various scan rates, as shown in Figure S4. For the CN32@γ-Fe2O3 and CN32@γ-Fe2O3 + MF bioanodes, we found that the oxidation-reduction peak current and the square root of the scan rate showed a strong linear relationship (5-100 mV s−1), indicating that diffusion control dominated the reaction process.
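The diffusion-control diagnosis above rests on the linearity of peak current against the square root of scan rate; a minimal sketch of that check is given below, with made-up example numbers standing in for the measured peak currents of Figure S4.

import numpy as np

# Scan rates (mV/s) spanning the 5-100 mV/s window, and illustrative
# (made-up) anodic peak currents (mA cm^-2); replace with measured values.
v = np.array([5, 10, 25, 50, 75, 100], dtype=float)
ip = np.array([0.09, 0.13, 0.20, 0.29, 0.35, 0.41])

sqrt_v = np.sqrt(v)
slope, intercept = np.polyfit(sqrt_v, ip, 1)
r = np.corrcoef(sqrt_v, ip)[0, 1]

# A correlation coefficient close to 1 supports a diffusion-controlled
# process (Randles-Sevcik behavior: ip proportional to v^(1/2)).
print(f"ip = {slope:.4f}*sqrt(v) + {intercept:.4f}, r^2 = {r**2:.4f}")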
As shown in Figure 5b, the DPV curves of CN32@γ-Fe2O3 and CN32@γ-Fe2O3 + MF exhibit two peaks in the range of −0.8 V to 0.6 V. The peak at around −0.45 V is attributed to flavins, while the peak at around −0.1 V is related to outer membrane cytochrome protein-mediated DET. It is noteworthy that the anodic peak current density of CN32@γ-Fe2O3 reaches 0.06 mA cm−2, while the curve of CN32@γ-Fe2O3 + MF shows a peak current of 0.146 mA cm−2, which is 2.43 times higher than without the magnetic field. Correspondingly, the peak potential of CN32@γ-Fe2O3 + MF exhibits a negative shift (−0.136 V vs. −0.12 V). Moreover, it is observed that the peak of direct electron transfer for CN32 CC@γ-Fe2O3 and CN32 CC@γ-Fe2O3 + MF is not enhanced. This may be attributed to the fact that the hybrid-coupled electromagnetic field dual system enhances the direct electron transfer in two ways: on the one hand, γ-Fe2O3 MNPs on the bacterial surface act as electron transport channels to improve EET efficiency; on the other hand, the magnetic field promotes the specific enrichment of electrogenic bacteria, which produce additional magnetic electrons (electrons produced by magnetic-field-stimulated microorganisms) and transfer electrons through the newly formed magnetic channel, significantly enhancing the kinetics of electrochemical reactions and the efficiency of direct electron transfer. As depicted in Figure S5, using EIS to study interfacial charge transfer behavior, CN32 modified with magnetic nanoparticles exhibited a lower Rct compared to the wild-type CN32. Further application of the EMF resulted in a smaller charge transfer resistance, indicating that the MNPs and EMF enhanced the conductivity of the bioanode, which is more conducive to extracellular electron transfer and the construction of a well-established biological and non-biological EET interface.
It has been shown that MtrC and UndA are important OM c-Cyts in the EET process of S. putrefaciens CN32 [48]. In order to further investigate the mechanism of electroactive bacterial EET, the electrocatalytic properties of the MNPs and EMF were tested on the deletion mutant ∆MtrC/UndA CN32. No significant oxidation-reduction current was observed for ∆MtrC/UndA CN32 or ∆MtrC/UndA CN32 + MF. This may be because the deletion of the major outer-membrane receptor proteins of S. putrefaciens CN32 hinders the original EET, and a large number of electrons are unable to undergo the normal EET process. Notably, after the MNPs encapsulated the deletion mutant, the cyclic voltammetry results showed that ∆MtrC/UndA CN32@γ-Fe2O3 and ∆MtrC/UndA CN32@γ-Fe2O3 + MF still exhibited reversible redox peaks, but the peak currents and integral areas were significantly weaker than those of CN32@γ-Fe2O3 + MF. This may be the result of electron transfer mediated by OM c-Cyts other than MtrC/UndA. The demonstrated reversible extracellular electron transfer proves that the OM c-Cyts MtrC/UndA are the major mediating proteins, because γ-Fe2O3 MNPs can act as terminal electron acceptors to promote EET, but the absence of the OM c-Cyts (UndA and MtrC) cuts off the major respiratory chain of electron transfer. Meanwhile, ∆MtrC/UndA CN32@γ-Fe2O3 + MF has greater electrocatalytic activity compared to ∆MtrC/UndA CN32@γ-Fe2O3. This result illustrates that, firstly, the binding of MNPs to outer membrane pigment proteins promotes electron transfer, and, subsequently, the EMF-driven electricity-producing bacteria enhance electron transfer, corroborating the synergistic promotion of EET by the γ-Fe2O3 MNPs and EMF (Figure 5c).
Differential pulse voltammetry results showed that the oxidation peak positions attributed to the outer membrane pigment proteins of the deletion mutants at −0.1 V (vs. SCE) were all positively shifted, indicating a slower electrocatalytic reaction compared to CN32@γ-Fe2O3 + MF, and the oxidation peak currents were all significantly reduced because direct electron transfer was limited by the absence of the mediator proteins OM c-Cyts MtrC/UndA. This points to a critical role for the γ-Fe2O3 MNPs and EMF in the synergistic promotion of DET (Figure 5d).
The hypothesized mechanism by which MNPs and the EMF synergistically promote EET is shown in Figure 6. Effective interfacial electron transfer depends on close contact between the electron conduit and the receptor interface. Only the electrogenic bacteria at the outermost layer of the biofilm could perform interfacial electron transfer through direct contact; the electron transfer process at the distal end was extremely slow and may not have been fully utilized. However, when γ-Fe2O3 was coated on the surface of the bacteria, acting as an electron conduit, it enhanced the conductivity of the bacteria, allowing for a direct connection between bacteria through the electron conduit. This expanded the transfer distance of electrons, resulting in the formation of an inside-out electron transfer pathway in the biological membrane. Moreover, under the influence of the magnetic fluid effects caused by the applied electromagnetic field, the catalytic activity of the outer membrane cytochrome proteins and metabolic reductases was improved. This opened up new magnetic channels connecting the cytoplasm and the outer membrane for the transfer of more electrons, thus accelerating the electron transfer process. The growth and metabolism of the microorganisms were stimulated by the electromagnetic field, inducing cells to secrete additional magnetic electrons, thereby improving the efficiency of interfacial electron transfer at the bioelectrochemical interface.
Conclusions
In summary, a microbial hybrid system was successfully synthesized by coating γ-Fe2O3 MNPs on the bacterial surface, and the bioelectrocatalytic performance of the hybrid system improved under an electromagnetic field. The γ-Fe2O3 MNPs improved the bacterial conductivity and served as an electron transport pathway for long-distance electron transfer, enhancing the efficiency of EET and the power generation performance of the MFC. In addition, the oxidation peak current of CN32@γ-Fe2O3 increased 2.24-fold under an electromagnetic field, showing that the electromagnetic field can greatly boost extracellular electron transfer. The enhancement mechanism is mainly due to the fact that the surface-modified microorganism offers an elevated contact area for the high microbial catalytic activity of the outer cell membrane's cytochromes, while the magnetic nanoparticles provide a networked interface between the cytoplasm and the outer membrane, supporting a fast multidimensional electron transport path under a magnetic field. This work successfully constructed a hybrid-coupled bioelectrochemical system that synergistically promotes an efficient EET process.
Figure 4. CN32@γ-Fe2O3 + MF electrochemical tests. (a) CV curve with a scan rate of 5 mV s−1. (b) EIS curve. (c) IT curve. The downward arrow indicates the EMF being switched on, while the upward arrow indicates the EMF being switched off.
Figure 6. Schematic diagram illustrating the mechanism by which γ-Fe2O3 MNPs and EMFs synergistically promote EET. The downward arrow represents the direction of electron transfer.
Effect of tropical forest disturbance on the competitive interactions within a diverse ant community
Understanding how anthropogenic disturbance influences patterns of community composition, and the reinforcing interactive processes that structure communities, is important for mitigating threats to biodiversity. Competition is considered a primary reinforcing process, yet little is known concerning disturbance effects on competitive interaction networks. We examined how differences in ant community composition between undisturbed and disturbed Bornean rainforest are potentially reflected by changes in competitive interactions over a food resource. Comparing 10 primary forest sites to 10 in selectively-logged forest, we found higher genus richness and diversity in the primary forest, with 18.5% and 13.0% of genera exclusive to primary and logged forest, respectively. From 180 hours of filming bait cards, we assessed ant-ant interactions, finding that despite considerable aggression over food sources, the majority of ant interactions were neutral. The proportion of competitive interactions at bait cards did not differ between forest types; however, the rate and per capita number of competitive interactions were significantly lower in logged forest. Furthermore, the majority of genera showed large changes in aggression-score, often with inverse relationships to their occupancy rank. This provides evidence of a shuffled competitive network, and these unexpected changes in aggressive relationships could be considered a type of competitive network re-wiring after disturbance.
processes, often acting as ecosystem engineers 32,33. The ubiquity, functionality and competitive structuring in ant communities may make ants some of the most cost-effective indicators of tropical forest disturbance 34-36.
Competition among ants has been recognized as a key process in the structuring of their communities 24,37-42, with interactions between certain taxonomic groups, when competing over a common resource, often showing consistent uni-directional aggressive behaviour(s) from one group to the other 43,44. It has been proposed that such patterns in behaviour can dictate ant spatial distributions (occupancy) 40,45,46, abundances 47 and potentially even species richness 48,49. Importantly, the removal of competitive groups in a community may create competitive release, changing the abundance or distribution of other groups by allowing them to newly access and exploit previously denied resources 50-52. The abundance and/or occupancy of a taxonomic/functional group in the community is thus likely to be related to components of their competitive ability over other constituent groups. In other words, a change in frequency of one competitive node is likely to have a cascading or ripple effect across the competitive network, rather than just a 'one-step effect' on only the directly linked nodes.
Testing the influence of disturbance on ant community composition and competition: The effect of anthropogenic forest disturbance on animal communities, such as ants, has been of interest to ecologists 38,53-57, but despite ants' critical functional roles and the apparent importance of competition for structuring their communities, studies specifically examining the effect of disturbance on competitive interactions within communities are relatively lacking. Here, we conducted a study that first investigated the difference in ant community composition between primary and selectively-logged sites in an extensive Bornean forest. Primary forest acted as the control because it had not undergone anthropogenic forest disturbance in the form of logging or clearing for agriculture 58. Selectively-logged forest acted as the disturbed forest comparison, as the removal of specific trees from a forest area results in canopy gaps, destroyed undergrowth, and the presence of extensive road networks 59. Having identified both the similarities and differences in community composition between the sites, we then set out to observe inter-genus interactions at bait cards, allowing us to explore how ant abundance and occupancy relate to variation in competitive interactions for the ant genera shared between the forest types. As a proxy for competitive ability we observed each genus-genus interaction on a bait card and scored whether it was an aggressive, neutral or submissive response for each individual. In the absence of prior experiments of this nature, we tested a null hypothesis in which aggression-scores among genera observed in primary forest would be the same in logged forest, regardless of any change in relative abundances or occupancy between forest types. Alternatively, changes in abundance or occupancy could positively correlate with aggression-score (the intuitive hypothesis), causing a shuffling of dominance in the competitive network, as variation in the presence of numerically dominant ants has been shown to increase interspecific aggression in a previous study 60.
Methods
Study Area. Study sites were situated in the Malaysian state of Sabah, on the island of Borneo, at the site of the Stability of Altered Forest Ecosystems project 58. We established 10 control (undisturbed) sites in primary lowland dipterocarp rainforest within the Maliau Basin Conservation Area (4°44′N, 116°58′E), and another 10 disturbed sites in selectively-logged lowland dipterocarp rainforest within the Ulu Segama Forest Reserve (4°43′N, 117°35′E). The primary forest had never been logged and has an average aboveground tree biomass of 350 t ha−1, in contrast to the selectively-logged forest, which was logged in the 1970s and again in the 1990s, removing an estimated 228 t ha−1 of tree biomass 61. The logged forest was characterized by a more open canopy with a lower leaf area index and smaller trees 61. The choice of the Stability of Altered Forest Ecosystems project as a research location was important, as the design and location of its study sites are considered to minimise the confounding factors that affect land-use change comparisons, such as latitude, slope and elevation 58.
Sampling Design. Field data focused on ground-dwelling ants sampled between February and June 2016. An initial sampling site in each forest type was chosen at random, and the remaining nine sites were spaced sequentially along a transect at intervals of 60-100 m to ensure independence. At each sampling site, 27 sampling points were established using a 3 × 9 grid design (composed of three sets of 3 × 3 connected grids) with each sampling point spaced 5 m from its nearest neighbour (Fig. 1). Due to previously established differences in ant activity at different times of day 62,63, ants were sampled for 2 hours from the first of the grids at 0800 h, the second grid at 1200 h and the last grid at 1600 h.
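To make the layout concrete, a minimal sketch generating the 27 sampling-point coordinates of one site is given below; the coordinate origin and the orientation of the grids along the transect are illustrative assumptions.

# Sketch of one site's 3 x 9 sampling grid (three 3 x 3 grids, 5 m spacing).
# Origin and orientation are arbitrary illustrative choices.
SPACING_M = 5
TIMES = ["0800", "1200", "1600"]   # one 2 h session per 3 x 3 grid

points = []
for grid, start_h in enumerate(TIMES):          # grids laid end to end
    for row in range(3):
        for col in range(3):
            x = (grid * 3 + col) * SPACING_M    # 9 columns along transect
            y = row * SPACING_M
            points.append({"grid": grid + 1, "x_m": x, "y_m": y,
                           "sampled_at": start_h})

assert len(points) == 27
print(points[0], points[-1])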
Assessing community composition between forest types. Each of the three grids at a sample site was split into three columns, each with a different sampling method (Fig. 1). We used both active and passive traps in combination, allowing us to collect both fast- and slow-moving genera 63-65.
(a) For column 1, ant individuals were collected, and genus identified, from leaf litter samples. We scraped up and collected all leaf litter within a 1 × 1 m quadrat at each sampling point 66. Leaf litter collected from the three sampling points within each grid was pooled and placed into a single Winkler bag 67 (an extraction method that separates live specimens from dead vegetation; Fig. 1(i)). Each bag was left to hang for 3 days, a period shown to collect up to 90% of the ant species present in the sample 68, with ants collected in a vial containing 70% ethanol to be identified later (total = 30 bags per forest type).
(b) For column 2, a bait card was placed at each sampling point, with each card consisting of a sheet of graph paper (210 × 148.5 mm) placed flush with the ground and baited with two heaped teaspoons of a tuna and cat-food mix (mixed at a 15:1 weight ratio) (Fig. 1(ii)). Ant genera visiting a bait card were identified non-invasively by re-watching recordings from a GoPro™ video camera (Model: HERO3 White Edition, specifications in Supplementary Information) that was placed above the card for 40 mins per bait card (justification provided in Supplementary Information). Bait cards have been widely used to study food exploitation by ants 69-71. To attract a range of ant genera we used tuna as a protein bait, while the cat food provides a carbohydrate nutrition base, both of which have been highly effective in attracting ants in previous studies 70,72-74. Data from only the video recordings were used to investigate ant competitive interactions.
(c) For column 3, we again used a bait card, but this was monitored for 40 mins by a human observer and any ants that crawled onto the card were invasively hand collected, placed in 70% ethanol and identified later (Fig. 1(iii)). This collection was used to build a reference collection and to provide a search image to help identify ant genera from the column 2 video footage. These data were not used for the community composition analysis or competitive interactions because pilot study data indicated observer presence could disrupt ant presence and interactions.
Ants collected from columns 1 and 3 were identified to genus level under a stereo microscope with the help of a taxonomic identification key 75 and photographs from online databases 76,77. We used genus-level identification because this is an efficient method of taxonomic sorting that ant community composition studies have used previously 78-80. Moreover, ants can be separated into functional groups at the genus level 79-82, and importantly these functional groups have been shown to display differing levels of dominance/aggression 79,81,83. Given this, we felt that genus-level identification was sufficient to begin to unpick the competitive mechanisms within such a highly diverse ant community.
Assessing competitive interactions among genera. From the column 2 video recordings, we noted the time and order of arrival for each genus, and any direct interactions with individuals belonging to other ant genera; processes which have been shown to affect competitive ability in previous studies 60,70,73. Interactions were categorized as neutral or competitive: neutral interactions were those in which the individuals of one genus did not change the behaviour of individuals in the other genus; competitive interactions were those in which individuals of one genus showed aggression toward individuals of another genus 69,70. For all competitive interactions, genera were further classified as either aggressive, where individuals of that genus showed aggression and/or forced individuals of the other to retreat from the bait; or submissive, where an individual fled from the bait when confronted by another from a different genus.
The rate of competitive interactions was defined as the total number of competitive interactions over the 40 min observation, and the per capita number of competitive interactions was calculated by dividing the total number of interactions by the total abundance of ants on a bait card over the 40 min observation period. We assigned an aggression-score to each genus involved in a pairwise genus-genus interaction (0 = Submissive; 1 = Neutral; 2 = Aggressive), and then calculated a mean score for each pairwise genus-genus combination. Consequently, we had multiple aggression-scores for each genus - one for each of the other genera that the target genus interacted with. Only genera that had interactions with >3 other genera were analysed. Scores were produced separately for interactions observed in the primary and logged forest, and the difference between forest-specific scores was calculated to represent an aggression-score change. Considering the unequal representation of the genera in samples because of colony and sampling method variations, we weighted the aggression scores based on their sample size to encompass the variability. The mean of the differences in a genus' aggression-score between forest types was also calculated to represent a change in an aspect of their competitive status between primary and logged forest. The mean of this competitive change was then compared to changes in occupancy to investigate whether variations in competitive interactions are associated with variations in community composition.
Community Composition. A Generalised Linear Model (GLM) with quasipoisson error distribution was used to examine the effect of forest type on genus richness. Mean Shannon diversity (H') was also calculated and a linear regression used to compare ant genus community evenness between forest types 85. Occupancy refers to the number of sampling points at which the focal genus was present out of all sampling points (120 per forest type). We used occupancy rather than abundance to reduce any sampling bias that could be introduced if sampling was near or far from a colony (nest) entrance 86. Further Mann-Whitney U tests were used to compare the occupancy of genera in each sampling method. We used a Spearman's rank correlation to compare changes to the rank occupancy of genera between forest types.
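A minimal sketch of the scoring logic above is given below: each individual in a pairwise genus-genus interaction receives 0, 1 or 2, pairwise means are taken per forest type, and forest-specific scores are differenced. The record structure and example data are illustrative assumptions, not the study's actual dataset.

from collections import defaultdict

SCORE = {"submissive": 0, "neutral": 1, "aggressive": 2}

# Illustrative records: (forest, focal_genus, partner_genus, behaviour)
interactions = [
    ("primary", "Pheidole", "Lophomyrmex", "aggressive"),
    ("primary", "Pheidole", "Lophomyrmex", "neutral"),
    ("logged",  "Pheidole", "Lophomyrmex", "submissive"),
]

def pairwise_scores(records):
    """Mean aggression-score per (forest, focal, partner) pair."""
    sums, counts = defaultdict(float), defaultdict(int)
    for forest, focal, partner, behaviour in records:
        key = (forest, focal, partner)
        sums[key] += SCORE[behaviour]
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}, counts

scores, n = pairwise_scores(interactions)
# Aggression-score change for a pair = logged score - primary score;
# the study additionally weights these scores by sample size.
change = (scores[("logged", "Pheidole", "Lophomyrmex")]
          - scores[("primary", "Pheidole", "Lophomyrmex")])
print(scores, change)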
Competitive Interactions. We used Mann-Whitney U tests to examine the ratio of neutral to competitive interactions in primary and logged forests, and we used a GLM with binomial error distribution to test the effect forest type had on this ratio. To assess whether ant interactions changed between forest types we employed a mixed-effects model (GLMER) using the R package lme4 87. To compare the level of competitive interactions between forest types, the interactions were categorised into single binary response variables: 1 = Competitive (where, between two individuals, one is dominant and one is submissive), 0 = Neutral (where both individuals are neutral). GLMERs were also used to examine the effect of forest type on the proportion, the rate and the per capita number of competitive interactions at each bait card. The random effect of site, containing 10 levels per forest type, was included in the models to account for the nested structure of our sampling design. Following this, we used a Spearman's rank correlation to compare changes in aggression-score with changes in occupancy rank between forest types.
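As a sketch of the binary-response model, the snippet below fits a binomial GLM of interaction outcome on forest type, in Python rather than the R/lme4 workflow the study used; the site random intercept of the GLMER is noted in a comment but omitted here, and the toy data frame is an assumption.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data: one row per pairwise interaction at a bait card.
# outcome: 1 = competitive (dominant/submissive pair), 0 = neutral.
df = pd.DataFrame({
    "outcome": [1, 0, 0, 1, 0, 0, 1, 0],
    "forest":  ["primary"] * 4 + ["logged"] * 4,
})

# Binomial GLM of outcome ~ forest type. The study's analysis additionally
# includes a random intercept for site (10 levels per forest type) via
# lme4::glmer in R; a full Python equivalent would need a mixed-effects GLM.
model = smf.glm("outcome ~ forest", data=df,
                family=sm.families.Binomial()).fit()
print(model.summary())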
Results
Variation in community composition between forest types. The combined leaf litter and bait card data identified 54 genera, with 32 identified exclusively from the leaf litter, three exclusively observed visiting the bait traps, and 19 identified using both collection methods. The two forest types had 68.5% (N = 37) of genera in common, with 18.5% (N = 10) being exclusive to primary forest and the remaining 13.0% (N = 7) exclusive to logged forest (Fig. S1b). Taking the mean genus richness across 120 sampling points per forest type (n = 30 leaf litter + 90 camera observations), we found primary forest had a significantly higher richness than logged forest (4.91 vs. 3.80; GLM: Z_229 = 3.258, P = 0.001; Fig. 2; Table S1). Furthermore, primary forest displayed a significantly higher community evenness in comparison to logged forest (Mean H': Primary = 2.03, Logged = 1.84; t_118 = 2.759, P = 0.007; Table S1).
Of the 22 genera that were observed to interact, 17 had interactions with >3 other genera, and the analysis therefore focused on these. A plot of the rank occupancy of these 17 genera showed a change in community composition between forest types, due primarily to an increase in occupancy of Lophomyrmex and a decrease in occupancy of Diacamma, Pheidole and Odontoponera (S = 368, Rho = 0.55, P = 0.022; Fig. 3). Communities in both habitat types were dominated by Odontoponera and Pheidole, despite both genera showing small reductions in their occupancy.
Difference in competitive interactions among genera between forest types. We observed 615 interactions among 22 of the 54 genera identified. When standardising by the amount of time observed in each forest type, we saw approximately three times more interactions in primary than in logged forest (489 and 146, respectively), despite very little difference in total ant abundance between the two forest types (Primary: 1659 vs. Logged: 1622). One hundred and sixty-three interactions were competitive and 452 neutral. Interestingly, whilst a significantly higher proportion of interactions were neutral in primary forest (Competitive: 120, Neutral: 369; W = 1, P = 0.012), this was not seen to the same degree in logged forest (Competitive: 43, Neutral: 83; W = 18.5, P = 0.061), with the difference between forest types close to significance (GLM: Z_17 = −1.89, P = 0.059; Table S1).
The competitive network visually appeared simplified in logged relative to primary forest, with several pairwise interactions observed in primary forest not observed in logged forest (Fig. 4). The trend in the proportion of ant competitive interactions suggested a higher proportion in disturbed forest, but this was not significantly different between primary and logged forest (Mean: 0.296 vs. 0.343; GLMER: Z_147 = 1.354, P = 0.176; Fig. 5a; Table S1). Both the rate and the per capita number of competitive interactions occurring at bait cards were significantly higher in primary than in logged forest (rate: mean = 0.035 vs. 0.017 min⁻¹, LMER: t_147 = 3.775, P < 0.05; per capita: mean = 0.028 vs. 0.012 per individual, LMER: t_147 = 2.712, P < 0.05; Fig. 5b,c; Table S1).
Our null expectation was that genera shared between the primary and logged forest would show a consistent pattern of pair-wise dominance for any interactions observed. However, we found a number of genera that showed a decrease (Lophomyrmex, Myrmicaria and Polyrhachis) or an increase (Acanthomyrmex, Odontoponera, Pheidologeton) in aggression-score in the logged relative to primary forest (Figs 4 and 6; Fig. S3). Furthermore, change in rank occupancy did not appear to explain these observed differences, as we found no correlation between mean change in aggression-score and change in rank occupancy (S = 462, rho = −0.269, P = 0.374; Fig. 7). Smaller genera such as Pheidole and Nylanderia saw only minor changes in their mean aggression-scores (0.063 and 0.025, respectively; Figs 6 and 7; S4 and S5), and these genera remained high in rank occupancy in primary and logged forest (rank 2 remaining 2, and rank 4 remaining 4, respectively; Fig. 3). By contrast, the particularly large dominant genera Odontomachus and Oecophylla were absent from most leaf litter samples of logged forest, which matched average declines in the competitive score of the larger ant genera (>20 mm) between forest types (Mean score change: −0.323). However, many genera showed contrary patterns of competitive score and occupancy change, displaying idiosyncratic patterns (Fig. 7). For example, Odontoponera became considerably more dominant over Polyrhachis and Camponotus in logged than in primary forest (Mean: 0.189; Figs 6a and S3), despite having a lower occupancy (−0.183; Fig. 7). By contrast, Lophomyrmex became less dominant relative to Acanthomyrmex (−0.291 vs. 0.500; Figs 6 and S5), despite showing an increase in occupancy that was >2× greater than that of Acanthomyrmex (0.325 vs. −0.141; Fig. 7). Myrmicaria became more submissive in logged forest (−0.524; Figs 6 and S5), especially to Pheidole and Pheidologeton, yet saw almost no change in occupancy (+0.008; Fig. 7).
Discussion
Our findings can be split into two main components. First, we found a difference in ant community composition between forest types, with lower genus richness and community evenness in the logged compared to primary forest. This contributes to the growing evidence base that anthropogenically disturbed habitats may lead to reductions in invertebrate taxa 57,66,82,88-90 , and more broadly to reductions in biodiversity with alterations to animal communities 4,5,7 . Loss of invertebrate biodiversity is of concern considering the indirect impact on ecosystem processes 8,91 , particularly in the Indo-Malayan region where our study site is located, which has been predicted to lose up to 42% of its biodiversity by 2100 26 . Second, we provide novel insight into the relationship between variation in the shared ant community and differences in competitive interactions, which are considered to influence community structuring and, in turn, underpin key ecosystem functions 92 . Differences in the aggression-scores of genera shared between forest types meant that we could reject our null hypothesis; however, these differences did not appear to relate to occupancy change, as no positive correlation was found, meaning we must also reject our intuitive hypothesis. Instead, aggression-score change showed no correlation with occupancy change across genera, and some genera showed an inverse relationship. Together, this suggests a shuffling of apparent aggressive behaviour and could be considered a re-wiring of the competitive network after disturbance.
Community composition differences. The changes in community composition between the forest types in our study are consistent with previous studies showing lower ant diversity in logged forest 56,57 and genus-specific differences in tolerance to forest disturbance 38,54,93,94 . Differences in the tolerance of ground-dwelling ants to disturbance may be mediated by microclimate variation and may contribute to variation in community composition alongside competition 67,79,95-98 , especially since ground temperatures have been reported to be higher in the logged than in the primary forest areas we studied 99 . Body size determines the constraints under which insects such as ants can perform at varying temperatures, including measures of competitive ability 24 . For instance, large ants have been found to be less abundant in logged forest 8,100 , and ant groups have been shown to competitively exclude other groups of similar sizes 24 .
Community competitive interactions. The rate and the per capita number of competitive interactions were significantly lower in the logged compared to primary forest, indicating a net reduction in competitive intensity in logged forest. In contrast, a previous study in a temperate region showed that more open habitats increased rather than decreased competitive intensity 70 . A decrease in vegetation structure from logging 101 is likely to alter the niches that ants have evolved to exploit, and may therefore have effects on the performance of ants arriving at and exploiting the bait cards that are difficult to predict. Indeed, predicting the direction of effects on competition in diverse community networks, even at the genus level, is difficult 71 . As a result, the development of general rules to predict competition intensity over spatial and temporal scales under varied climatic and environmental conditions and stochastic processes remains a significant challenge.
Notably, several previous studies have observed and reported competitive interactions within ant communities 11 .
Changes in competitive interactions among genera. The change in aggression-scores across genera between the logged and primary forest showed little consistency in its relationship with rank occupancy, aligning more with an idiosyncratic pattern. Previous studies have suggested that compositional changes (abundance and occupancy) may alter the dominance of a genus or alter the position of species in an interactive network in ants 49 or in bacterial communities 103 . Whilst we stand by the view that competition for food is an important process structuring communities 41 , our results do suggest that a number of other interacting factors could be having a large effect, and these may even be other components of competitive behaviour, such as the discovery-dominance trade-off that considers arrival time to a food resource 73,104 .
The idiosyncratic relationship we observed could be due to a form of dual process, in which the cascading effects of community compositional change, through physical changes to the environment, allow a competitive advantage via an increase in overall competitive ability or via competitive release through changes in the competitive network, or vice versa 41,43,95 . Previous data showing idiosyncratic responses of plant and animal communities to disturbance support this 12,105 . For instance, in the logged forest, the reduction in occupancy of predatory and dominant genera like Odontomachus could have weakened any competitive exclusion they imposed on other genera in the community, allowing others such as Odontoponera to increase in aggressive activity and thus dominate a resource. Nevertheless, we may not have detected a distinct change in aggression-score as a result of this cascading process. The reduced occupancy of these genera in logged forest may be due to equal declines in their food resources (e.g. termites) because of changes in vegetation structure and climate 82 . Previous studies showing lower abundances of predatory genera in logged forest 8 , along with competitive release following the removal of dominant ant species, support the trends seen here 51,106,107 . Competitive release in logged forest may also explain why we see decreases in genus richness, given that dominant species have been shown to promote species richness 49 .
The change in environment between forest types may further induce cascading effects, with idiosyncratic changes in aggression between genera arising from competitive disadvantages 43,95 . In primary forest, Lophomyrmex shows aggressive dominance over Acanthomyrmex, but in logged forest the roles are reversed. This switch in dominance is most likely a result of the two genera competing for the same niche space in each forest type, as they are considered functionally equivalent 82 and, interestingly, are the same size 24 . However, as the environment changes, different traits may become advantageous and the competitive status is reversed. The decrease in the aggression-score of Lophomyrmex and Myrmicaria could also be explained by their increase in occupancy, allowing greater opportunity to compete and to be aggressively dominated in logged forest. Notably, the small score changes of genera such as Pheidole and Nylanderia may result from their subdominant position in the community's competitive hierarchy, allowing them to continue competing for patchy resources and to remain relatively high in occupancy, despite changes in the genera showing high aggression or despite disturbance effects 41,49,108 .
In conclusion, we provide new insight into how a competitive network can change as a result of disturbance, which helps us to better understand the mechanistic processes that reinforce the difference in 'end-point' community structure. Future studies could also examine how altered competitive processes impact ecosystem function, given the important functional role of ants in tropical forests in processes such as scavenging, seed dispersal, predation and soil turnover 32 .
Data availability. The data used in this study is available on Zenodo (https://doi.org/10.5281/zenodo.1198302) and can also be accessed through the S.A.F.E. project website (www.safeproject.net; Dataset ID: 1).
A Data-Driven Method for Locating Sensors and Selecting Alarm Thresholds to Identify Violations of Voltage Limits in Distribution Systems
Abstract—Stochastic fluctuations in power injections from distributed energy resources (DERs) combined with load variability can cause constraint violations (e.g., exceeded voltage limits) in electric distribution systems. To monitor grid operations, sensors are placed to measure important quantities such as the voltage magnitudes. In this paper, we consider a sensor placement problem which seeks to identify locations for installing sensors that can capture all possible violations of voltage magnitude limits. We formulate a bilevel optimization problem that minimizes the number of sensors and avoids false sensor alarms in the upper level while ensuring detection of any voltage violations in the lower level. This problem is challenging due to the nonlinearity of the power flow equations and the presence of binary variables. Accordingly, we employ recently developed conservative linear approximations of the power flow equations that overestimate or underestimate the voltage magnitudes. By replacing the nonlinear power flow equations with conservative linear approximations, we can ensure that the resulting sensor locations and thresholds are sufficient to identify any constraint violations. Additionally, we apply various problem reformulations to significantly improve computational tractability while simultaneously ensuring an appropriate placement of sensors. Lastly, we improve the quality of the results via an approximate gradient descent method that adjusts the sensor thresholds. We demonstrate the effectiveness of our proposed method for several test cases, including a system with multiple switching configurations.
I. INTRODUCTION
Distributed energy resources (DERs) are being rapidly deployed in distribution systems. Fluctuations in the power outputs of DERs and varying load demands can potentially cause violations of voltage limits, i.e., voltages outside the bounds imposed in the ANSI C84.1 standard. These violations can cause equipment malfunctions, failures of electrical components, and, in severe situations, power outages.
To mitigate the impacts of violations, distribution system operators (DSOs) must identify when power injection fluctuations lead to voltages exceeding their limits. To do so, sensors are placed within the distribution system to measure and communicate the voltage magnitudes at their locations. Due to the high cost of sensors and the structure of distribution systems, sensors are not placed at all buses. The question arises whether a voltage violation occurring at a location where a sensor is not placed can be detected.
Various studies have proposed sensor siting methods to capture constraint violations and outages in power systems. For instance, references [1] and [2] focus on a cost minimization problem that aims to capture all node (e.g., voltage magnitude) and line (e.g., power flow) outages. However, these references assume that a power source/generation is only located at the root node, which is not always the case, especially in distribution systems where DERs can be located further down a feeder. Additional research efforts such as [3]-[5] seek to locate the minimum number of sensors to achieve full observability for the system. Alternatively, instead of considering full observability for the entire system, [6] considers satisfying observability requirements given a probability of observability at each bus. Other research efforts, such as [7] and [8], focus on voltage control schemes that prevent voltage violations. These efforts take a control perspective rather than extensively considering how to best place sensors. There is also research on siting phasor measurement units, which are utilized as sensors in power grids [9].
In this paper, we consider a sensor placement problem which seeks to locate the minimum number of sensors and determine corresponding sensor alarm thresholds in order to reliably identify all possible violations of voltage magnitude limits in a distribution system. We formulate this sensor placement problem as a bilevel optimization with an upper level that minimizes the number of sensors and chooses sensor alarm thresholds and a lower level that computes the most extreme voltage magnitudes within given ranges of power injection variability. This problem additionally aims to reduce the number of false positive alarms, i.e., violations of the sensors' alarm thresholds that do not correspond to an actual voltage limit violation.
In contrast to previous work, this problem does not attempt to ensure full observability of the distribution system. Rather, we seek to locate (a potentially smaller number of) sensors that can nevertheless identify all voltage limit violations for any power injections within a specified range of power injection variability. With a small number of sensors, the proposed formulation also provides a simple means to design corrective actions if voltage violations are encountered in real-time operations. By restoring voltages at these few critical locations to within their alarm thresholds, the system operator can guarantee feasibility of the voltage limits for the full system. This guarantee is obtained by our sensor placement method purely by analyzing the geometric properties of the feasible set. We do not consider the design details of the feedback control protocol and thus dynamic properties of the sensors such as latency are not relevant in our approach.
Due to the nonlinear nature of the AC power flow equations, computing a globally optimal solution is challenging. We utilize conservative linear approximations of the power flow equations to convert the lower-level problem to a linear program [10]. This bilevel problem can be reformulated to a single-level problem using the Karush-Kuhn-Tucker (KKT) conditions with binary variables via a big-M formulation [11]-[13]. In this paper, we consider a duality-based approach, which has substantial computational advantages over traditional KKT-based approaches.
Note that conservativeness from the conservative linear approximations may increase the number of false positive alarms. We therefore propose an approximate gradient descent method as a post-processing step to further improve the quality of the results. This method iteratively adjusts the sensor thresholds while ensuring that all violations are still detected.
In summary, our main contributions are: (i) A bilevel optimization formulation for a sensor placement problem that minimizes the number of sensors needed to capture all possible violations of voltage limits while minimizing the number of false positive alarms. (ii) Reformulations that substantially improve the computational tractability of this bilevel problem. (iii) An approximate gradient descent method to improve solution quality. (iv) Numerical demonstration of our proposed problem formulations for a variety of test cases, including networks with multiple switching configurations. This paper is organized as follows. Section II formulates the sensor placement problem using bilevel optimization. Section III proposes different techniques to reformulate the optimization problem. Section IV provides our numerical tests. Section V concludes the paper and discusses future work.
II. SENSOR PLACEMENT PROBLEM
This section describes the sensor placement problem by introducing notation, presenting the bilevel programming formulation that is the focus of this paper, and detailing the objective function that simultaneously minimizes the number of sensors and reduces the number of false positive alarms.
A. Notation
Consider an n-bus power system. The sets of buses and lines are denoted as N = {1, . . . , n} and L, respectively. One bus in the system is specified as the slack bus where the voltage is 1∠0° per unit. For the sake of simplicity, the remaining buses are modeled as PQ buses with given values for their active (P) and reactive (Q) power injections. Extensions to consider PV buses, which have given values for the active power (P) and the voltage magnitude (V), are straightforward.
(The controlled voltage magnitudes at PV buses imply that voltage violations cannot occur at these buses so long as the voltage magnitude setpoints are within the voltage limits.) The set of all nonslack buses is denoted as N^PQ. The set of neighboring buses to bus i is defined as N_i := {k | (i, k) ∈ L}. Subscript (·)_i denotes a quantity at bus i, and subscript (·)_ik denotes a quantity associated with the line from bus i to bus k, unless otherwise stated. Conductance (susceptance) is denoted as G (B), the real (imaginary) part of the admittance.
To illustrate the main concepts in this paper, we consider a balanced single-phase equivalent network representation rather than introducing the additional notation and complexity needed to model an unbalanced three-phase network. Our work does not require assumptions regarding a radial network structure, and we are able to handle multiple network configurations (i.e., a network with a set of topologies), as discussed later in Section III-H. Extensions to consider other limits such as restrictions on line flows, alternative characterizations of power injection ranges such as budget uncertainty sets, and unbalanced three-phase network models impose limited additional complexity.
B. Bilevel optimization formulation
The main goal of this problem is to find sensor location(s) such that sensor(s) can capture all possible voltage violations. We formulate this problem as a bilevel optimization with the following upper-level and lower-level problems.
• Upper level: Determine sensor locations and alarm thresholds such that whenever the voltages at the sensors are within the chosen thresholds, the voltages at all other buses are within pre-specified safety limits.
• Lower level: Find the extreme achievable voltages at all buses given the sensor locations, sensor alarm thresholds, and the specified range of power injection variability.
The sensor locations and alarm thresholds output from the upper-level problem are input to the lower-level problem, and the extreme achievable voltage magnitudes output from the lower-level problem are used to evaluate the bounds in the upper-level problem. We first introduce notation for various quantities associated with the voltage at bus i: U̲_i (Ū_i) denotes the lower (upper) sensor alarm threshold, and L_i (U_i) denotes the lowest (highest) achievable voltage obtained from the lower-level problem. We formulate the bilevel optimization problem (1), where c is the cost function associated with sensor installation and s is a vector of sensor locations modeled as binary variables (1 if a sensor is placed, 0 otherwise). All bold quantities are vectors. The quantities L_i(s, U̲, Ū) and U_i(s, U̲, Ū) are the solutions to the lower-level problems (2), which are defined for each i ∈ N^PQ, where P_j and Q_j denote the active and reactive power injections at bus j within a particular lower-level problem, θ_jk := θ_j − θ_k denotes the voltage angle difference between buses j and k, and superscripts max and min denote upper and lower limits, respectively, on the corresponding quantity.
The quantities L_i and U_i are functions of s, U̲, and Ū as shown in (1d), but these dependencies are omitted later in this paper for the sake of notational brevity. For the upper-level problem, the objective function in (1a) minimizes a cost function c(s, U̲, Ū) associated with the sensor locations s and alarm thresholds U̲, Ū while ensuring that the extreme achievable voltage magnitudes calculated in the lower-level problem, L_i and U_i, are within safety limits, as shown in (1b). The cost function c(s, U̲, Ū) will be detailed in the following subsection. In the lower-level problem, the objective function (2a) computes the maximum or minimum voltage magnitude for each PQ bus i ∈ N^PQ. For each lower-level problem, constraints (2b)-(2c) are the power flow equations at each bus j, constraint (2e) enforces the voltage magnitudes to be within the voltage alarm thresholds (if a sensor is placed at the corresponding bus), and constraints (2f)-(2g) model the range of variability in the net power injections. Constraint (2d) sets the angle reference for the power flow equations.
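The display equations for (1) and (2) did not survive text extraction. A hedged reconstruction from the surrounding description follows; the overall structure matches the text, but the exact constraint forms, and in particular how the binary s_j enters (2e), are inferred rather than verbatim.

```latex
% Upper level (1): site sensors s and choose thresholds \underline{U}, \overline{U}
\min_{s \in \{0,1\}^{|\mathcal{N}^{PQ}|},\; \underline{U},\, \overline{U}}
  \; c(s, \underline{U}, \overline{U})
\quad \text{s.t.} \quad
V_i^{\min} \le L_i(s, \underline{U}, \overline{U}), \;\;
U_i(s, \underline{U}, \overline{U}) \le V_i^{\max}
\quad \forall i \in \mathcal{N}^{PQ}

% Lower level (2): extreme voltage at bus i over the injection range
L_i / U_i \;=\; \min / \max_{V,\,\theta,\,P,\,Q} \; V_i
\quad \text{s.t.} \quad
\begin{aligned}
& P_j = V_j \textstyle\sum_{k \in \mathcal{N}_j} V_k
    \left( G_{jk}\cos\theta_{jk} + B_{jk}\sin\theta_{jk} \right), \\
& Q_j = V_j \textstyle\sum_{k \in \mathcal{N}_j} V_k
    \left( G_{jk}\sin\theta_{jk} - B_{jk}\cos\theta_{jk} \right), \\
& \theta_{\mathrm{slack}} = 0, \qquad
  \underline{U}_j \le V_j \le \overline{U}_j \;\; \text{whenever } s_j = 1, \\
& P_j^{\min} \le P_j \le P_j^{\max}, \qquad
  Q_j^{\min} \le Q_j \le Q_j^{\max}.
\end{aligned}
```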
C. Cost function
Overly restrictive sensor thresholds can potentially trigger an alarm even when there are no voltage violations actually occurring in the system, thus resulting in a false positive. To reduce both the number of sensors and the number of false positive alarms due to unnecessarily restrictive alarm thresholds, our cost function c(s, U̲, Ū) is defined in (3) and (4), where δ is a specified cost of placing a sensor. When s_i = 1, the objective c(s, U̲, Ū) in (4) seeks to reduce the restrictiveness of the sensor alarm thresholds to have fewer false positives. Changing the value of δ in (4) models the tradeoff between placing an additional sensor and making the sensor range more restrictive. This is a crucial part of our formulation since our main goal is to identify a small number of critical locations that carry sufficient information about the feasibility of the entire network. Beyond the clear financial benefit of having to place fewer sensors, this also provides a simple and practical mechanism for deploying corrective actions in real time. Indeed, when the system operator encounters a voltage violation, a reactive power compensation protocol that brings the voltages at these few critical locations to within the alarm thresholds will guarantee feasibility of the voltage limits for the entire network.
III. REFORMULATIONS OF THE SENSOR PLACEMENT PROBLEM
The bilevel problem (1) is computationally challenging due to the non-convexity in the lower-level problem induced by the AC power flow equations in (2b)-(2c) and the presence of two levels. Moreover, as we will show numerically in Section IV, traditional methods for reformulating the bilevel problem into a single-level problem suitable for standard solvers using the Karush-Kuhn-Tucker (KKT) conditions turn out to yield computationally burdensome problems. In this section, we provide methods for obtaining a tractable version of the bilevel sensor placement problem. We first use recently proposed conservative linear approximations of the power flow equations to convert the lower-level problem to a more tractable linear programming formulation that nevertheless retains characteristics from the nonlinear AC power flow equations. We then use various additional reformulation techniques that yield significantly more tractable problems than standard KKT-based reformulations. These reformulations first yield a (single-level) mixed-integer bilinear programming formulation that can be solved using commercial mixed-integer programming solvers like Gurobi. Via further reformulations that discretize the sensor alarm thresholds, we transform the bilinear terms to obtain a mixed-integer linear program (MILP) that has further computational advantages and is suitable for a broader range of solvers.
A. Conservative Linear Power Flow Approximations
To address challenges from power flow nonlinearities, we use a linear approximation of the power flow equations that is adaptive (i.e., tailored to a specific system and a range of load variability) and conservative (i.e., over- or underestimates a quantity of interest to avoid constraint violations). These linear approximations are called conservative linear approximations (CLAs) and were first proposed in [10]. As a sample-based approach, the CLAs are computed using the solution to a constrained regression problem. They linearly relate the voltage magnitudes at a particular bus to the power injections at all PQ buses. An example of an overestimating CLA of the voltage magnitude at bus i is a linear expression (5) that satisfies an overestimation relationship for all power injections P and Q within a specified range, where superscript T denotes the transpose. We replace the AC power flow equations in (2b)-(2d) with a CLA as in (5) for all i ∈ N^PQ. The bilevel problem in (1) is then modified by replacing the lower-level problem in (2) with the linear program (6). In (6), superscripts i denote quantities associated with the i-th lower-level problem. Replacing the AC power flow equations with CLAs yields a linear programming formulation for the lower-level problem rather than the non-convex lower-level problem in (1). Comparing (6b)-(6c) with (2b)-(2e), we see that satisfaction of (6b)-(6c) is sufficient to ensure satisfaction of (1b), assuming that the conservative linear approximations do indeed reliably overestimate and underestimate the voltage magnitudes. Thus, the resulting optimization problems are sufficient to ensure that the sensor locations and alarm thresholds will identify all potential voltage limit violations. As a result, solving the reformulation (6) will compute sensor locations and thresholds that avoid false negatives, i.e., alarms will always be raised if there are indeed violations of the voltage limits. The computational advantages provided by linearity of the reformulated lower-level problem come with the potential tradeoff of additional false positives, i.e., spurious alarms. Various methods proposed in [10] for improving the accuracy of the CLAs naturally yield variants of (6) that reduce the number of false positives. Additionally, in Section III-G, we describe a method for post-processing the sensor alarm thresholds to further reduce the occurrence of false positives.
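As a concrete illustration, an overestimating CLA can be fit by a small linear program over power-flow samples. The sketch below uses synthetic data and a plain ℓ1 objective; the actual CLAs in [10] additionally use iterative sample selection and a quadratic output function of the voltage magnitude, which are omitted here, and all variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Hypothetical samples: power injections x_s = [P; Q] and the voltage
# magnitude v_s at the bus of interest from an AC power flow solution.
n_s, n_x = 200, 6
X = rng.uniform(0.5, 1.5, size=(n_s, n_x))
v = 1.0 - 0.02 * X.sum(axis=1) + 0.005 * rng.standard_normal(n_s)

# Overestimating CLA: choose (a0, a) to minimize the total overestimation
#   min  sum_s (a0 + a^T x_s - v_s)   s.t.   a0 + a^T x_s >= v_s  for all s,
# which is a plain LP because the constraints force nonnegative residuals.
c = np.concatenate([[n_s], X.sum(axis=0)])        # objective coefficients
A_ub = -np.hstack([np.ones((n_s, 1)), X])         # -(a0 + a^T x_s) <= -v_s
res = linprog(c, A_ub=A_ub, b_ub=-v,
              bounds=[(None, None)] * (n_x + 1))  # coefficients unbounded
a0, a = res.x[0], res.x[1:]
pred = a0 + X @ a
print(f"mean overestimation {np.mean(pred - v):.4f}, "
      f"min slack {np.min(pred - v):.2e}")  # >= 0 up to solver tolerance
```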
B. Reformulation using KKT constraints
With a linear lower-level problem, we can apply standard techniques for reformulating the bilevel problem (6) as a (single-level) mixed-integer linear program (MILP). These techniques replace the lower-level problem (6a)-(6d) with its KKT conditions, which are both necessary and sufficient for optimality of this problem [12], and also apply McCormick envelopes [14] to convert the bilinear product of the continuous and discrete variables in (1c) to an equivalent linear form. The resulting single-level problem still involves bilinear constraints associated with the complementarity conditions. These bilinear constraints are traditionally addressed using binary variables in a "Big-M" formulation. Commercial MILP solvers are applicable to this traditional reformulation, which we denote throughout the paper as the "KKT formulation". This formulation is obtained by defining the lower-level coupling quantities L_i and U_i using the KKT conditions (7), where ◦ denotes element-wise multiplication; e_i is the i-th column of the identity matrix; and λ, µ, γ := (γ_1, γ_2, . . . , γ_2m)^T, and π := (π_1, π_2, . . . , π_2m)^T are dual variables associated with the voltage and power injection bounds in the constraints (6b)-(6d). Note that the solution to the set of equations in L_i is completely decoupled from that in U_i. Equations (7b)-(7i) are the KKT conditions of the lower-level problem. Equation (7b) is the stationarity condition. The primal feasibility conditions in (7c)-(7e) are similar to the constraints (6b)-(6d) in the original problem with conservative linear approximations of the power flow equations. The complementary slackness conditions are (7f)-(7i), and the dual feasibility condition is (7j). Observe that the complementary slackness conditions give rise to nonlinear functions due to the multiplication of the dual variables λ, µ, γ, and π with the primal variables P and Q. To handle these nonlinearities, traditional methods for bilevel optimization replace these products using additional binary variables and a big-M reformulation. This requires bounds on the dual variables that are difficult to determine, and bad choices for these bounds can result in either infeasibility or poor computational performance [13].
C. Duality of the lower-level problem
As an alternative to the KKT formulation, one can reformulate a bilevel problem into a single-level problem by dualizing the lower-level problem. This technique can only be usefully applied to problems with specific structure where the optimal objective value of the lower-level problem is constrained in the upper-level problem in the appropriate sense (max ≤ · or min ≥ ·). In this special case, we can significantly improve tractability compared to the KKT formulation.
Let y̲^i be the vector of all dual variables associated with the lower-level problem for L_i and ȳ^i be the vector of all dual variables associated with the lower-level problem for U_i. Let I be the identity matrix of appropriate dimension. By dualizing the lower-level problem (6), we obtain the dual problems (8) and (9), where
A = [−I, I, a_{1,1}, . . . , a_{n,1}, −a_{1,1}, . . . , −a_{n,1}],
b = [(−P^max)^T, (−Q^max)^T, (P^min)^T, (Q^min)^T, Ū_1 − a_{1,0}, . . . , Ū_n − a_{n,0}, −U̲_1 + a_{1,0}, . . . , −U̲_n + a_{n,0}]^T.
Due to strong duality of the linear lower-level problem, the dual (8a) (and (9a)) has the same objective value as (6a) and does not directly provide any advantages. However, the problem has a specific structure where the objectives from each lower-level problem (8a) and (9a) only appear in a single inequality constraint (1b). Hence, we only need to show that there exists some choice of duals y̲^i and ȳ^i for which (1b) is feasible. This allows us to obtain a single-level formulation by defining the lower-level coupling quantities via the constraints (10) and (11). We refer to the formulation using (10) and (11) as the "bilinear formulation" due to the bilinear products of the sensor threshold variables (U̲ and Ū) and the dual variables y̲^i and ȳ^i in (10a) and (11a). Similar to the KKT formulation, using (10) and (11) leads to a single-level optimization problem. However, the latter has the major advantage that no additional binary variables are required (beyond those associated with the sensor locations in the upper-level problem) since there are no equations analogous to the complementarity conditions. Our numerical results in Section IV show that modern mixed-integer programming packages like Gurobi can solve the bilinear formulation for much larger systems than are possible with the standard KKT formulation. However, if required, the bilinear constraints (10a) and (11a) can be further simplified with one additional reformulation that is discussed next.
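The extracted text omits the constraints (10)-(11) themselves; their structure can be sketched from LP duality as follows, assuming the CLA-based lower-level problem for U_i has the form max_x a_{i,0} + a_{i,1}^T x subject to A x ≤ b (the paper's exact indexing is not preserved):

```latex
\exists\, \bar{y}^{\,i} \ge 0 : \qquad
A^{T} \bar{y}^{\,i} = a_{i,1}, \qquad
a_{i,0} + b^{T} \bar{y}^{\,i} \le V_i^{\max}.
```

By weak duality, any such ȳ^i certifies U_i ≤ a_{i,0} + b^T ȳ^i ≤ V_i^max, so feasibility of these constraints is enough to enforce (1b); since b depends affinely on the thresholds U̲ and Ū, the term b^T ȳ^i produces the bilinear products mentioned above. The constraints for L_i are analogous with the inequality directions reversed.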
D. Bilinear to mixed-integer linear programming
The bilinear formulation can be further converted into an MILP by discretizing the continuous-valued sensor thresholds. This formulation has the advantage of being within the scope of a larger range of mixed-integer programming solvers, since not all of them can handle bilinear forms. We partition the sensor threshold ranges into d discrete steps of size ε and define the vectors of threshold variables U̲ and Ū as in (12)-(13). Note that this discretization exploits the fact that any sensor threshold will necessarily be above the lower voltage limit V_i^min and below the upper voltage limit V_i^max. Equations (13a)-(13c) imply that when η_{0,i} = 0, no sensor is placed (i.e., s_i = 0). Using this discretization, the constraints (10a) and (11a) now contain bilinear products of binary variables. These products can be equivalently transformed into a mixed-integer linear (as opposed to bilinear) programming formulation using McCormick envelopes [14]. With McCormick envelopes and discrete sensor thresholds, the problems (10) and (11) become an MILP that can be solved using any MILP solver. We refer to the reformulation of the lower-level problems (10) and (11) using the discretization (12) as the "MILP formulation".
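For reference, a standard exact linearization of a product of two binary variables, consistent with the McCormick construction cited above though not quoted verbatim from the paper, replaces z = η₁η₂ with four linear constraints:

```latex
z \le \eta_1, \qquad z \le \eta_2, \qquad z \ge \eta_1 + \eta_2 - 1, \qquad z \ge 0,
\qquad \eta_1, \eta_2 \in \{0, 1\}.
```

Checking the four cases of (η₁, η₂) confirms that these constraints force z = η₁η₂ exactly, so no relaxation gap is introduced.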
To further improve tractability, we can remove unnecessary binary variables by inspecting data from the samples of power injections used to compute the conservative linear approximations of the power flow equations. Let b be a bus where the voltage magnitude never reaches the highest discretized sensor threshold value (i.e., V^min + (d − 1)ε) in any of the sampled power injections. Given a sufficiently comprehensive sampling of the range of possible power injections, we can then simplify the discretized representation of the sensor alarm threshold at this bus accordingly. A similar simplification can be used for the upper sensor thresholds. This pre-screening thus eliminates binary variables associated with sensor thresholds at buses that will never violate their voltage limits. We henceforth call this data-driven simplification technique "binary variable removal" (BVR).
E. A sensor placement heuristic for benchmarking
For comparison with our proposed sensor placement approach, we describe a heuristic alternative that exploits the traditional behavior of distribution systems where voltage magnitudes typically drop as one moves down a feeder away from the substation. This behavior suggests that the ends of each feeder may be good locations for sensors, since the voltage magnitude limits are likely to be violated there first.
This heuristic technique avoids the computational burden from computing the conservative linear power flow approximations and solving some reformulation of the bilevel optimization problem. However, it can fail to eliminate false negatives and produce sub-optimal results in terms of the number of required sensors. This is especially true in systems with (i) DERs towards the ends of some branches, (ii) multiple operating topologies, and (iii) a large number of branches. Illustrative examples of systems with such characteristics are provided in Section IV-D.
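As an illustration, this heuristic amounts to placing sensors at the leaf buses of the network graph; the sketch below uses a hypothetical 10-bus feeder, and the bus numbering and substation index are invented for the example.

```python
import networkx as nx

# Hypothetical 10-bus feeder with one lateral branching off at bus 3.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 5), (5, 6),
                  (3, 7), (7, 8), (8, 9), (9, 10)])
substation = 1  # slack bus; excluded as a candidate location

# Feeder-end heuristic: place sensors at every degree-1 bus other than the
# substation, where voltage drop is traditionally largest.
sensors = sorted(n for n, deg in G.degree() if deg == 1 and n != substation)
print("heuristic sensor buses:", sensors)  # -> [6, 10]
```

On a real feeder, the edge list would come from the network model, and PV buses would additionally be excluded as candidates since violations cannot occur there.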
F. Comparisons of each formulation
The previous subsections present several problem reformulations that convert the bilevel sensor placement problem (1) into various single-level problems that can be solved with mixed-integer solvers like Gurobi. Each reformulation has different computational characteristics and yields solutions with differing accuracy. We next compare the KKT formulation (7) described in Section III-B with the duality-based bilinear formulation (10) and (11) described in Section III-C according to the numbers of decision variables and constraints.
Both formulations involve bilinear terms but the bilinear formulation is more compact. Consider a system with b PQ buses, of which there are r buses where the voltage magnitudes may violate their limits after the pre-screening described in Section III-D. The total number of decision variables in the KKT formulation is 5·b·r +2·b (2·b·r from power injections, 3·b·r from dual variables, b from the voltage thresholds, and b from the sensor locations). Our proposed duality-based bilinear formulation involves only 3 · b · r + 2 · b decision variables. The reduction happens because the variables corresponding to power injections are entirely removed by duality.
Regarding the number of constraints, the duality-based bilinear formulation does not directly involve the stationarity conditions, primal feasibility constraints, or power injections, resulting in a reduction of 2 · b · r + b · r + 2 · b · r = 5 · b · r constraints. The implications of these differences for tractability are assessed via the solution times presented in Section IV.
G. Approximate Gradient Descent
Solving any of the reformulated bilevel optimization problems may lead to false positives, i.e., alarms that occur in the absence of a voltage limit violation. This is due both to the limited number of sensors and to the conservative nature of the linear power flow approximations used in the lower-level problem. These false positives are undesirable because they may lead system operators to take unnecessary corrective actions that could reduce the efficiency of system operations or put avoidable wear on system components.
To reduce the number of false positives, this section proposes a post-processing step that iteratively adjusts the sensor thresholds that result from the reformulated bilevel optimization problems. We refer to this post-processing step as the Approximate Gradient Descent (AGD) method. This method uses the voltage magnitudes associated with a large number of sampled power injections within the considered range of power injection variability. These samples can be the same as those used to compute the conservative linear approximations of the power flow equations.
In this section, let superscript k denote the k-th iteration of the AGD method. Let ε_AGD be a step size for adjusting the sensor thresholds and f^k be the vector of the numbers of false positives from the sampled power injections using the sensor thresholds in the k-th iteration. Using the sampled power injections, this method computes an "approximate gradient" indicating how small changes to the sensor alarm thresholds affect the number of false positives. The approximate gradient at iteration k is denoted as g^k. We denote the set of buses with sensors as N_s. Subscripts denote the bus number and superscripts denote the iteration number.
Let Δf_i^k represent the change in the number of false positives among the sampled power injections using the sensor thresholds in the k-th iteration when sensor alarm threshold i is changed by ε_AGD (leaving all other sensor thresholds unchanged). We then compute an approximate gradient g^k by comparing the values of Δf_i^k across the different buses i. The approximate gradient g^k thus points in the direction of the steepest reduction in the (empirically determined) number of false positives. Each iteration of the AGD method updates the sensor thresholds by a step of size ε_AGD in the direction of g^k. The AGD method stops when taking an additional step would result in the appearance of false negatives, i.e., undetected violations of voltage limits. The AGD method for the upper voltage limits is applied separately from that for the lower voltage limits. Fig. 1 shows the overall process for computing the sensor locations and thresholds, starting from computing the conservative linear approximations of the power flow equations (a pre-processing step), solving a reformulation of the bilevel problem, and adjusting thresholds via the approximate gradient descent method (a post-processing step).
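A minimal sketch of this post-processing loop for the upper thresholds is given below, assuming callables that replay the stored power-injection samples; the function and argument names are illustrative, and the lower-threshold version is analogous with the perturbation direction flipped.

```python
import numpy as np

def agd_upper(thr, count_fp, count_fn, step=2e-4, max_iter=1000):
    """Approximate gradient descent on the upper sensor thresholds.

    thr: initial upper alarm thresholds at the sensor buses (1-D array).
    count_fp / count_fn: callables returning the number of false positives /
    false negatives over the sampled power injections for a given threshold
    vector. Names and interfaces are illustrative, not the paper's routine.
    """
    thr = np.asarray(thr, dtype=float).copy()
    for _ in range(max_iter):
        base = count_fp(thr)
        delta = np.zeros_like(thr)
        for i in range(thr.size):            # empirical sensitivity per sensor
            trial = thr.copy()
            trial[i] += step                 # relax threshold i by one step
            delta[i] = count_fp(trial) - base
        norm = np.linalg.norm(delta)
        if norm == 0:                        # no direction reduces false alarms
            break
        g = -delta / norm                    # steepest false-positive reduction
        candidate = thr + step * g
        if count_fn(candidate) > 0:          # never allow a missed violation
            break
        thr = candidate
    return thr
```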
H. A system with multiple possible network configurations
Sensors are placed once and operate for extended periods of time during which network conditions may change. In addition to the power injection variability considered in the bilevel problem (2f)-(2g), the network may be reconfigured using switches, resulting in multiple topologies. Network reconfiguration can significantly affect which locations in the network are best suited for sensor placement.
To address this, our sensor placement formulation can be extended to consider a set of possible network topologies. In this context, the goal of the sensor placement problem is to determine locations for sensors and sensor alarm thresholds that will reliably identify voltage limit violations for any topology within the considered set. The sensor locations must be selected once and remain consistent across all topologies, but the sensor alarm thresholds may vary between different topologies. With m different topologies for an n-bus system, the computational complexity is similar to solving a sensor placement problem for an (n × m)-bus system with one configuration. For each topology, we compute different conservative linear approximations of the power flow equations and introduce additional lower-level problems (i.e., for a specific bus, each configuration introduces different constraints (10a)-(10c) and (11a)-(11c)). The sensor locations have one set of binary variables since they remain the same across topologies.
IV. NUMERICAL TESTS
In this section, we perform numerical experiments on a number of test cases to analyze the sensor locations and thresholds, demonstrate the advantages of our problem reformulations and the post-processing step, and compare results and computational efficiency from different problem formulations.
The test cases we use in these experiments are the 10-bus system case10ba, the 33-bus system case33bw, and the 141-bus system case141 from MATPOWER [15]. For the CLAs, we minimize the ℓ1 error with 1000 samples in the first iteration and 4000 additional samples in a sample selection step, and we choose a quadratic output function of the voltage magnitude. (See [10] for a discussion of computationally efficient iterative methods for computing CLAs and variants of CLAs that approximate different quantities in order to improve their accuracy.) All power injections vary within 50% to 150% of the load demand values given in the MATPOWER files, except for case33bw, where we consider a variant with solar panels installed at buses 18 and 33. The loads at these two buses vary within −200% to 150% of the given values. We implement the single-level reformulations of the sensor placement problem in MATLAB using YALMIP [16] and use Gurobi as a solver with a MIP gap tolerance of 0.005, unless otherwise stated. The AGD step size is ε_AGD = 2 × 10⁻⁴ per unit. The value of δ in the objective (4) is 0.02. In case10ba, case33bw, and case141, the lower voltage limits are 0.90, 0.91, and 0.92 per unit, respectively, and the computation times for the CLAs are 58, 198, and 1415 seconds, respectively.
A. Sensor locations
We compare the quality of results and the computation time from the following reformulations: (i) the KKT formulation (7), (ii) the duality-based bilinear formulation (10) and (11), (iii) the MILP formulation without applying the binary variable removal (BVR) pre-processing, and (iv) the MILP formulation with the BVR pre-processing.
The first test case is the 10-bus system case10ba, which is a simple single-branch distribution network. We consider a variant where the loads are 60% of the values in the MATPOWER file. The results from each formulation place a sensor at the end of the branch (the furthest bus from the substation) with an alarm threshold of 0.9 per unit (at the voltage limit). Fig. 2a compares computation times from the four formulations. The KKT formulation takes 26.7 seconds, while the bilinear, the MILP without BVR, and the MILP formulations take 1.96, 1.95, and 1.54 seconds, respectively. Since the sensor threshold for the KKT and both MILP formulations is at the voltage limit, AGD is not needed. On the other hand, the bilinear formulation gives a higher alarm threshold. As a result, the AGD method is applied as a post-processing step to achieve the lowest possible threshold without introducing false alarms. The share of samples producing false positives is reduced from 5.48% to 0%. Executing the AGD method takes 0.11 seconds.
The second test case is the 33-bus system case33bw, which has multiple branches. Table I shows the computation times for the bilinear and the two MILP formulations. We exclude the computation time for the KKT formulation since the solver fails to find even a feasible (but potentially suboptimal) point within 55000 seconds (15 hours). Our final test case is the 141-bus system case141. Similar to the 33-bus system, the solver could not find the optimal solution for the KKT formulation within a time limit of 15 hours. Table I again shows the results for this test case, and Figs. 2b and 2c compare the computation times for the bilinear and MILP formulations. Table I shows both the computation times and the results of randomly drawing sampled power injections within the specified range of variability, computing the associated voltages by solving the power flow equations, and counting the number of false positive alarms (i.e., cases where the voltage at a bus with a sensor is outside the sensor's threshold but there are no voltage violations in the system). The results for the 33-bus and 141-bus test cases given in Table I illustrate the performance of the proposed reformulations. Whereas the KKT formulation is computationally intractable, our proposed reformulations find solutions within approximately one minute, with the MILP formulation with the BVR method typically exhibiting the fastest performance. The solutions to the reformulated problems place a small number of sensors (two to four sensors in systems with an order of magnitude or more buses). No solutions suffer from false negatives, since all samples where there is a voltage violation trigger an alarm. There are a number of false alarms prior to applying the AGD method; after its application, these decrease dramatically to a small fraction of the total number of samples (3.02% and 0.01% in the 33-bus and 141-bus systems, respectively). These observations suggest that our sensor placement formulations provide a computationally efficient method for identifying a small number of sensor locations and associated alarm thresholds that reliably identify voltage constraint violations with no false negatives (missed alarms) and few false positives (spurious alarms).
B. MIP gap tolerance, solver times, and solution quality
Each of the problem formulations involves a mix of continuous and binary variables, thus requiring solution from mixedinteger programming solvers like Gurobi. These solvers are based on branch-and-bound algorithms that iteratively update upper and lower bounds on the optimal solution, terminating when the difference between these bounds (i.e., the "MIP Gap") converges to a value less than a specified tolerance. The choice of MIP gap tolerance can directly affect the computation time and the quality of the solution (e.g., sensor thresholds and number of sensors).
We investigate the effect of two different choices for the MIP gap tolerance, 0.3% and 0.1%, on the bilinear and MILP formulations for the test cases case33bw and case141. Tightening the tolerance can change the computed placement (as in the MILP formulation with case33bw). However, the results are not uniformly improved by tightening the MIP gap tolerance, as the 0.1% tolerance leads to the same number of sensors and more false positives than the 0.3% tolerance after applying the AGD method. This suggests a potential benefit of assessing the performance of multiple "nearly optimal" solutions obtained with different MIP gap tolerances.
C. Multiple configurations
The previous results described the sensor placements for the case10ba, case33bw, and case141 systems in their nominal network topologies. We next demonstrate the effectiveness of our problem reformulations for variants of these systems with multiple network configurations. We consider a variant of the case33bw system with three different network configurations and two solar PV generators installed at buses 18 and 33. The first configuration is the nominal topology given in the MATPOWER version of the test case. In the second configuration, the line connecting buses 6 and 7 is removed and a new line connecting buses 4 and 18 is added. The third configuration removes the line connecting buses 6 and 26 and adds a new line connecting buses 25 and 33. All network configurations are displayed in Fig. 3. Table IV shows the results from using the bilinear and MILP formulations to solve the multiple-configuration problem for this case. The results generally mirror those from the single-network-configuration test cases shown earlier in that computation times are still reasonable (approximately a factor of four larger) and there are no false negatives and a small number of false positives after applying the AGD method.
We note that some configurations may not need to utilize all available sensors. To show this, we describe an experiment that considers each configuration separately, possibly leading to different sensor locations for each configuration. In this experiment, we compare the number and locations of the sensors with those in the previous experiment. As Table V shows, configurations 1 and 2 require only two sensors, while configuration 3 requires only one sensor, as opposed to the three-sensor solution obtained from the multiple-configuration problem. This demonstrates the need to jointly consider the network topologies in one problem for such situations.
D. Comparison to a heuristic sensor placement technique
We next demonstrate the performance of the heuristic sensor placement technique described in Section III-E in the context of multiple network configurations for the three-configuration variant of the test case case33bw. We study two versions of this heuristic: (i) place sensors at the ends of all branches based on configuration 1, and (ii) place sensors at the ends of branches considering all configurations. The results are shown in Table VI. For the first version of this heuristic, we place four sensors. This technique works well in configuration 1; however, it introduces false negatives (failures to alarm in cases with voltage limit violations) in approximately 10% to 20% of the power injection samples for configurations 2 and 3, since the sensors are then located in the middle of some branches and thus do not capture all possible violations. To reduce the number of false negatives, we next consider sensor locations based on all configurations, i.e., the second version of the heuristic. This results in six sensors being placed and only one occurrence of a false negative. Compared to our approach (refer to Table IV), we only need three sensors (as opposed to six) to detect all violations. This shows the value of using an optimization formulation to obtain sparse sensor placement solutions, since a naive approach that places sensors at all buses would always avoid false negatives, but at the cost of many more sensors.
V. CONCLUSION
This paper has formulated a bilevel optimization problem that seeks to minimize the number of sensors needed to detect violations of voltage magnitude limits in an electric distribution system. We first addressed the power flow nonlinearities in the lower-level problem via previously developed conservative linear approximations of the power flow equations. Due to the computational challenges arising from the bilevel structure, we exploited structure specific to this problem to obtain single-level mixed-integer programming formulations that avoid introducing unnecessary additional discrete variables. We also demonstrated how to obtain a mixed-integer linear programming formulation by discretizing the sensor thresholds. Furthermore, we developed extensions to these reformulations that consider the possibility of multiple network topologies. Our proposed sensor placement reformulations require substantially less computation time than standard reformulation techniques. We also developed a post-processing technique that reduces the number of false alarms via an approximate gradient descent method. The combination of the bilevel problem reformulation and this post-processing technique allows us to compute sensor locations and alarm thresholds that result in few false alarms and no missed alarms, as validated numerically using out-of-sample testing.
In our future work, we seek to identify where the violations occur using the information obtained from CLAs and solutions from the sensor placement problem. Furthermore, we intend to use the sensor locations and thresholds resulting from the proposed formulations to design corrective control actions which ensure that all voltages remain within safety limits.
Determination of organically bound fluorine sum parameters in river water samples—comparison of combustion ion chromatography (CIC) and high resolution-continuum source-graphite furnace molecular absorption spectrometry (HR-CS-GFMAS)
In this study, we compare combustion ion chromatography (CIC) and high resolution-continuum source-graphite furnace molecular absorption spectrometry (HR-CS-GFMAS) with respect to their applicability for determining organically bound fluorine sum parameters. Extractable (EOF) and adsorbable (AOF) organically bound fluorine as well as total fluorine (TF) were measured in samples from river Spree in Berlin, Germany, to reveal the advantages and disadvantages of the two techniques used as well as the two established fluorine sum parameters AOF and EOF. TF concentrations determined via HR-CS-GFMAS and CIC were comparable between 148 and 270 μg/L. On average, AOF concentrations were higher than EOF concentrations, with AOF making up 0.14-0.81% of TF (determined using CIC) and EOF 0.04-0.28% of TF (determined using HR-CS-GFMAS). The results obtained by the two independent methods were in good agreement. It turned out that HR-CS-GFMAS is a more sensitive and precise method for fluorine analysis compared to CIC. EOF and AOF are comparable tools in risk evaluation for the emerging pollutants per- and polyfluorinated alkyl substances; however, EOF is much faster to conduct.
Introduction
Substitution of hydrogen with fluorine in organic molecules affects their chemical and physical properties, e.g., increased chemical and thermal stability and/or oleophobic as well as hydrophobic properties [1,2]. These molecules are described according to Buck et al. as per- and polyfluorinated alkyl substances (PFASs) [3]. PFASs are organic compounds in which all the hydrogen atoms on at least one carbon atom are replaced by fluorine [3]. Highly fluorinated organic substances are used as surfactants in technical applications, e.g., as water and grease protection agents in carpets [4], clothing [5], and food packaging [6], and as fire-extinguishing agents [7]. During production, use, and disposal of these industrial products, PFASs enter the environment. Because of their extreme persistence, PFASs accumulate in the abiotic environment, e.g., ground [8] and surface water [9], soil [10], and air [11], as well as in the biotic environment [12,13]. Overall, there are many human exposure pathways, so that PFASs can be detected in human serum [14,15] as well as in human breast milk [16]. The concerning observation that PFASs can be found ubiquitously, even in the arctic environment [17], and that there are potential negative effects on the environment and human health [18] led to first regulatory limitations. Since 2009, perfluorooctanesulfonic acid (PFOS) and, since 2019, perfluorooctanoic acid (PFOA) have been listed in annexes of the Stockholm Convention on Persistent Organic Pollutants [19]. For these reasons, the production and use of PFOS and PFOA must be reduced or avoided, respectively [19]. Since then, PFOS and PFOA have been substituted with shorter- (≤ 6 perfluorinated carbons [20]) and longer-chained (≥ 7 perfluorinated carbons [20]) PFASs, which are not necessarily less persistent or less hazardous [21]. The huge number of different PFASs (> 4700 [22]) and this replacement lead to new analytical challenges. Because of their extreme persistence and vast anthropogenic emission, PFASs are emerging pollutants.
In this context, the environmental compartment water is particularly interesting. Because PFASs are removed only ineffectively by conventional waste water treatment plant (WWTP) processes [23], they accumulate in the aquatic environment and lead to contamination of ground and drinking water, thus entering the food chain (e.g., plants [24], animals [25,26], and humans [27]).
Target analysis of PFASs in aqueous samples by liquid chromatography coupled with mass spectrometry (LC-MS) and gas chromatography coupled with mass spectrometry (GC-MS) covers only a small proportion (7-53 PFASs for LC-MS [28][29][30], and 10-13 PFASs for GC-MS [31,32]) of the over 4700 different PFASs and vastly underestimates the quality and quantity of total organically bound fluorine (OF) [33]. This results in a huge gap in the PFAS mass balance, with an unknown amount of potentially toxic and persistent PFASs. Consequently, a sum parameter method for organically bound fluorine is indispensable to cover the unknown proportion of PFASs.
PFASs in aqueous samples can be extracted using a sorbent (adsorbable organically bound fluorine, AOF) or alternatively using an organic solvent (extractable organically bound fluorine, EOF) [34]. Usually, activated carbon (AC) is used as the sorbent for AOF determination. Hence, AOF represents all PFASs present in water samples that are adsorbable on AC. Which PFASs are summed up in the EOF depends on the solid-phase material used during the extraction. Often, a weak anion exchanger (WAX) is used. Using WAX solid-phase extraction (SPE), the EOF covers only neutral and anionic PFASs. The use of hydrophilic-lipophilic balance (HLB) materials has also been described in the literature, resulting in a wider range of extracted PFASs [9].
The sample preparation for the determination of AOF is carried out according to the standardized adsorbable organically bound halogen (AOX) method ISO 9562 (adsorption on AC and argentometric determination) [35]. Based on ISO 9562, Wagner et al. developed a combustion ion chromatography (CIC) method that is applicable to AOF determination as well. After adsorption, the AC is combusted and fluorine is converted into hydrogen fluoride (HF), which is then absorbed in a trapping solution. Subsequently, the analysis of fluoride is carried out using ion chromatography (IC) [36]. A complementary target analysis by Willach et al. indicated that the AOF of some highly contaminated aqueous samples can be explained only to < 5% by LC-MS approaches, which underlines the importance of an organically bound fluorine sum parameter [33].
The determination of EOF in aqueous samples was first described by Miyake et al. in 2007. For sample preparation, they used an SPE method comprising a WAX phase. The eluate was measured via CIC in accordance with the aforementioned AOF CIC approach. With a different analytical approach utilizing high resolution-continuum source-graphite furnace molecular absorption spectrometry (HR-CS-GFMAS), Metzger et al. developed a method for the determination of EOF using in situ formation of GaF in the graphite furnace for detection. GaF is the most sensitive diatomic molecule for fluorine analysis using HR-CS-GFMAS in surface water analysis [37]. In contrast to the previously developed method by Miyake et al. using WAX as SPE material, Metzger et al. used an HLB material for SPE.
The overall aim of this study is the comparison of fluorine analysis using either CIC or HR-CS-GFMAS. Additionally, a mass balance and sum parameter analysis of OF is applied, which is schematically described in Fig. 1. This approach involves the determination of TF, followed by the complementary adsorption (AOF) and extraction (EOF) of organic fluorine. By comparing the concentration and composition of EOF/AOF, it can be estimated which sum parameter reflects OF better and which sum parameter is therefore advantageous in risk evaluation and understanding of the environmental prevalence of the emerging pollutants PFASs. Coherently, the accurate and sensitive determination of these sum parameters using either CIC or HR-CS-GFMAS plays an equally important role for risk evaluation. Revealing the most advantageous sum parameter for organically bound fluorine together with the most sensitive analytical method is therefore the aim of this study.
Sampling
Water samples from the river Spree were taken on 4 June 2020 at ten spots along its way through Berlin, Germany. Coordinates of the sampling locations were tracked using a GPSMAP® 64SX (Garmin Ltd., Olathe, USA) and are listed in Table 1. A map of the sampling locations is shown in Fig. 2. Each sample was collected at 20-30 cm depth under the water surface and 1.5-2.0 m distance to the riverbank using a pre-leached sample bottle (LDPE high performance bottles, VWR, Darmstadt, Germany) mounted on a telescope pole. Sample bottles were conditioned with river water before filling. At each spot, 6 samples of 500 mL were collected. Water temperature, conductivity, pH value, and O 2 concentration were measured in a separate vessel using a Multi 3430 Set G (Wissenschaftlich-Technische Werkstätten, Weilheim, Germany). Measured environmental parameters for each location are summarized in Table 1. Water samples and two blanks (deionized water) were filtered on the day of sample collection using nitro cellulose membrane filters with a pore size of 0.45 μm (LABSOLUTE®, Th. Geyer GmbH & Co. KG, Renningen, Germany) and stored in a refrigerator at 4°C in the dark to reduce the potential growth of microorganisms.
Total fluorine analysis
To determine the amount of total fluorine (TF), 1 mL of each filtrated sample was directly analyzed in triplicate by means of a contrAA 800 HR-CS-GFMAS system (Analytik Jena AG, Jena, Germany) and the software ASpect CS 2.2.2.0 (Analytik Jena AG, Jena, Germany). HR-CS-GFMAS B measurements were performed following a protocol of Metzger et al. [9]. Zirconium-coated graphite furnaces with PIN platforms were prepared and conditioned as described previously [9]. Absorption of GaF formed in situ in the graphite furnace was measured at a wavelength of 211.248 nm. Injection of the sample as well as modifiers was conducted as follows: 2 μL deionized water, 16 μL sample, 9 μL 1 g/L gallium solution, 3 μL 10 g/L sodium acetate solution, 3 μL modifier mix (consisting of 0.1% (v/v) of palladium, 0.05% (v/v) of magnesium matrix modifier, and 20 mg/L zirconium standard), and 2 μL deionized water. For quantification, an external calibration with aqueous fluoride standards at concentrations of 0, 40, 80, 120, 160, 180, 200, 220, and 250 μg/L was used. To prevent enrichment of the analytes through evaporation of solvents, each sample vessel was covered with Parafilm® M purchased from Th. Geyer (Th. Geyer GmbH & Co. KG, Renningen, Germany). Samples were measured in instrumental triplicates. Additionally, TF analysis was carried out using CIC B , consisting of a combustion system (AQF-2100H, Mitsubishi Chemical, Tokyo, Japan) and an IC (ICS Integrion, Thermo Fisher Scientific GmbH, Dreieich, Germany) controlled by the software Chromeleon 7.2.10 (Thermo Fisher Scientific GmbH, Dreieich, Germany). The combustion unit consisted of an autosampler (ASC-210) connected to the induction furnace unit (AQF-2100H) maintained at a constant 1050°C and operated by the NSX 2100 software (instrumental setup summarized in Table 2 and Table 3). Before use, all ceramic boats were thermolytically cleaned for at least 15 min at 1050°C to avoid organic contamination. A liquid sample (0.5 mL) was loaded on a ceramic boat with a pipet (Transferpette, Brand GmbH + CO KG, Wertheim, Germany) and investigated via CIC B . Hydropyrolysis during combustion was enabled by a constant flow of dry O 2 (300 mL/min) and water-supplied argon (150 mL/min). Combustion gases were absorbed in 5 mL of a freshly prepared 0.1 mM NH 3 absorption solution within the gas absorption unit (GA-210). Ion chromatography was performed using a Dionex IonPac AG22 (2 × 50 mm) as guard column and a Dionex IonPac AS22 (2 × 250 mm) as analytical column (column temperature 30°C), operated with an eluent consisting of 4.5 mM Na 2 CO 3 and 1.4 mM NaHCO 3 at a flow rate of 0.3 mL/min. Fluoride ions were detected by a conductivity detector using 250 mM H 3 PO 4 as suppressor regenerant. For calculation of detected peak areas and fluoride concentrations, the chromatography data system Chromeleon 7.2.10 was used.

Solid-phase extraction of extractable organically bound fluorine

SPE was carried out following the optimized protocol of Metzger et al. [9] and was done in triplicate for each sample and in duplicate for deionized water as blanks. For this, HLB-SPE cartridges (OASIS®, Waters, Eschborn, Germany) and a vacuum chamber (HyperSep™, Thermo Fisher Scientific GmbH, Schwerte, Germany) were used. The SPE cartridges were rinsed with 3.0 mL methanol and twice with 3.0 mL of an acidified aqueous solution (deionized water acidified with HNO 3 to pH 2). The valves were closed, and the solid phases were covered with 2.5 mL of acidified aqueous solution (pH 2).
Before the extraction step, the pH value of each filtrated sample was adjusted to pH 2 using HNO 3 . 250 mL of each sample was vacuumed through the cartridges; the solid phase was rinsed two times with 3.0 mL of the acidified aqueous solution (pH 2) and vacuum dried for 30 min. The extracted compounds were eluted by means of 1 mL methanol. Eluates were then evaporated to dryness in a vacuum spin evaporator system (RVC 2-25 CDplus, Christ Martin Gefriertrocknungsanlagen GmbH, Osterode am Harz, Germany) and stored until further analysis in a refrigerator at 4°C in the dark. Before the measurement using HR-CS-GFMAS B as described above (see Total fluorine analysis), samples were re-dissolved in 1 mL of methanol/water (1:1; v/v). For EOF calibration, a mixture of methanol/water (1:1; v/v) was used as solvent, resulting in concentrations of 0, 5, 10, 20, 40, 60, 80, 100, 120, 160, 200, and 250 μg/L fluoride. Combustion ion chromatography analysis was conducted at the Federal Institute for Materials Research and Testing (CIC B ). For this, 0.2 mL aliquots of the re-dissolved EOF samples were loaded on quartz wool (0.2 g)-packed ceramic boats and carefully evaporated prior to combustion, and the combustion gases were absorbed in 4 mL of a freshly prepared 0.1 mM NH 3 absorption solution within the gas absorption unit (GA-210). The same calibration curve was used as for TF determination.
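The back-calculation implied by this protocol can be sketched as follows (Python; the eluate and blank readings are hypothetical, while the 250-fold nominal enrichment follows from the 250 mL sample and 1 mL eluate volumes stated above):

```python
# Minimal sketch of the EOF back-calculation from an eluate measurement.
V_SAMPLE_ML = 250.0   # sample volume passed through the SPE cartridge
V_ELUATE_ML = 1.0     # final re-dissolved eluate volume
enrichment = V_SAMPLE_ML / V_ELUATE_ML  # nominal factor of 250

c_eluate_ug_L = 95.0  # hypothetical fluorine conc. measured in the eluate
c_blank_ug_L = 20.0   # hypothetical methodic blank (deionized water workflow)

eof_ug_L = (c_eluate_ug_L - c_blank_ug_L) / enrichment
print(f"EOF = {eof_ug_L:.3f} ug/L")  # -> 0.300 ug/L, within the reported range
```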
Adsorption and combustion of organic bound fluorine
Determination of the AOF in river water samples was divided into three steps, following a modified protocol of ISO 9562:2004-09: (i) adsorption of the organic fluorine on AC columns, (ii) combustion of the AC and absorption of released hydrogen fluoride in a trapping solution, and (iii) quantitative measurement of fluoride in the trapping solution using both IC and HR-CS-GFMAS.
(i) For the enrichment step, the pH value of each filtrated river water sample and each methodic blank, consisting of deionized water, was adjusted to pH 2 using HNO 3 .
Samples were prepared as triplicates. Aliquots of 100 mL were automatically vacuum pumped through triplex quartz containers (Analytik Jena AG, Jena, Germany) packed with two layers of 55-60 mg AC each (50-150 μm, Analytik Jena AG, Jena, Germany) and one cotton pellet (4.0 mm, Orbis Dental Handelsgesellschaft mbH, Münster, Germany). The adsorption columns were washed with 25 mL of an aqueous sodium nitrate solution (c(NaNO 3 ) = 0.01 mol/L) to remove ionic fluoride. (ii) For combustion, the AC was transferred quantitatively into ceramic sample boats and hydropyrolyzed in a combustion device (AQF-2100H, A1 Enviroscience GmbH, Düsseldorf, Germany) at 1000°C (CIC KO ). During the combustion process, a carrier gas flow of 200 mL/min (Ar) and an oxygen gas flow of 400 mL/min were applied. Additionally, an argon stream-supported water supply (100 mL/min) was used according to Wagner et al. [36]. CIC KO combustion parameters are summarized in Table 4. During combustion, the adsorbed organically bound fluorine compounds were converted into HF, which was trapped in 10 mL of an aqueous phosphate solution (5 mg/L). (iii) The trapping solution was split for analysis via IC and HR-CS-GFMAS B .
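Analogously, a minimal sketch of the AOF back-calculation implied by steps (i)-(iii) (Python; the trap and blank readings are hypothetical, while the 100 mL aliquot and 10 mL trapping volume are those stated above):

```python
# Minimal sketch of the AOF back-calculation from a trapping-solution
# measurement: fluorine from a 100 mL aliquot ends up in 10 mL of trap,
# i.e., a nominal 10-fold pre-concentration.
V_ALIQUOT_ML = 100.0
V_TRAP_ML = 10.0

c_trap_ug_L = 15.0   # hypothetical fluoride conc. measured in the trap
c_blank_ug_L = 5.0   # hypothetical methodic blank

aof_ug_L = (c_trap_ug_L - c_blank_ug_L) * V_TRAP_ML / V_ALIQUOT_ML
print(f"AOF = {aof_ug_L:.2f} ug/L")  # -> 1.00 ug/L
```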
Steps (i) and (ii) of the AOF analysis were conducted at the BfG in Koblenz, Germany. One set of trapping solutions out of the sample triplicates was shipped to Berlin by overnight express in cooled boxes and stored immediately in a refrigerator at 4°C in the dark, to be measured by means of HR-CS-GFMAS B as described above (see Total fluorine analysis). HR-CS-GFMAS B calibration solutions for AOF measurements additionally contained 5 mg/L phosphate, prepared by dilution of an ortho-phosphate standard solution to match the matrix of the trapping solution, resulting in concentrations of 0, 1, 2, 5, 10, 15, 20, 30, 40, 60, 80, and 100 μg/L fluoride.
For IC analysis of the AOF trapping solutions at the BfG, an 881 compact IC pro system (Metrohm GmbH & Co. KG) was used.
Limit of detection/limit of quantification
Instrumental limits of detection (LOD) and limits of quantification (LOQ) were determined for fluorine analysis using HR-CS-GFMAS B and CIC. The calculation was conducted according to DIN 32645 [38]. For this, 10 blank measurements (deionized water) and a same-day calibration were used. Subsequently, the blank standard deviation (SD) was calculated, divided by the slope of the calibration curve, and multiplied by 3, resulting in the instrumental LOD value. A factor of 10 was used for the determination of the instrumental LOQ.
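This calculation can be sketched as follows (Python; the blank readings and calibration points are illustrative values, not the study's data):

```python
import numpy as np

# Minimal sketch of the DIN 32645 blank-value method:
# LOD = 3 * SD(blanks) / slope, LOQ = 10 * SD(blanks) / slope.
blank_signal = np.array([0.0102, 0.0095, 0.0110, 0.0098, 0.0101,
                         0.0107, 0.0093, 0.0104, 0.0099, 0.0106])  # 10 blanks

# Same-day external calibration: fluoride concentration (ug/L) vs. signal.
conc = np.array([0.0, 40.0, 80.0, 120.0, 160.0, 200.0, 250.0])
signal = np.array([0.010, 0.118, 0.231, 0.342, 0.448, 0.561, 0.702])

slope, _ = np.polyfit(conc, signal, 1)
sd_blank = blank_signal.std(ddof=1)

lod = 3 * sd_blank / slope
loq = 10 * sd_blank / slope
print(f"LOD = {lod:.2f} ug/L, LOQ = {loq:.2f} ug/L")
```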
Data analysis
All data plots were created using Origin® 2020 software.
Determination of LOD and LOQ
Results for LOD and LOQ are shown in Table 6. HR-CS-GFMAS B LOQ was the lowest with 2.7 μg/L, while CIC LOQs were around 10 μg/L. TF, EOF, and AOF concentrations mostly exceeded the LOQ of all instruments. Only one EOF sample (sample location 8) out of a triplicate was below CIC B LOQ and ten AOF samples (sample locations 1, 2, 3, 4, 6, and 10) out of the triplicates were below the CIC KO LOQ. Thus, according to the obtained LOD and LOQ, all instrumental methods are suitable for TF and EOF determination. For the analysis of AOF via CIC, a higher concentration factor should be chosen for quantitative measurements.
Total fluorine analysis
TF was determined using HR-CS-GFMAS B and CIC B , and the results are shown in Fig. 4. Samples were measured in technical and methodical triplicates. For HR-CS-GFMAS B , mean concentrations varied around ~190 μg/L, with a maximum concentration of 213.5 μg/L at sampling location 8 and a minimum concentration of 169.8 μg/L at location 3. The relative SD of three independent samples was between 4.2 and 10.6%. For CIC B , mean concentrations varied around ~210 μg/L, with a maximum concentration of 269.8 μg/L at sampling location 3 and a minimum concentration of 147.6 μg/L at location 9. The relative SD was between 0.9 and 8.3%. Similar concentration ranges were published for fluoride by Berliner Wasserbetriebe along the river Spree in Berlin [39]. On average, CIC B concentrations for TF were ~20 μg/L higher than those obtained using HR-CS-GFMAS B . Additionally, the relative SD was lower for CIC B TF determination compared with HR-CS-GFMAS B . As mentioned above, the TF concentration mostly depends on the inorganic fluoride concentration [40]. Recently published data for the rivers Rhine and Moselle indicated maximum concentrations of around ~130 μg/L and ~180 μg/L fluoride, respectively [41]. Hence, the concentrations of ~200 μg/L found in the Spree were of the same order of magnitude.
Concentrations at sample locations 5, 6, 8, and 10 were in good agreement between the instrumental methods, while for sample locations 1, 2, 3, and 4, CIC B tended to provide higher TF concentrations in comparison to HR-CS-GFMAS B . For samples 7 and 9, CIC B tended to provide lower TF concentrations in comparison to HR-CS-GFMAS B . Overall, the TF concentrations from the two instrumental methods were of the same order of magnitude. The higher TF concentrations provided by CIC B at sample locations 1, 2, 3, and 4 could be due to carry-over effects or cross contamination during the measurements. A higher uncertainty arises from the instrumental methodology of CIC B while trapping HF, which dilutes the samples by a factor of 20 during the trapping process. Variations of the trapping solution volume could also lead to deviations, especially for sample locations 7 and 9, for which lower concentrations were detected by CIC B in comparison to HR-CS-GFMAS B . While a comparison of IC and HR-CS-GFMAS for fluoride was published before, with results in better agreement than in this study [41], combustion-coupled IC might entail a higher uncertainty than fluoride determination using IC alone. The intended purpose of using CIC rather than IC in this study was to provide a better comparison of total fluorine, because IC analysis provides only the detection of inorganic fluorine species, while CIC provides results for both inorganic and organic fluorine species, summarized as TF.
AOF analysis
During the first two steps of AOF determination (see Adsorption and combustion of organic bound fluorine, Materials and methods section), analytes were adsorbed onto AC and, during combustion, converted and absorbed as HF in a trapping solution. For IC KO measurements, trapping solutions were directly analyzed. The trapping solution of one of the methodic triplicates was shipped to Berlin and analyzed by means of HR-CS-GFMAS B as described above.
EOF analysis
Organic fluorine was extracted using HLB-SPE and methanol as eluent in methodic triplicate. The resulting extracts were aliquoted and analyzed by means of HR-CS-GFMAS B and CIC B . Results are shown in Fig. 6. In order to assure the same quality level, EOF values were corrected for methodic blank values according to von Abercron et al. [34]. EOF concentrations determined using HR-CS-GFMAS B varied between 0.05 and 0.55 μg/L, while CIC B EOF concentrations were lower, ranging up to 0.22 μg/L. EOF concentrations between CIC B and HR-CS-GFMAS B were in best agreement at sample locations 1, 3, and 4, with mean differences < 0.05 μg/L. The highest differences were observed at sample locations 8, 9, and 10, with mean differences > 0.3 μg/L. SDs of CIC B EOF triplicates were relatively high compared to HR-CS-GFMAS B , as shown in Fig. 6. While CIC B EOF values were relatively consistent, HR-CS-GFMAS B analysis revealed potential EOF hot spots at sample locations 2, 8, 9, and 10.
Although Miyake et al. used WAX-SPE and CIC, the EOF concentrations published here, determined using HLB-SPE and HR-CS-GFMAS B (about 0.05-0.55 μg/L), were in good agreement with their EOF values (0.093 μg/L in unpolluted water and 0.562 μg/L at a contaminated site) [40].
EOF values published by Metzger et al. determined using the same SPE method and HR-CS-GFMAS were also in a similar range (0.05-0.30 μg/L for river water samples) [9].
While EOF concentrations determined using HR-CS-GFMAS B and CIC B showed the same trend for sample locations 1 and 3-7, EOF concentrations determined using HR-CS-GFMAS B were higher at all sampling locations. The largest differences between EOF concentrations determined using HR-CS-GFMAS B and CIC B , observed at sampling locations 2, 8, 9, and 10, could be related to a potential loss of volatile HF during CIC B measurements, for example because of a leaky device or incomplete combustion of the samples. Furthermore, the variation of the trapping solution volume could be an important factor to consider, which might lead to higher variations of EOF concentrations determined using CIC B . The trend of the EOF concentrations determined using HR-CS-GFMAS B could not be accurately recovered using CIC B , probably because the concentrations were near the LOQ of CIC B , which entails a higher uncertainty. Accordingly, HR-CS-GFMAS B provided a lower LOQ and was more sensitive, especially for EOF analysis. Overall, EOF concentrations determined using either HR-CS-GFMAS B or CIC B were of the same order of magnitude, and the EOF concentrations presented in this study were also of the same order of magnitude as EOF concentrations described in the literature.
Comparison of AOF determined via CIC and EOF determined via HR-CS-GFMAS

As shown above, the two independent analytical methods (CIC and HR-CS-GFMAS) provide comparable results of the same order of magnitude for each sum parameter (TF, AOF, as well as EOF). Methodical triplicates of EOF and AOF are compared in the following discussion as well as in Fig. 7.
On average, the quotient between AOF determined via CIC KO and EOF determined via HR-CS-GFMAS B at each sampling location was a factor of about 4, resulting in a slope of about 0.25 in the scatter plot (see Fig. 8). SDs of AOF values were notably higher compared to the SDs of EOF values (average SD values were 0.54 μg/L for AOF and 0.02 μg/L for EOF). A similar trend between AOF and EOF along the sampling locations was observed. The mean values are in best agreement at sampling locations 3 and 7. The highest differences of the mean sum parameter values were observed at sampling locations 2 and 10, which could be due to the different selectivity of AOF and EOF, resulting in varying compositions of the measured samples for AOF and EOF. HR-CS-GFMAS B provided noticeably less variation, while the EOF concentration ranges were similar to the CIC KO AOF concentration ranges. In summary, EOF analysis using HR-CS-GFMAS is less time consuming, more sensitive, and more precise, and is prospectively possibly more relevant than AOF analysis conducted by CIC.
By plotting the EOF against the AOF, the scatter plot in Fig. 8 was obtained. The slope of 0.284 (± 0.143) expresses that AOF values are on average systematically higher than EOF values. All values but two (sample locations 2 and 9) were inside the 95% confidence interval. Overall, results were in good agreement and similar trends between determined sum parameters and instrumental approaches could be observed (see Fig. 7).
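A minimal sketch of this regression analysis (Python; the per-location mean values are made up for illustration, whereas the study reports a slope of 0.284 ± 0.143):

```python
import numpy as np
from scipy import stats

# Hypothetical per-location mean values (ug/L) for AOF (x) and EOF (y).
aof = np.array([0.8, 1.2, 0.6, 0.9, 1.4, 1.1, 0.7, 1.9, 1.6, 2.1])
eof = np.array([0.20, 0.45, 0.15, 0.25, 0.35, 0.30, 0.18, 0.52, 0.30, 0.55])

res = stats.linregress(aof, eof)
# Half-width of the 95% confidence interval for the slope (n - 2 dof).
t_crit = stats.t.ppf(0.975, len(aof) - 2)
ci_half = t_crit * res.stderr
print(f"slope = {res.slope:.3f} +/- {ci_half:.3f} (95% CI), "
      f"R^2 = {res.rvalue**2:.2f}")
```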
Mass balance
Proportionally, AOF determined by CIC KO made up 0.11-0.51% of mean TF (determined using CIC B ) along the river Spree in Berlin. EOF determined by HR-CS-GFMAS B made up 0.04-0.28% of mean TF (determined using HR-CS-GFMAS B ) along the river Spree in Berlin. In this study, it could be shown that TF mainly depends on the inorganic fluoride content. In this context, the results are consistent with the previously published data by Miyake et al. [40]. Despite its small proportion, the OF is crucial due to the extreme environmental persistence and bioaccumulation of PFASs as well as their potential severe negative health effects.
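For illustration, these percentages follow from simple ratios of the sum parameters to TF (Python, with hypothetical concentrations for a single sampling location):

```python
# Mass-balance fractions for one hypothetical sampling location (ug/L).
tf_cic, aof_cic = 210.0, 0.8    # TF and AOF, both determined via CIC
tf_mas, eof_mas = 190.0, 0.3    # TF and EOF, both via HR-CS-GFMAS

print(f"AOF/TF = {100 * aof_cic / tf_cic:.2f} %")  # -> 0.38 %
print(f"EOF/TF = {100 * eof_mas / tf_mas:.2f} %")  # -> 0.16 %
```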
HR-CS-GFMAS vs. CIC
HR-CS-GFMAS and CIC are both powerful techniques for (organically bound) fluorine trace analysis. HR-CS-GFMAS analysis is faster, more sensitive, and more precise compared to CIC and IC, respectively, for fluorine analysis in the low microgram per liter range. When using combustion-coupled IC, the injection of a sample aliquot (~200-500 μL) into the sample boat and the trapping of HF after combustion in a volume of ~10 mL result in high dilution factors (~1:20-1:50), which is disadvantageous for the detection of low concentrations. Furthermore, the volume of the trapping solution varies, which results in different dilution factors during triplicate measurements of a sample. Additionally, potential loss of volatile HF, possibly because of a leaky device or incomplete combustion, leads to an underestimation of the fluorine concentration, which can be reduced by means of a basic trapping solution. Consequently, the direct analysis of samples via HR-CS-GFMAS is preferable for EOF determination. Since EOF and AOF account for < 1% of TF, the sensitivity and precision of the analytical setup in the lower microgram per liter concentration range is the more relevant criterion for risk evaluation. Therefore, the outcome of our comparison study is that HR-CS-GFMAS is beneficial compared to CIC for determining OF.
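The dilution factors mentioned above follow directly from the aliquot and trapping volumes (a quick check in Python, using the volumes stated in the text):

```python
# Dilution incurred by trapping HF from a small aliquot in ~10 mL of solution.
v_trap_mL = 10.0
for v_aliquot_mL in (0.5, 0.2):
    print(f"{v_aliquot_mL} mL aliquot -> dilution ~1:{v_trap_mL / v_aliquot_mL:.0f}")
# -> 1:20 and 1:50
```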
AOF vs. EOF
Because of the higher AOF values compared to the EOF values, the AOF seems to represent a higher proportion of the OF, suggesting that even low concentrations of OF are better captured. On the other hand, the determined EOF values scattered less and blank value correction had a negligible effect, making the EOF the more precise parameter. The overall higher concentrations of the AOF samples could be due to contamination during the adsorption of analytes on AC, the washing off of inorganic fluoride, or the subsequent combustion of the AC. Since the determined EOF values are systematically lower than the determined AOF values, the OF extraction could be incomplete using HLB-phase SPE. Even though HLB-SPE is more effective than WAX-SPE, further optimization is needed for a more accurate determination of OF. With further optimization, EOF might become the superior sum parameter; currently, however, EOF and AOF are equally important in risk evaluation.
The study presented here is the first comparative study of HR-CS-GFMAS versus CIC as well as of AOF versus EOF.
Compliance with ethical standards
Conflicts of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
A Vietoris-Smale mapping theorem for the homotopy of hyperdefinable sets
Results of Smale (1957) and Dugundji (1969) allow one to compare the homotopy groups of two topological spaces $X$ and $Y$ whenever a map $f:X\to Y$ with strong connectivity conditions on the fibers is given. We apply similar techniques in o-minimal expansions of fields to compare the o-minimal homotopy of a definable set $X$ with the homotopy of some of its bounded hyperdefinable quotients $X/E$. Under suitable assumptions, we show that $\pi_{n}(X)^{\rm def}\cong\pi_{n}(X/E)$ and $\dim(X)=\dim_{\mathbb R}(X/E)$. As a special case, given a definably compact group, we obtain a new proof of Pillay's group conjecture "$\dim(G)=\dim_{\mathbb R}(G/G^{00})$", largely independent of the group structure of $G$. We also obtain different proofs of various comparison results between classical and o-minimal homotopy.
Introduction
Let M be a sufficiently saturated o-minimal expansion of a field. We follow the usual convention in model theory [TZ12] to work in a sufficiently saturated structure, so we assume that M is κ-saturated and κ-strongly homogeneous for κ a sufficiently big uncountable cardinal (this can always be achieved by going to an elementary extension). A set X ⊆ M k is definable if it is first-order definable with parameters from M , and it is type-definable if it is the intersection of a small family of definable sets, where "small" means "of cardinality < κ". The dual notion of $\bigvee$-definable set is obtained by considering unions instead of intersections. The hypothesis that M has field operations ensures that every definable set can be triangulated [vdD98].
We recall that, given a definable group G, there is a normal type-definable subgroup G 00 , called infinitesimal subgroup, such that G/G 00 , with the logic topology [Pil04], is a real Lie group [BOPP05]. If in addition G is definably compact [PS99], we have dim(G) = dim R (G/G 00 ) [HPP08], namely the o-minimal dimension of G equals the dimension of G/G 00 as a real Lie group. These results were conjectured in [Pil04] and are still known as Pillay's conjectures.
It was later proved that if G is definably compact, then G is compactly dominated by G/G 00 [HP11]. This means that for every definable subset D of G, the intersection p(D) ∩ p(D ∁ ) has Haar measure zero (hence in particular it has empty interior) where p : G → G/G 00 is the projection and D ∁ is the complement of D. Special cases were proved in [BO04] and [PP07].
The above results establish strong connections between definable groups and real Lie groups. The proofs are complex and based on a reduction to the abelian and semisimple cases, with the abelian case depending in turn on the study of the fundamental group and on the counting of torsion points [EO04]. A series of results by P. Simon [Sim15,Sim14,Sim13] provides, however, a new proof of compact domination which does not rely on Pillay's conjectures or the results of [EO04]. More precisely, [Sim14] shows that fsg groups in o-minimal theories admit a smooth left-invariant measure, and [Sim15] contains a proof of compact domination for definable groups admitting a smooth measure (even in a NIP context). The fact that definably compact groups in o-minimal structures are fsg is proved in [HPP08, Thm. 8.1].
Our main theorem sheds new light on the connections between compact domination and Pillay's conjectures, and concerns the topology of certain hyperdefinable sets X/E, where E is a bounded type-definable equivalence relation on a definable set X. Under a suitable contractibility assumption on the fibers of p : X → X/E (12.1), we obtain a homotopy comparison result between X and X/E, and in particular an isomorphism of homotopy groups $\pi_n(X)^{\rm def} \cong \pi_n(X/E)$ in the respective categories. Similar results apply locally, namely replacing X/E with an open subset U ⊆ X/E and X with its preimage $p^{-1}(U) \subseteq X$, thus obtaining $\pi_n(p^{-1}(U))^{\rm def} \cong \pi_n(U)$.
For the full result see Theorem 11.8 and Theorem 12.2.
From these local results and a form of "topological compact domination" (13.2) we shall deduce that dim(X) = dim R (X/E), namely the dimension of X in the definable category equals the dimension of X/E in the topological category (Theorem 13.3). This yields a new proof of "dim(G) = dim R (G/G 00 )" for compactly dominated groups which does not depend on the counting of torsion points (in fact, it does not depend on the group structure at all!). Some comparison results between classical and o-minimal homotopy established in [DK85,BO02,BO09] also follow (see Corollary 12.3). In particular, if X = X(M ) ⊆ M k is a closed and bounded ∅-semialgebraic set and st : X → X(R) is the standard part map, we can take E = ker(st) and deduce $\pi_n(X)^{\rm def} \cong \pi_n(X(R))$.
This work can be considered as a continuation of the line of research initiated in [BM11]: while in that paper we focused on the fundamental group, here we manage to encompass the higher homotopy groups and, more generally, homotopy classes [X, Y] of maps f : X → Y in the relevant categories.
We have tried to make this paper as self-contained as possible. The proofs of the homotopy results are somewhat long but elementary and all the relevant notions are recalled as needed. The paper is organized as follows.
In Section 2 we recall the notions of definable space and definable manifold, the main example being a definable group G.
In Section 3 we introduce the logic topology on the quotient X/E of a definable set X by a bounded type-definable equivalence E.
In Section 4 we recall the notion of "normal triangulation" due to Baro [Bar10], and we show how to produce normal triangulations satisfying some additional properties.
In sections 5 and 6 we illustrate some of the analogies between the standard part map and the map G → G/G 00 , where G is a definably compact group and G/G 00 has the logic topology.
These analogies are further developed in Section 7, where we discuss various versions of "compact domination".
In Sections 8, 9 we work in the category of classical topological spaces and we establish a few results for which we could not find a suitable reference. In particular, in Section 8 we show that given an open subset U of a triangulable space, any open covering of U has a refinement which is a good cover.
In Section 10 we recall the definition of definable homotopy. Sections 11,12 and 13 contain the main results of the paper, labeled Theorem A (11.8), Theorem B (12.2), and Theorem C (13.3), respectively, as the titles of the corresponding sections.
In Theorem A we prove that there is a homomorphism π def n (X) → π n (X/E) from the definable homotopy groups of X and the homotopy groups of X/E, under a suitable assumption on E. We actually obtain a more general result of which this is a special case.
In Theorem B we strengthen the assumptions to obtain an isomorphism: π def n (X) ∼ = π n (X/E). Since the standard part map can put in the form p : X → X/E for a suitable E, some known comparison results between classical and o-minimal homotopy will follow.
Finally, in Theorem C we add the assumption of "topological compact domination" to obtain dim(X) = dim R (X/E) and we deduce dim(G) = dim R (G/G 00 ) and some related results.
Acknowledgement. Some of the results of this paper were presented at the 7th meeting of the Lancashire Yorkshire Model Theory Seminar, held on December the 5th 2015 in Preston. A.B. wants to thank the organizers of the meeting and acknowledge support from the Leverhulme Trust (VP2-2013-055) during his visit to the UK. The results were also presented at the Thematic Program On Model Theory, International Conference, June 20-24, 2016, University of Notre Dame.
Definable spaces
A fundamental result in [Pil88] establishes that every definable group G in M has a unique group topology, called the t-topology, making it into a definable manifold. This means that G has a finite cover U 1 , . . . , U m by t-open sets and for each i ≤ m there is a definable homeomorphism $g_i : U_i \to X_i$, where $X_i$ is an open subset of some cartesian power M k with the topology induced by the order of M . The collection (g i : U i → X i ) i≤m is called an atlas and g i is called a local chart.
Definable manifolds are special cases of definable spaces [vdD98]. The notion of definable spaces is defined through local charts g i : U i → U ′ i , like definable manifolds, with the difference that now U ′ i is an arbitrary definable subset of M k , not necessarily open. In particular every definable subset X of M k , with the topology induced by the order, is a definable space (with the trivial atlas consisting of a single local chart), but not necessarily a definable manifold.
We collect in this section a few results on definable spaces which shall be needed in the sequel. They depend on the saturation assumptions on M . The results are easy and well known to the experts but the proofs are somewhat dispersed in the literature.
Lemma 2.1. Let X be a definable space and let $(A_i : i \in I)$ be a small family of definable open subsets of X. Then $\bigcap_{i\in I} A_i$ is open in X.

Proof. Let $x \in \bigcap_{i\in I} A_i$ and fix a definable fundamental family $(B_\varepsilon : \varepsilon > 0)$ of neighbourhoods of x decreasing with ε (for example take $B_\varepsilon$ to be the points of X at distance < ε from x in a local chart).
Lemma 2.2. Let $(X_i : i \in I)$ be a small downward directed family of definable subsets of the definable space X. Then $\overline{\bigcap_{i\in I} X_i} = \bigcap_{i\in I} \overline{X_i}$.

Proof. The inclusion "⊆" is trivial. For the "⊇" direction let $x \in \bigcap_{i\in I} \overline{X_i}$ and suppose for a contradiction that $x \notin \overline{\bigcap_{i\in I} X_i}$.

Lemma 2.3. Let (X i : i ∈ I) be a small downward directed family of definable subsets of the definable space X. Suppose that H := $\bigcap_{i\in I} X_i$ is clopen. Then for every i ∈ I there is j ∈ I such that X j ⊆ int(X i ).
Proof. Fix i ∈ I.
Since H is open, H ⊆ int(X i ). Using the fact that H is also closed, we have $H = \overline{H} = \overline{\bigcap_{i\in I} X_i} = \bigcap_{i\in I} \overline{X_i}$ (by Lemma 2.2). The latter intersection is included in int(X i ), hence by saturation there is j ∈ I such that $\overline{X_j} \subseteq \mathrm{int}(X_i)$, and in particular X j ⊆ int(X i ).
Logic topology
Let X be a definable set and consider a type-definable equivalence relation E ⊆ X × X of bounded index (namely of index < κ) and put on X/E the logic topology: a subset O ⊆ X/E is open if and only if its preimage in X is $\bigvee$-definable, or equivalently C ⊆ X/E is closed if and only if its preimage in X is type-definable. This makes X/E into a compact Hausdorff space [Pil04]. We collect here a few basic results, including some results from [Pil04,BOPP05], which shall be needed later.
Proposition 3.1. Let C ⊆ X be a type-definable set. Then p(C) is closed in X/E, where p : X → X/E is the quotient map.

Proof. By definition of the logic topology, we need to show that $p^{-1}(p(C))$ is type-definable. By definition, x belongs to $p^{-1}(p(C))$ if and only if ∃y ∈ C : xEy. Since E and C are type-definable, the condition "y ∈ C and xEy" is equivalent to a possibly infinite conjunction $\bigwedge_{i\in I}\varphi_i(x, y)$ of formulas over some small index set I, and we can assume that every finite conjunction of the formulas $\varphi_i$ is implied by a single $\varphi_i$. By saturation it follows that we can exchange $\exists$ and $\bigwedge_i$, hence $p^{-1}(p(C)) = \{x : \bigwedge_{i\in I} \exists y\, \varphi_i(x, y)\}$, a type-definable set.
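In symbols, the exchange of the quantifier and the conjunction in the proof above reads:

$$p^{-1}(p(C)) \;=\; \Big\{x : \exists y \textstyle\bigwedge_{i\in I}\varphi_i(x,y)\Big\} \;=\; \Big\{x : \textstyle\bigwedge_{i\in I}\exists y\,\varphi_i(x,y)\Big\},$$

where the conjunction $\bigwedge_{i\in I}\varphi_i(x,y)$ expresses "$y \in C$ and $xEy$". The inclusion "⊆" is immediate; for "⊇", if x satisfies each $\exists y\,\varphi_i(x,y)$, then the partial type $\{\varphi_i(x,y) : i \in I\}$ in the variable y is finitely satisfiable (every finite conjunction is implied by a single $\varphi_i$), hence realized by saturation. The right-hand side is visibly type-definable.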
Proof. This is an immediate consequence of the fact that if a type-definable set is contained in a $\bigvee$-definable set, then there is a definable set between them.

Proposition 3.3. Let y ∈ X/E and let D ⊆ X be a definable set containing $p^{-1}(y)$. Then y is in the interior of p(D). Moreover, there is an open neighbourhood U of y such that $p^{-1}(U) \subseteq D$.
Proof. By Proposition 3.1, Z = p(X \ D) is a closed set in X/E which does not contain y. Hence the complement O of Z is an open neighbourhood of y contained in p(D). We have thus proved the first part. For the second part note that, since X/E is compact Hausdorff, it is in particular a normal topological space. We can thus find a small fundamental system of open neighbourhoods $U_i$ of y whose closures $\overline{U_i}$ also form a fundamental system of neighbourhoods of y. Then $p^{-1}(y) = \bigcap_i p^{-1}(\overline{U_i}) \subseteq D$, with each $p^{-1}(\overline{U_i})$ type-definable, so by saturation $p^{-1}(\overline{U_i}) \subseteq D$ for some i, and we can take $U = U_i$.

For our last set of propositions we assume that X is a definable space, possibly with a topology different from the one inherited from its inclusion in M k .

Proposition 3.5. Assume p : X → X/E is continuous and let C be a definable subset of X. Then $\overline{p(C)} = p(\overline{C})$.
Proof. It suffices to observe that $p(\overline{C}) \subseteq \overline{p(C)} \subseteq p(\overline{C})$, where the first inclusion holds because p is continuous and the second by Proposition 3.1.
Triangulation theorems
The triangulation theorem [vdD98] is a powerful tool in the study of o-minimal structures expanding a field. In this section we review some of the relevant results and we prove a specific variation of the normal triangulation theorem of [Bar10] for simplexes with real algebraic vertices.
Simplicial complexes are defined as in [vdD98]. They differ from the classical notion because simplexes are open, in the sense that they do not include their faces. As in [vdD98], the vertices of a simplicial complex are concrete points, namely they have coordinates in the given o-minimal structure M (expanding a field). More precisely, given n + 1 affinely independent points $a_0, \dots, a_n \in M^k$, the (open) n-simplex $\sigma_M = (a_0, \dots, a_n) \subseteq M^k$ determined by $a_0, \dots, a_n$ is the set of all linear combinations $\sum_{i=0}^{n} \lambda_i a_i$ with $\lambda_0 + \dots + \lambda_n = 1$ and $0 < \lambda_i < 1$ (with $\lambda_i \in M$). If we go to a bigger model N ≻ M , we write $\sigma_N$ for the set defined by the same formulas but with $\lambda_i$ ranging in N . We omit the subscript if there is no risk of ambiguity. A closed simplex is defined similarly but with the weak inequalities $0 \le \lambda_i \le 1$. In other words a closed simplex is the closure $\overline{\sigma} = \mathrm{cl}(\sigma)$ of a simplex σ, namely the union of a simplex and all its faces.
A simplicial complex is a finite collection P of (open) simplexes with the property that for all σ, θ ∈ P , $\overline{\sigma} \cap \overline{\theta}$ is either empty or the closure of some δ ∈ P (a common face of the two simplexes). We shall say that P is a closed simplicial complex if whenever it contains a simplex it contains all its faces. In this case we write $\overline{P}$ for the collection of all closures $\overline{\sigma}$ of simplexes σ of P and we call $\overline{\sigma}$ a closed simplex of P .
The geometrical realization |P | of a simplicial complex P is the union of its simplexes.
We shall often assume that P is defined over R alg , namely its vertices have real algebraic coordinates, so that we can realize P either in M or in R. In this case, we write |P | M or |P | R for the geometric realization of P in M or R respectively. Notice that a simplicial complex is closed if and only if its geometrical realization is closed in the topology induced by the order of M .
If L ⊆ P is a subcomplex of P , and σ ∈ P , we define $|\sigma_{|L}|_R = \sigma \cap |L|_R$ and $|\sigma_{|L}|_M = \sigma \cap |L|_M$. To keep the notation uncluttered, we simply write $\sigma_{|L}$ when the model is clear from the context. Definition 4.1. A triangulation of a definable set X ⊆ M m is a pair (P, φ) consisting of a simplicial complex P defined over M and a definable homeomorphism φ : |P | M → X. We say that the triangulation φ is compatible with a subset S of X if S is the union of the images of some of the simplexes of P . Now, suppose that we have a triangulation φ : |P | M → X and we consider finitely many definable subsets S 1 , . . . , S l of X. The triangulation theorem tells us that there is another triangulation ψ : |P ′ | M → X compatible with S 1 , . . . , S l , but it does not say that we can choose P ′ to be a subdivision of P , thus in general |P ′ | M will be different from |P | M . This is going to be a problem if we want to preserve certain properties. For instance suppose that φ is a definable homotopy (namely its domain |P | M has the form Z × I where I = [0, 1]). The triangulation theorem does not ensure that ψ can be taken to be a definable homotopy as well.
The "normal triangulation theorem" of Baro [Bar10] is a partial remedy to this defect: it ensures that we can indeed take P ′ to be a subdivision of P , hence in particular |P ′ | M = |P | M , although ψ will not in general be equal to φ. The precise statement is given below. It suffices to consider the special case when X = |P | and φ is the identity. (1) P ′ is a subdivision of P ; (2) (P ′ , φ ′ ) is compatible with the simplexes of P ; (3) for every τ ∈ P ′ and σ ∈ P , if τ ⊆ σ then φ ′ (τ ) ⊆ σ.
From (3) it follows that the restriction of φ ′ to a simplex σ ∈ P is a homeomorphism onto σ and that φ ′ is definably homotopic to the identity on |P |. Since we are particularly interested in triangulations where the vertices of the simplicial complex have real algebraic coordinates, we prove the following proposition, which guarantees that the normal triangulation of a real algebraic simplicial complex can also be chosen to be real algebraic.
Proposition 4.5. Let P be a simplicial complex in M k defined over R alg and let L be a subdivision of P . Then there is a subdivision L ′ of P such that: (1) L ′ is defined over R alg ; (2) there is a simplicial homeomorphism ψ : |L| → |L ′ | which fixes all the vertices of L with real algebraic coordinates.
Proof. Since L is a subdivision of P , we have an inclusion of the zero-skeleta $P^{(0)} \subseteq L^{(0)}$. The idea is that the combinatorial properties of the pair (P, L) (namely the properties invariant under isomorphisms of pairs of abstract complexes) can be described, in the language of ordered fields, by a first-order condition $\varphi_{L,P}(\bar x)$ on the coordinates $\bar x$ of the vertices. We then use the model completeness of the theory of real closed fields to show that $\varphi_{L,P}(\bar x)$ can be satisfied in the real algebraic numbers. The details are as follows. For each vertex v of L we introduce free variables $x^v_1, \dots, x^v_k$ and let $\bar x^v$ be the k-tuple $x^v_1, \dots, x^v_k$. Finally let $\bar x$ be the tuple consisting of all these variables $x^v_i$ as v varies. We can express in a first-order way the following conditions on $\bar x$: (1) for every simplex $(v_0, \dots, v_n)$ of L, the tuples $\bar x^{v_0}, \dots, \bar x^{v_n}$ are affinely independent; (2) if $\sigma_1$ and $\sigma_2$ are open simplexes of L with common face τ, then $\mathrm{cl}(\sigma_1(\bar x)) \cap \mathrm{cl}(\sigma_2(\bar x)) = \mathrm{cl}(\tau(\bar x))$; (3) if $\sigma_1$ and $\sigma_2$ are open simplexes of L with no face in common, then $\mathrm{cl}(\sigma_1(\bar x)) \cap \mathrm{cl}(\sigma_2(\bar x)) = \emptyset$. These clauses express the fact that the collection of the $\sigma(\bar x)$, as σ varies in L, is a simplicial complex $L(\bar x)$ (depending on the value of $\bar x$) isomorphic to L. Similarly we can define $P(\bar x)$ and express the fact that $L(\bar x)$ is a subdivision of $P(\bar x)$. Our desired formula $\varphi_{P,L}(\bar x)$ is the conjunction of these clauses together with the conditions $x^v_i = v_i$ for every vertex v of L with real algebraic coordinates, where $v_i$ denotes the i-th coordinate of the vector v. By the model completeness of the theory of real closed fields, the formula can be satisfied by a tuple $\bar a$ of real algebraic numbers. The map sending each v to $\bar a^v$ induces the desired isomorphism $\psi : L \to L' = L(\bar a)$.
Later we shall need the following.
Proposition 4.6. Let P be a simplicial complex, let X be a definable space and let f :

The "moreover" part follows from Proposition 4.5. Indeed, if P is over R alg , we first obtain (P ′ , φ) as above. If P ′ is over R alg we are done. Otherwise, we take a subdivision P ′′ of P over R alg and a simplicial isomorphism ψ : P ′′ → P ′ , and replace (P ′ , φ) with (P ′′ , φ • ψ).
Standard part map
Let X = X(M ) ⊆ M be a definable set and suppose X ⊆ [−n, n] for some n ∈ N. Then there is a map st : X → R, called the standard part, which sends a ∈ X to the unique r ∈ R satisfying the same inequalities p < x < q (with p, q ∈ Q) as a does.
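Concretely (a standard reformulation of the definition above): for $a \in [-n, n] \subseteq M$,

$$\mathrm{st}(a) \;=\; \sup\{q \in \mathbb{Q} : q < a\} \;=\; \inf\{q \in \mathbb{Q} : a < q\},$$

where the supremum and infimum are computed in $\mathbb{R}$; both sets are non-empty and bounded because $|a| \le n$.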
More generally, let X be a definable subset of M k and assume X ⊆ [−n, n] k for some n ∈ N. We can then define st : X → R k component-wise, namely st((a 1 , . . . , a k )) := (st(a 1 ), . . . , st(a k )).
Now let E := ker(st) ⊆ X × X be the type-definable equivalence relation induced by st, namely aEb if and only if st(a) = st(b). There is a natural bijection st(X) ∼ = X/E sending st(a) to the class of a modulo E, so in particular E has bounded index. The next two propositions are probably well known but we include the proof for the reader's convenience.
The natural bijection st(X) ≅ X/E is a homeomorphism, where X/E has the logic topology and st(X) ⊆ R k has the euclidean topology.
Proof. Every closed subset C of R k can be written as the intersection $\bigcap_i C_i$ of a countable collection of closed ∅-semialgebraic sets $C_i$ (where "∅-semialgebraic" means "semialgebraic without parameters"). We then have st(a) ∈ C if and only if $a \in \bigcap_i C_i(M)$. This shows that the closed sets C ⊆ st(X) ⊆ R k in the euclidean topology correspond to the sets whose preimage in X is type-definable, and the proposition follows.
Thanks to the above result we can identify st : X → st(X) and p : X → X/E where E = ker(st). The next proposition shows that these maps are continuous.

Proof. Let a ∈ X and let r := st(a) ∈ R k . Then $\mathrm{st}^{-1}(r) = \bigcap_{n\in\mathbb N}\{b \in X : |b - r| < 1/n\}$. This is a small intersection of relatively open subsets of X, so it is open in X by Lemma 2.1.
If X is ∅-semialgebraic, we may interpret the defining formula of X in R and consider the set X(R) ⊆ R k of real points of X. If we further assume that X is closed and bounded, then X ⊆ [−n, n] k for some n ∈ N, so we can consider the standard part map st : X → R k . It is easy to see that in this case st(X) coincides with X(R), so we can write X(R) = st(X) ≅ X/E. Our next goal is to study the fibers of st : X → X(R). We need the following.
Definition 5.4. Given a simplicial complex P and a point x ∈ |P | (not necessarily a vertex), the open star of x with respect to P , denoted St(x, P ), is the union of all the simplexes of P whose closure contains x.
Proof. Let σ ∈ P be a simplex of minimal dimension included in St(x, P ) ∩ St(y, P ) and let z ∈ σ. We claim that z is as desired. To this aim it suffices to show that, given θ ∈ P , we have θ ⊆ St(x, P ) ∩ St(y, P ) if and only if θ ⊆ St(z, P ).
For one direction assume θ ⊆ St(x, P ) ∩ St(y, P ). Then $\overline{\theta} \cap \overline{\sigma}$ is non-empty, as the intersection contains both x and y. It follows that there is a simplex δ ∈ P such that $\overline{\theta} \cap \overline{\sigma} = \overline{\delta}$. Notice that δ is included in St(x, P ) ∩ St(y, P ) since its closure contains x and y. Since σ was of minimal dimension contained in this intersection, it follows that δ = σ. But then $x \in \overline{\sigma} \subseteq \overline{\theta}$, hence θ ⊆ St(x, P ).
The following result depends on the local conic structure of definable sets.
Proposition 5.6. Let X be a closed and bounded ∅-semialgebraic set and let st : X → X(R) be the standard part map. Then every fiber of st is the intersection of a countable decreasing family of definable, definably contractible sets.

Proof. By the triangulation theorem (Fact 4.2), there is a simplicial complex P over R alg and a ∅-definable homeomorphism f : X → |P | M . In this situation, f commutes with the standard part maps of X and |P | M (since f is ∅-definable and X is closed and bounded). Thus we can replace X with P and assume that X is the realization of a simplicial complex. Therefore, we now have a closed simplicial complex X(R) over the reals, which is thus locally contractible. More precisely, given y ∈ X(R) we can write {y} as an intersection $\bigcap_{i\in\mathbb N} S_i$ where $S_i$ is the open star of y with respect to the i-th iterated barycentric subdivision of P . The preimage $\mathrm{st}^{-1}(y)$ can then be written as the corresponding intersection $\bigcap_{i\in\mathbb N} S_i(M)$ interpreted in M , and it now suffices to observe that each $S_i(M)$ is an open star (Proposition 5.5), hence it is definably contractible (around any of its points).
Our next goal is to show that much of what we said about the standard part map has a direct analogue in the context of definable groups, with p : G → G/G 00 in the role of the standard part.
Definable groups
Let G be a definable group in M and let H < G be a type-definable subgroup of bounded index. We may put on the coset space G/H the logic topology, thus obtaining a compact topological space. In this context we have a direct generalization of Proposition 5.2.
Fact 6.1 ([Pil04, Lemma 3.2]). Every type-definable subgroup H < G of bounded index is clopen in the t-topology of G. In particular, the natural map p : G → G/H is continuous, where G has the t-topology and the coset space G/H has the logic topology.
If we further assume that H is normal, then G/H is a group and we may ask whether the logic topology makes it into a topological group. This is indeed the case [Pil04]. Some additional work shows that in fact G/H is a compact real Lie group [BOPP05]. In the same paper the authors show that G admits a smallest type-definable subgroup of bounded index (see [She08] for a different proof), which is denoted G 00 and called the infinitesimal subgroup. When G is definably compact in the sense of [PS99], the natural map p : G → G/G 00 shares many of the properties of the standard part map.
Definition 6.2. Let us recall that a definable set B ⊆ X is called a definable open ball of dimension n if B is definably homeomorphic to {x ∈ M n : |x| < 1}; a definable closed ball is defined similarly, using the weak inequality ≤; we shall say that B is a definable proper ball if there is a definable homeomorphism f from B to a definable closed ball taking ∂B to the definable sphere S n−1 .
In analogy with Proposition 5.6, the following holds. Fact 6.3. [Ber09, Theorem 2.2]Let G be a definably compact group of dimension n and put on G the t-topology of [Pil88]. Then there is a decreasing sequence The proof in [Ber09, Theorem 2.2] depends on compact domination and the sets S n are taken to be "cells" in the o-minimal sense. For later purposes we need the following strengthening of the above fact, which does not present difficulties, but requires a small argument.
Corollary 6.4. In Fact 6.3 we can arrange so that, for each i ∈ N, S i+1 ⊆ S i and S i is a definable proper ball of dimension n = dim(G).
Proof. By Lemma 2.3 we can assume that S i+1 ⊆ S i for every i ∈ N. Since M has field operations, a cell is definably homeomorphic to a definable open ball (first show that it is definably homeomorphic to a product of intervals). In general it is not true that a cell is a definable proper ball, even assuming that the cell is bounded [BF09]. However, by shrinking S i concentrically via the homeomorphism, we can find a definable proper n-ball C i with S i+1 ⊆ C i ⊆ S i . To conclude, it suffices to replace S i with the interior of C i .
Compact domination
A deeper analogy between the standard part map and the projection p : G → G/G 00 is provided by Fact 7.1 and Fact 7.2 below. Fact 7.1 was used in [BO04] to introduce a finitely additive measure on definable subsets of [−n, n] k ⊆ M k (n ∈ N) by lifting the Lebesgue measure on R k through the standard part map. In the same paper it was conjectured that, reasoning along similar lines, one could try to introduce a finitely additive invariant measure on definably compact groups (the case of the torus being already handled thanks to the above result). When [BO04] was written, Pillay's conjectures from [Pil04] were still open, and it was hoped that the measure approach could lead to a solution. A first confirmation of the existence of invariant measures came from [PP07], but only for a limited class of definable groups. A deeper analysis led to the existence of invariant measures in every definably compact group [HPP08] and to the solution of Pillay's conjectures, as discussed in the introduction. Finally, the following far-reaching result was obtained, which can be considered as a direct analogue of Fact 7.1.
Fact 7.2 ([HP11]). Let G be a definably compact group and consider the projection p : G → G/G^00. Then for every definable set D ⊆ G, the set p(D) ∩ p(D^∁) has Haar measure zero in G/G^00.
In the terminology introduced in [HPP08] the above result can be described by saying that G is compactly dominated by G/G^00. Perhaps surprisingly, when the above result was obtained, Pillay's conjectures had already been solved, so compact domination did not actually play a role in their solution. In hindsight, however, as we show in the last part of this paper (Section 13), compact domination can in fact be used to prove "dim(G) = dim_R(G/G^00)", as predicted by Pillay's conjectures (the content of Pillay's conjectures also includes the statement that G/G^00 is a real Lie group).
To prepare the ground, we introduce the following definition. In the rest of the section, E is a type-definable equivalence relation of bounded index on a definable set X.

Definition 7.3. We say that X is topologically compactly dominated by X/E if for every definable set D ⊆ X, p(D) ∩ p(D^∁) has empty interior, where p : X → X/E is the projection.
Since "measure zero" implies "empty interior", topological compact domination holds both for the standard part map (taking E = ker(st)) and for definably compact groups.
Notice that Definition 7.3 can be given for definable sets in arbitrary theories, not necessarily o-minimal, so it is not necessary that X carries a topology. However in the o-minimal case a simpler formulation can be given, as in Corollary 7.5 below.
We first recall some definitions. Let X be a definable space. We say that a type-definable set Z ⊆ X is definably connected if it cannot be written as the union of two non-empty disjoint open subsets which are relatively definable, where a relatively definable subset of Z is the intersection of Z with a definable set.
Following [vdD98], we distinguish between the frontier and the boundary of a definable set, and we write ∂D := D̄ \ D for the frontier, and δD := D̄ \ D° for the boundary, where D° is the interior.
A basic result in o-minimal topology is that the dimension of the frontier of D is strictly less than the dimension of D. Here, however, we shall be concerned with the boundary rather than the frontier.
Proposition 7.4. Let X be a definable space. Assume that p is continuous and each fiber of p : X → X/E is definably connected. Then for every definable set D ⊆ X, p(D) ∩ p(D ∁ ) = p(δD).
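No proof of the proposition appears at this point of the text, so the following sketch of the standard argument is offered as a reconstruction; it assumes the identity p(Ā) = p(A) for definable sets A, which is how Proposition 3.5 is invoked elsewhere in the paper. For one inclusion,
$$\delta D = \overline{D} \cap \overline{D^{\complement}} \implies p(\delta D) \subseteq p(\overline{D}) \cap p(\overline{D^{\complement}}) = p(D) \cap p(D^{\complement}).$$
For the converse, let y ∈ p(D) ∩ p(D^∁), so the fiber F = p^{−1}(y) meets both D and D^∁. If F were disjoint from δD, then F ⊆ D° ∪ (X \ D̄), and the relatively definable open pieces F ∩ D° and F ∩ (X \ D̄) would be non-empty, disjoint, and cover F, contradicting definable connectedness. Hence F meets δD, and y ∈ p(δD).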
In the light of the above proposition, topological compact domination takes the following form.
Corollary 7.5. Assume that X is a definable space, p : X → X/E is continuous, and each fiber of p is definably connected. Then X is topologically compactly dominated by X/E if and only if the image p(Z) of any definable set Z ⊆ X with empty interior has empty interior.
Proof. Suppose that the image of every definable set with empty interior has empty interior. Given a definable set D ⊆ X, we want to show that p(D) ∩ p(D^∁) has empty interior. This follows from the inclusion p(D) ∩ p(D^∁) ⊆ p(δD) (Proposition 7.4) and the fact that δD has empty interior. Conversely, suppose that X is topologically compactly dominated by X/E and let Z ⊆ X be a definable set with empty interior. Then Z^∁ is dense in X; taking D = Z^∁ we get δD = X \ D° ⊇ Z, whence p(Z) ⊆ p(δD) = p(D) ∩ p(D^∁), which has empty interior.
Good covers
By a triangulable space we mean a compact topological space which is homeomorphic to a polyhedron, namely to the realization |P|_R of a closed finite simplicial complex over R. Recall that an open cover is good if every non-empty finite intersection of its members is contractible. Our aim is to show that open subsets of a triangulable space have enough good covers. We are going to use barycentric subdivisions holding a subcomplex fixed, as defined in [Mun84, p. 90]. We need the following observation.
Remark 8.2. Let P be a closed (finite) simplicial complex and let L be a closed subcomplex. Let P_i be the i-th barycentric subdivision of P holding L fixed. Then for every real number ε > 0 there is i ∈ N such that for every closed simplex σ̄ of P_i, either σ̄ has diameter < ε or σ̄ lies inside the ε-neighbourhood of some simplex of L.

Proposition 8.3. Let Y be a triangulable space and let O ⊆ Y be an open subset. Then every open cover V of O has a refinement U which is a good cover of O.

Proof. We can assume that Y is the geometric realization |P| (over R) of a finite simplicial complex P.
Since Y is a metric space, so is O ⊆ Y . In particular O is paracompact, and therefore V has a locally finite star-refinement W ≺ V. We plan to show that O is the realization of an infinite simplicial complex L with the property that each closed simplex of L is contained in some element of W. Granted this, by Proposition 5.5 we can take U to be the open cover consisting of the sets St(x, L) for x ∈ O.
To begin with, note that we can write O as the union O = ⋃_{n∈N} C_n of an increasing sequence of compact sets in such a way that every compact subset of O is contained in C_n for some n (it suffices to define C_n as the set of points of O at distance ≥ 1/n from the frontier of O).
Since C_0 is compact, by the Lebesgue number lemma there is some ε_0 > 0 such that every subset of C_0 of diameter < ε_0 is contained in some element of W. Now let P_0 be an iterated barycentric subdivision of P with simplexes of diameter < ε_0 and let L_0 be the largest closed subcomplex of P_0 with |L_0| ⊆ C_0. Notice that every closed simplex of L_0 is contained in some element of W.
Starting with P_0, L_0 we shall define by induction a sequence of subdivisions P_i of P = P_0 and of subcomplexes L_i of P_i. For concreteness, let us consider the case i = 1.
The complex P_1 will be of the form P_0^(n), where P_0^(n) is the n-th iterated barycentric subdivision of P_0 holding the subcomplex L_0 fixed. To choose the value of n we proceed as follows. By the Lebesgue number lemma there is some ε_1 > 0 with ε_1 < ε_0/2 such that every closed subset of C_1 of diameter < ε_1 is contained in some element of W. By taking a smaller value for ε_1 if necessary, we can also assume (by definition of L_0) that the closed ε_1-neighbourhood of any closed simplex σ̄ of L_0 is contained in some element of W. By Remark 8.2 there is some n_0 such that for every n ≥ n_0 and for every closed simplex σ̄ of P_0^(n), either σ̄ is contained in the ε_1-neighbourhood of some λ ∈ L_0, or the diameter of σ̄ is less than ε_1. In both cases, if σ̄ is included in C_1, then it is contained in some element of W.
We now define P_1 = P_0^(n_0) and we let L_1 be the largest closed subcomplex of P_1 with |L_1| ⊆ C_1. The crucial observation is that L_0 is a subcomplex of L_1, since both are subcomplexes of P_1 and |L_0| ⊆ |L_1|.
Since by construction each L_i is a subcomplex of L_{i+1}, we can consider the infinite simplicial complex L := ⋃_{i∈N} L_i. We claim that its geometric realization is O. Granted the claim, by construction each closed simplex of L is contained in some W ∈ W, and the proof is finished.
To prove the claim notice that by construction ⋃_i |L_i| ⊆ O. To prove the equality we must show that L_i is not too small. Consider for instance L_1. We claim that if x ∈ O is such that its closed ε_1-neighbourhood is contained in C_1, then x ∈ |L_1|. Indeed, consider the (open) simplex σ ∈ P_1 containing x. Then either σ̄ has diameter < ε_1 or it is included in the ε_1-neighbourhood of |L_0|, and in both cases σ̄ is included in |L_1|. The same argument applies for an arbitrary i ∈ N instead of i = 1 and immediately implies the desired claim (since ε_i → 0).
Homotopy
In the rest of this section we work in the classical category of topological spaces and we give a sufficient condition for two maps to be homotopic.

Definition 9.1. Given topological spaces Z and Y, we let [Z, Y] denote the set of homotopy classes of continuous maps from Z to Y; we write [Z, Y]_0 when we work with pointed spaces and homotopies relative to the base point. The n-th homotopy group is defined as π_n(Y) = [S^n, Y]_0, where S^n is the n-th sphere, and we put on π_n(Y) the usual group operation if n > 0 (see [Hat02] for the details).
Definition 9.2. Given a collection U of subsets of a set O and two functions f, g : Z → O, we say that f and g are U-close if for any z ∈ Z there is U ∈ U such that both f (z) and g(z) are in U .
The following definition is adapted from [Dug69, Note 4].

Definition 9.3. Let f : Z → Y be a function between two sets Z and Y. Let P be a collection of sets whose union ⋃P includes Z, and let U be a collection of subsets of Y. We say that f is (U, P)-small if for every σ ∈ P the image f(σ ∩ Z) is contained in some U ∈ U.
Lemma 9.4. Let U be a locally finite good cover of a topological space Y and let L be a closed subcomplex of a closed simplicial complex P defined over R. Let f : |L ∪ P^(0)|_R → Y be a (U, P̄)-small map (recall that P̄ is the collection of all closures of simplexes of P). Then f can be extended to a (U, P̄)-small map f′ : |P|_R → Y with the property that, for all U ∈ U and for every closed simplex σ̄ of P, if f(σ̄ ∩ |L ∪ P^(0)|_R) ⊆ U, then f′(σ̄) ⊆ U.
Proof. Reasoning by induction we can assume that f′ is already defined on |L ∪ P^(k)| and we only need to extend it to |L ∪ P^(k+1)|. Let σ ∈ P^(k+1). We can identify σ̄ with the cone over its boundary ∂σ, so that every point of σ̄ is determined by a pair (t, x) with t ∈ [0, 1] and x ∈ ∂σ. Let U_1, . . . , U_n be the elements of U containing f′(σ̄ ∩ |L ∪ P^(k)|) (notice that n > 0 by the inductive hypothesis) and let V be their intersection. Since U is a good cover, V is contractible, so we can pick a homotopy φ : V × [0, 1] → V between the identity of V and a constant map, and extend f′ to σ̄ by setting f′(t, x) := φ(f′(x), t). The extension maps σ̄ into V ⊆ U_i for every i, so f′ remains (U, P̄)-small and has the required property.

Proposition 9.5. Let U be a locally finite good cover of a topological space Y, let P be a closed simplicial complex and let f, g : |P|_R → Y be two maps. Assume that f and g are U-close. Then f and g are homotopic.
Proof. Since f and g are U-close, the family V = {f^{−1}(U) ∩ g^{−1}(U) : U ∈ U} is an open cover of |P|_R. By the Lebesgue number lemma (since we work over R) there is an iterated barycentric subdivision P′ of P such that every closed simplex of P′ is contained in some element of V. Then, by construction, for every σ ∈ P′ there is U ∈ U such that f(σ̄) and g(σ̄) are contained in U.
Let now I = [0, 1] and consider the simplicial complex P′ × I with the standard triangulation (as in [Hat02, p. 112, Proof of 2.10]). Consider the subcomplex P′ × {0, 1} of P′ × I and note that it contains the 0-skeleton of P′ × I. Define f ⊔ g : |P′ × {0, 1}|_R → Y as the map which sends (x, 0) to f(x) and (x, 1) to g(x). Note that f ⊔ g is (U, P′ × I)-small. Since U is a good cover, by Lemma 9.4 we can extend it to a (U, P′ × I)-small function H : |P′ × I|_R → Y. This map is a homotopy between f and g.
Definable homotopies
Given a definable set Z and a ⋁-definable set Y, we say that a map f : Z → Y is definable if it takes values in a definable subset Y_0 of Y and is definable as a function from Z to Y_0. We can adapt Definition 9.1 to the definable category as follows.
Definition 10.1. If Z is a definable space and Y is a ⋁-definable set, we let [Z, Y]^def denote the set of all equivalence classes of definable continuous maps f from Z to Y modulo definable homotopies. Similarly we write [Z, Y]^def_0 when we work with pointed spaces and homotopies relative to the base point z_0 ∈ Z. The n-th o-minimal homotopy group is defined as π_n(Y)^def = [S^n, Y]^def_0, where S^n is the n-th sphere in M. If n > 0 we put on π_n(Y)^def a group operation in analogy with the classical case.
In [BO02] it is proved that if Y is a ∅-semialgebraic set, π_1(Y)^def ≅ π_1(Y(R)), so in particular π_1(Y)^def is finitely generated. This has been generalized to the higher homotopy groups in [BO10]. We shall later give a self-contained proof of both results. By the same arguments we obtain the following result of [BMO10]: given a definably compact group G there is a natural isomorphism π_n(G)^def ≅ π_n(G/G^00). The new proofs yield a stronger result: if p : G → G/G^00 is the projection, then for every open subset O of G/G^00 there is an isomorphism π_n(p^{−1}(O))^def ≅ π_n(O). This was so far known only for n = 1 [BM11]. Notice that p^{−1}(O) is ⋁-definable, whence the decision to consider ⋁-definable sets in Definition 10.1. With the new approach we obtain additional functoriality properties and generalizations, as will be explained in the rest of the paper.
Theorem A
As above, let X = X(M) be a definable space, and let E ⊆ X × X be a type-definable equivalence relation of bounded index. In this section we work under the following assumption.
Assumption 11.1 (Assumption A). X/E is a triangulable topological space and the natural map p : X → X/E is continuous.
The fact that X/E is triangulable allows us to apply the results of Section 8 regarding the existence of good covers. Note that the continuity of p is not a vacuous assumption because X/E has the logic topology, not the quotient topology. By the results in Section 5 and Section 6 the assumption is satisfied in the special case X/E = G/G^00 (where G is a definably compact group) and also when X is a closed and bounded ∅-semialgebraic set and E = ker(st).
We shall prove that there is a natural homomorphism π_n(X)^def → π_n(X/E). This will be obtained as a consequence of a more general result concerning homotopy classes. The following definition plays a crucial role in the definition of the homomorphism, and exploits the analogies between the projection p : X → X/E and the standard part map.
Definition 11.2. Let U be an open cover of an open set O ⊆ X/E and let f : |P|_M → p^{−1}(O) be a definable map. A U-approximation of f is a continuous map f* : |P|_R → O such that, for every z ∈ |P|_M, there is U ∈ U containing both f*(st(z)) and p(f(z)). We say that f is U-approximable if it has a U-approximation.
In general, given f and U, we cannot hope to find an f* which is a U-approximation of f. However we shall prove that, given U, every definable continuous function f is definably homotopic to a U-approximable map.

Lemma 11.4. Let f : |P|_M → p^{−1}(O) be a definable continuous map. If f is (p^{−1}(U), P)-small, then it is (p^{−1}(U), P̄)-small.

Proof. Let σ ∈ P. Since f is continuous, f(σ̄) is contained in the closure of f(σ), and by Proposition 3.5 the image under p of the closure of f(σ) equals p(f(σ)). Hence if f(σ) ⊆ p^{−1}(U) for some U ∈ U, then also f(σ̄) ⊆ p^{−1}(U); so if f is (p^{−1}(U), P)-small, it is also (p^{−1}(U), P̄)-small.
The following lemma shows that small maps are approximable.

Lemma 11.5. Let V be a locally finite good cover of O and let P be a closed simplicial complex defined over R^alg. Then every (p^{−1}(V), P)-small definable continuous map f : |P|_M → p^{−1}(O) has a V-approximation.

Proof. Define f* on the zero-skeleton of P by f*^(0)(st(v)) = p(f(v)) for any vertex v of |P|_M (since v has coordinates in R^alg we can identify st(v) ∈ |P|_R with v). Since f is (p^{−1}(V), P)-small, f*^(0) is (V, P̄)-small and therefore, by Lemma 9.4 (and Lemma 11.4), we can extend f*^(0) to a (V, P̄)-small map f* : |P|_R → O. We claim that f* is a V-approximation of f. Indeed, fix a point z ∈ |P|_M and let σ = σ_M ∈ P be a simplex containing z. By Lemma 11.4 there is V ∈ V with f(σ̄) ⊆ p^{−1}(V); in particular f*^(0)(st(v)) = p(f(v)) ∈ V for every vertex v of σ, so by the last property in Lemma 9.4 we get f*(σ̄_R) ⊆ V. Since st(z) ∈ σ̄_R, both f*(st(z)) and p(f(z)) belong to V, as required.
The next lemma shows that every map f is definably homotopic to a small (hence approximable) map f′.

Lemma 11.6. Let U be an open cover of O and let f : |P|_M → p^{−1}(O) be a definable continuous map. Then there are an iterated barycentric subdivision P′ of P and a definable map φ : |P′|_M → |P|_M, homotopic to the identity, such that f′ := f ∘ φ is (p^{−1}(U), P′)-small. Moreover, if P is defined over R^alg, we can take P′ defined over R^alg. Notice that f′ is definably homotopic to f (as φ is homotopic to the identity).
Lemma 11.7. Let U be a star-refinement of a good cover of O. Then any two U-approximations of a definable map f are homotopic.

Proof. Let f*_1 and f*_2 be two U-approximations of f. Then f*_1 and f*_2 are St(U)-close, and since U star-refines a good cover they are homotopic by Proposition 9.5.
We are now ready to state the main result of this section.
Theorem 11.8. Let O ⊆ X/E be an open set and let P be a closed simplicial complex defined over R^alg. Then:
(1) there is a well defined map p^P_O : [|P|_M, p^{−1}(O)]^def → [|P|_R, O];
(2) the maps p^P_O are compatible with restrictions: for open sets O ⊆ O′ the maps p^P_O and p^P_{O′} fit into a commutative square where the vertical arrows are induced by the inclusions;
(3) by the triangulation theorem, the same statements continue to hold if we replace everywhere |P| by a ∅-semialgebraic set;
(4) everything holds, mutatis mutandis, for pointed spaces and homotopies relative to a base point;
(5) for every n ∈ N, taking P with |P| = S^n yields a group homomorphism p^{S^n}_O : π_n(p^{−1}(O))^def → π_n(O).

In the rest of the section we fix a closed simplicial complex P in M defined over R^alg and we prove Theorem 11.8. We shall define a map p^P_O : [|P|_M, p^{−1}(O)]^def → [|P|_R, O] sending the class of a definable map f to the class of a U-approximation of f. We are only claiming that this will be well defined if f is (p^{−1}(U), P)-small, which is a stronger property than being U-approximable. The reason for the introduction of this stronger property is that we are not able to show that if two definably homotopic maps are U-approximable, then their approximations are homotopic. We can do this only if the maps are (p^{−1}(U), P)-small. The formal definition is the following.
Definition 11.9. Let U be an open cover of O star-refining a good cover. Given a definable continuous map f : |P|_M → p^{−1}(O), by Lemma 11.6 there are a subdivision P′ and a map φ, homotopic to the identity, such that f′ = f ∘ φ is (p^{−1}(U), P′)-small; by Lemma 11.5, f′ has a U-approximation f′*. We shall see (Lemma 11.11 below) that the homotopy class [f′*] does not depend on the choice of P′, φ and f′*, so we can define p^P_O([f]) := [f′*].

To prove that the definition is sound we need the following.

Lemma 11.10. Let f_0, f_1 : |P|_M → p^{−1}(O) be definable maps such that p ∘ f_0 and p ∘ f_1 are U-close, and let f*_0, f*_1 be U-approximations of f_0 and f_1 respectively. Then f*_0 and f*_1 are St(U)-close.

Proof. Let y ∈ |P|_R and let x ∈ |P|_M be such that st(x) = y. By definition of approximation, f*_0(y) is U-close to p(f_0(x)), which by hypothesis is U-close to p(f_1(x)), which in turn is U-close to f*_1(y). We deduce that f*_0(y) is St(U)-close to f*_1(y).
We can now finish the proof that Definition 11.9 is sound.
This shows that f′* has image contained in U and is a U-approximation of f′.

Lemma 11.14. Theorem 11.8(4) holds, namely we can fix a base point and work with homotopies relative to the base point.
Proof. It suffices to notice that all the constructions in the proofs can equivalently be carried out for spaces with base points.
Lemma 11.15. Theorem 11.8(5) holds, namely for any open set O ⊆ X/E there is a well defined group homomorphism p^{S^n}_O : π_n(p^{−1}(O))^def → π_n(O).

Proof. We have already proved that there is a natural map p^{S^n}_O : π_n(p^{−1}(O))^def → π_n(O). We need to check that this map is a group homomorphism. To this end, let S^{n−1} be the equator of S^n. Recall that, given [f], [g] ∈ π_n(p^{−1}(O))^def, where f, g : S^n → p^{−1}(O), the group operation [f] * [g] is defined as follows. Consider the natural map φ : S^n → S^n/S^{n−1} = S^n ∨ S^n, and let [f] * [g] := [(f ∨ g) ∘ φ], where f ∨ g maps the first S^n using f and the second using g. A similar definition also works for π_n(O). Now we have to check that p^{S^n}_O([f] * [g]) = p^{S^n}_O([f]) * p^{S^n}_O([g]). By the triangulation theorem we can identify S^n with the realization of a simplicial complex P defined over R^alg and, modulo homotopy and taking a subdivision, we can assume that f and g are (p^{−1}(U), P)-small, where U is an open cover of O star-refining a good cover. Let f* and g* be U-approximations of f and g respectively, so that p^{S^n}_O([f]) = [f*] and p^{S^n}_O([g]) = [g*]. Then (f* ∨ g*) ∘ φ, with φ read over R, is a U-approximation of (f ∨ g) ∘ φ, whence p^{S^n}_O([f] * [g]) = [(f* ∨ g*) ∘ φ] = [f*] * [g*], as required. The proof of Theorem 11.8 is now complete.
Theorem B
In this section we work under the following strengthening of Assumption 11.1.
Assumption 12.1. X/E is a triangulable topological space and each fiber of p : X → X/E is the intersection of a decreasing sequence of definably contractible open sets.
By Proposition 3.4 the assumption implies in particular that p is continuous, so we have indeed a strengthening of Assumption 11.1. The above contractibility hypothesis was already exploited in [BM11, Ber09, Ber07] and is satisfied by the main examples discussed in Section 5 and Section 6. In the rest of the section we prove the following.

Theorem 12.2. Under Assumption 12.1, for every open set O ⊆ X/E the map p^P_O : [|P|_M, p^{−1}(O)]^def → [|P|_R, O] of Theorem 11.8 is a bijection; in particular, for every n, the homomorphism p^{S^n}_O : π_n(p^{−1}(O))^def → π_n(O) is an isomorphism.

The main difficulty is the following. The homotopy properties of a space are essentially captured by the nerve of a good cover, but unfortunately it is not easy to establish a correspondence between good covers of X/E in the topological category and good covers of X in the definable category. One can try to take the preimages p^{−1}(U) in X of the open sets U belonging to a good cover of X/E, but these preimages are only ⋁-definable, and if we approximate them by definable sets, we lose some control on the intersections. We shall show, however, that we can perform these approximations with a controlled loss of the amount of "goodness" of the covers. Granted all this, the idea is to lift homotopies from X/E to X, with an approach similar to the one of [Sma57, Dug69]: namely, we start with the restriction of the relevant maps to the 0-skeleton, and we go up in dimension.
In the rest of the section fix an open set O ⊆ X/E. We need the following.
Lemma 12.4. Let V be an open cover of O. Then there is a refinement W of V such that for every W ∈ W there are V ∈ V and a definably contractible definable set B with p^{−1}(W) ⊆ B ⊆ p^{−1}(V).

Proof. Let y ∈ O. By our assumption p^{−1}(y) is a decreasing intersection ⋂_{i∈N} B_i(y) of definably contractible definable sets B_i(y). Now let V(y) ∈ V contain y and note that p^{−1}(V(y)) is a ⋁-definable set containing p^{−1}(y) = ⋂_{i∈N} B_i(y). By logical compactness B_n(y) ⊆ p^{−1}(V(y)) for some n = n(y) ∈ N. By Proposition 3.3 we can find an open neighbourhood W(y) of y with p^{−1}(W(y)) ⊆ B_n(y). We can thus define W as the collection of all the sets W(y) as y varies in O.
Corollary 12.5. Let V be an open cover of O. Then there is a refinement W of V with the following property: whenever σ is a simplex and f : |∂σ|_M → p^{−1}(O) is a definable continuous map with f(|∂σ|_M) ⊆ p^{−1}(W) for some W ∈ W, then f can be extended to a definable continuous map on |σ|_M with image contained in p^{−1}(V) for some V ∈ V.

Proof. Let V and W be as in Lemma 12.4. By hypothesis, and by the property of W, we have that f(|∂σ|_M) ⊆ p^{−1}(W) ⊆ B ⊆ p^{−1}(V) for some definably contractible set B and some V ∈ V. Then f can be extended to a definable map on |σ|_M with image contained in B ⊆ p^{−1}(V).
Definition 12.6. If W and V are as in Corollary 12.5, we say that W is semi-good within V.
Lemma 12.7. For any open cover U of O and any n ∈ N, there is a refinement W of U such that, given an n-dimensional closed simplicial complex P, a closed subcomplex L containing the 0-skeleton P^(0), and a (p^{−1}(W), P)-small definable continuous map f : |L|_M → p^{−1}(O), f can be extended to a (p^{−1}(U), P)-small definable continuous map F : |P|_M → p^{−1}(O).

Proof. Reasoning by induction, it suffices to show that given k < n and an open cover U of O, there is a refinement W of U such that, given an n-dimensional closed simplicial complex P and a (p^{−1}(W), P)-small definable map f : |L ∪ P^(k)|_M → p^{−1}(O), f can be extended to a (p^{−1}(U), P)-small definable map F : |L ∪ P^(k+1)|_M → p^{−1}(O). To this aim, consider three open covers W ≺ V ≺ U of O such that V is a star-refinement of U and W ≺ V is semi-good within V. Let σ ∈ P^(k+1) be a (k + 1)-dimensional closed simplex such that σ̄ is not included in the domain of f. Since |∂σ|_M ⊆ |σ̄ ∩ P^(k)|_M ⊆ dom(f) and f is (p^{−1}(W), P)-small, there is W ∈ W such that f(|∂σ|_M) ⊆ f(|σ̄ ∩ P^(k)|_M) ⊆ p^{−1}(W). By the choice of W, there is V_σ ∈ V such that we can extend f|_{∂σ} to a map F_σ : |σ|_M → p^{−1}(V_σ); we define F : |L ∪ P^(k+1)|_M → p^{−1}(O) as the union of f and the various F_σ for σ ∈ P^(k+1).
It remains to prove that F : |L ∪ P^(k+1)|_M → p^{−1}(O) is (p^{−1}(U), P)-small. To this aim let τ ∈ P be any simplex. By our hypothesis there is W ∈ W such that f(|τ̄ ∩ P^(k)|_M) ⊆ p^{−1}(W). By construction each face σ of τ belonging to L ∪ P^(k+1) is mapped by F into p^{−1}(V_σ) for some V_σ ∈ V. Moreover V_σ intersects W, so it is included in St_V(W). The latter depends only on τ and not on σ, and is contained in some U ∈ U since V star-refines U. We have thus shown that ⋃_σ V_σ is contained in some U ∈ U, so F is (p^{−1}(U), P)-small.
Definition 12.8. Let U be an open cover of O. If W is as in Lemma 12.7 we say that W is n-good within U. If the only member of U is O (or if the choice of U is irrelevant), we simply say that W is n-good.

Lemma 12.9. Let n ∈ N and let W be an (n + 1)-good cover of O. If P is an n-dimensional simplicial complex and f, g : |P|_M → p^{−1}(O) are definable continuous functions such that for every σ ∈ P there is W ∈ W such that f(σ̄) and g(σ̄) are contained in p^{−1}(W), then f and g are definably homotopic.
Proof. Let I = [0, 1] and consider the simplicial complex P × I (of dimension n + 1) with the standard triangulation (as in [Hat02, p. 112, Proof of 2.10]). Consider the subcomplex P × {0, 1} of P × I and note that it contains the 0-skeleton of P × I. Define f ⊔ g : |P × {0, 1}|_M → p^{−1}(O) as the function which sends (x, 0) to f(x) and (x, 1) to g(x). Note that f ⊔ g is (p^{−1}(W), P × I)-small by hypothesis. By Lemma 12.7 we can extend it to a definable continuous function H : |P × I|_M → p^{−1}(O). This map is a homotopy between f and g.
Lemma 12.10. Let n ∈ N. Let V be an open covering of O which is a star-refinement of an (n + 1)-good cover W. Given an n-dimensional simplicial complex P and definable continuous maps f, g : |P|_M → p^{−1}(O), if there is a map f* : |P|_R → O which is a V-approximation of both f and g, then f and g are definably homotopic.
Proof. Let P′ be an iterated barycentric subdivision of P such that for each σ ∈ P′ there is V ∈ V such that f*(σ̄) ⊆ V. We claim that for each σ ∈ P′ there is a W ∈ W such that p ∘ f(σ̄), p ∘ g(σ̄) (and f* ∘ st(σ̄)) are contained in W. Given this claim, we can conclude using Lemma 12.9.
To prove the claim, fix σ ∈ P′ and let V ∈ V be such that f*(σ̄) ⊆ V. Given x ∈ σ̄, the points p(f(x)) and f*(st(x)) lie in a common element of V (by definition of V-approximation), and this element meets V; hence p ∘ f(σ̄) ⊆ St_V(V), and similarly p ∘ g(σ̄) ⊆ St_V(V). Since St(V) refines W, there is W ∈ W with the same property.

Lemma 12.11. Let P be an n-dimensional closed simplicial complex and let f, g : |P|_M → p^{−1}(O) be definable continuous maps admitting homotopic approximations with respect to a fine enough cover of O. Then f and g are definably homotopic.

Proof (sketch). Let U be an (n + 1)-good covering of O, let V be such that St(V) is a star-refinement of U, and let W ≺ V be (n + 1)-good within V. Let T be a barycentric subdivision adapted to these covers; one then lifts the homotopy between the approximations back to |P × I|_M, proceeding from the 0-skeleton upwards as in Lemma 12.7, and concludes by Lemma 12.9.

Lemma 12.12. Let U be a star-refinement of a good cover of O. Then every continuous map h : |P|_R → O is homotopic to a U-approximation of some definable continuous map f : |P|_M → p^{−1}(O).

Proof of Theorem 12.2. For the injectivity, let n = dim(P), let U be an (n + 1)-good cover of O, let V be a star-refinement of U and let W be n-good within V. Consider an iterated barycentric subdivision P′ of P such that f and g are (p^{−1}(W), P′)-small, and let f*, g* be W-approximations of them, so that p^P_O([f]) = [f*] and p^P_O([g]) = [g*]. If p^P_O([f]) = p^P_O([g]), that is, f* and g* are homotopic, we can apply Lemma 12.11 to find a definable homotopy between f and g, and so [f] = [g].
The surjectivity is immediate from Lemma 12.12.
Theorem C
In this section we work under the following strengthening of Assumption 12.1, where we consider definable proper balls (Definition 6.2) instead of definably contractible sets.
Assumption 13.1. X/E is a triangulable manifold, X is a definable manifold, and each fiber of p : X → X/E is the intersection of a decreasing sequence of definable proper balls.
We also need the following.

Assumption 13.2 (Topological compact domination). The image under p : X → X/E of a definable subset of X with empty interior has empty interior.
Both assumptions are satisfied by p : G → G/G^00 for any definably compact group G (see Section 7 and Corollary 6.4).
The aim of this section is to prove the following.

Theorem 13.3. Under Assumptions 13.1 and 13.2, dim(X) = dim_R(X/E).

To prove the theorem the idea is to exploit the following link between homotopy and dimension: given a manifold Y and a punctured open ball U := A \ {y} in Y, the dimension of Y is the least integer i such that π_{i−1}(U) ≠ 0.
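For a punctured ball these homotopy groups can be written down explicitly: if A is an open ball of dimension N in Y, then A \ {y} deformation retracts onto a sphere S^{N−1}, so
$$\pi_{i-1}(A \setminus \{y\}) \cong \pi_{i-1}(S^{N-1}) = \begin{cases} 0, & i < N, \\ \mathbb{Z}, & i = N \ (N \ge 2), \end{cases}$$
and the least i with a non-vanishing group indeed recovers N = dim(Y).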
Proof. Let n = dim(X) and N = dim_R(X/E). Fix x ∈ X and let y = p(x). Let B_0 be an open definable ball containing p^{−1}(y). Since X/E is a manifold, there is a decreasing sequence of proper balls A_i ⊆ X/E such that {y} = ⋂_{i∈N} A_i = ⋂_{i∈N} Ā_i. Now B_0 ⊇ p^{−1}(y) = ⋂_{i∈N} p^{−1}(Ā_i) and p^{−1}(Ā_i) is type-definable (because Ā_i is closed), so there is some i ∈ N with p^{−1}(Ā_i) ⊆ B_0. Let A = A_i and observe that p^{−1}(A) is ⋁-definable and contains the type-definable set p^{−1}(y). Since the latter is a decreasing intersection of definable proper balls, there is some definable proper ball B_1 such that x ∈ p^{−1}(y) ⊆ B_1 ⊆ B̄_1 ⊆ p^{−1}(A) ⊆ B_0. Now let f : S^{n−1} → ∂B_1 = B̄_1 \ B_1 be a definable homeomorphism (whose existence follows from the hypothesis that the ball is proper). By fixing base points, we can consider the homotopy class [f] as a non-zero element of π_{n−1}^def(B_0 \ {x}) (namely, f is not definably homotopic to a constant in B_0 \ {x}).
We conclude that π_{n−1}(A \ {y}) ≠ 0: indeed f maps S^{n−1} into p^{−1}(A \ {y}), it is not definably null-homotopic there (a definable null-homotopy inside p^{−1}(A \ {y}) ⊆ B_0 \ {x} would contradict [f] ≠ 0), and π_{n−1}^def(p^{−1}(A \ {y})) ≅ π_{n−1}(A \ {y}) by Theorem 12.2. Since A is an open ball in the manifold X/E, this can happen only if n ≥ N. So far we have not used the full strength of the assumptions, namely the topological compact domination.
Now fix two balls of the sequence, say A_1 ⊆ A_0, with y ∈ A_1 and Ā_1 ⊆ A_0. Since A_0 is a ball, we have π_{N−1}(A_0) = 0 and, by Theorem 12.2, π_{N−1}^def(p^{−1}(A_0)) = 0 as well. Pick a definable continuous map f : S^{N−1} → p^{−1}(A_0 \ A_1) which is not definably null-homotopic in p^{−1}(A_0 \ A_1); this is possible since, by Theorem 12.2, π_{N−1}^def(p^{−1}(A_0 \ A_1)) ≅ π_{N−1}(A_0 \ A_1) ≠ 0. In particular [f] = 0 when seen as an element of π_{N−1}(p^{−1}(A_0))^def. This is equivalent to saying that f can be extended to a definable map F : D → p^{−1}(A_0), where D = S^{N−1} × I and F is a definable homotopy (relative to the base point) between f and a constant map.
Notice that dim(F(D)) ≤ dim(D) = N. Now assume for a contradiction that N < dim(X). Then dim(F(D)) < dim(X), and therefore F(D) has empty interior in X. By topological compact domination, (p ∘ F)(D) has empty interior in X/E, so in particular there is some y′ ∈ A_1 such that y′ ∉ (p ∘ F)(D). It follows that the image of F is disjoint from p^{−1}(y′), namely F takes values in p^{−1}(A_0) \ p^{−1}(y′) = p^{−1}(A_0 \ {y′}) and witnesses the fact that f is null-homotopic when seen as a map into p^{−1}(A_0 \ {y′}).
We can now reach a contradiction as follows. Since A_0 \ A_1 is a deformation retract of A_0 \ {y′}, the inclusion induces an isomorphism π_{N−1}(A_0 \ A_1) ≅ π_{N−1}(A_0 \ {y′}). By the functoriality part in Theorem 12.2, there is an induced isomorphism π_{N−1}^def(p^{−1}(A_0 \ A_1)) ≅ π_{N−1}^def(p^{−1}(A_0 \ {y′})). Moreover, this isomorphism sends the homotopy class of f to the homotopy class of f itself, but seen as a map with a different codomain. This is absurd, since f was not null-homotopic as a map to p^{−1}(A_0 \ A_1), while we have shown that it is null-homotopic as a map to p^{−1}(A_0 \ {y′}).
As a corollary we obtain the following result, originally proved in [EO04].

Corollary 13.4. Let G be a definably compact abelian group of dimension n. Then π_1^def(G) ≅ Z^n and, for every k, the k-torsion subgroup G[k] is isomorphic to (Z/kZ)^n.

Proof. By [BOPP05], G/G^00 is a compact abelian connected Lie group and by the previous result its dimension is n. It follows that G/G^00 is isomorphic to an n-dimensional torus, so π_1(G/G^00) ≅ Z^n and, by Theorem 12.2, π_1^def(G) ≅ Z^n as well.
To determine the k-torsion two approaches are possible. The first is to argue as in [EO04], namely to observe that G[k] ≅ π_1^def(G)/kπ_1^def(G) and π_1^def(G) ≅ Z^n. Alternatively we can use the fact that G^00 is divisible [BOPP05] and torsion-free [HPP08], so G and G/G^00 have isomorphic torsion subgroups. Since G/G^00 is a torus of dimension n, its torsion is known and we obtain the desired result.
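Either route ends with the same computation: writing the n-dimensional torus as T^n = R^n/Z^n, its k-torsion is
$$T^n[k] = \tfrac{1}{k}\mathbb{Z}^n/\mathbb{Z}^n \cong \mathbb{Z}^n/k\mathbb{Z}^n \cong (\mathbb{Z}/k\mathbb{Z})^n,$$
so G[k] ≅ (Z/kZ)^n, a group with k^n elements.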
Notice that in [EO04] both the isomorphism π_1^def(G) ≅ Z^n and the determination of the k-torsion of G are proved directly, without using G/G^00, while our argument is a reduction to the case of the classical tori.
A vision to strengthen resources and capacity of the Palestinian health research system: a qualitative assessment
1. Swiss Tropical and Public Health Institute, Socinstrasse, Basel, Switzerland (Correspondence to: M. AlKhaldi: moh.khaldi83@gmail.com). 2. University of Basel, Petersplatz, Basel, Switzerland. 3. School of Public Health, Al-Quds University, Jerusalem, Palestine. 4. Cardiovascular Institute, Glasgow University, Glasgow, United Kingdom of Great Britain and Northern Ireland. 5. Faculty of Medicine and Health Sciences, Najah National University, Najah University Hospital, Nablus, Palestine. 6. School of Clinical Sciences, Bristol University, Bristol, United Kingdom. 7. Institute for Global Health, University College London (UCL), London, United Kingdom. 8. Ministry of Health, Gaza, Palestine. 9. Faculty of Nursing, The Islamic University of Gaza, Gaza, Palestine.
Introduction
Health Research System Resources and Infrastructure Capacity (HRSRIC) is recognized as a central functional pillar of the HRS. Strengthening of HRSRIC contributes to addressing global health challenges and improving health outcomes in low- and middle-income countries. HRSRIC is a complex and context-sensitive issue that requires a combination of different analysis approaches applied at individual, institutional and societal levels (1). HRSRIC therefore consists of 2 components: (1) creating and sustaining human and physical resources and infrastructural capacity; and (2) securing research funds. Both components are necessary to conduct, absorb and utilize HR. This is emphasized by the World Health Organization (WHO), which states that mobilizing and sustaining sufficient resources (2) is important to generate high-quality knowledge to support evidence-informed decision-making (3). HRSRIC should be considered in any attempt to analyse the HRS, and this is clearly and fundamentally featured in conceptual HRS frameworks that help in better system understanding through a wide-range analysis approach (4-6), as other similar studies have found (7-9). These frameworks portray the HRS fabric, including HRSRIC, which forms a solid base and an imperative priority for any HR advancement.
Mobilizing and equipping the HRS with all resources is an ongoing process of empowering individuals, organizations and nations. Therefore, the WHO, the Council on Health Research for Development, the Global Forum on Health Research, and other agencies such as the World Bank have explicitly and unanimously underlined the HR capacities (10,11). However, globally, numerous studies have revealed that there is an unequal distribution of resources for higher education and research. Moreover, many low- and middle-income countries have difficulties in building up their HR capacity to support effective national HRSs for better decision-making (12,13). Some of these difficulties are: lack of qualified human resources and researchers, lack of research funding, and lack of infrastructural capacity (14). Unfortunately, HRSRIC remains one of the world's unmet challenges in managing HRSs (15-17), where the allocation is < 0.5% of national health budgets for HR (18) in the context of a 10/90 gap (19).
In response to this situation, a Health Research Capacity Strengthening (HRCS) strategy has recently been implemented worldwide to improve the ability of these countries to tackle the persistent and disproportionate burdens of disease. The strategy has gained substantial investment from donors; hence, they are increasingly interested in evaluating the impact of their investments on HR and HS (20). The World Bank defines HS as "the combination of resources, organisation, financing, and management that culminate in the delivery of health services to the population" (worldbank.org/curated/en/102281468140385647/Healthy-Development-the-World-Bank-strategy-for-health-nutrition-populationresults). Embracing the HRCS scope in this paper is a realistic guide with the adopted frameworks. The terms capacity strengthening and capacity building are often used interchangeably; the first refers to establishing a research infrastructure, while the second precisely denotes enhancing a pre-existing infrastructure (20).
Regionally, HR across the Middle East and North Africa region faces critical deficits, most notably in governance, resources, and the capacity for knowledge production and application (4,21). Similar deficits exist in Palestine, with insufficient understanding and conceptualization of the reality of HR potential. In fact, in Palestine (an external-aid-dependent country), donors play an inadequate role in supporting the capacity of research institutions (22-24). This necessitates assessing and understanding local resources and international aid allocated towards HR capacity to ensure effective utilization of available funds (25). Addressing the aforementioned critical challenges and knowledge gap was therefore one of the driving factors for conducting this study. Furthermore, this is the first study to examine HRSRIC. The study is crucial in that it seeks to depict a clear and comprehensive picture to assist policy-makers in building a well-infrastructured, resourced and capacitated national HRS in a state under construction such as Palestine. The overall aim was to understand HRSRIC constituents to inform health policy-makers with evidence and insights towards a resourceful and enabled HRS. To achieve the overall aim, we tried to assess the actual status, gaps and shortfalls, and to identify opportunities for improvement, allocation and optimization of the resources and capacity components of the HRS in Palestine.
Study design
This study is part of a comprehensive system analysis. The design, methods and instruments used were similar to those of relevant local studies that dealt with other components of the HRS and were carried out in Palestine. Identical participants, 104 key informants, were purposively selected from 3 sectors in the health field in Palestine, whose perceptions were investigated: government, academia, and local and international nongovernmental organizations (NGOs). The approaches to collecting, managing and analyzing the data, through 52 in-depth interviews and 6 focus group discussions (FGDs), were also those adopted and used in previous similar studies in Palestine (7-9).
Results
The overall responses were obtained from 104 experts who were involved in the HRS and aware of the system resources and capacity. The responses covered findings in 3 key areas. Apart from the sociodemographic characteristics of the participants, which were previously presented in other relevant published (7-9) and in-press studies, the 2 other areas were: (1) the overall existing situation, limiting and facilitating factors of human resources, infrastructure, and facilities of the HRS; and (2) the situation of HRS financing, its sources, gaps, and the best solutions for optimization. HRS human resources and infrastructure and HR financing (HRF) were the 2 main findings on which the current study focused. Table 1 presents the findings about HRSRIC, classified into 3 themes: (1) overall landscape of the HRSRIC; (2) obstacles related to HRSRIC; and (3) perceptions of how to improve the resources and capacity.
HRSRIC in Palestine
For the first theme, experts described the status of HRSRIC as experiencing a noticeable shortage. However, some experts pointed to plenty of qualified human resources, particularly in academia, but highlighted the fact that these were untapped and, as many experts alleged, not adequately trained. Various academics revealed that the Ministry of Health faced a chronic scarcity of essential medical supplies, that academia suffers from acute financial crises, and that the lack of most resources is due to the absolute control and restrictions imposed by the occupation. All responses about HR resources were grouped into 2 descriptive categories. The first category was the most frequent and represented the vast majority; its descriptive remarks ranged over "severe lack", "very weak", "limited", "scarce", and "inadequate". The other responses, which formed the second category, comprised: "resources exist", "good", and "good but unsophisticated and insufficient". Academics who participated in the FGDs referred to the poor performance of HR. They admitted to the availability of resources and good capacity, but managing HRSRIC was said to be a central difficulty. Government experts recognized the lack of research budgets and called for 5% of the central health budget to be allocated to HR. Conversely, NGO experts alleged that the national health plan 2011-2013 allocated 1% to HR, but other experts from the NGO sector argued that this percentage was not translated on the ground. The second theme reflected the main obstacles facing HRSRIC and was mainly correlated with the absence of a regulatory framework. Mismanagement of resources, weak strategic leadership, duplication and individuality in HR efforts, brain drain, and the insufficient experience and skills of current human resources were common hurdles reported by experts. Others pointed to further factors such as the lack of sustainable national funds, political turmoil, time constraints, and the lack of investment plans for infrastructure innovation and technological development in all sectors.
The third theme presented perceptions of how to tackle these hurdles; the majority agreed on the centrality of having political support to initiate a strategic dialogue to build a national HR body. Participants recommended that this body should be in charge of framing a development strategy and policy with emphasis on: (1) securing adequate and fixed budgets, stimulating local support and investing donor funds appropriately to strengthen HR infrastructure; (2) advancing the capacities for strategic planning and optimal resources management; (3) fostering partnerships, fellowships, exchange programmes, a learning-institution approach and capacity-building programmes, whether at the local or international level, to evolve the institutional and national HR resources and capacities; and (4) improving approaches to research prioritization exercises, integration, intra-, inter- and transdisciplinarity, and networking for better identification, allocation and utilization of resources and capacity.

HRF

Experts reflected that the hands of the Palestinian government and institutions were tied in spending on HR. This was emphasized through their remarks that there was "no specific HR budget and allocation", that "HRF is insufficient, scattered, unsustainable, and project-based", that "HR is not a priority and suffers underinvestment", and that funding is "external, conditioned" and "a major challenge". With regard to the sources of this funding, the experts overwhelmingly agreed on the 2 main sources of HRF: (1) mainly external resources and donations through international organizations; and (2) individual resources, meaning that researchers finance their research at their personal expense.
The most important gaps hindering appropriate and sustainable HRF concerned the following 3 dimensions. The first was associated with the low official interest in HR, the absence of regulatory frameworks and of financing and investment strategies, and the fact that less important sectors were allocated greater funding. The second was notably related to bureaucratic procedures for financing and the conditions of the donors. The third dimension was the scarcity of national resources and the political conditions. For better HRF, it is essential to promote the importance of HR and develop national HR agendas, to identify and guide resources appropriately.
To summarize, the findings indicate that a political commitment is essential to ensure sustainable financial resources for HR, possibly through different channels; the majority of proposed solutions tackle the financial scarcity of HR, such as: (1) establishing a national fund under the joint patronage of the Ministry of Health and the Palestinian National Institute of Public Health, with proper resource allocation and management; and (2) stimulating domestic financing and optimizing international funding on the basis of a long-term strategic partnership to ensure the pillars of the HRS are firmly in place.
Discussion
This study dealt with the 2 most important pillars of the HRS (3), exploring the system resources and infrastructural capacity. As the HRS is a complex and diverse subject (26,27) under growing attention (2,28,29), the findings of this system analysis are expected to offer a worthwhile contribution to the understanding of both components in order to move forward towards a successful HRS based on a national strategy. To address the weakness of HRSRIC and HRF, this strategy should be politically adopted, a matter of consensus, and backed by international players, to ensure a well-resourced and capacitated HRS.
Generally, skilled human resources in Palestine are increasing in spite of the institutional challenges. Other literature indicates the contrasting evidence that research personnel are limited, with a lack of qualified experts (30), and the distribution of these resources is challenging because they are concentrated within academia and government. In addition, the competencies and freedom of movement of those personnel need to be improved, especially for those from the Gaza Strip. Palestinian researchers number nearly 2000 in full-time equivalents, equivalent to 564.1 researchers per 1 million inhabitants (31). The teaching faculty makes up 44% of the workers in Palestinian higher education institutions; this ratio is not in harmony with international standards (two thirds for teaching and the rest for administration and services). Within the region, Egypt has almost 600 researchers per million inhabitants, while Jordan is the highest with around 1900 (4). Overall, the number of researchers in the Eastern Mediterranean Region is relatively low (ranging from 29 to 1927 per million people) (32). However, the workforce can be seen as promising and improving compared with other HRSRIC components such as infrastructure, facilities and funding; these components remain structurally and functionally weak, not only in the HRS but also in the HCS alike, and strengthening them is often neglected (18,33,34).
Due to state fragility, national institutions, mainly government and academic, face severe financial crises that negatively affect performance (35). This not only hampers any HR development effort but also threatens the continuity of public services, particularly education and health. In view of the capacity gaps, building a robust HRS will be unattainable as long as we lack a governing framework, strategic thinking in resources and capacity allocation, and sustainable investment in HR (4,21,33). In recent years, a growing number of projects have supported Palestinian HR capacity through international and local parties, for instance, the European Union's Horizon 2020, academic partnerships, United Nations agencies, the Islamic Development Bank, governments (such as the Palestinian-French Joint Committee, the Palestinian-German Science Bridge, the Palestine-Quebec Science Bridge, and the Norwegian Institute of Public Health through Norway's Minister of Foreign Affairs), Qatar Charity, the Welfare Association, and the local private sector (including banks, pharmaceutical companies, and business people). To make a greater impact, such initiatives, projects and interventions need to be structured, strategic and focused within an inclusive national framework, and they ought to follow a long-term development vision.
Brain drain forms another intractable challenge in Palestine (30) due to a lack of incentives and discouraging environments. This issue is the focus of international debate in HR (2); regionally, Arab states lose 50% of their newly qualified physicians and 15% of their scientists annually (4). Therefore, building or strengthening HRSRIC is an urgent priority. The effort of retaining and bringing back intellectual capital and skilled human resources to the country, and of training and educating the current health workforce, should be applied at the individual and institutional levels as part of a comprehensive developmental strategy. To attain this target, 3 approaches should be followed: (1) the HRCS strategy (20); (2) the HRS operational and functional framework (2); and (3) ESSENCE, 7 basics for strengthening HR capacity (25). ESSENCE is an initiative that allows donors/funders to identify synergies, establish coherence and increase the value of resources and action for health research. Also, as the HRS is a system whose pillars affect each other, applying these approaches in tackling HR production and quality, HR transfer, and HR translation is also essential, in order to move synergistically towards the empowerment of the HRSRIC. These 3 operational components related to HRSRIC must be fundamentally embedded and well-functioning in any HRS (2).
The overall HRF is persistently scarce, as other comparable (7-9) and different (36-38) studies have affirmed. However, the limited and volatile individual and institutional financing efforts (38) could have an impact if structured and brought into a collective framework. Certainly, as a relevant study proved (7-9), HR is still not high on the government's priority list due to many conflicting concerns. The factors behind the lack of HRF, which agree with some other studies, are: (1) HR and evidence-based concepts are not well entrenched among decision-makers (39,40); and (2) weakness of advocacy and pressure campaigns to initiate a serious movement towards strengthening the HRS. Even with the donors' limited role, Palestinian HR primarily depends on unsystematic external and individual funding, with a clear lack of domestic public funding. In contrast, another study revealed that public investment is the main source in the region's countries, that HRF is among the lowest globally, and that the WHO Regional Office is a key body offering HRF (4). There are other funding gaps concerning the donors' conditions, influence and procedural difficulty (39,41) and the scarcity of national resources due to the political conditions. For sustainable HRF, HR should receive the commitment of a fixed budget, at least 1% of the national health expenditure (2), along with a national integrated and pooled fund under government stewardship financed by contributions from Palestinian and non-Palestinian entities (4,18).
It is worth mentioning that the study takes into account the impact of the current reality on the development of HRSRIC. It is important to shed light on the role and impact of the political situation on HR in relation to limiting or facilitating the strengthening of human and financial resources and infrastructural potential. In Palestine, as an exceptional case, there are 2 pathways to understanding the current political scene. First, the continuation of the Israeli occupation undermines any national development effort by restricting the movement of individuals, supplies, goods and materials, by the demolition of properties, raids, and the seizure of land and natural resources. Second, there are the consequences of the ongoing intra-Palestinian political division among the major political factions in Gaza and the West Bank. These consequences are the multiplicity of administrations, mistrust, conflicting visions, duplication of agendas, budgetary deficits, and disturbances throughout the public sector such as salary reductions, overstaffing with low productivity, and compulsory collective retirement. Therefore, addressing these 2 factors, as part of the efforts to advance the HRS in general, and HR resources and infrastructure in particular, is important and inevitable. Accordingly, the Palestinians, with a considerable role for the international institutions, are invited to a national workshop that seriously examines the opportunities for eliminating these obstacles and leads to launching efforts to enable and strengthen the resources and facilities of the HRS in Palestine.
We made some proposals that could not be addressed in this study and may become research ideas in the future. Among the most important of these ideas, also considered at the outset, is that a sectorial and more empirical national HR capacity assessment may be useful in determining HRSRIC precisely, such as the assets, resources and facilities at the institutional, sectoral and national levels. Such an assessment deserves to be implemented using qualitative and quantitative measurements. Once the HRS is structured, a national comprehensive system analysis is required to investigate the input, process and output dimensions.
The study limitations can be summarized as follows: (1) the knowledge gap in relevant local and regional literature and reports on the subject; (2) time constraints in mapping the definite existing capacities across the sectors, as well as in targeting more participants and additional relevant institutions; (3) difficulties in gathering quantitative data on HR stakeholders and capacities in Palestine due to the lack of data availability, quality, organization and accessibility; (4) field restrictions on the freedom of movement of the research team as a result of the closure and security checkpoints; and (5) environmental and political fluctuations and institutional changes that may escalate or reduce the role of the stakeholders on the one hand, and the funding flow to the health sector in general and to HR activities in particular on the other.
Conclusion
This system analysis is important not only to Palestine but also to other countries in the region, in order to guide HRSRIC strengthening and advancement endeavours meaningfully. The overall status of HRSRIC in Palestine is insufficient and weak, and major challenges persist while the pace of strengthening efforts is steady. Inadequate HR capacity in infrastructure, facilities, supplies and logistics is not addressed strategically. This applies to dozens of projects dedicated to expanding and boosting HRSRIC that have not been implemented through a national strategic approach. This scarcity of resources and loss of capacities is affected politically by perpetuating factors such as the Israeli occupation and the intra-Palestinian divisions. A strategy for HRSRIC is crucially required and should be adopted. Human resources are considered a promising side of HR compared with some other countries in the region. Thus, there is a necessity for investment in empowering knowledge and competencies on HR subjects, enhancing an effective incentive system, and providing the required health facilities with a supportive environment to face the rising brain drain. Furthermore, the Palestinians' loss of control over their resources and politicized foreign aid are also contributing to the misallocation and scarcity of resources.
In spite of the scarcity in the region, HR in Palestine is often funded by external donors and by individual and institutional sources. This funding is still scarce, with a considerable lack of government funding. HR is almost never itemized in the successive budgets, in light of the lack of regional HRF, which is another major challenge characterized by scarcity, unsustainability and individuality. The reasons behind the scarcity of financial resources include donor conditions and procedures, allocation malpractices, and the prevailing political conditions. Therefore, a plan to establish a national HR fund with a sound and adequate budget, perhaps by allocating 1% of the total health budget and reaching 5% within 5 years, as well as ensuring diverse financing sources and a collective pooled contribution, may be a viable solution to explore.
Thus, in light of these compelling challenges in the Palestinian context, the framing of an agreed strengthening strategy for HRSRIC remains a national strategic demand. The strategy needs to be framed in the context of institutionalized national governance for HR in Palestine. The following aspects are essential foundations that should be built into the strategy: promotional and professional incentives, educational capacity-building programmes, infrastructure investment and facilities expansion, sustainable budgets and diverse funds, and local and international partnership and cooperation. Eventually, the HRSRIC strategy can be equated to the Palestinians' national struggle to build the pillars of the state institutions.
Acknowledgement
This study was part of a complete PhD research project through a cooperation agreement between the Swiss Tropical and Public Health Institute in Switzerland and Najah National University in Palestine. The University contributed to forming a research team, who supported and assisted in different fieldwork activities. The Swiss Federation, through the Swiss Government Excellence Scholarships for Foreign Scholars, is also acknowledged for providing the stipend of the principal investigator. Finally, special thanks to Ms. Doris Tranter, a freelance editor, Mr. Lukas Meier from the Swiss Tropical and Public Health Institute, Dr. Yousef Abu Safia, former Minister of Environment in Palestine, and Mr. Hamza Meghari, who contributed to the revision of the study manuscript.
Funding: This work was jointly sponsored by the Swiss Federation through the Swiss Government Excellence Scholarships for Foreign Scholars and the Swiss Tropical and Public Health Institute. The second sponsor had a role in scientific and technical consultation and guidance.
Competing interests: None declared.
Downside Risk analysis applied to the Hedge Fund universe
Hedge Funds are considered one of the portfolio management sectors showing the fastest growth over the past decade. Optimal Hedge Fund management requires appropriate risk metrics. The classic CAPM theory and its Sharpe ratio fail to capture some crucial aspects due to the strongly non-Gaussian character of Hedge Fund statistics. A possible way out of this problem, while keeping the CAPM simplicity, is the so-called Downside Risk analysis. One important benefit lies in distinguishing between good and bad returns, that is: returns greater or lower than the investor's goal. We revisit the most popular Downside Risk indicators and provide new analytical results on them. We compute these measures on the Credit Suisse/Tremont Investable Hedge Fund Index Data, with the Gaussian case as a benchmark. In this way an unusual transversal reading of the existing Downside Risk measures is provided.
Introduction
Hedge Funds are considered one of the portfolio management sectors that has shown the fastest growth over the past few years [1,2]. These funds have been in existence for several decades, but they did not become popular until the 1990s. It is said that Hedge Funds are capable of making huge profits but sometimes also suffer spectacular losses. Due to their, at least apparently, high and unpredictable fluctuations, it is necessary to keep the risks we take when trading Hedge Funds under rigorous control [3,4].
The Capital Asset Pricing Model (CAPM) is the classic method for quantifying the risk of a given portfolio [5,6]. Basically, the so-called Sharpe ratio [6] evaluates the quality of an asset by normalizing the asset's expected growth with its volatility, that is, its standard deviation. Thus, based on the premise that the expected asset growth must be large and the volatility low, a good Hedge Fund holds a high Sharpe ratio, and the better the Hedge Fund, the more attractive and advisable it is to invest in the fund. Typically, Hedge Fund managers begin to trade with a specific Hedge Fund only when the fund attains an annual Sharpe ratio greater than 1 [1]. It is considered that only if the Sharpe ratio crosses this threshold can the fund provide benefits after the removal of trading costs.
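In formulas (the risk-free rate r_f is left implicit in the text above; in practice it is sometimes simply set to zero):
$$\mathrm{SR} = \frac{E[R] - r_f}{\sigma(R)},$$
and, under the usual convention of independent monthly returns, the annual figure quoted above is obtained as
$$\mathrm{SR}_{\mathrm{annual}} \simeq \sqrt{12}\,\mathrm{SR}_{\mathrm{monthly}}.$$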
However, the CAPM theory rests on the hypothesis that the underlying asset is Gaussian distributed. In this case, the investor only needs to know the mean and the variance of the return. As has been observed [7,8,9,10], this appears to be an unrealistic scenario in financial markets, with important implications for risk analysis within the mean-variance framework (see for instance [11]). The situation is much more dramatic in the Hedge Fund universe, since these funds are clearly non-Gaussian, having wild fluctuations and strong asymmetries in price changes. These funds are characterized by their high sensitivity to market crashes and by trading with products, such as derivatives, that show a pronounced skewness in their price distribution. For instance, a very well-known commodity trading adviser (CTA) Hedge Fund had a poor Sharpe ratio (0.19) but, despite this mediocre mark, its earnings during 2000 rose beyond 40% [1]. Conversely, after 31 months of trading, the famous fund Long-Term Capital Management (LTCM) had an appealing ratio (4.35) and nothing seemed to forecast its subsequent debacle [1].
These two examples are not exceptional cases, and they make us reexamine the validity of the CAPM theory. There is evidence that the CAPM method is not complete enough for evaluating the risks involved in Hedge Fund management.
Our aim here is to explore some alternatives in the context of the so-called Downside Risk analysis [1,11,12]. The main contributions of the current paper are the following. We introduce some of the most popular Downside Risk indicators scattered throughout the literature. We also provide new analytical results related to these risk measures and finally make a large set of empirical measurements on the Credit Suisse/Tremont Investable Hedge Fund Index Data. The application of all these Downside Risk indicators to the same data set has thus allowed us to revisit them in a transversal way, which is quite unusual in the literature. We have finally found that representing the indicators in terms of a modified Sharpe Ratio (see below) is more appropriate than doing so as a function of the investor's goal. This replacement, with respect to what is typically done in the literature, makes a better data collapse possible and in turn makes it easy to compare the performance of different Hedge Fund trading styles.
The paper is structured as follows. Section 2 briefly describes the data set used for the Downside Risk indicators. The following section presents the background of the Downside Risk approach. Afterwards, we present the Adjusted Sharpe Ratio in Section 4 and the Sortino ratios in Section 5, while the Gain-Loss Ratio is left to Section 6. The equivalence between the Omega function and the Gain-Loss Ratio under specific circumstances is given in Section 7, and a discussion about the error behind the risk measures we use is left to Section 8. Finally, Section 9 provides some conclusions.
The Hedge Fund data set
There are several third-party agencies that collect and distribute data on Hedge Fund performance [1]. For this paper, we have used the data supplied by the Credit Suisse/Tremont (CST) Index [13]. This company is a joint venture between Credit Suisse and Tremont Advisers Inc., which combines their resources to set several benchmarks for a large collection of Hedge Fund strategies. They provide a master index and a series of sub-indices that represent the historical returns of different Hedge Fund trading styles [1,13].
The weight of each fund in an index is given by the relative size of its assets under management. This makes the CST Index the first asset-weighted index in the industry. Asset-weighting, as opposed to equal-weighting, provides a more accurate depiction of an investment in the asset class. In addition, CST has a web site [13] that provides up-to-date and historical data and allows the user to select and download data. The information available is public. The selection of funds for the CST indices is done every quarter. The process starts by considering all 2,600 US and offshore Hedge Funds contained in the TASS database, with the exception of funds of funds and managed accounts.
In the present case, we have analyzed the monthly data for these indices during the period between 31 December 1993 and 31 January 2006. This period corresponds to 145 data points for each Hedge Fund style. This is not a very large data set, but it is enough to perform a reasonably fair and reliable statistical estimation of the quantities we deal with here.
In Fig. 1 we show the index dynamics, all normalized to 100 at the beginning of 1994. We also show the monthly logarithmic return change R(t) = ln[S(t + ∆)/S(t)], where S(t) is the current price index and ∆ is fixed and equal to one month. In what follows the return change is always over one month, and this is the reason why we avoid specifying the value of ∆. Table 1 shows how the mean-variance framework fails to explain the statistics of the monthly returns of the majority of Hedge Fund styles. The kurtosis can rise to values larger than 20, while the skewness is usually negative and may take absolute values larger than 3.
The Downside Risk Metrics: Main definitions
For the reasons mentioned above, the so-called Downside Risk analysis has been gaining wide acceptance in recent years [3,4,11]. One important benefit of the Downside Risk lies in distinguishing between good and bad returns: good returns are greater than the goal, while bad returns are the ones below the goal. The Downside Risk measures incorporate an investor's goal explicitly and define risk as not achieving the goal. In this way, the further below the goal, the greater the risk. And, on the opposite side, returns over the goal do not imply any risk. Within this approach, a portfolio's riskiness may be perceived differently by investors with different goals. This is more realistic than the CAPM theory approach, where all investors have the same risk perception.
Typically, the target return is set to be the minimum acceptable return for considering the trading operation profitable. The statistical risk would then be associated with the unsuccessful attempts at obtaining a return higher than the target return. However, the target return could also be related to the maximum loss that a Hedge Fund can afford, measuring risk in a way somewhat similar to the Value at Risk measures [14]. We will here cover a broad window, ranging the annual target return values from -30% to +30%.

(Table 1 caption: Main statistical values for the whole set of Hedge Fund style indices during the period between 31 December 1993 and 31 January 2006. We show the first moment, the standard deviation, the kurtosis and the skewness of the monthly returns. Most of the indices have a kurtosis larger than one and some of them also have a non-negligible negative skewness. The mean-variance framework might fit well only for very few of them, namely the "Managed Futures" and the "Equity Market Neutral" styles.)
We consider the price of the asset S at time t and its initial price S₀ at time t = 0. Let us thus define the Excess Downside as

D(T) ≡ max(T − R, 0),    (1)

where R ≡ ln(S/S₀) is the subsequent monthly return change and T is the target return. Observe that the mathematical expression for the Excess Downside is identical to the one for the payoff of a European put option [14].
We can indeed study the first and second moments of the Excess Downside D(T). Recall that the first moment is defined as

E[D(T)] = ∫_{−∞}^{T} (T − R) p(R) dR,    (2)

while the second moment reads

E[D(T)²] = ∫_{−∞}^{T} (T − R)² p(R) dR,    (3)

where p(R) is the probability density function (pdf) of the return R. The square root of the second moment (3) is also called the Excess Downside Deviation (EDD) and will be denoted by d(T).
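As a minimal illustration of Eqs. (2)-(3), the following Python sketch (ours, not from the paper; the synthetic return series stands in for a CST index) estimates the first moment of D(T) and the EDD from a sample of monthly returns:

```python
# Empirical estimation of the Excess Downside moments from monthly log-returns.
import numpy as np

def excess_downside_moments(returns, T):
    """First moment of D(T), Eq. (2), and Excess Downside Deviation, Eq. (3)."""
    D = np.maximum(T - returns, 0.0)    # Excess Downside, Eq. (1)
    first_moment = D.mean()             # Eq. (2)
    edd = np.sqrt((D ** 2).mean())      # square root of Eq. (3)
    return first_moment, edd

# Monthly target for a 5% annualized target return: T = 0.05 / 12.
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.007, scale=0.012, size=145)  # illustrative stand-in
d1, edd = excess_downside_moments(returns, T=0.05 / 12)
print(f"E[D(T)] = {d1:.5f}, EDD = {edd:.5f}")
```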
In what follows we will also need

µ ≡ E[R] = ∫_{−∞}^{∞} R p(R) dR,    (4)

which is the first moment of the return, and

σ² ≡ Var[R] = E[R²] − µ²,    (5)

which is the return variance. Both are directly computed from historical data. Observe that the ratio d/σ for the Hedge Fund data indices may differ significantly from the Gaussian case, especially for T's lower than µ (λ > 0). We discuss the results of this plot in more depth in the next section.
However, before proceeding we should note that the CST Index results have been plotted with error bars on the y-axis. The bars represent the standard error of the measurements. Figure 2 plots an algebraic combination of averages, and we apply the standard rules of error propagation to each standard error measurement. We have done the same for the rest of the trading style indices with very similar conclusions, but in order to lighten the plot we do not show these error bars. Nor do we show the standard error on the x-axis: these error bars do not change for different values of λ and they typically affect the second significant digit of λ. All these comments are also valid for the rest of the figures and empirical results in the paper. We will go a bit further on this topic in Section 8.
The Adjusted Sharpe Ratio
A first possible extension of the method aims to keep the CAPM approach but with a rough correction based on the empirical EDD, computed as defined in Eq. (3). This is probably the simplest sophistication of the mean-variance framework. Its interest is based on the fact that it aims to replace the volatility σ by a more appropriate risk measure such as d(T). We recall that the CAPM measures the risk of a certain asset with the well-known Sharpe Ratio,

SR ≡ (µ − r)/σ,    (6)

where r is the risk-free interest rate, µ = E[R] and σ² = Var[R]. Johnson et al. [15] propose an Adjusted Sharpe Ratio as "the Sharpe Ratio that would be implied by the fund's observed Downside Deviation if returns were distributed normally".
Let us study this risk measure further, both analytically and empirically. We first assume that the returns are Gaussian:

p(R) = (1/√(2πσ²)) exp[−(R − µ)²/(2σ²)].    (7)

We also define a modified Sharpe Ratio with the quotient

λ ≡ (µ − T)/σ.    (8)

This variable λ is very important not only in this rough correction of the Sharpe Ratio but also in the analysis we will perform for the other alternative risk indicators shown herein. The effort to represent the risk measures in terms of λ is one of the new contributions of this paper. The forthcoming measures in the next sections can all be represented exclusively in terms of λ if the underlying is Gaussian distributed. Nonetheless, it will also be helpful to keep on working with λ even for the empirical data set, where the Gaussian hypothesis is weakly sustained. Therefore, from the Excess Downside Deviation (3) and assuming Gaussian returns we can write

d(T)/σ = [(1 + λ²) N(−λ) − λ N′(λ)]^{1/2},    (9)

where N(·) is the Gaussian probability function, while its derivative is denoted by a prime and reads N′(λ) = exp(−λ²/2)/√(2π). Figure 2 plots the d(T)/σ ratio in terms of the modified Sharpe Ratio (8) for a broad range of target returns T, from -30% to 30% annualized rates. We represent the data in terms of λ by taking the empirical averages µ and σ (cf. Eqs. (4)-(5)). When λ < 0 (T > µ), we can observe that the empirical d(T)/σ follows the Gaussian curve given by Eq. (9) reasonably well. However, when λ > 0 (T < µ), the results may differ significantly from the Gaussian curve. These more pronounced differences for T < µ can be attributed to the effect of the negative skewness. In this case, we are mainly sampling the left wing of the distribution, where returns are smaller than the average. The more negative the skewness, the larger the d(T)/σ ratio. For instance, the "Event Driven" index has a skewness more negative than -3 and it describes one of the curves with the highest values of the d(T)/σ ratio. In contrast, the "Equity Market Neutral" index remains even below the Gaussian curve, mainly because of its slightly positive skewness.
We can finally numerically invert the ratio d(T)/σ through Eq. (9). We will thus obtain the so-called Adjusted Sharpe Ratio, that is, the λ that corresponds to the observed Excess Downside Deviation ratio in the event that returns are Gaussian, thus satisfying the equality (9). We show the resulting empirical results in Table 2 for the special case T = 5%. The Adjusted Sharpe Ratio may also differ significantly from the Sharpe Ratio, lying far outside its standard error region.

(Figure caption fragment: compare with the Gaussian result of Eq. (14) and observe that the historical data results are not very far from the Gaussian hypothesis. The error bars in the CST Index correspond to the standard error of each λ measurement.)
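As an illustration of this inversion, here is a minimal Python sketch (ours, not from the paper; the bracketing interval for the root finder is an illustrative choice) that evaluates Eq. (9) and inverts it numerically; the ratio is strictly decreasing in λ, so the root is unique:

```python
# Gaussian EDD ratio, Eq. (9), and the Adjusted Sharpe Ratio as its inverse.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def gaussian_edd_ratio(lam):
    """d(T)/sigma for Gaussian returns, Eq. (9)."""
    return np.sqrt((1 + lam ** 2) * norm.cdf(-lam) - lam * norm.pdf(lam))

def adjusted_sharpe(empirical_ratio):
    """Invert Eq. (9): find the lambda a Gaussian fund would need
    to produce the observed EDD/sigma ratio."""
    return brentq(lambda lam: gaussian_edd_ratio(lam) - empirical_ratio,
                  -10.0, 10.0)

print(adjusted_sharpe(gaussian_edd_ratio(0.5)))  # recovers 0.5
```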
The Sortino and Upside Potential ratios
Sortino et al. propose in their works [16,17,18,19] more drastic modifications of the Sharpe Ratio. In contrast with the Adjusted Sharpe Ratio, the pair of indicators we introduce here are self-consistent measures that do not need to assume that the underlying asset is Gaussian. All the averages involved in these indicators are computed directly from the historical data.
However, these two new ratios have an appearance similar to that of the Sharpe Ratio.
In both cases, the risk-free rate is replaced by the target return, and the volatility by the Excess Downside Deviation. We also claim here that the modified Sharpe Ratio λ will still be useful to represent all Hedge Fund styles in a single plot.
The first tentative measure is the so-called Sortino Ratio (SoR), defined as follows:

SoR(T) ≡ (µ − T)/d(T),    (11)

where d(T) is the EDD given by Eq. (3). There is a sophistication, made by Sortino himself, whose computation also replaces the return average µ by the excess return average. This new measure is concerned with the average of the return above the target return. Thus, the Upside Potential Ratio (UPR) is defined as

UPR(T) ≡ µ⁺(T)/d(T),    (12)

where

µ⁺(T) ≡ E[max(R − T, 0)] = ∫_{T}^{∞} (R − T) p(R) dR.    (13)

Both measures behave like the Sharpe Ratio: the greater the ratio, the better the asset. Let us calculate the quotients given by Eqs. (11) and (12) in the case that returns were Gaussian; here comes our new contribution to the study of the SoR and UPR measures. We first need the Gaussian d(T), which is already given by Eq. (9). In such a case the Sortino Ratio reads

SoR(λ) = λ / [(1 + λ²) N(−λ) − λ N′(λ)]^{1/2},    (14)

while, again taking into account that under the Gaussian hypothesis the upside average (13) is

µ⁺(T) = σ [λ N(λ) + N′(λ)],    (15)

the Upside Potential Ratio reads

UPR(λ) = [λ N(λ) + N′(λ)] / [(1 + λ²) N(−λ) − λ N′(λ)]^{1/2}.    (16)

We note that our results for both risk measures have been expressed in terms of the modified Sharpe Ratio λ = (µ − T)/σ. And we claim here that this is still convenient even in the case where we do not have a Gaussian distribution for the returns.
Some special and limiting cases are: at λ = 0 one has SoR = 0 and UPR = N′(0)/√(N(0)) = 1/√π, while SoR → −1 and UPR → 0 as λ → −∞, and both ratios diverge as λ → +∞; the lower and upper bounds respectively correspond to the limiting cases T → ∞ and T → −∞.
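The following Python sketch (ours; the synthetic series is a hypothetical stand-in for a CST index) computes the empirical ratios of Eqs. (11)-(12) side by side with their Gaussian benchmarks, Eqs. (14) and (16):

```python
# Empirical Sortino and Upside Potential ratios vs. their Gaussian benchmarks.
import numpy as np
from scipy.stats import norm

def empirical_ratios(returns, T):
    edd = np.sqrt(np.mean(np.maximum(T - returns, 0.0) ** 2))
    sor = (returns.mean() - T) / edd                           # Eq. (11)
    upr = np.mean(np.maximum(returns - T, 0.0)) / edd          # Eq. (12)
    return sor, upr

def gaussian_ratios(lam):
    edd_over_sigma = np.sqrt((1 + lam ** 2) * norm.cdf(-lam)
                             - lam * norm.pdf(lam))            # Eq. (9)
    sor = lam / edd_over_sigma                                 # Eq. (14)
    upr = (lam * norm.cdf(lam) + norm.pdf(lam)) / edd_over_sigma  # Eq. (16)
    return sor, upr

rng = np.random.default_rng(2)
r = rng.normal(0.007, 0.012, 145)
lam = (r.mean() - 0.05 / 12) / r.std()
print(empirical_ratios(r, 0.05 / 12), gaussian_ratios(lam))
```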
These measures could be annualized, as is done with the Sharpe Ratio. Recall that the monthly Sharpe Ratio is annualized by multiplying the ratio by the factor √12. However, in principle this is not as easy in the present case, as has been thoroughly investigated in Ref. [17].
We represent the Sortino and Upside Potential ratios in Figs. 3 and 4. We plot there the Gaussian case as a benchmark, in terms of λ. The empirical data are also shown in terms of λ to make the results comparable. For the empirical data we have computed the risk measures for a broad range of target returns T, from -30% to 30% annualized rates. At first sight, we do not perceive many differences between the two plots. We may however say that the UPR is able to scatter the different trading styles in a slightly better way. For both risk measures, and on the linear scale, the differences with respect to the Gaussian curve become important only for positive λ (µ > T). However, the UPR risk measure is the only one that can also be represented on a logarithmic scale. The relative distances to the Gaussian curve are quite symmetric for negative and positive λ's. For this reason, we consider the UPR more powerful. In addition, from the error bars we can also state that the replacement of the mean by the upside average does not bring much more noise to the UPR risk measure in comparison with the SoR risk measure. In general, we can also say that the large error bars do not allow reliable conclusions for positive and moderate values of λ.
For both risk measures, and over a broad range of λ, the index style closest to the Gaussian curve corresponds to the "Equity Market Neutral". The set of trading styles could be sorted into several groups, without observing big discrepancies in that classification depending on which risk measure we take. The resulting groups are also consistent with the ones we can identify in Fig. 2. Particularly, for large λ (negative T) we can easily see that the risk in most of the "Event Driven" indices and the "Fixed Income Arbitrage" index is comparable. The reason why the Gaussian curve lies beyond most of the indices' risk measures for positive λ should be found in the negative and non-negligible skewness of the data sets of these Hedge Fund indices.
The Gain-Loss Ratio
Bernardo and Ledoit [20] propose another risk measure called the Gain-Loss Ratio (GL). This measure is probably the most well-grounded among the existing alternatives to the CAPM theory. In contrast with the Sortino ratios, the GL has no magnitude comparable with the Sharpe Ratio.
In the simplest scenario, the attractiveness of an investment opportunity is measured by the Gain-Loss Ratio,

GL(T) ≡ µ⁺(T)/µ⁻(T),    (17)

which is the fraction between the averages of the positive and negative parts of the payoff after removing the trading costs included in the target return T.
The framework provides an alternative approach to "asset pricing in incomplete markets that bridges the gap between the two fundamental approaches in finance: model-based pricing and pricing by no arbitrage" [20]. The GL(T) ratio constitutes the basis of an alternative asset pricing framework. By limiting the maximum Gain-Loss Ratio, the admissible set of pricing kernels can be restricted, and it is also possible to constrain the set of prices that can be assigned to assets [20]. In other words, we admit that there are arbitrage opportunities, but limited to a certain range of prices. In the same way, Bernardo and Ledoit [20] state that the theoretical no-arbitrage assumption is related to the mathematical demand that the GL equals 1.
We can now explore this risk measure in a similar way as in the previous sections; this corresponds to our new contribution to this risk indicator. Following the notation used above, we have that

µ⁻(T) ≡ E[max(T − R, 0)] = ∫_{−∞}^{T} (T − R) p(R) dR    (18)

and

µ⁺(T) ≡ E[max(R − T, 0)] = ∫_{T}^{∞} (R − T) p(R) dR.    (19)

Note that from these definitions we can obtain the following expression:

µ⁺(T) − µ⁻(T) = µ − T = σλ,    (20)

where we also take into account the definition of λ given by Eq. (8).
We have already obtained the µ⁺(T) average in the case that the returns are Gaussian. From Eqs. (15) and (20), we thus have

µ⁺(T) = σ [λ N(λ) + N′(λ)]

and

µ⁻(T) = σ [N′(λ) − λ N(−λ)].    (21)

Therefore, the Gain-Loss Ratio reads

GL(λ) = [λ N(λ) + N′(λ)] / [N′(λ) − λ N(−λ)].    (22)

Note that no arbitrage corresponds to λ = 0, which means that the average µ equals the target return T. Also observe that the GL has no time units. This means that the annual GL has the same value as the monthly GL. This is a very interesting and powerful property that avoids any discussion about the way we derive the annualized risk indicator, as happens with the Sortino ratios and the Adjusted Sharpe Ratio.
The bounds of the ratio are

0 ≤ GL(λ) < ∞,    (23)

which respectively correspond to T → ∞ and T → −∞. The GL, at least in the Gaussian framework, is a non-decreasing function of λ whose fastest growth corresponds to the λ > 0 regime, that is, T < µ. Thus, this risk measure is very sensitive to small changes when λ > 0, while for negative λ the ratio does not provide much information. Figure 5 confirms this last statement. The same plot also depicts the empirical results for a broad range of annualized target returns between −30% and +30%. The most pronounced Gaussian behaviour, over the broadest domain of λ, again corresponds to the "Equity Market Neutral" strategy, although other styles such as the "Managed Futures" also follow the curve nicely. In contrast with the SoR and UPR risk measures, it is much more difficult to detect groups with similar values. Therefore, in this sense, the previous measures seem to be more appropriate than the Gain-Loss Ratio.
We can also observe that the Gaussian GL gives a good measure for moderate values of positive λ (0 < λ < 1) for an important number of indices. In fact, within this regime, there are only two trading styles below the Gaussian performance. This behaviour also appears to be in clear contrast with the previous risk indicators.
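A short Python sketch (ours; the input series is hypothetical) computes the empirical GL of Eq. (17) next to the Gaussian benchmark of Eq. (22); note that the Gaussian expression returns 1 at λ = 0, the no-arbitrage point:

```python
# Empirical and Gaussian Gain-Loss Ratio.
import numpy as np
from scipy.stats import norm

def gl_empirical(returns, T):
    gains = np.mean(np.maximum(returns - T, 0.0))   # mu_plus(T), Eq. (19)
    losses = np.mean(np.maximum(T - returns, 0.0))  # mu_minus(T), Eq. (18)
    return gains / losses                           # Eq. (17)

def gl_gaussian(lam):
    return ((lam * norm.cdf(lam) + norm.pdf(lam))
            / (norm.pdf(lam) - lam * norm.cdf(-lam)))  # Eq. (22)

print(gl_gaussian(0.0))  # 1.0: mu equal to the target return, no arbitrage
rng = np.random.default_rng(3)
print(gl_empirical(rng.normal(0.007, 0.012, 145), T=0.05 / 12))
```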
The Omega function
There exists another risk measure which may represent a different way of evaluating the Gain-Loss Ratio, although its authors do not mention this fact [21,22]. The GL ratio authors are mainly concerned with the benchmark risk-adjusted probability measure, while the authors of the so-called Omega function consider that their indicator can be viewed simply as another representation of the probability distribution of the underlying. This is certainly a different perspective, but we show here that the Omega function is equivalent to the GL under quite general conditions. As with the Gain-Loss Ratio, we do not look at the fundamentals of the economic theory that lies behind it but simply focus on the statistical properties of the downside averages we take.
Keating and Shadwick [21,22] propose the following Omega function measure:

Ω(T) ≡ I₂(T)/I₁(T),    (24)

where

I₁(T) ≡ ∫_{−∞}^{T} F(R) dR and I₂(T) ≡ ∫_{T}^{∞} [1 − F(R)] dR,

using that F(R) is the cumulative distribution function of the return R, i.e., F(R) = ∫_{−∞}^{R} p(R′) dR′. We now try to evaluate the expressions for I₁ and I₂. Firstly, integrating by parts, we have

I₁(T) = T F(T) − lim_{R→−∞} R F(R) − ∫_{−∞}^{T} R p(R) dR.    (25)

Before proceeding further, we show how the second summand is zero under some circumstances. We note that

lim_{R→−∞} R F(R) = lim_{R→−∞} R ∫_{−∞}^{R} p(R′) dR′.

The limit value will thus depend on the way the pdf p(R) decays to 0 as R → −∞. As long as p(R) decays faster than 1/R², this second summand can be neglected. This will be, for instance, the case of the Gaussian and the Laplace distributions, or even a power law with a tail index larger than 2. However, the next steps of these calculations would not be applicable to power-law distributions with a slower decay, that is, p(R) ∼ 1/|R|^α with 1 < α ≤ 2 as R → −∞. In this latter case, all the moments of the pdf are infinite.
We now come back to Eq. (25). Observe that it is also possible to rewrite the expression as

I₁(T) = ∫_{−∞}^{T} (T − R) p(R) dR,

and finally see (cf. Eq. (18)) that I₁(T) = µ⁻(T). Secondly, we can do the same with I₂. Under the condition that the pdf decays faster than 1/R² as R → ∞, similar calculations apply and lead us to state that I₂(T) = µ⁺(T) (cf. Eq. (19)). Therefore, according to the values derived for I₁ and I₂ and the definition given by Eq. (17), we find that the Omega function and the Gain-Loss Ratio coincide, since

Ω(T) = I₂(T)/I₁(T) = µ⁺(T)/µ⁻(T) = GL(T).

We must however insist that this is only true when the pdf asymptotically decays faster than 1/R² as |R| → ∞. In such a case, the Omega will have the same bounds and behavior as the GL, and the results of Section 6 can also be applied to the Omega function. In favour of the Ω indicator, we may say that its way of handling the empirical data is more reliable (less noisy), especially when we have a small number of points. We have left to Appendix A some other new results related to what Keating and Shadwick call the "Omega risk".
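The equivalence can also be checked numerically. The following Python sketch (ours; the parameter values and truncation limits of the integrals are illustrative choices) computes Ω from the cdf integrals of Eq. (24) and GL from Eq. (22) for a Gaussian pdf, which decays fast enough for the result above to hold:

```python
# Numerical check that Omega equals the Gain-Loss Ratio for a Gaussian pdf.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma, T = 0.007, 0.012, 0.004

# Integrals of the cdf, truncated where the integrands are negligible.
I1, _ = quad(lambda r: norm.cdf(r, mu, sigma), mu - 10 * sigma, T)
I2, _ = quad(lambda r: 1.0 - norm.cdf(r, mu, sigma), T, mu + 10 * sigma)
omega = I2 / I1

lam = (mu - T) / sigma
gl = ((lam * norm.cdf(lam) + norm.pdf(lam))
      / (norm.pdf(lam) - lam * norm.cdf(-lam)))  # Eq. (22)
print(omega, gl)  # the two values agree to integration accuracy
```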
Error analysis: A discussion
The error analysis of the Downside Risk averages is quite unexplored territory. We do not aim here to make a deep analysis of this topic, since it deserves a whole paper. We would however like to provide at least a few insights from the perspective of the Hedge Fund universe we have studied here.
Under the hypothesis that the underlying follows a Brownian motion (returns are Gaussian distributed), we can quantify the error of the upside average, the downside average and the Excess Downside Deviation. The easiest thing to do is to assign as error magnitude the standard deviation of those estimators. In such a case, one can obtain

E[max(R − T, 0)²] = σ² [(1 + λ²) N(λ) + λ N′(λ)]

and

E[max(T − R, 0)²] = σ² [(1 + λ²) N(−λ) − λ N′(λ)].

If we join these expressions with the upside and downside averages given by Eqs. (15) and (21), we can derive the standard deviations, Var[·] = E[·²] − E[·]², and subsequently the standard errors, SE = √(Var[·]/M), where M is the number of data points used for the estimation. The result of these operations is also given in Fig. 6. One should notice that the values of these functions are constrained between 0 and 1/√(M−1), and their behaviours are quite similar, being antisymmetric with respect to λ. We can observe that when we take M = 145, which corresponds to the length of our Hedge Fund data set, the relative error is always below 15%. For the EDD case, the calculation is a bit longer, but one can also obtain the fourth-order moment

E[max(T − R, 0)⁴] = σ⁴ [(3 + 6λ² + λ⁴) N(−λ) − (5λ + λ³) N′(λ)].

Following the same procedure as for the upside and downside averages, we can finally obtain its standard error. Its behaviour, on both linear and logarithmic scales, is provided in Fig. 7. We can observe that the relative error is again below 15% in almost the whole regime of λ where we have empirical observations, that is, −1.5 < λ < 1. In absolute value the error increases as λ becomes more negative, but its relative value might be quite high for λ > 1.
The moderate values of these relative errors certainly let us believe that the risk measures provided in the previous sections are reliable enough. We must however treat this assignment of error bars with real care, since these estimators are strongly biased, as one can see from the definitions in Section 3. The standard deviation solely provides the order of magnitude of the error. A more accurate analysis of the empirical probability density behind each Hedge Fund style would be required to get a more reliable estimation of the degree of confidence of our observations.
We can go a bit further in this discussion and also check the convergence of these estimators. We have simulated a Brownian realization of the monthly returns with the empirical mean and volatility of the CST Index shown in Table 1. Taking 20, 100 and 200 simulated time steps, we have computed the Excess Downside Deviation in the same way as in the previous sections. Figure 8 shows that the convergence to the theoretical curve is quite fast. With 100 steps we already have a reasonable estimation of the EDD, and with a larger data set what we mainly achieve is a reduction of the error bars of our estimation. We have followed the same procedure with the upside and downside averages, obtaining similar conclusions. Therefore, this experiment also seems to confirm that the results provided in the previous sections are quite reliable. At least under the Gaussian assumption, the convergence of our estimators is fairly well reached for the number of points empirically available.
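The following Python sketch (ours; the mean and volatility below are illustrative placeholders, not the CST Index values) reproduces the spirit of this convergence experiment by comparing the empirical EDD of simulated Gaussian samples with the theoretical value from Eq. (9):

```python
# Convergence of the empirical EDD towards the Gaussian value of Eq. (9).
import numpy as np
from scipy.stats import norm

mu, sigma, T = 0.007, 0.012, 0.05 / 12
lam = (mu - T) / sigma
edd_theory = sigma * np.sqrt((1 + lam ** 2) * norm.cdf(-lam)
                             - lam * norm.pdf(lam))

rng = np.random.default_rng(1)
for M in (20, 100, 200):
    samples = rng.normal(mu, sigma, size=(1000, M))  # 1000 independent runs
    edd_hat = np.sqrt(np.mean(np.maximum(T - samples, 0.0) ** 2, axis=1))
    print(M, edd_hat.mean(), edd_hat.std(), "theory:", edd_theory)
```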
Final remarks
Hedge Funds have enjoyed increasing levels of popularity, coupled with opacity and some myths [1,2]. We have followed this recent interest here by studying Hedge Funds for an academic purpose (see for instance Refs. [23,24] published in this journal). This is possible since data such as the Credit Suisse/Tremont Investable Hedge Fund Index are now easily available.
The strongly non-Gaussian character of financial markets has led to the consideration of risk measures alternative to the CAPM theory, in the context of Downside Risk [1,3,4,11]. These measures are able to distinguish, in a very simple manner, between good and bad returns compared with one's own personal target T. While the CAPM theory takes the average return growth and the return variance, the Downside Risk framework uses two other statistical measures which indeed keep folded some information on higher-order moments. In particular, we have focused on the following risk measures: the Adjusted Sharpe Ratio, the Sortino ratios and the Gain-Loss Ratio, from both theoretical and empirical points of view. We have seen that the Downside Risk framework provides quite robust measurements and appears to be the most natural extension of the CAPM theory and its mean-variance framework.
Hedge Funds are a field where these risk measures have the most promising future. There are mainly two reasons. The first is the existence of wild fluctuations and pronounced negative skewness in the data. The second is that there are few empirical data points available (of the order of hundreds of points). This last reason makes it impossible to work with other, more sophisticated risk metrics that are more sensitive to the wildest fluctuations.
However, we have also seen that the Gaussian results for the studied Downside Risk measures are still important. We have shown that they work very well as a benchmark if we represent the empirical risk measures in terms of a modified Sharpe Ratio λ = (µ − T)/σ. Perhaps quite surprisingly, we can also see in Figs. 3, 4 and 5 that the Gaussian trading investment behavior works better than most of the sophisticated trading style indices. The main reason lies in the fact that a Hedge Fund provides high benefits at the cost of having, in most cases, a negative skewness. Downside Risk measures take this asymmetry into account and include it in the risk perception. This is therefore another argument for using the Downside framework, since the Sharpe Ratio might wrongly overvalue the quality of a Hedge Fund by ignoring the skewness (and kurtosis) effects in risk analysis.
To summarize, let us point out the main new contributions of the current paper. We have aimed to revisit some of the most popular Downside Risk indicators and have gone further in their application to the Hedge Fund universe in two different aspects. Firstly, the use of the same data set has enabled a reliable comparison between these risk indicators, which is certainly difficult to get through the existing literature. And secondly, we have derived new analytical expressions for these risk indicators in terms of λ = (µ − T)/σ that give a closer idea of the exact meaning of each indicator. Along the same lines, we want to stress the fact that the modified Sharpe Ratio λ appears to be a more useful parameter to work with than the target return T. It becomes very helpful for putting the results in a broader context, in reference to a benchmark distribution such as the Gaussian, or even for comparing different trading style indices. Thanks to this fact, we have detected several indices with quite similar behaviour in the Sortino ratios and in the quotient d(T)/σ. However, the Gain-Loss Ratio is blind to this structure and unable to untangle these trading styles. For all these reasons and others discussed above, we can conclude that, among the Sortino ratios, the UPR is the best risk indicator for the Hedge Fund universe. We should however take all these conclusions with real care due to the small data set available. There is a strong presence of noise, which we have quantified here with the standard error and the subsequent error bars computed with error propagation. To check the soundness of our conclusions, we have discussed the error in the Downside Risk metrics when the underlying is Gaussian distributed. We have observed that the relative error, based on our empirical data set length, is quite reasonable, being below 15%. We have also shown via simulations that the convergence of our estimators is quite fast, since a hundred points is enough to get a reliable estimation of the Gaussian theoretical values involved in the Downside Risk metrics.
There are many other interesting aspects to study under the current perspective. One possibility is to study these risk indicators more deeply when returns follow another distribution, such as the Laplace [25,26] or, for instance, a power-law distribution. We could also compute the risk measures presented here when the target return is another asset. Another possibility is to study the effect of these analyses on multi-factor market modeling [5,27,28,29,30]. In any case, all these topics are left for future investigations. This is also true for the Omega function, the risk indicator equivalent to the Gain-Loss Ratio.
|
2014-10-01T00:00:00.000Z
|
2006-10-20T00:00:00.000
|
{
"year": 2006,
"sha1": "55bcdc7ed03e05558bc0ba06023e68a58089a25d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/physics/0610162v2.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "322d746a6f1db1ad2f4ece96aad040a294ae24a4",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Physics",
"Economics"
]
}
|
14781952
|
pes2o/s2orc
|
v3-fos-license
|
Delayed X-ray emission from fallback in compact-object mergers
When double neutron star or neutron star-black hole binaries merge, the final remnant may comprise a central solar-mass black hole surrounded by a torus of 0.01-0.1 solar masses. The subsequent evolution of this disc may be responsible for short gamma-ray bursts (SGRBs). A comparable amount of mass is ejected into eccentric orbits and will eventually fall back to the merger site after approximately 0.01 seconds. In this Paper, we investigate analytically the fate of the fallback matter, which may provide a luminous signal long after the disc is exhausted. We find that matter in the eccentric tail returns at a super-Eddington rate and is eventually (after ∼0.1 sec) unable to cool via neutrino emission and accrete all the way to the black hole. Therefore, contrary to previous claims, our analysis suggests that fallback matter is not an efficient source of late-time accretion power and is unlikely to cause the late flaring activity observed in SGRB afterglows. The fallback matter rather forms a radiation-driven wind or a bound atmosphere. In both cases, the emitting plasma is very opaque and photons are released with a degraded energy in the X-ray band. We therefore suggest that compact binary mergers could be followed by an "X-ray renaissance", as late as several days to weeks after the merger. This might be observed by the next generation of X-ray detectors.
INTRODUCTION
Close binaries of compact solar-mass objects are expected to form via the evolution of massive star binaries or by dynamical interaction in dense star clusters. Neutron star (NS-NS) binaries have been detected as radio pulsars (e.g. Faulkner et al. 2005), and while black hole-neutron star (BH-NS) or double black hole (BH-BH) binaries have not been observed directly, they are predicted by population synthesis models. The compact objects are expected to merge due to gravitational wave emission, with evolutionary scenarios estimating a local rate of NS-NS mergers 10 − 100 times higher than for BH-NS and BH-BH systems (e.g. Belczynski et al. 2007). The final remnant for NS-NS and NS-BH coalescence is generally thought to be a BH of a few solar masses surrounded by a 0.01 − 0.1 M⊙ accreting disc (e.g. Ruffert et al. 1997; Shibata, Taniguchi & Uryu 2003; Rosswog et al. 2004; Faber et al. 2006). The accretion power immediately following the merger is perhaps the ultimate cause of SGRBs (Blinnikov et al. 1984; Eichler et al. 1989; Paczyński 1991). At early times (≲ 0.1 − 1 sec), the accreting disc is geometrically thin, effectively cooled by neutrino emission (Popham, Woosley & Fryer 1999). When the accretion rate drops below ∼ 0.1 M⊙ sec⁻¹ (the exact value depending on the accretion parameter α and the BH spin; Chen & Beloborodov 2007; Metzger, Piro & Quataert 2008), the disc becomes radiatively inefficient and super-Eddington accretion drives a substantial outflow (Metzger et al. 2008).
During the dynamical phase of the merger, in which the lighter companion is tidally disrupted, a fraction (∼ 10⁻² M⊙) of the debris receives enough energy to be ejected from the system, while a comparable amount remains bound in eccentric orbits (e.g., Rosswog 2007; Faber et al. 2006) and will eventually return to the disc site: this is the fallback matter. This weakly bound matter may give rise to interesting phenomena observable on timescales longer than any viscous timescale of the disc. For example, it has been suggested (Lee & Ramirez-Ruiz 2007; Rosswog 2007; Metzger et al. 2008) that it can be responsible for the X-ray flaring observed in SGRB afterglows on timescales of minutes to hours (e.g. Campana et al. 2006). Unfortunately, numerical investigations have not yet been able to follow the long-term (> minutes) evolution of this eccentric tail, because of time-step limitations (Rosswog 2007).
In this Letter, we investigate analytically the fate of matter falling back onto a recent merger. We argue that energy released during fallback is not a promising source of the X-ray flares. The energy liberated during fallback will either lead to a powerful, radiation-driven wind or a more gradually expanding "breeze" that could ultimately form a bound cloud around the merged object. In either case, the expanding gas is so opaque that the radiation is trapped in the expanding flow and degraded to low energies before being released in the X-ray band. We therefore suggest that compact binary mergers might be accompanied by delayed X-ray emission. We assess the detectability of this emission when the merger is localized by either a short γ-ray burst or a gravitational wave signal. A direct observation of the accretion activity would give us valuable information on how compact-object binaries merge.
This Letter is organized as follows. We discuss the behavior of the fallback matter in § 2. Then, we consider two possible scenarios for this material as it rebounds: we model a wind in § 3 and a bound atmosphere in § 4. Prospects for detecting the X-ray emission are discussed in § 5 and conclusions are drawn in § 6.
ACCRETION BEHAVIOR OF FALLBACK MATTER
In our analysis, we scale our parameters with values appropriate for NS-NS binaries, since these systems are the most common. The coalescence of a pair of neutron stars is followed by the formation of a central attractor with typical mass M_c = 2.5 m_c M⊙, surrounded by an accretion disc that initially extends up to r_d = 10⁷ r_d7 cm (e.g., Ruffert et al. 1997). The weakly bound material, M_fb = 3 × 10⁻² m_fb M⊙ (Rosswog 2007), launched into elliptical orbits, will travel as far as its apocenter and eventually come back to its pericenter r_p ≃ r_d. The rate at which this material accretes can be found analytically assuming that the distribution of energy with mass is flat: the accretion rate, after a plateau phase at Ṁ_max, decreases with time as

Ṁ_fb(t) = Ṁ_max (t/t_min)^{−5/3}    (1)

(Phinney 1989), where the minimum arrival time t_min corresponds to the period of orbits with eccentricity e ≃ 0, and the initial accretion occurs at a rate

Ṁ_max ≃ 2 M_fb/(3 t_min).    (2)

Even if Eq. (1) was originally derived for the tidal disruption of stars in the potential well of a supermassive black hole, numerical calculations by Rosswog (2007) indicate that the t^{−5/3} law also applies to the case we are investigating. When the fallback matter hits the disc (or the leftover material), its kinetic energy per unit mass, v²_fb/2 ≃ GM_c(−1/(2a) + 1/r_d) ∼ GM_c/r_d, is converted into heat via shocks. The internal energy of the shocked matter is photon-dominated. Initially, the fallback matter would simply join the disc and accrete onto the central object, because it is effectively cooled by neutrino emission: i.e., the flow rate is sub-Eddington with respect to the neutrino luminosity and accretion is possible. The neutrino emissivity q⁻_ν = q_an + q_eN is due both to electron-positron pair annihilation, q_an ∝ T⁹_sh, and to capture onto nuclei, q_eN ∝ T⁶_sh ρ_fb (see Popham, Woosley & Fryer 1999 for the analytic approximations). The fallback matter density at r_d is ρ_fb = Ṁ_fb/(4πr²_d v_fb), while the temperature T_sh to which the gas is shock-heated can be approximately obtained by equating its kinetic energy density at r_d, (v²_fb ρ_fb/2), to its internal energy density, a_r T⁴_sh, where a_r is the radiation constant. BH feeding happens for large enough accretion rates, Ṁ > Ṁ_ign ≃ 0.14 M⊙ sec⁻¹, when the cooling time t_c = a_r T⁴_sh/q⁻_ν is shorter than the viscous time at r_d. When t = t_w ≃ 5 × 10⁻² sec, the accretion rate Ṁ_fb drops below the critical value Ṁ_ign; neutrino cooling then becomes inefficient and eventually (when k_B T_sh < m_e c²) switches off completely. The remaining reservoir of mass in the tail is still substantial, M_* = (3/2) Ṁ_fb(t_w) t_w ≃ 7 × 10⁻³ M⊙, and, unable to accrete onto the black hole, it is likely to be blown off the disc plane.
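A rough numerical sketch (ours, in Python) of this fallback history follows; the Ṁ_max normalization of Eq. (2) is the standard flat-energy-distribution result, but prefactor conventions may differ from the paper's at the factor-of-two level:

```python
# Fallback accretion history: t^(-5/3) law and the neutrino-ignition cutoff.
import numpy as np

MSUN = 1.989e33                  # g
M_fb, t_min = 3e-2 * MSUN, 1e-2  # fallback mass [g] and minimum return time [s]
mdot_max = 2 * M_fb / (3 * t_min)      # Eq. (2): ~2 Msun/s for fiducial values
mdot_ign = 0.14 * MSUN                 # below this, neutrino cooling fails

def mdot_fb(t):
    """Eq. (1): plateau at mdot_max, then t^(-5/3) decay."""
    return np.where(t < t_min, mdot_max,
                    mdot_max * (t / t_min) ** (-5.0 / 3.0))

# Time t_w at which the rate drops below the ignition threshold:
t_w = t_min * (mdot_max / mdot_ign) ** (3.0 / 5.0)   # ~0.05 s
M_left = 1.5 * mdot_fb(t_w) * t_w / MSUN             # remaining tail, ~1e-2 Msun
print(f"t_w = {t_w:.3f} s, leftover mass ~ {M_left:.1e} Msun")
```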
WIND MODEL
The first possible fate for the fallback matter that we consider is the formation of a radiation-driven wind.
The amount of mass entrained in the wind, M_w, and its specific energy are uncertain. One possibility is that the total kinetic energy of the fallback matter is deposited unevenly, so that a small fraction of mass (M_w ≪ M_*) can reach a final velocity that exceeds the escape velocity and form a wind. At the other extreme, the wind could carry an amount of mass comparable to the fallback tail, M_w ≃ M_*, where sufficient internal energy to unbind this weakly bound matter is gained via accretion. Given the range of uncertainty, we will scale our equations adopting M_w = 10⁻³ M_w,−3 M⊙ and a terminal velocity v_t = 0.3c β_0.3, where c is the speed of light and we use as guidance the escape velocity v_esc = √(2GM_c/r_d) = 0.27c at r = r_d.
To model the wind, we take an initial radius r₀ ≃ r_d. This is sufficiently close to the sonic radius that we may assume, to first approximation, an outflow with constant velocity equal to its terminal velocity v_t (for a polytropic wind with γ = 4/3, the velocity at the sonic point is only a factor √3 smaller than the terminal velocity). The wind, powered by fallback matter, will steadily decrease with time according to Eq. (1),

Ṁ_w(t) = 3.4 × 10²³ M_w,−3 t_w,−1^{2/3} t_hr^{−5/3} g sec⁻¹,    (5)

where t_w = 0.1 t_w,−1 sec and the time t since the onset of the wind is measured in hours (t_hr). Its matter density then follows from mass conservation,

ρ(r, t) = Ṁ_w(t)/(4πr²v_t),

for radii r < v_t t. The radiation pressure P = (1/3) a_r T⁴ can be related to ρ by the polytropic relation with index 4/3. Therefore, the temperature decreases slowly with radius, as

T(r, t) = T₀(t) (r/r_d)^{−2/3}.    (6)

The radiation transported with the wind is mostly liberated at the trapping radius r_tr, where the diffusion timescale for photons equals the expansion timescale. Beyond this radius, the luminosity is transported by radiative flux up to the photosphere, where the optical depth is τ ∼ 1. In our case β ∼ 0.3 or higher, therefore the trapping radius is very close to the photospheric radius and in the following we will ignore the radiative layer. The optical depth for electron scattering, τ ≃ ρκr, is computed with a Thomson opacity κ = 0.2 κ_0.2 cm² g⁻¹, which we scale with the value appropriate for a flow composed solely of α-particles. The electron density is, in fact, uncertain: it depends mainly on the initial composition of the wind at r_d, which includes α-particles and free baryons. For a neutron-rich composition, κ < 0.2. The nucleosynthesis in the wind does not change the free electron density, since temperatures are high enough for the recombined helium to be fully ionized. The trapping radius, where τ ≃ c/v_t, then reads:

r_tr(t) = κ Ṁ_w(t)/(4πc) ≃ 1.8 × 10¹¹ κ_0.2 M_w,−3 t_w,−1^{2/3} t_hr^{−5/3} cm.    (7)

Conservation of energy, (1/2) Ṁ_w v²_t ≈ 16π(a_r/3) T⁴₀ r²_d v_t, allows us to solve for the central temperature, T₀(t) ≃ 1.9 × 10⁸ t_hr^{−5/12} K, and from Eq. (6) we can derive the temperature at the trapping radius,

T_tr(r_tr, t) ≃ 1.5 × 10⁵ β^{1/4}_0.3 t_hr^{25/36} K.    (8)

The emission from the trapping radius of the wind becomes harder with time, while the luminosity,

L_tr(r_tr, t) = (1/2) Ṁ_w v²_t (r_tr/r_d)^{−2/3} ∝ t_hr^{−5/9},    (9)

decreases. When the receding trapping radius reaches the launching radius, r_tr = r_d, at

t_x ≃ 3.6 × 10² κ_0.2^{3/5} M_w,−3^{3/5} t_w,−1^{2/5} r_d7^{−3/5} hr ≈ 2 weeks,    (10)

the thermal emission has a temperature of

T_x(r_d, t_x) ≈ 10⁷ K ≃ 0.8 keV    (11)

and a luminosity

L_x(r_d, t_x) ≃ 7.5 × 10³⁸ β²_0.3 r_d7 κ⁻¹_0.2 erg sec⁻¹.    (12)
We note that in Eq. (11) and Eq. (12) the only dependences are on the initial radius, the terminal velocity and the opacity. The dependence is particularly weak for T_x because, when r_tr = r_d, the accretion rate is set only by the size of the launching region and by the opacity (see Eq. (7)). Therefore L_x ≃ Ṁ_w v²_t ∝ r_d v²_t/κ ∝ r²_d T⁴₀ v_t. We stress the important role of the wind composition, or equivalently of the electron fraction. For an extreme proton-to-neutron ratio of 0.1, κ ≃ 0.04 and the X-ray emission (T_tr ≳ 10⁶ K) starts at t_x ≃ 3 hr with a luminosity L_tr ≃ 3.4 × 10⁴⁰ erg/sec.
The emission from the wind will switch off when the whole energy supply provided by fallback can be accreted (Ṁ_fb ≃ Ṁ_edd ≃ 7 × 10¹⁸ g/sec). This takes a long time, of the order of ∼ 3 months. However, when L_x drops below ∼ half its peak value, the X-ray emission is likely to become undetectable. This happens for t/t_x ≳ a few.
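A minimal Python sketch (ours) of the wind-model scalings derived above follows; the fiducial cgs values are the text's, and the constant-velocity outflow is assumed throughout:

```python
# Wind model: trapping radius, X-ray onset time and peak X-ray luminosity.
import numpy as np

c, kappa, r_d, v_t = 3e10, 0.2, 1e7, 0.3 * 3e10   # cgs fiducial values

def mdot_w(t_hr):
    """Wind mass-loss rate, Eq. (5), in g/s with t in hours."""
    return 3.4e23 * t_hr ** (-5.0 / 3.0)

def r_tr(t_hr):
    """Trapping radius, Eq. (7)."""
    return kappa * mdot_w(t_hr) / (4 * np.pi * c)

# X-ray onset: the receding trapping radius reaches the launch radius r_d.
t_x_hr = (kappa * 3.4e23 / (4 * np.pi * c * r_d)) ** (3.0 / 5.0)
L_x = 0.5 * mdot_w(t_x_hr) * v_t ** 2              # Eq. (12)
print(f"t_x ~ {t_x_hr / 24:.1f} days, L_x ~ {L_x:.2e} erg/s")  # ~2 weeks, ~7.5e38
```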
ATMOSPHERE MODEL
Another scenario may be envisaged in which a bound atmosphere forms around the central object. This can happen if the outflowing gas retains the same amount of energy per unit mass that the eccentric tail had. This gas would still be bound to the central BH: it would start expanding from r_d nearly isotropically until it reaches a radius r_*, where its internal energy is ∼ half its potential energy. After a few seconds, this inflated gas cloud has a nearly constant mass M_* ∼ Ṁ_fb(t_w) × t_w ≃ 4.6 × 10⁻³ M⊙ and radius r_*, since most of the mass M_* is injected around t ∼ t_w.
We can estimate the radius r_* through

GM_c M_*/(2r_*) = ∫ |ǫ| dm,

where a is the semi-major axis of the particle orbits in the eccentric tail and ǫ = −GM_c/(2a) is the specific orbital energy. Since the distribution of ǫ with mass is constant, dm/dǫ ≃ M_*/∆ǫ, where ∆ǫ ∼ GM_c/r_d is the extra energy gained by M_* via the tidal torque, we can solve the integral and find r_*/r_d ≃ ∆ǫ/(GM_c/(2r_d)). We conclude that r_* is of the same order as r_d. This reflects the fact that most of the mass is at small a. We choose to parametrize r_* = 10 r_d = 10⁸ r_*8 cm.
We can then calculate the cloud's mean properties. Its mean density is ρ_* = 2.2 × 10⁶ r_*8^{−3} g cm⁻³, and the temperature can be derived by equating its internal energy density (in radiation and gas) with GM_c ρ_*/(2r_*). Radiation pressure is ∼ 5 times higher than gas pressure, and the temperature decreases linearly with the cloud radius, T ≃ 4.6 × 10⁹ r_*8^{−1} K (while the pressure ratio remains constant). The gas cloud is in hydrostatic equilibrium, since it changes its properties on a timescale M_*/Ṁ_* ≃ 4 × 10⁶ t_hr^{5/3} sec, much longer than the dynamical timescale t_dy ≃ 2.6 r_*8^{3/2} sec. The rotational energy may be neglected, being a factor ∼ (r_d/r_*) smaller than the internal energy. On the timescales of interest, the cloud does not deflate via radiative losses, since the diffusion time is very long, t_diff = r²_* ρ_* κ/c ∼ 4600 r_*8^{−1} yrs. The merged object is thus surrounded and obscured by a persistent source, emitting at the Eddington limit with a temperature T_ph ≃ 3.1 × 10⁶ r_*8^{−1/2} K.
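These mean properties are easy to reproduce numerically. The Python sketch below (ours; it uses the radiation-dominated limit of the energy balance, which slightly overestimates T given the quoted ∼5:1 pressure ratio) recovers the fiducial numbers above:

```python
# Bound-atmosphere estimates: mean density, temperature, diffusion time.
import numpy as np

G, c, a_r, kappa = 6.674e-8, 3e10, 7.566e-15, 0.2   # cgs
MSUN = 1.989e33
M_c, M_star, r_star = 2.5 * MSUN, 4.6e-3 * MSUN, 1e8

rho = M_star / (4.0 / 3.0 * np.pi * r_star ** 3)     # mean density
# Radiation-dominated energy balance: a_r T^4 ~ G M_c rho / (2 r_star).
T = (G * M_c * rho / (2 * r_star * a_r)) ** 0.25
t_diff = r_star ** 2 * rho * kappa / c               # photon diffusion time
print(f"rho ~ {rho:.1e} g/cm^3, T ~ {T:.1e} K, "
      f"t_diff ~ {t_diff / 3.15e7:.0f} yr")          # ~2.2e6, ~4.7e9, ~4600
```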
X-ray signal
After a few minutes, the photons escaping from the wind are in the ultraviolet band, and after an hour or so in the extreme ultraviolet (EUV). This emission is strongly absorbed and unlikely to be observed. At later times, t ≃ t_x, however, the emission should peak in the soft X-rays, with a luminosity ∼ L_edd (Eqs. (12) and (13)) and a thermal spectrum with temperature ∼ 0.8 keV (Eq. (11)). In the case that an opaque cloud surrounds the merged object, we have a comparable luminosity, emitted at a temperature that depends on the extension of the atmosphere: for simplicity, we consider here the case in which the emission is at ∼ 0.8 keV, corresponding to a radius ∼ 5 × 10⁷ cm. The main difference with the wind case is that this emission should be persistent. Under favorable environmental conditions, the emission may be observable. If the merger occurs in a galactic halo or even in the intergalactic medium, absorption should be moderate. Moreover, those locations may not be polluted by contaminating soft X-ray sources. Finally, compact-object mergers should not be accompanied by a bright supernova explosion, eliminating another possible co-located X-ray source.
An Eddington luminosity yields an unabsorbed flux at redshift z of F = 3.3 × 10⁻¹⁶ (0.03/z)² erg cm⁻² sec⁻¹, where we have approximated the luminosity distance at redshift z as D_l(z) = 4.2 × 10³ (H₀/71)⁻¹ z Mpc. Simulating the response of different current and future instruments allows us to determine the expected count rate as a function of redshift. Assuming N_H = 10²⁰ cm⁻² (Galactic and intrinsic to the host galaxy) and a black body spectrum, we get a count rate φ = K_in (0.03/z)² cts sec⁻¹, where K_in ≃ 5.4 × 10⁻⁵ for XMM, K_in ≃ 4.6 × 10⁻⁵ for Chandra, K_in ≃ 4.5 × 10⁻³ for XEUS and K_in ≃ 1.1 × 10⁻³ for Con-X. Fig. 1 shows that X-ray detection is most likely to be feasible with the next generation of instruments. The proposed missions Con-X and XEUS will be able to collect ≳ 10 cts in a 10⁵ sec exposure from mergers occurring as far as z ≃ 0.1 and z ≃ 0.2, respectively. The local merger rate of NS-NS binaries is estimated to be ∼ 0.8 − 10 × 10⁻⁵ yr⁻¹ per Milky Way galaxy (Belczynski et al. 2007; Kim, Kalogera & Lorimer 2006). If half of those systems eventually merge in the halo, their rate is ∼ 0.4 − 5 × 10⁻⁷ mergers yr⁻¹ Mpc⁻³, where a number density of 0.01 galaxies Mpc⁻³ has been assumed (O'Shaughnessy et al.). Combining the NS-NS merger rate and the instruments' observable volumes, we predict that Con-X could observe ∼ 13 − 156 mergers yr⁻¹, while the expected rate for XEUS is ∼ 100 − 1251 mergers yr⁻¹.
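The detectability criterion above can be checked with a few lines of Python (ours; the flux normalization and K_in constants are taken directly from the text, and the 10-count/10⁵ s threshold is the criterion quoted there):

```python
# Maximum redshift for 10 counts in a 10^5 s exposure, per instrument.
def flux(z):
    """Unabsorbed Eddington-level flux from the text, erg cm^-2 s^-1."""
    return 3.3e-16 * (0.03 / z) ** 2

K_in = {"XMM": 5.4e-5, "Chandra": 4.6e-5, "XEUS": 4.5e-3, "Con-X": 1.1e-3}
phi_min = 10.0 / 1e5   # counts/s needed for 10 cts in a 10^5 s exposure

for name, k in K_in.items():
    # phi = K_in (0.03/z)^2 = phi_min  =>  z_max = 0.03 sqrt(K_in / phi_min)
    z_max = 0.03 * (k / phi_min) ** 0.5
    print(f"{name}: z_max ~ {z_max:.2f}")   # XEUS ~ 0.2, Con-X ~ 0.1
```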
Merger localization
In order to detect the X-ray emission, it is necessary to localize the merger. In the following, we discuss two possibilities.
Short GRBs
Coalescences of compact objects (especially NS-NS) are possible candidates as progenitors of SGRBs (see Nakar 2007 for a review). Therefore, in principle, a binary coalescence can be localized via a short burst, though there may be limitations. First, the local observed rate is estimated to be ∼ 100 times smaller than the local rate of compact-object mergers. The discrepancy is mostly credited to the geometrical beaming of the burst jet. Moreover, the observed redshifts typically range between 0.1 − 1.5. However, the distances of these sources could only be measured for a handful of cases in the last few years. Despite this, it is reasonable to assume that short GRBs should also explode closer to us, if they are indeed produced by NS-NS mergers, and that some selection effects are preventing us from measuring their redshifts. If a short GRB with z ≲ 0.2 is localized, the X-ray emission from the wind or the atmosphere could be brighter than the X-ray afterglow around two weeks later.
Gravitational waves as signposts
The gravitational wave signal is a more promising signpost for mergers. This is because this signal should be associated with any coalescence of compact objects, unlike the beamed γ-ray emission from SGRBs. An instrument such as Advanced LIGO should be able to detect mergers of two neutron stars out to a distance of z ∼ 0.07: this implies a detection rate for XEUS and Con-X of 4 − 54 mergers yr⁻¹. Advanced LIGO will be, in fact, more sensitive to NS-BH binaries, which should be visible up to z ≃ 0.15. Since the Galactic merger rate for these systems is 0.1 − 5 × 10⁻⁶ yr⁻¹ per galaxy (Belczynski et al. 2007), XEUS is expected to detect 1 − 53 such mergers per year, while Con-X less than 15 per year.
The main limitation seems to be how accurately the merger position can be localized. Current estimates suggest that a network of non-collocated advanced interferometers, such as Advanced LIGO, Advanced VIRGO and LCGT (Kazuaki et al. 2006), will be able to detect inspiraling binaries at the redshifts of interest and localize them at the degree level (Sylvestre 2003). This is enough for an optical but not for an X-ray follow-up. However, with a solid-angle error ten times smaller, we can identify a region of the sky with only one local galaxy with redshift z ≲ 0.05; for a galaxy at z ≲ 0.15, the localization error should instead be a few hundred times smaller. The source distance may be obtained directly from the gravitational signal (Abbott et al. 2008). This would greatly help the search for the X-ray fallback signal.
DISCUSSION AND CONCLUSIONS
In this Letter, we have investigated the possible fate of fallback matter associated with mergers of compact objects, where a disc is formed by the disruption of a NS. Matter flung into highly eccentric orbits will eventually come back to the disc at a super-Eddington rate, converting its kinetic energy into heat via shocks, and will be unable to cool by neutrino emission. Contrary to previous claims, we think that this implies that fallback matter cannot accrete all the way to the central object and be responsible for the late energy injections observed in GRB afterglows. Rather, the fallback matter is likely to be blown off the disc plane, leading to the formation of a radiation-driven wind or a bound atmosphere. For the wind case, we have analytically calculated the time evolution of the temperature and luminosity at the trapping radius: while the luminosity decreases (Eq. (9)), the wind photosphere becomes hotter (Eq. (8)). At first, the emission is in the EUV band, and absorption will likely prevent us from observing it. After one or two weeks, the emission finally peaks in the soft X-ray band and the wind activity can be observed. The bound cloud is radiation-pressure dominated and emits at the Eddington limit in soft X-rays, provided the atmosphere does not extend much further than 10⁸ cm. We note that our luminosity estimates are conservative: factors such as a smaller electron fraction in the ejected plasma and moderate geometrical beaming can substantially increase the expected luminosity.
We have also discussed detection prospects for this delayed X-ray activity. Our inspection indicates that only in fortuitous circumstances could the X-ray emission be detected with current instruments, while the planned missions (such as Con-X and XEUS) have a better chance (§ 5.1). The main limiting factor will then not be the X-ray detector capability, but rather the tool for localizing the merger (§ 5.2). On the one hand, short γ-ray bursts can be easily detected and localized in the whole volume where instruments like Con-X and XEUS can observe the X-ray emission; however, they are estimated to occur at a rate that is ∼ 100 times smaller than the rate at which compact binaries merge. On the other hand, the planned advanced gravitational wave interferometers should be able to detect a signal from any such merger, but within cosmic distances smaller than the maximum distance that Con-X and XEUS can reach. Moreover, X-ray follow-up would require better localization precision than currently estimated.
The net result is that between a few and a few tens of detections per year are expected by XEUS with a follow-up of a short GRB. Assuming sufficiently good localization, re-pointing after a gravitational signal detection can result in ∼ 4 − 54 wind detections per year from NS-NS mergers, for both Con-X and XEUS. Furthermore, for XEUS there is the exciting possibility of observing X-ray emission from BH-NS mergers: ∼ 1 − 53 events per year. The X-ray emission from these sources should also be brighter than from NS-NS mergers, since the mass of the central BH could be much larger. The above rates, however, should be taken as indicative upper limits. We have not taken into account selection effects such as background/foreground sources and the fact that not all BH-NS and NS-NS mergers seem to lead to an accreting system (e.g., Rosswog 2005; Belczynski et al. 2008). Moreover, in some cases, the X-ray afterglow from the burst could outshine the wind emission. Nevertheless, the possibility of getting information on mergers of compact objects from electromagnetic signals remains, and it could bring important understanding of the physics of these systems.
Finally, our findings have implications for the interpretation of late-time activity observed in GRB afterglows. We consider it unlikely that fallback matter can be held responsible, since most of its mass is blown away. Even if ∼ 10% of this matter could accrete all the way to the hole, it is very unlikely that it could produce the observed flares, which have an energy (∼ 10⁴⁶ − 10⁴⁹ ergs) comparable to that of the prompt emission (e.g. Campana et al. 2006). This would require that the eccentric tail be far more massive than the main disc (contrary to what is observed in simulations) or that the efficiency of converting accreted mass into energy be somehow strongly enhanced in the late fallback accretion. These arguments also apply to the late accretion from the main disc, which is highly super-Eddington (Metzger et al. 2008). We thus conclude that, in general, standard late-time accretion is unlikely to account for the phenomena, like flares and plateaux, observed in GRB afterglows.
|
2008-11-16T10:31:06.000Z
|
2008-08-08T00:00:00.000
|
{
"year": 2009,
"sha1": "fed5ed4a4d26d708ce37db96b2e523cfee670754",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/392/4/1451/2878842/mnras0392-1451.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "fed5ed4a4d26d708ce37db96b2e523cfee670754",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
202984170
|
pes2o/s2orc
|
v3-fos-license
|
On Neutrosophic Offuninorms
Uninorms comprise an important kind of operator in fuzzy theory. They are obtained by generalizing the t-norm and t-conorm axiomatics. Uninorms are theoretically remarkable and, furthermore, they have a wide range of applications. For that reason, when fuzzy sets have been generalized to others (e.g., intuitionistic fuzzy sets, interval-valued fuzzy sets, interval-valued intuitionistic fuzzy sets, or neutrosophic sets), uninorm generalizations have emerged in those novel frameworks. Neutrosophic sets contain the notion of indeterminacy (caused by unknown, contradictory, and paradoxical information) and thus include, aside from the membership and non-membership functions, an indeterminate-membership function. Also, the relationship among them does not satisfy any restriction. Along this line of generalizations, this paper aims to extend uninorms to the framework of neutrosophic offsets, where they are called neutrosophic offuninorms. Offsets are neutrosophic sets such that their domains exceed the scope of the interval [0,1]. In the present paper, the definition, properties, and application areas of this new concept are provided. It is necessary to emphasize that neutrosophic offuninorms are feasible for application in several fields, as we illustrate in this paper.
Introduction
Uninorms extend the t-norm and t-conorm axiomatics in fuzzy theory. They retain the axioms of commutativity, associativity, and monotonicity. Alternatively, they generalize the boundary condition, such that the neutral element is any number lying in [0,1]. Thus, t-norms and t-conorms are special cases of uninorms: t-norms have 1 as their neutral element, and the neutral element of t-conorms is 0; see [1][2][3].
Uninorms are theoretically important, and moreover, they have been used as operators in several areas of application, for example, in image processing and in aggregating group decision criteria, among others; see [4][5][6][7][8]. An exhaustive search on uninorm applications by the authors of this paper yielded more than six hundred scientific articles devoted to this subject written in the last five years.
Rudas et al. in [9] report that uninorms have been applied in diverse applications, ranging from defining the Gross Domestic Product index in economics, to fusing sequences of DNA and RNA or combining information on taxonomies or dendrograms in biology, to fusing data provided by robotic sensors in data mining, and in knowledge-based and intelligent systems. In particular, they offer many examples in Decision Making, Utility Theory, Fuzzy Inference Systems, Multisensor Data Fusion, network aggregation in sensor networks, and image approximation.

Definition 1. Let X be a space of points (objects), with a generic element in X denoted by x. A Neutrosophic Set A in X is characterized by a truth-membership function T_A(x), an indeterminacy-membership function I_A(x), and a falsity-membership function F_A(x). T_A(x), I_A(x), and F_A(x) are real standard or nonstandard subsets of ]⁻0, 1⁺[. There is no restriction on the sum of T_A(x), I_A(x), and F_A(x); thus, ⁻0 ≤ inf T_A(x) + inf I_A(x) + inf F_A(x) ≤ sup T_A(x) + sup I_A(x) + sup F_A(x) ≤ 3⁺ (see [26]).
The neutrosophic sets are useful in their nonstandard form only in philosophy, in order to make a distinction between absolute truth (truth in all possible worlds, according to Leibniz) and relative truth (truth in at least one world), but not in technical applications; thus, Single-Valued Neutrosophic Sets are defined; see Definition 2.

Definition 2. Let X be a space of points (objects), with a generic element in X denoted by x. A Single-Valued Neutrosophic Set A in X is characterized by a truth-membership function T_A(x), an indeterminacy-membership function I_A(x), and a falsity-membership function F_A(x). T_A(x), I_A(x), and F_A(x) are elements of [0,1]. There is no restriction on the sum of T_A(x), I_A(x), and F_A(x); thus, 0 ≤ T_A(x) + I_A(x) + F_A(x) ≤ 3 (see [38]).
The domain of the single-valued neutrosophic sets does not surpass the limits of the interval [0,1]. This is a classical condition imposed in previous theories such as probability and fuzzy sets. Despite this tradition, Smarandache in 2007 proposed memberships >1 and <0 and illustrated this proposal; see [39] (pp. 92-93) and the example given in the introduction of this paper. In the following, the Single-Valued Neutrosophic Oversets, Single-Valued Neutrosophic Undersets, and Single-Valued Neutrosophic Offsets are formally defined.

Definition 3. Let X be a universe of discourse and the neutrosophic set A_1 ⊂ X. Let T(x), I(x), F(x) be the functions that describe the degree of membership, indeterminate-membership, and non-membership, respectively, of a generic element x ∈ X with respect to the neutrosophic set A_1: T, I, F: X → [0, Ω], where Ω > 1 is called the overlimit, T(x), I(x), F(x) ∈ [0, Ω]. A Single-Valued Neutrosophic Overset A_1 is defined as A_1 = {(x, T(x), I(x), F(x)), x ∈ X}, such that there exists at least one element in A_1 that has at least one neutrosophic component bigger than 1, and no element has neutrosophic components smaller than 0 (see [31]).

Definition 4. Let X be a universe of discourse and the neutrosophic set A_2 ⊂ X. Let T(x), I(x), F(x) be the functions that describe the degree of membership, indeterminate-membership, and non-membership, respectively, of a generic element x ∈ X with respect to the neutrosophic set A_2: T, I, F: X → [Ψ, 1], where Ψ < 0 is called the underlimit, T(x), I(x), F(x) ∈ [Ψ, 1]. A Single-Valued Neutrosophic Underset A_2 is defined as A_2 = {(x, T(x), I(x), F(x)), x ∈ X}, such that there exists at least one element in A_2 that has at least one neutrosophic component smaller than 0, and no element has neutrosophic components bigger than 1 (see [31]).

Definition 5. Let X be a universe of discourse and the neutrosophic set A_3 ⊂ X. Let T(x), I(x), F(x) be the functions that describe the degree of membership, indeterminate-membership, and non-membership, respectively, of a generic element x ∈ X with respect to the neutrosophic set A_3: T, I, F: X → [Ψ, Ω], where Ψ < 0 < 1 < Ω, Ψ is called the underlimit and Ω the overlimit, T(x), I(x), F(x) ∈ [Ψ, Ω]. A Single-Valued Neutrosophic Offset A_3 is defined as A_3 = {(x, T(x), I(x), F(x)), x ∈ X}, such that there exists at least one element in A_3 that has at least one neutrosophic component bigger than 1, and at least another neutrosophic component smaller than 0 (see [31]).
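As a concrete illustration of Definitions 3-5, the following minimal Python sketch (ours, not taken from [31]; all identifiers are hypothetical) stores the three component values of each element and classifies a set as an overset, underset, or offset according to which bound its components exceed.

```python
from dataclasses import dataclass

@dataclass
class OffTriple:
    t: float  # truth-membership T(x)
    i: float  # indeterminate-membership I(x)
    f: float  # non-membership F(x)

def classify(elements, psi, omega):
    """Classify per Definitions 3-5; psi (underlimit) and omega (overlimit)
    bound every component: psi <= c <= omega."""
    comps = [c for e in elements for c in (e.t, e.i, e.f)]
    assert all(psi <= c <= omega for c in comps), "component outside [psi, omega]"
    has_over = any(c > 1 for c in comps)   # some component bigger than 1
    has_under = any(c < 0 for c in comps)  # some component smaller than 0
    if has_over and has_under:
        return "offset"                    # Definition 5
    if has_over:
        return "overset"                   # Definition 3
    if has_under:
        return "underset"                  # Definition 4
    return "ordinary single-valued neutrosophic set"

# One component above 1 and another below 0 -> offset:
print(classify([OffTriple(1.2, 0.4, -0.3)], psi=-0.5, omega=1.5))
```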
Let us note that the oversets, undersets, and offsets cover the three possible cases to characterize. Now, the logical operations over these kinds of sets have to be redefined, given that the classical ones cannot always be straightforwardly extended to these domains. This is the case for the complement given by Smarandache in [31], whereas the union and intersection definitions do not change with respect to those of single-valued neutrosophic sets. This is summarized below: Let X be a universe of discourse, and let A = {(x, T_A(x), I_A(x), F_A(x)), x ∈ X} and B = {(x, T_B(x), I_B(x), F_B(x)), x ∈ X} be two single-valued neutrosophic oversets/undersets/offsets, T_A, I_A, F_A, T_B, I_B, F_B: X → [Ψ, Ω], where Ψ ≤ 0 < 1 ≤ Ω, Ψ is the underlimit and Ω the overlimit, T_A(x), I_A(x), F_A(x), T_B(x), I_B(x), F_B(x) ∈ [Ψ, Ω]. Let us remark that the three cases are comprised here, viz., overset when Ψ = 0 and Ω > 1, underset when Ψ < 0 and Ω = 1, and offset when Ψ < 0 and Ω > 1.
Then, the main operators are defined as follows. Let us remark that when Ψ = 0 and Ω = 1, the precedent operators reduce to the classical ones. With regard to logical operators, e.g., n-norms and n-conorms, their redefinitions in the offsets framework are not so evident. Below, definitions of the offnegation, the neutrosophic component n-offnorm, and the neutrosophic component n-offconorm are provided.
One offnegation can be defined as in Equation (1).
To simplify the notation, we sometimes use an abbreviated form. Let us remark that the definition of the neutrosophic component n-offnorm is valid for every one of the components; thus, we have to apply it three times. Also, Definition 6 contains the definition of the n-norm when Ψ = 0 and Ω = 1.
To simplify the notation, we sometimes use an abbreviated form. Proof. The proof is equivalent to the proof of Proposition 1.
In this paper, we use the notion of a lattice, based on the poset denoted by ≤_O, where ⟨T_1, I_1, F_1⟩ ≤_O ⟨T_2, I_2, F_2⟩ if and only if T_2 ≥ T_1, I_2 ≤ I_1, and F_2 ≤ F_1, and where the infimum and the supremum of the set are ⟨Ψ, Ω, Ω⟩ and ⟨Ω, Ψ, Ψ⟩, respectively.
One property of n-norms that is preserved is that the minimum is the biggest neutrosophic component n-offnorm for T_O, as demonstrated in Proposition 1. Proposition 2 proves that the maximum is the smallest neutrosophic component n-offconorm for I_O and F_O when we consider ≤_O.
Evidently, the minimum is a neutrosophic component n-offnorm and the maximum is a neutrosophic component n-offconorm; see Example 1. Example 2 extends the Łukasiewicz t-norm and t-conorm to the neutrosophic offsets. Let us remark that the simple product t-norm and its dual t-conorm cannot be extended to this new domain.
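The formulas of Example 2 are not reproduced in this extract, so the following is a hedged sketch of one natural Łukasiewicz-style extension to [Ψ, Ω], in which Ω and Ψ act as the neutral elements of the offnorm and offconorm, respectively; the bounds −1 and 2 are merely illustrative.

```python
PSI, OMEGA = -1.0, 2.0   # illustrative underlimit and overlimit

def off_and_L(x, y):
    """Candidate Lukasiewicz-style n-offnorm: a shift of max(0, x + y - 1)."""
    return max(PSI, x + y - OMEGA)   # off_and_L(x, OMEGA) == x

def off_or_L(x, y):
    """Candidate Lukasiewicz-style n-offconorm: a shift of min(1, x + y)."""
    return min(OMEGA, x + y - PSI)   # off_or_L(x, PSI) == x

assert off_and_L(0.7, OMEGA) == 0.7 and off_or_L(0.7, PSI) == 0.7
```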
Finally, we recall the definition of neutrosophic uninorms, which appeared in [25]; see Definition 8.
Definition 8.
A neutrosophic uninorm U_N is a commutative, increasing, and associative mapping U_N: (]⁻0, 1⁺[)³ × (]⁻0, 1⁺[)³ → (]⁻0, 1⁺[)³ such that U_N(⟨T_x, I_x, F_x⟩, ⟨T_y, I_y, F_y⟩) = ⟨U_NT(x, y), U_NI(x, y), U_NF(x, y)⟩, where U_NT means the degree of membership, U_NI the degree of indeterminacy, and U_NF the degree of non-membership of both x and y. Additionally, there exists a neutral element e ∈ ]⁻0, 1⁺[ such that U_N(x, e) = x. Let us observe that this definition can be restricted to single-valued neutrosophic sets. Neutrosophic uninorms generalize n-norms, n-conorms, uninorms in L*-fuzzy set theory, and fuzzy uninorms.
On Neutrosophic Offuninorms
This section contains the core of the present paper. It is devoted to presenting the definitions and properties of the neutrosophic offuninorms.
The definition of a neutrosophic uninorm is a special case of a neutrosophic offuninorm when Ψ = 0 and Ω = 1 (see Definition 8) and, additionally, when we are dealing with single-valued neutrosophic sets.
It is easy to prove that the neutral element e is unique. Let c be a neutrosophic component (T_O, I_O, or F_O), c: M_O → [Ψ, Ω], where Ψ ≤ 0 and Ω ≥ 1. Let us define four useful functions ϕ_1, ϕ_1^{-1}, ϕ_2, and ϕ_2^{-1}, defined in Equations (2)-(5), respectively.
Here, the superscript -1 means that it is an inverse mapping. If the condition c(e) ∈ (Ψ, Ω) is fulfilled, then the degenerate cases Ω = Ψ, c(e) = Ψ, and c(e) = Ω are excluded. Therefore, ϕ_1(c(x)) and ϕ_2(c(x)) are well-defined non-constant linear functions. Thus, they are bijective and have inverse mappings, defined in Equations (3) and (5), respectively. These properties can be easily verified. Also, it is trivial that they are non-decreasing mappings.
Since ϕ_1, ϕ_2, min, and max are non-decreasing mappings, both U_C and U_D are non-decreasing. To prove that c(e) is the neutral element, we have two cases to consider; in both, the identity is satisfied. Lemma 2 concerns the operators of Equations (6) and (7) for c(e) ∈ (Ψ, Ω): they are associative.
Proof. Four cases are possible; these proofs are also valid for U_D.
Thus, U_C satisfies associativity. Similarly, the associativity of U_D can be proved.
Let us remark that we applied the properties of c(x). Proof. By Lemma 1, they are commutative, non-decreasing operators, and c(e) is the neutral element. By Lemma 2, they are associative operators. Moreover, it is easy to verify that U_C(Ψ, Ω) = Ψ and U_D(Ψ, Ω) = Ω.
Example 4. Two neutrosophic component n-offuninorms can be defined as follows, where ∧_LO and ∨_LO were defined in Example 2 and c(e) ∈ (Ψ, Ω).
Proof. Evidently, both operators are commutative, since Nu_O is. Also, they are non-decreasing, since Nu_O and the functions in Equations (2)-(5) are. They are associative because of the associativity of Nu_O.
It is easy to verify that the overbounding conditions are also satisfied.
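Since Equations (6) and (7) and the Example 4 formulas are not legible in this extract, the sketch below follows the classical conjunctive/disjunctive uninorm construction rescaled to [Ψ, Ω]: an offnorm acts below the neutral element, an offconorm above it, and min (for U_C) or max (for U_D) elsewhere; the linear rescalings play the role of ϕ_1, ϕ_1^{-1}, ϕ_2, ϕ_2^{-1}. Here min and max serve as the offnorm/offconorm pair, which Example 1 permits.

```python
def make_off_uninorms(off_norm, off_conorm, e, psi, omega):
    def phi1(c):      # [psi, e] -> [psi, omega]
        return psi + (c - psi) * (omega - psi) / (e - psi)
    def phi1_inv(c):  # [psi, omega] -> [psi, e]
        return psi + (c - psi) * (e - psi) / (omega - psi)
    def phi2(c):      # [e, omega] -> [psi, omega]
        return psi + (c - e) * (omega - psi) / (omega - e)
    def phi2_inv(c):  # [psi, omega] -> [e, omega]
        return e + (c - psi) * (omega - e) / (omega - psi)

    def u_c(x, y):    # conjunctive variant: min off the diagonal squares
        if x <= e and y <= e:
            return phi1_inv(off_norm(phi1(x), phi1(y)))
        if x >= e and y >= e:
            return phi2_inv(off_conorm(phi2(x), phi2(y)))
        return min(x, y)

    def u_d(x, y):    # disjunctive variant: max off the diagonal squares
        if x <= e and y <= e:
            return phi1_inv(off_norm(phi1(x), phi1(y)))
        if x >= e and y >= e:
            return phi2_inv(off_conorm(phi2(x), phi2(y)))
        return max(x, y)

    return u_c, u_d

# min / max are a valid n-offnorm / n-offconorm pair (Example 1):
u_c, u_d = make_off_uninorms(min, max, e=0.5, psi=-1.0, omega=2.0)
assert abs(u_c(0.9, 0.5) - 0.9) < 1e-12 and abs(u_c(-0.7, 0.5) + 0.7) < 1e-12
```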
Additionally, we have Proposition 5. Proof. Let us define the functions expressed in Equations (10) and (11), respectively.
Evidently, they are increasing bijective mappings.
Conversely, if we have Nu_O(•, •), we can define Û_N(•, •) as follows. Let us remark that we maintain the definition of the inverse mapping that we explained in Equations (3) and (5).
In agreement with Proposition 5, many predefined neutrosophic uninorms can be used to define n-offuninorms. In turn, fuzzy uninorms can be used to define neutrosophic uninorms; thus, it is simply necessary to find examples in the field of fuzzy uninorms; see further Section 4.1. First, let us make reference to some properties of n-offuninorms.
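Equations (10) and (11) are likewise not legible here; a linear increasing bijection between [Ψ, Ω] and [0, 1] is one natural choice, and the sketch below uses it, in the spirit of Proposition 5, to transport a uninorm on [0, 1] (the Silvert family member with λ = 1) into an n-offuninorm on [Ψ, Ω].

```python
def transport(u_unit, psi, omega):
    """Given a uninorm u_unit on [0,1]^2, return its [psi, omega] counterpart."""
    g = lambda c: (c - psi) / (omega - psi)      # [psi, omega] -> [0, 1]
    g_inv = lambda c: psi + c * (omega - psi)    # [0, 1] -> [psi, omega]
    return lambda x, y: g_inv(u_unit(g(x), g(y)))

def silvert(x, y):                               # lambda = 1, neutral element 0.5
    den = x * y + (1 - x) * (1 - y)
    return x * y / den if den else 0.0           # convention at the corners

nu_off = transport(silvert, psi=-1.0, omega=1.0)
assert abs(nu_off(0.6, 0.0) - 0.6) < 1e-12       # 0.0 acts as the neutral element
```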
Given the neutrosophic component n-offuninorm, if there exists y = T_O(y), then the following holds. Proof.
See that we applied the commutativity and associativity of Nu_O. Then, according to the previous results, let us assume, without loss of generality, two components defined over [Ψ_1, Ω_1] and [Ψ_2, Ω_2] satisfying that at least one of Ψ_1 and Ψ_2 is smaller than 0, or at least one of Ω_1 and Ω_2 is bigger than 1; then, a neutrosophic component n-offuninorm aggregates both of them, according to the interpretation we have to obtain.
Applications
In the following, we illustrate the applicability of the present investigation in three areas of application.
N-Offuninorms and MYCIN
Let us start with the parameterized Silvert uninorms, see [40], where λ > 0 and c_N(e_λ) = 1/(λ + 1). To convert this family into the equivalent one defined on [−1, 1], we have to apply the equations in Proposition 5. An additional consequence of these assertions is that the inequalities 0 < λ_1 < λ_2 imply an ordering of the corresponding operators. Applying Equations (2)-(5) to the conditions of the present example, the following transformations are obtained, with neutral element (1 − λ)/(1 + λ). Then, a neutrosophic component n-offnorm and a neutrosophic component n-offconorm are defined from Equations (8) and (9), respectively. Other properties of u_Oλ(•, •) are the following: to prove that those inequalities are strict, suppose equality held; then, we conclude that it is Archimedean.
u_O1(•, •) means the combination of the CFs of two independent experts about the hypothesis H. CF = −1.0 means the expert has 100% evidence against H, and CF = 1.0 means he or she has 100% evidence to support H. The smaller the CF, the greater the evidence against H; the larger the CF, the greater the evidence supporting H; whereas evidence with a degree close to 0 means a borderline degree of evidence. Here, u_O1(c_O(x), −c_O(x)) = 0, and u_O1(−1, 1) = u_O1(1, −1) = −1, meaning that 100% contradiction is assessed as 100% against H. The original u_O1(•, •) in [32] accepts that these values are undefined.
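For reference, the λ = 1 member of the transformed family on [−1, 1] is the classical PROSPECTOR combining function, sketched below together with the paper's convention for total contradiction.

```python
def prospector(x, y):
    """PROSPECTOR combining function on [-1, 1]; 0 is the neutral element."""
    if {x, y} == {-1.0, 1.0}:
        return -1.0        # the paper's convention: total contradiction -> -1
    return (x + y) / (1 + x * y)

print(prospector(0.8, -0.5))   # 0.5: conflicting evidence partially compensates
print(prospector(0.8, 0.0))    # 0.8: combining with "no evidence" changes nothing
```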
Another function is the Modified Combining Function C(x, y); see [34]. The component n-offnorm and n-offconorm obtained from the PROSPECTOR function are the following; see Figures 1 and 2. Hitherto, we have mostly calculated on neutrosophic components; nevertheless, n-offuninorms have to be defined for the three components altogether. Conjunctive and disjunctive neutrosophic component n-offuninorms were illustrated in Example 3; see also Example 5. Example 6 is a hypothetical example to explain the use of this theory in a real-life situation.

Example 6. Three physicians, denoted by A, B, and C, have to emit a criterion about a patient's disease, which presents somewhat confusing symptoms. They agree that the Certainty Factor is the best way to express their opinions. They use single-valued neutrosophic offsets, instead of a simple CF, to increase the accuracy of the criteria. Physician A thinks that the probability that they are dealing with a thyroid disease is A_T = ⟨−0.6, 0.4, 0.6⟩ and that it is an infectious disease is A_I = ⟨0.8, −0.5, −0.8⟩; thus, A is 60% against H_T and 40% undecided about it, whereas A is 80% in favor of H_I and 50% sure about it.
Similarly, physicians B and C express their opinions. To decide which hypothesis is stronger, H_T or H_I, they select the well-known PROSPECTOR function used in MYCIN (see Equation (12)) for each component.
Although we proved in Proposition 5 that neutrosophic uninorms are mathematically equivalent to offuninorms, it is worthwhile to remark that the reason for using an interval different from [0, 1] is that it can be useful for modeling real-life problems. The present example explains that reason well. The advantages arise from the accuracy and compactness of an expert's information. In this example, from an expert's viewpoint, it is easier to express opinions on the scale [−1, 1], with the aforementioned meaning, than on the scale [0, 1], which is less clear. Information compactness is given because a single offset is semantically equivalent to at least two neutrosophic sets.
Additionally, because of the significance of functions like u_O1(•, •) and C(x, y), which were used as aggregation functions in that well-known expert system, some authors have extended the domain of fuzzy uninorms to any interval [a, b], not necessarily restricted to a = 0 and b = 1; see [33,34].
This fact supports the usefulness of the present work, where for the first time the precedent ideas on extending the truth values beyond the scope of [0, 1] are naturally associated with the offset concept, maintaining the original definitions of the aggregation functions used in MYCIN.
Another powerful reason is the applicability of u_O1(•, •) and C(x, y), and hence of the fuzzy uninorms defined on [a, b], as threshold functions of artificial neurons in Artificial Neural Networks, as well as in Fuzzy Cognitive Maps, which are used in fields like decision making, forecasting, and strategic planning [33].
Such applications of uninorms in the fuzzy domain can be explored in the framework of neutrosophy theory, e.g., in Artificial Neural Networks based on neutrosophic sets, in Neutrosophic Cognitive Maps, among others [36,37].
N-Offuninorms and Implicators
Fuzzy uninorms are used to define implicators (see [41], pp. 151-160). This application was extended to neutrosophic uninorms [25]. To extend the implication operator to the offuninorm framework, we first need to consider the notion of offimplication, which has been defined symbolically.
The Symbolic Neutrosophic Offlogic Operators, or briefly the Symbolic Neutrosophic Offoperators, extend the Symbolic Neutrosophic Logic Operators, where every one of T, I, F has an under and an over version (see [31], pp. 132-139).
T_O = Over Truth, T_U = Under Truth; I_O = Over Indeterminacy, I_U = Under Indeterminacy; F_O = Over Falsehood, F_U = Under Falsehood. Let S_N = {T_O, T, T_U, I_O, I, I_U, F_O, F, F_U} be the set of neutrosophic symbols. An order is defined in S_N as follows: if '<' denotes "more important than", we have the following order: T_U < I_U < F_U < F < I < T < F_O < I_O < T_O; see Figure 3. Let us note that the proposed order is not the unique one; it depends on the decision maker's objective. Let us observe that I is the center of the elements according to <. For every α ∈ S_N, the symbolic neutrosophic offcomplement is denoted by C_SO(α) and is defined as the symmetric element with respect to the median centered in I, e.g., C_SO(F_O) = F_U and C_SO(F) = T; hence, given α ∈ S_N, its symbolic neutrosophic offnegation is C_SO(α). Additionally, for any α, β ∈ S_N, the symbolic neutrosophic offconjunction is defined as α ∧_SO β = min(α, β), the symbolic neutrosophic offdisjunction is defined as α ∨_SO β = max(α, β), whereas the symbolic neutrosophic offimplication is defined in Equation (13).

The continuous neutrosophic offnegation satisfies the following properties:
1. It is a non-increasing operator, which extends the classical negation operator in fuzzy logic theory. It is strictly decreasing when Ω + Ψ = 1.
2. It extends the notion of the symbolic neutrosophic offnegation because it satisfies the following properties:
2.1. It is centered in 0.5, i.e., the negation of 0.5 is 0.5; therefore, I = 0.5.
The precedent properties are easy to demonstrate.
Hence, the definition of the offimplication follows, where N_O is the offnegation defined in Equation (14).
Example 7. One illustrative example of Equation (16) is obtained by revisiting Section 4.1 and defining the following neutrosophic component n-offnorm. This is the transformation of the Silvert uninorms to the domain [−1, 2]², applying the functions in Equations (10) and (11) and the transformation in Proposition 5. Also, let us take U_ZD(c(x), c(y)) of Example 3. See that [−1, 2] is symmetric with respect to 0.5, and the neutral element is 0.5.

Then, we study the offuninorm defined in the following equation. Thus, we define the offimplication generated by U_O(•, •) according to Equation (16) as follows, where in this case we have U_ZD(T_O(α), …). This offimplicator satisfies the overbounding conditions. It is easy to check that, substituting u_O(•, •), we obtain the more classical equations with ⟨0, 1, 1⟩.
N-Offuninorms and Voting Games
The applicability of uninorms to solving group decision problems is evident. However, their use as part of a game theory solution is not so obvious. This subsection is devoted to solving voting games based on n-offuninorms.
A cooperative game with transferable utility consists of a pair (N, v), where N = {1, 2, ..., n} is a non-empty set of players, n ∈ ℕ, and v: 2^N → ℝ; i.e., v(•) is a function on the power set of N such that each coalition S ⊆ N is associated with a real number. v is called the characteristic function, and v(S) represents the joint payoff of the players in S. Additionally, v(∅) = 0 (see [42], p. 2).
A simple game models voting situations. It is a cooperative game such that for every coalition S, either v(S) = 0 or v(S) = 1, and v(N) = 1 (see [42], p. 7).
One solution is the Shapley-Shubik index, which is the Shapley value applied to simple games (see [42], pp. 6-7). The equation of the Shapley value is the following:

φ_i(v) = Σ_{S ⊆ N\{i}} [|S|!(|N| − |S| − 1)!/|N|!] [v(S ∪ {i}) − v(S)],

where |S| is the cardinality of coalition S, |N| is the cardinality of the set of players or grand coalition, and φ_i(v) is the value assigned to player i in the game. This is the unique solution that satisfies the following axioms: if i is such that for every coalition S the equation v(S ∪ {i}) = v(S) holds, then φ_i(v) = 0 (Dummy); and, given v and w, two games over N, φ_i(v + w) = φ_i(v) + φ_i(w) (Additivity). This value is the sum of the terms [v(S ∪ {i}) − v(S)], which represent the marginal contribution of player i to the coalitions S, multiplied by |S|!(|N| − |S| − 1)!/|N|!, which is the probability that the |S| players of S precede player i in the game and the |N| − |S| − 1 remaining players follow him or her. Thus, the Shapley value of i is the expected marginal contribution of i to the game (see [42], p. 7). The result of the Shapley-Shubik index is interpreted as a measure of each player's power.
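The formula above can be implemented directly; the sketch below computes the Shapley value of a game given as a table of coalition payoffs and reproduces the symmetric 1/3 split for the 3-player majority voting game.

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Shapley value; v maps frozenset coalitions to payoffs, v[frozenset()] = 0."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for s in combinations(others, r):
                S = frozenset(s)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (v[S | {i}] - v[S])   # weighted marginal contribution
        phi[i] = total
    return phi

# 3-player majority voting game: a coalition wins (v = 1) iff it has >= 2 players.
players = [1, 2, 3]
v = {frozenset(s): (1.0 if len(s) >= 2 else 0.0)
     for r in range(4) for s in combinations(players, r)}
print(shapley(players, v))   # symmetric: {1: 1/3, 2: 1/3, 3: 1/3}
```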
The n-offgame is interpreted in the following way:
1. Experts forecast that voters will rank coalition S in the k-th position of their preference; also, they cannot decide whether S will be ranked in the l-th position. The first place, k = 1, corresponds to the coalition preferred by all, and so on. Additionally, the n-offgame must satisfy the following rules:
2. Given any two coalitions S_1 and S_2, S_1 ≠ S_2, the first components of v(S_1) and v(S_2) are different. Thus, every coalition is associated with a unique number in the order of preference.
3. v(S) = ⟨k, k, 2^n − k + 1⟩ means experts have no doubt that coalition S will be voted into the k-th position.
Let us observe that this is not a simple game. This game can be interpreted as a multicriteria decision-making problem, where its solution is a measure of every player's power in the game. The Shapley value can be the solution to voting n-offgames, in the form given in Equation (18). Let us note that the minus sign in the expression was taken for convenience, because the rank we applied is decreasing with respect to the coalition's significance. Additionally, v(S ∪ {i}) − v(S) is the difference between two 3-tuple values; thus the operation is applied componentwise, and Equation (18) means the expected number of places won or lost in voter preference, as predicted by experts.
Apparently, the Shapley value cannot be the solution to this problem, because v(∅) ≠ 0 and v(•) is not a game. However, if we take into account that v(S) = ⟨k, l, 2^n − k + 1⟩ in fact represents three games, namely v_1(S) = k, v_2(S) = l, and v_3(S) = 2^n − k + 1, one per component, and additionally that they are linear transformations of three games with characteristic functions w_1, w_2, and w_3, where w_1(S) = 2^n − v_1(S), w_2(S) = 2^n − v_2(S), and w_3(S) = 1 − v_3(S), then the marginal contributions of the three pairs, w_1(•) and v_1(•), w_2(•) and v_2(•), w_3(•) and v_3(•), are the same except for the sign. Thus, these three pairs have the same Shapley value except for the sign, and therefore this property extends to v(•) and w(•).
The Shapley value is a rational solution to the game; nevertheless, it can differ from actual human behavior, as Zhang et al. suggested in [43], modeling restrictions in game decisions according to human behavior based on fuzzy uninorms. Therefore, we propose n-offuninorms to explore other behaviors in human decision making by recursively applying an n-offuninorm to every pair of values. Here, we explore n-offuninorms defined on [−L, L], L = 2^n − 1, with the PROSPECTOR parameterized function with λ > 0 and neutral element e = L(1 − λ)/(1 + λ); see Equation (19).
3. Let S_j be the set of coalitions not containing i, j = 1, 2, ..., 2^{n−1}. Let us take a_{i1} = v(S_1) and a_{i2} = v(S_2), calculate a_prev = U_Oλ(a_{i1}, a_{i2}), and go to step 4.

Let us point out that in the precedent algorithm the associativity of n-offuninorms was used. Moreover, the algebraic sum in the Shapley value and the n-offuninorms yield somewhat similar results. Thus, for U_Oλ(•, •) with λ = 1, we have that x, y < 0 imply both U_Oλ(x, y) < min(x, y) and x + y < min(x, y), whereas when x, y > 0, we have U_Oλ(x, y) > max(x, y) and x + y > max(x, y). For x, y satisfying x·y < 0, both U_Oλ(x, y) and x + y are compensatory operators, and finally, 0 is the neutral element of both. The solutions in Table 3 show that the greater λ, the greater the solution values. Thus, when λ is increased, its associated solution models more optimistic behavior with respect to the first component, which is compensated by more pessimistic behavior with respect to the third component.
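A hedged sketch of the proposed behavioral aggregation: the algebraic sum over marginal contributions is replaced by the recursive application of the λ-parameterized PROSPECTOR-type n-offuninorm rescaled to [−L, L] (with L = 2^n − 1 as defined above; the fold order is immaterial thanks to associativity). Only one component is shown; the three components are handled analogously.

```python
def u_off(x, y, L, lam=1.0):
    """lambda-parameterized PROSPECTOR-type n-offuninorm on [-L, L];
    the neutral element is L * (1 - lam) / (1 + lam)."""
    a, b = (x + L) / (2 * L), (y + L) / (2 * L)   # rescale to [0, 1]
    den = lam * a * b + (1 - a) * (1 - b)
    c = lam * a * b / den if den else 0.0         # Silvert uninorm, e = 1/(1 + lam)
    return 2 * L * c - L                          # rescale back to [-L, L]

def aggregate(contributions, L, lam=1.0):
    """Fold marginal contributions with the associative n-offuninorm."""
    acc = contributions[0]
    for c in contributions[1:]:
        acc = u_off(acc, c, L, lam)
    return acc

# n = 3 players -> L = 2**3 - 1 = 7; two positive contributions reinforce:
print(aggregate([2.0, 3.0], L=7.0))   # about 4.45, above max(2, 3)
```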
The advantages of the proposed approach become more evident when it is compared with a classical one restricted to {0, 1}. Here, we used a semantic represented with natural numbers and calculated directly on them. In contrast, to apply classical definitions in {0, 1}, we would need to define eight Boolean functions, one per element. What is more, some operations, such as marginal contributions, which are an algebraic difference, cannot be directly applied in the logical sense.
If we needed to extend the approaches to a continuous gradation, then a continuous ranking could be modeled with the identity line I_d(x) = x, whereas in the classical approach eight membership functions would have to be considered, the simplest ones being triangular (see Figure 6). From Figure 6, we can infer that there exists a transformation between both models; however, the proposed model is the simplest one.
Discussion
Neutrosophic oversets, undersets, and offsets are concepts of a novel and non-conventional theory of uncertainty. Historically, the convention of restricting logic to the interval [0, 1] has dominated fuzzy logic and its generalizations. Possibly this is a legacy of probability and mathematical logic, where, semantically speaking, 0 and 1 have been considered the two extreme opposite sides. Therefore, oversets, undersets, and offsets can be understood as controversial subjects. Nevertheless, Smarandache in [31] illustrates with some examples that such sets, whose domains surpass the scope of [0, 1], could be useful to represent knowledge with a valid semantic. This is a recent theory that needs further development and the scientific community's acknowledgment of its usefulness. One of our aims with this paper is to demonstrate that this theory can be useful. To achieve this end, we introduced the uninorm theory into the neutrosophic offset framework. This union is advantageous in many ways, the most evident one being that we have provided a new aggregation operator for these sets. As we mentioned in the introduction, there exists a wide variety of fuzzy uninorm applications, namely, Decision Making [9,14,15], DNA and RNA fusion [9], logic [17], and Artificial Neural Networks [16], among others. The uninorm is more flexible than the t-norm and t-conorm because it includes the compensatory property in some cases, which is more realistic for modeling human decision making, as was experimentally proved by Zimmermann in [21].
Also, uninorms have enriched other theories when generalized to other frameworks. In L*-fuzzy set theory [23], uninorms also aggregate independent non-membership functions to achieve more precision. Moreover, neutrosophic uninorms aggregate the indeterminate-membership functions [25].
Additionally, some authors have associated uninorms with non-conventional theories. In [33,34], we can find attempts to extend uninorm domains to an interval [a, b]. The reason is that the PROSPECTOR function related to the MYCIN expert system is a very important milestone in the history of Artificial Intelligence. The point is that the PROSPECTOR function is basically a uninorm, except that it is defined on the interval [−1, 1]; thus, we can consider intervals greater than [0, 1]. These authors argue that there exist two reasons to maintain the interval [−1, 1]: the first is the importance of the PROSPECTOR function; the second is the facility of interchanging information among users and decision makers in the form of degrees of acceptance or rejection of hypotheses.
The second non-conventional approach is the bipolar or multi-polar uninorms defined in [24]. The world is (and some people are) evidently multi-polar; in the case of bipolarity, this is modeled in [−1, 1]. In particular, in [24] we have a multi-polar space consisting of ordered pairs (k, x), where k ∈ {1, 2, ..., n} represents a category or class and x ∈ (0, 1], with the convention 0 = (k, 0) for every category. This is a more complex representation that takes a unique interval [−n, n] where, for x ∈ [−n, n], the function round(x) represents the category and its fractional part represents the degree of membership to that category. This is a real extension of bipolarity in [−1, 1] to multi-polarity. In [31] (pp. 127, 130), Tripolar offsets and Multi-polar offsets are defined. We illustrated in Example 8 that considering semantic values belonging to {−n, −n+1, ..., 0, 1, ..., n} can be advantageous.
The definition of uninorm-based implicators is not new in the literature; they can be seen in [41] (pp. 151-160) for fuzzy uninorms, in [17] extended to type-2 fuzzy sets, in [24] for L*-fuzzy set theory, and in [25] for neutrosophic uninorms. In the present paper, uninorm-based offimplicators are defined; however, we could only count on symbolic offimplication operators (see [31], p. 139). To extend this definition to a continuous framework, we had to extend the symbolic offnegation to a continuous one.
Finally, we preferred to illustrate a voting game solution instead of a group decision method, because the relationship of offuninorms with the latter subject is predictable. However, finding any game theory associated with uninorms is uncommon in the literature. One remarkable example can be seen in [43], where a behavioral approach is applied to a certain kind of game, in which uninorms model the humans' restrictions when dividing the gains among the players.
In the present paper, another approach is proposed, where an indeterminacy component is taken into account. Also, we showed that modeling with a natural-number semantic is simpler than utilizing the classical [0, 1] interval, because n membership functions can be substituted by a linear identity function. We basically defined the voting game solution from the Shapley-Shubik index components (see [42], pp. 6-7), where we only changed the algebraic sum to offuninorms. Classical approaches such as the Shapley-Shubik index are interested in a rational and fair solution; nevertheless, this often does not occur in real negotiations, and then behavioral solutions are needed.
Proposition 1. Let N^n_O(•, •) be a neutrosophic component n-offnorm; then, for any elements x, y ∈ M_O, we have N^n_O(c(x), c(y)) ≤ min(c(x), c(y)). Proof. Because of the monotonicity of the neutrosophic component n-offnorm and one of the overbounding conditions, we have N^n_O(c(x), c(y)) ≤ N^n_O(c(x), Ω) = c(x); hence N^n_O(c(x), c(y)) ≤ c(x), and similarly N^n_O(c(x), c(y)) ≤ c(y) can be proved; therefore, N^n_O(c(x), c(y)) ≤ min(c(x), c(y)). See that Proposition 1 maintains this property of the n-norms.

Definition 7. Likewise to the definition of the neutrosophic component n-offnorm, Definition 7 describes the neutrosophic component n-offconorm. Let c be a neutrosophic component (T_O, I_O, or F_O), c: M_O → [Ψ, Ω], where Ψ ≤ 0 and Ω ≥ 1. The neutrosophic component n-offconorm N^co_O: [Ψ, Ω]² → [Ψ, Ω] satisfies the following conditions for any elements x, y, and z ∈ M_O.
Definition 9. Let c be a neutrosophic component (T_O, I_O, or F_O), c: M_O → [Ψ, Ω], where Ψ ≤ 0 and Ω ≥ 1. The neutrosophic component n-offuninorm N^u_O: [Ψ, Ω]² → [Ψ, Ω] satisfies the following conditions for any elements x, y, and z ∈ M_O: i. There exists c(e) ∈ M_O such that N^u_O(c(x), c(e)) = c(x).
Lemma 2. Let c be a neutrosophic component (T_O, I_O, or F_O), c: M_O → [Ψ, Ω], where Ψ ≤ 0 and Ω ≥ 1. Given ∧_O a neutrosophic component n-offnorm and ∨_O a neutrosophic component n-offconorm, let us consider U_C(c(x), c(y)) and U_D(c(x), c(y)), the operators defined in Equations (6) and (7).
Proposition 4. ... if and only if the neutrosophic component n-offnorm and n-offconorm are Archimedean. Let us observe that <_O is the order < defined on the real line when c(x) is T_O(x), and it is > when c(x) is I_O(x) or F_O(x). Let c be a neutrosophic component (T_O, I_O, or F_O), c: M_O → [Ψ, Ω], where Ψ < 0 and Ω > 1, and let N^u_O: [Ψ, Ω]² → [Ψ, Ω] be a neutrosophic component n-offuninorm. Then, for every x, y ∈ M_O, a neutrosophic component n-offnorm and a neutrosophic component n-offconorm are defined by Equations (8) and (9).
Proposition 5. Let (T_O, I_O, or F_O), c_O: M_O → [Ψ, Ω], and (T, I, or F), c_N: M_N → [0, 1], be a neutrosophic component n-offset and a neutrosophic component, respectively. There exists a bijective mapping such that every neutrosophic component n-offuninorm is transformed into a neutrosophic component uninorm, and vice versa. Suppose x and e = ⟨T_O(e), I_O(e), F_O(e)⟩ are ≤_O-incomparable, i.e., x ≰_O e and e ≰_O x.
otherwise; see Figure 4. Also, u_O(•, •) models the neutrosophic n-components I_O and F_O; see Figure 5.
Figure 4. Depiction of the neutrosophic n-offimplication generated by U_ZD for T_O.

Figure 5. Depiction of the neutrosophic n-offimplication generated by u_O for both I_O and F_O.
Forecasted experts' ranking of the coalitions. Each coalition can represent a bloc of political parties.
Figure 6. Depiction of two kinds of 3-person game modeling. The classical [0, 1] approach is represented by dashed lines and triangular membership functions, whereas the solid line represents the solution based on offsets. The points represent the Boolean restrictions.
Comparative Analysis of Supersonic Flow in Atmospheric and Low Pressure in the Region of Shock Waves Creation for Electron Microscopy
This paper presents mathematical-physics analyses of the influence of inserted sensors on the supersonic flow behind a nozzle. It evaluates differences between the flow at atmospheric pressure and at low pressure on the boundary of continuum mechanics. To analyze the formation of detached and conical shock waves and their distinct characteristics at atmospheric pressure and at low pressure on the boundary of continuum mechanics, we conduct comparative analyses using two types of inserted sensors: flat-ended and tipped. These analyses were performed in two variants, considering a pressure ratio of 10:1 between the regions in front of and behind the nozzle. The first variant involved using atmospheric pressure in the chamber in front of the nozzle. The second type of analysis was conducted with a pressure of 10,000 Pa in front of the nozzle. While this represents a low pressure at the boundary of continuum mechanics, it remains above the critical limit of 133 Pa. This deliberate choice was made as it falls within the team's research focus on low-pressure regions. Although it is situated at the boundary of continuum mechanics, it is intentionally within a pressure range where the viscosity values are not yet dependent on pressure. In these variants, the nature of the flow was investigated with respect to the ratio of inertial and viscous flow forces under atmospheric pressure conditions, and it was compared with the flow conditions at low pressure. In the low-pressure scenario, the ratio of inertial and viscous flow forces led to a significant reduction in the value of the inertial forces. The results showed an altered flow character, characterized by a reduced tendency toward the formation of cross-oblique shockwaves within the nozzle itself and the emergence of shockwaves with increased thickness. This increased thickness is attributed to viscous forces inhibiting the thinning of the shockwave itself. This altered flow character may have implications, such as influencing temperature sensing with a tipped sensor: the shockwave area may form in a very confined space in front of the tip, potentially impacting the results. Additionally, due to the reduced inertial forces, the cone shock wave's angle is a few degrees larger than theoretical predictions, and there is no tilting due to the lower inertial forces. These analyses serve as the basis for upcoming experiments in the experimental chamber designed specifically for investigations in the given region of low pressures at the boundary of continuum mechanics. The objective, in combination with mathematical-physics analyses, is to determine changes within this region of the continuum mechanics boundary, where inertial forces are markedly lower than in the atmosphere but remain under the influence of unreduced viscosity.
Introduction
Temperature sensing in supersonic flow using sensors presents challenges not only due to the compressibility of the gas but also because of the formation of shockwaves, which strongly affect the values of the state quantities [1][2][3]. In free flow, their values are completely different than in flow with an inserted probe. This paper deals with the sensing of static temperature in a supersonic gas flow. There are two ways to handle this problem.
The first option is to insert a probe with a flat end into the stream, in front of which a perpendicular detached shock wave is created. At this point, we no longer sense the state quantities of static pressure and static temperature on the probe head; instead, we sense total pressure and stagnation temperature [4,5]. Using the stagnation temperature value, we can then calculate the static temperature. However, this method requires knowing the Mach number at the location of temperature sensing, which can sometimes be challenging. This leads to the necessity of performing velocity sensing at a specific location. One way to achieve this is by using a sensor based on a Pitot tube, which senses total pressure. However, capturing static pressure from the side of the tube is also necessary (Figure 1). This might pose a challenge in confined spaces. The second option is a suitably shaped temperature sensor fitted with a tip corresponding to the given flow, so that a cone shock wave is created at the front of the sensor, beyond which the velocity of the flow does not decrease to subsonic speed and there are no step changes in the state variables [6]. In addition, these changes occur behind the tip of the sensor, not in front of it.
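For a calorically perfect gas, the static temperature follows from the sensed stagnation temperature and the local Mach number through the standard isentropic relation T = T_0/(1 + (γ − 1)M²/2); a minimal sketch:

```python
def static_temperature(T0, M, gamma=1.4):
    """Static temperature from stagnation temperature (calorically perfect gas,
    gamma = 1.4 for air)."""
    return T0 / (1.0 + 0.5 * (gamma - 1.0) * M * M)

print(static_temperature(300.0, 2.0))   # ~166.7 K at Mach 2
```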
In this paper, a mathematical-physics analysis [7][8][9] of a temperature sensor inserted into a controlled supersonic flow generated behind the nozzle is carried out and both variants are evaluated.
These analyses were performed as a comparison of the issue in both atmospheric pressure and the low-pressure region at the boundary of continuum mechanics [10,11].The effect of the changed ratio of inertial and viscous forces due to low pressure was evaluated, but still in the region where low pressure does not affect the viscosity value.
The research on supersonic flow at low pressures, and its different flow characteristics at low pressures on the boundary of continuum mechanics compared to conventional atmospheric pressure, has had a significant impact on the development of the field of Environmental Scanning Electron Microscopy (ESEM) [12][13][14]. In general, electron microscopy has brought the possibility of viewing samples at a magnification that is several times more detailed than conventional optical microscopes [15,16]. However, electron microscopy necessitates a vacuum for the passage of the electron beam. In this environment, monitoring wet samples is challenging and requires extensive preparation [17]. As a solution, the Environmental Scanning Electron Microscope (ESEM) was developed. In this microscope, the specimen chamber is separated from the vacuum spaces by a system of small apertures and an intermediate chamber. Some devices, such as mass spectrometers, include intermediate chambers. This design allows samples to be held in the specimen chamber at pressures on the boundary of continuum mechanics. Consequently, wet samples can be observed without special preparation, and it becomes possible to study electrically non-conductive or semiconducting samples [18], or native samples [19], without damaging them, or to study these samples in dynamic in-situ experiments [20,21]. ESEM detects signal electrons using specialized ionization or scintillation detectors [22,23].
ESEM consists of chambers with a large pressure gradient separated by a small aperture, in which a critical flow is generated, which has a great impact on the scattering of the electron beam. This paper also contributes to the research on critical flow at the boundary of continuum mechanics in the field of shockwaves.
Experimental Chamber
These analyses serve as preparatory materials for experiments conducted in the experimental chamber, specifically designed for the comprehensive study of gas flow in the supersonic regime. The chamber accommodates investigations under classical atmospheric conditions as well as in the low-pressure region at the boundary of continuum mechanics and the slip-flow region (Figure 2) [24]. The chamber comprises two sub-chambers separated by a replaceable component, which can be used to separate them with different aperture and nozzle variants according to the type of research currently planned [25]. In this paper, a variant is analyzed where the chambers are separated by an aperture with a diameter of 1.6 mm and a subsequent nozzle with dimensions determined later in the paper according to the theory for the calculated cross-section. The experimental chamber was lent to Brno University of Technology by the team of Vílém Neděla from the Institute of Scientific Instruments of the Czech Academy of Sciences, who manufactured it and are engaged in research in the field of low-pressure flow, among other things. The results of the research on the different character of supersonic flow at low pressures will be used in the construction of a differentially pumped chamber in ESEM. In this context, shock waves are analyzed; since there are pressure gradients across them, they have a great influence on the primary electron beam that passes through the differentially pumped chamber, because greater scattering of the electron beam occurs on them. Each scattering has the effect of reducing the resulting sharpness of the image.
A 2D axisymmetric model of the chamber with aperture and nozzle was created for further theoretical calculations and mathematical-physics analyses. As mentioned, the aperture diameter was chosen to be 1.6 mm.
Analyses were performed for two pressure drops. The first one is the atmospheric-pressure variant (P_o = 101,325 Pa and output pressure P_v = 10,132 Pa, i.e., a ratio of 10:1). From the given ratio, the calculated dimension of the nozzle was further determined using the theory of one-dimensional isentropic flow (see below). Boundary conditions for the analyses are shown in Figure 2. Subsequently, for the same pressure ratio, an analysis was performed for the low-pressure variant (P_o = 10,000 Pa and output pressure P_v = 1000 Pa).
This second option was chosen because all our research is in the low-pressure area, but still in the area of continuum mechanics. In this case, with a choice above 133 Pa, the viscosity of the gas is not dependent on the applied pressure. The derivation of the dynamic viscosity relationship in terms of the mean free path (Equation (1)) was taken from [26].
This relationship leads to the surprising conclusion that dynamic viscosity does not depend on the pressure and density of the gas. Physically, it can be justified by the fact that at a lower gas density, fewer molecules jump between layers, but due to the longer free path, each jump is associated with a proportionately greater momentum transfer. Experiments have confirmed this conclusion for gases under conditions in which the gas can be considered ideal.
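A small numerical sketch of this cancellation, using elementary kinetic theory with an illustrative 1/2 prefactor (the exact prefactor depends on the derivation, and the molecular data below are rough nitrogen-like values, not taken from the paper):

```python
from math import sqrt, pi

K_B = 1.380649e-23          # Boltzmann constant, J/K

def viscosity_estimate(p, T, m, d):
    """p: pressure [Pa], T: temperature [K], m: molecular mass [kg],
    d: molecular diameter [m]."""
    rho = p * m / (K_B * T)                       # ideal-gas density (~ p)
    c_mean = sqrt(8 * K_B * T / (pi * m))         # mean thermal speed
    l_mfp = K_B * T / (sqrt(2) * pi * d**2 * p)   # mean free path (~ 1/p)
    return 0.5 * rho * c_mean * l_mfp             # p cancels out

# Same value (~1.8e-5 Pa s) at 101,325 Pa and at 10,000 Pa:
for p in (101325.0, 10000.0):
    print(p, viscosity_estimate(p, 293.0, 4.65e-26, 3.7e-10))
```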
Pfeiffer probes were used as sensors that sensed the absolute pressure in the chambers. Their location can be seen in Figure 1 (left); these are the red probes inserted into the appropriate chamber. Their location is also shown in the diagram in Figure 2. For the current measurements, Pfeiffer CMR 361 sensors with a measuring range from 10 Pa to 110,000 Pa and a Pfeiffer CMR 362 sensor with a measuring range from 1 Pa to 1100 Pa are used.
Methodology
As a first step, using the theory of one-dimensional isentropic flow, the calculated state of the nozzle for the selected angle of 12° was determined for the selected ratio P_v/P_o = 0.1 [27,28]. Subsequently, an analysis of this nozzle was performed with the previously tuned Ansys Fluent setup [24], and the results were evaluated with regard to critical flow theory. This combination of theory and mathematical-physics analyses is one of the great advantages of modern research methodology [29][30][31].
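For the chosen ratio P_v/P_o = 0.1, the one-dimensional isentropic relations give the design exit Mach number and the exit-to-throat area ratio directly; a minimal sketch for air (γ = 1.4):

```python
def mach_from_pressure_ratio(p_ratio, gamma=1.4):
    """Exit Mach number from p_ratio = Pv/Po (static over stagnation pressure)."""
    return (2.0 / (gamma - 1.0)
            * (p_ratio ** (-(gamma - 1.0) / gamma) - 1.0)) ** 0.5

def area_ratio(M, gamma=1.4):
    """A/A* from the isentropic area-Mach relation."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return (1.0 / M) * t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

M_exit = mach_from_pressure_ratio(0.1)      # ~2.16 for gamma = 1.4
print(M_exit, area_ratio(M_exit))           # A/A* ~ 1.93
```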
Subsequently, analyses of supersonic flow in the nozzle and behind the nozzle are carried out in three variants for both of the above-mentioned pressure conditions.
• Flow in free space, in an intact environment (Free Flow).
• Flow in an environment with an inserted temperature sensor with a flat end (Flat shape).
• Flow in an environment with an inserted temperature sensor with a conical end (Angle 30°).
These analyses require theoretical knowledge in the given areas, which will now be briefly introduced, starting with an understanding of the three regimes of gas flow. The last point is also related to the change in temperature behind the perpendicular shock wave.
Another consideration is the behavior of the compressible supersonic flow regime at the tip of the sensor, where not a detached perpendicular shock wave but a cone shock wave is created.
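For orientation, the attached-shock angle for a two-dimensional wedge follows from the classical θ-β-M relation, solved below by bisection on the weak branch; for an axisymmetric cone, the Taylor-Maccoll equation applies instead and yields a weaker shock at the same tip half-angle, so the wedge value is only an upper-bound estimate for the tipped sensor.

```python
from math import sin, cos, tan, atan, asin, radians, degrees

def theta_from_beta(beta, M, gamma=1.4):
    """Flow deflection angle theta for shock angle beta (theta-beta-M relation)."""
    num = M * M * sin(beta) ** 2 - 1.0
    den = M * M * (gamma + cos(2.0 * beta)) + 2.0
    return atan(2.0 / tan(beta) * num / den)

def weak_shock_angle(theta_deg, M, gamma=1.4):
    """Weak-branch shock angle in degrees, found by bisection."""
    theta = radians(theta_deg)
    lo, hi = asin(1.0 / M) + 1e-9, radians(65.0)   # Mach angle .. near theta_max
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if theta_from_beta(mid, M, gamma) < theta:
            lo = mid
        else:
            hi = mid
    return degrees(0.5 * (lo + hi))

# Half-angle 15 deg (the 30-degree-tip sensor) at the design Mach number ~2.16:
print(weak_shock_angle(15.0, 2.157))   # shock angle ~42 deg (wedge estimate)
```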
In the end, the whole research effort is carried out using a modern methodology, combining these physical theories with mathematical-physics analyses using the Ansys Fluent system, preparing experimental measurements using sensors, and thus verifying the results and retrospectively tuning the mathematical-physics analysis in the Ansys Fluent system. The results of experimental measurements will thus be harmonized with the mathematical-physics analysis [32,33].
Simulation Settings in the Ansys Fluent System
For this type of analysis, the Pressure-Based Coupled solver proved, after comparative analyses, to be less computationally demanding while delivering the same results as the Density-Based solver.
It is a pressure-based solver that employs an algorithm belonging to a general class of methods called projection methods [34]. In the projection method, the constraint of mass conservation (continuity) on the velocity field is satisfied by solving a pressure equation (or a pressure-correction equation). Unlike the segregated algorithm described above (its scheme and a comparison with the coupled variant are shown in Figure 3 [35]), the pressure-based coupled algorithm solves a coupled system of equations comprising the momentum equations and the pressure-based continuity equation. Thus, in the coupled algorithm, Steps 2 and 3 of the segregated solution algorithm are replaced by a single step in which the coupled system of equations is solved. The remaining equations are solved in a decoupled fashion, as in the segregated algorithm. Since the momentum and continuity equations are solved in a closely coupled manner, the rate of solution convergence improves significantly compared to the segregated algorithm. However, the memory requirement increases to 1.5-2 times that of the segregated algorithm, since the discrete system of all momentum and pressure-based continuity equations needs to be stored in memory when solving for the velocity and pressure fields (rather than just a single equation, as is the case with the segregated algorithm).
In the next setting, the Advection Upstream Splitting Method (AUSM) scheme was selected.It is a numerical method used for solving advection equations in computational fluid dynamics.It is particularly useful for the simulation of compressible flows with shocks and discontinuities, which fully corresponds to our case of solving large gradients associated with shock waves.The AUSM is developed as a numerical inviscid flux function for solving a general system of conservation equations.It is based on the upwind concept and was motivated to provide an alternative approach to other upwind methods, such as the Godunov method, flux difference splitting methods by Roe, and Solomon and Osher, and flux vector splitting methods by Van Leer, and Steger and Warming.The AUSM scheme first computes a cell interface Mach number based on the characteristic speeds of the neighboring cells.The interface Mach number is then used to determine the upwind extrapolation for the convection part of the inviscid fluxes.A separate Mach number splitting is used for the pressure terms.Generalized Mach number-based convection and pressure splitting functions were proposed by Liou [35] and the new scheme was termed AUSM+.The AUSM+ scheme has several desirable properties: 1.
Provides exact resolution of contact and shock discontinuities 2.
Preserves positivity of scalar quantities 3.
Free of oscillations at stationary and moving shocks To solve the transfer of results between the cell's mesh, the second-order upwind scheme was chosen, where variables on cell surfaces are calculated using a multivariate linear reconstruction approach [36,37].In this approach, higher-order accuracy is achieved at the cell faces using the Taylor series of expansion of a cell-centered solution around the cell's center of gravity [38,39].
The face value is thus computed as φ_f = φ + ∇φ · r, where φ and ∇φ are the cell-centered value and its gradient in the upstream cell, and r is the displacement vector from the center of gravity of the upstream cell to the center of gravity of the face.
This setup was able to handle the flow with all the changes induced during pumping, fully managing this type of very complex flow, and its results corresponded to experimental measurements [40].
A suitably chosen mesh was needed for the selected mathematical-physics analysis, and the resulting computational mesh combines a structured mesh of quadrilateral elements (the 2D variant of hexahedral elements) with a triangular mesh. The advantage of such structured meshes is that they economize on cells when meshing purely rectangular surfaces (Figure 2) and also reduce the smearing of results caused by possible errors when transferring results across skewed cells. A structured mesh cannot be used in the aperture and nozzle areas, so a triangular mesh was used there. For the most accurate simulation, significant mesh refinement was applied in the area where supersonic flow is expected. Figure 4a shows the basic setting of the mesh, with refinement in the area where more complex physical phenomena in the flow and gradients are expected [41]. During the calculation, manual adaptive refinement was also performed using the Field Variable method. The extent of mesh adaptation was chosen according to the maximum values of the cell derivative option (pressure gradient), with a maximum refinement level of 4. As a result, the pressure gradients in the supersonic flow regions of the nozzle were appropriately captured, as can be seen in Figure 4b. A mesh independence study was carried out as a check, consisting of monitoring the course of the monitored variables after mesh refinement. The global parameters monitored were absolute pressure, static temperature, velocity, and density.
An important factor was the setting of the boundary layer. A rule of thumb is to create at least 10 cells across the channel cross-section from the axis to the wall, with refinement at the wall, and, in the case of a body immersed in the flow, at least five fine cells across the region around the body. When setting up a turbulent model, the size of the first cell is usually determined using the y+ variable. This option is not available for the laminar model we set up, because there is no turbulent flow, and any eddies created, for example, by detachment at the edges of the nozzle are laminar in character, without mixing of the individual layers of the flow. The correctness of the boundary layer settings can then be checked by evaluating the boundary layer, in which the characteristic velocity profile must develop, as shown in Figure 5.
Theoretical Materials

Determination of the Computational Cross-Section

For the planned experiments, the calculation state of the nozzle was determined using the theory of one-dimensional isentropic flow for the construction of this nozzle [42]. Thus, based on the dimension of the input cross-section of the nozzle, which is the narrowest, so-called critical cross-section, the output cross-section of the nozzle was determined [43,44].
This calculation is based on the following relationships of the theory of one-dimensional isentropic flow, which relate state quantities such as pressure, temperature, density, velocity, Mach number, and nozzle flow (Equations (3)-(8)) [45,46].
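In their usual form (a sketch of the standard one-dimensional isentropic relations; the exact notation of Equations (3)-(8) follows [45,46]), these read:

\frac{T_0}{T_v} = 1 + \frac{\kappa-1}{2}M^2, \qquad \frac{p_0}{p_v} = \left(1 + \frac{\kappa-1}{2}M^2\right)^{\frac{\kappa}{\kappa-1}}, \qquad \frac{\rho_0}{\rho_v} = \left(1 + \frac{\kappa-1}{2}M^2\right)^{\frac{1}{\kappa-1}},

\frac{A}{A_{kr}} = \frac{1}{M}\left[\frac{2}{\kappa+1}\left(1 + \frac{\kappa-1}{2}M^2\right)\right]^{\frac{\kappa+1}{2(\kappa-1)}}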
where p0 is the input pressure, pv is the output pressure, T0 is the input temperature, Tv is the output temperature, v0 is the input velocity, vv is the output velocity, vkr is the critical velocity, ρ0 is the input density, ρv is the output density, M is the Mach number, κ is the Poisson constant (κ = 1.4 for the nitrogen used), A is the computational cross-section, and Akr is the critical cross-section. The nozzle has been designed for a ratio of output to input pressure Pv:Po = 0.1.
In the case of the designed nozzle, from the above relationships of the theory of isentropic one-dimensional flow (Equations (3)-(8)), and based on the input critical cross-section of the nozzle with a diameter of 1.6 mm, an output cross-section with a diameter of 2.22 mm, a length of 0.5 mm, and an opening angle of 12° was set according to [27] (Figure 6).
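The exit diameter can be checked numerically from these relations. The following is a minimal sketch (our illustration, not the authors' code; variable names are ours) that recovers the designed output cross-section from the design pressure ratio and the critical diameter:

import math

kappa = 1.4          # Poisson constant of nitrogen
p_ratio = 0.1        # design ratio p_v / p_0
d_crit = 1.6         # critical (input) diameter [mm]

# Exit Mach number from p0/pv = (1 + (k-1)/2 M^2)^(k/(k-1))
M = math.sqrt((p_ratio ** (-(kappa - 1.0) / kappa) - 1.0) * 2.0 / (kappa - 1.0))

# Area ratio A/A_kr from the area-Mach relation
area_ratio = (1.0 / M) * ((2.0 / (kappa + 1.0))
             * (1.0 + (kappa - 1.0) / 2.0 * M * M)) ** ((kappa + 1.0) / (2.0 * (kappa - 1.0)))

d_exit = d_crit * math.sqrt(area_ratio)
print(f"M = {M:.2f}, A/A_kr = {area_ratio:.3f}, d_exit = {d_exit:.2f} mm")  # d_exit = 2.22 mm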
This designed nozzle was analyzed in the Ansys Fluent system. For the calculation, the variant Pv = 200 Pa : Po = 2000 Pa was chosen.
The boundary conditions setting for the mathematical-physics analysis is shown in Figure 2 above.
Gas Flow Regimes
Temperature sensing in the supersonic flow regime is closely related to the flow velocity. The problem being solved, which also manifests itself in temperature sensing in supersonic flow, therefore needs to be described in connection with the sensing of flow velocity, and thus also of static and total pressure.
As mentioned, one of the options for measuring flow velocity is to use a Pitot tube sensor (Figure 7). This principle is based on the relation in which the total pressure is equal to the sum of the dynamic and static pressures.
Dynamic pressure is further calculated as half the product of the density and the square of the velocity, so that (Equation (9))

p_0 = p + \frac{1}{2}\rho v^2.

If we obtain the total pressure from the head of the Pitot tube and the static pressure from its side, the velocity can be determined from this relationship (Equation (9)). However, it is not always possible to determine it using this simple calculation, due to the three different flow regimes described below.
In practice, however, this simple principle of diagnosis can be used only for velocities up to 0.3 Mach, as there are three mathematical flow regimes for determining velocity with a Pitot tube, namely [47-49]:
1. Incompressible Regime
2. Subsonic Compressible Regime
3. Supersonic Compressible Regime
Incompressible Regime
A flow can be considered incompressible if its velocity is less than 30% of the speed of sound. For such a fluid, Bernoulli's equation describes the relationship between velocity and pressure along a streamline, and the simple equation above applies (Equation (10)).
After rearranging, the velocity can be obtained from the following relationship (Equation (11)): v = \sqrt{2(p_0 - p)/\rho}.
Subsonic Compressible Regime
For flow velocities greater than 30% of the speed of sound, the fluid is considered compressible. In the theory of compressible flow, it is necessary to consider the dimensionless Mach number M, which is defined as the ratio of the flow velocity v to the speed of sound c (Equation (12)): M = v/c. When the Pitot tube is subjected to a subsonic compressible flow (0.3 < M < 1), the flow of gas along the streamlines ends in smooth compression at the stagnation point of the Pitot tube (Figure 8). The relationship for determining the velocity then takes on a more complex form (Equation (13)), which takes into account the Poisson constant and the value of the stagnation pressure, which the Pitot tube senses instead of the total pressure.
where κ is the Poisson constant, Cp is the heat capacity at constant pressure, Cv is the heat capacity at constant volume, and cp and cv are the respective specific heat capacities.
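For orientation, Equation (13) is usually written in the following form (a sketch of the standard compressible Pitot relation; notation as above, with p the static pressure sensed at the side of the tube and p_0 the stagnation pressure sensed at its head):

v = \sqrt{\frac{2\kappa}{\kappa-1}\,\frac{p}{\rho}\left[\left(\frac{p_0}{p}\right)^{\frac{\kappa-1}{\kappa}} - 1\right]}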
Supersonic Compressible Regime
In supersonic mode (M > 1), a shock wave is formed in front of the Pitot tube head. The gas is first slowed down non-isentropically to subsonic velocity and then slows isentropically to zero velocity at the stagnation point (Figure 9). The relationship for determining the velocity can no longer have a closed form; it is based on the ratio of the stagnation pressure sensed at the Pitot tube head to the static pressure sensed again at its side. This ratio is expressed by Equation (14), from which the value of the Mach number must be extracted by an iterative method; the flow velocity is then obtained from the Mach number.
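Equation (14) is commonly known as the Rayleigh Pitot formula; in its usual form (a sketch, with p_{02} assumed to denote the stagnation pressure behind the bow shock and p_1 the free-stream static pressure) it reads:

\frac{p_{02}}{p_1} = \frac{\left(\frac{\kappa+1}{2}M^2\right)^{\frac{\kappa}{\kappa-1}}}{\left(\frac{2\kappa M^2 - (\kappa-1)}{\kappa+1}\right)^{\frac{1}{\kappa-1}}}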
As will be evident from the results of the analysis in the Ansys Fluent system, a large part of the flow behind the aperture moves in this regime.
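Because the pressure ratio grows monotonically with M for M > 1, the iterative extraction of the Mach number mentioned above can be done, for example, by simple bisection. The following is a minimal sketch (our illustration, not the authors' code):

import math

def rayleigh_pitot_ratio(M, kappa=1.4):
    # p02/p1 behind the detached shock in front of the Pitot head (valid for M > 1)
    a = ((kappa + 1.0) * M * M / 2.0) ** (kappa / (kappa - 1.0))
    b = ((2.0 * kappa * M * M - (kappa - 1.0)) / (kappa + 1.0)) ** (1.0 / (kappa - 1.0))
    return a / b

def mach_from_pitot(ratio, kappa=1.4, lo=1.0, hi=10.0, tol=1e-10):
    # bisection: the ratio is monotonically increasing in M for M > 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rayleigh_pitot_ratio(mid, kappa) < ratio:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(mach_from_pitot(6.7))  # a measured ratio of ~6.7 gives M of about 2.2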
Temperature Measurement
Here, a similar problem with the shock wave manifests itself as with the total pressure sensing. In front of the temperature sensor, the flow is slowed down and a perpendicular detached shock wave is formed, resulting in an increase in temperature (Figure 10). The temperature sensor senses the stagnation temperature T0, but the value sought is the static temperature, which is obtained from Equation (15) [50,51]:

T = \frac{T_0}{1 + \frac{\kappa-1}{2}M^2}

where T is the static temperature, T0 is the stagnation temperature, κ is the Poisson constant, and M is the Mach number.
Perpendicular-Detached Shock Wave
In the case of a perpendicular detached shock wave, there are sharp gradients behind it, and the temperature, density, and both total and static pressure change abruptly. For this case, there is a one-dimensional flow theory for a perpendicular (normal) shock wave, which relates the Mach number, velocity, temperature, density, and both total and static pressure entering and exiting the shock wave (Equations (16)-(21)) [46].
where M is the Mach number, V is the velocity, T is the temperature, P is the static pressure, P o is the total pressure, and ρ is the density.
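In their standard form (a sketch of the usual normal-shock relations; subscripts 1 and 2 are assumed to denote the states in front of and behind the shock wave), Equations (16)-(21) read:

M_2^2 = \frac{(\kappa-1)M_1^2 + 2}{2\kappa M_1^2 - (\kappa-1)}, \qquad \frac{\rho_2}{\rho_1} = \frac{V_1}{V_2} = \frac{(\kappa+1)M_1^2}{(\kappa-1)M_1^2 + 2},

\frac{P_2}{P_1} = 1 + \frac{2\kappa}{\kappa+1}\left(M_1^2 - 1\right), \qquad \frac{T_2}{T_1} = \frac{P_2}{P_1}\,\frac{\rho_1}{\rho_2},

\frac{P_{o2}}{P_{o1}} = \left[\frac{(\kappa+1)M_1^2}{(\kappa-1)M_1^2 + 2}\right]^{\frac{\kappa}{\kappa-1}} \left[\frac{\kappa+1}{2\kappa M_1^2 - (\kappa-1)}\right]^{\frac{1}{\kappa-1}}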
Cone Shock Wave
Since we will be dealing with the cone on the sensor, which ensures the formation of a cone shock wave, the problem must be briefly described. A cone shock wave does not produce such large changes in the state variables, and therefore does not cause as large a pressure loss, as a perpendicular detached shock wave [52]. Due to the conical shape of the sensor inserted into the flow, the relationship between the cone and the shock wave must be solved according to the Taylor-Maccoll theory (Figure 11, Equation (22)), where κ is the specific heat ratio, v is the velocity, M is the Mach number, s is the shock angle, a is the deflection angle, r is the radius, θ is the ray angle, and c is the cone angle. This dependence can be plotted in graphics clearly illustrating the Mach number and tip angle values at which the cone shock wave detaches from the sensor tip, resulting in the formation of a perpendicular shock wave (Figure 12) [54-56]. The dependence of the probe cone angle (αc) on the shock wave angle (αs) is based on Equation (23); as input parameters, the values for one of the expected measurement points in the experimental chamber were entered, namely a Mach number M1 = 2.58 and a selected angle αc = 18°. The change in the values of density, pressure, temperature, and Mach number after passing through the shock wave is likewise solved by the theory of one-dimensional flow for a cone shock wave, using the given angle [46].
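The Taylor-Maccoll equation itself is an ordinary differential equation for the radial velocity component between the shock and the cone surface; in its usual nondimensional form (a sketch of the assumed form of Equation (22), with V_r the nondimensional radial velocity) it reads:

\frac{\kappa-1}{2}\left[1 - V_r^2 - \left(\frac{dV_r}{d\theta}\right)^2\right]\left[2V_r + \frac{dV_r}{d\theta}\cot\theta + \frac{d^2V_r}{d\theta^2}\right] - \frac{dV_r}{d\theta}\left[V_r\frac{dV_r}{d\theta} + \frac{dV_r}{d\theta}\frac{d^2V_r}{d\theta^2}\right] = 0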
In Equations (24)-(30), M1n is the normal component of the Mach number, M2 is the Mach number behind the cone shock wave, T2 is the temperature behind the shock wave, T1 is the temperature in front of the shock wave, p2 is the static pressure behind the shock wave, p1 is the static pressure in front of the shock wave, ρ2 is the density behind the shock wave, ρ1 is the density in front of the shock wave, p02 is the total pressure behind the shock wave, and p01 is the total pressure in front of the shock wave.
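The jump across the shock itself is obtained from the normal component of the upstream Mach number. The following is a minimal sketch (our illustration, not the authors' code) of this part of Equations (24)-(30), evaluated for the attached wave discussed later in the Results (M1 = 1.9, shock angle of about 49.8°):

import math

def post_shock_state(M1, shock_angle_deg, kappa=1.4):
    # normal component of the upstream Mach number
    s = math.radians(shock_angle_deg)
    M1n = M1 * math.sin(s)
    # normal-shock jump relations applied to the normal component
    M2n = math.sqrt(((kappa - 1.0) * M1n**2 + 2.0) / (2.0 * kappa * M1n**2 - (kappa - 1.0)))
    p2_p1 = 1.0 + 2.0 * kappa / (kappa + 1.0) * (M1n**2 - 1.0)
    rho2_rho1 = (kappa + 1.0) * M1n**2 / ((kappa - 1.0) * M1n**2 + 2.0)
    T2_T1 = p2_p1 / rho2_rho1
    return M1n, M2n, p2_p1, rho2_rho1, T2_T1

print(post_shock_state(1.9, 49.8))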
Results
Mathematical-physics analyses of the supersonic flow of the nozzle were performed and compared.
In the first step, a mathematical-physics analysis was performed on the given shape of the nozzle (Figure 6) without an inserted sensor, marked in the graphics as Free Flow (Figure 13a). In the second step, the analysis was performed with an inserted sensor with a flat end, marked in the graphics as Flat Shape (Figure 13b). In the third step, the analysis was performed with an inserted probe with a conical end with an angle of 30°, marked in the graphics as Angle 30° (Figure 13c). This color coding is also used for the plotted graphics. In the following sub-chapters, the individual results will be analyzed and described in the following manner: in each sub-chapter, an analysis of the atmospheric pressure variant (pressure ratio between Po = 101,325 Pa and output Pv = 10,132 Pa) will be carried out first, followed by a comparison with the low-pressure variant (pressure ratio between Po = 10,000 Pa and output Pv = 1000 Pa), both at a pressure ratio of 10:1, as shown in Figure 2. Each sub-chapter contains two images for convenient comparison, where 'a' stands for the atmospheric pressure variant and 'b' for the low-pressure variant.
Evaluation of Flow Velocity
Before we proceed to evaluate the course of the main investigated variable of this paper, the static temperature, we will evaluate the related state quantities on which it depends. First, the velocity is evaluated in the form of the Mach number.
First, we compare the results of the atmospheric pressure variant.
In the graphics (Figure 14a), a sharp increase in the Mach number can be seen behind the aperture in the first part of the nozzle, up to a distance of 1.2 mm, reaching a value of almost 2.5 Mach. Then, under the influence of a perpendicular shock wave (Figure 15a), there is a sharp drop in velocity to a subsonic value. In Figure 15a, this detached shock wave is barely noticeable; it originates from the crossing of the oblique shock waves. The velocity then increases again, and at a distance of about 5 mm it decreases once more, but not to a subsonic value, because this time the gas flow passes not through a detached but through an oblique shock wave. Figure 15a also shows that at this crossing of the oblique shock waves no detached shock wave originates, not even a short one; only the reflection and crossing of the oblique shock waves occur. These do not cause a drop to subsonic velocity or sharp gradients of variables such as pressure or temperature, as will be evident in the other results.
What is crucial for this case is that, in free flow, after the passage of the oblique shock wave the velocity increases again up to the next location where the oblique shock wave crosses the flow axis. This is evident in Figure 15a in the bottom right corner, which is already outside the displayed range of the distribution, as it is no longer essential for our evaluation.
In Figure 16a, the Mach number distribution shows where the oblique shock waves pass, because the velocity changes rapidly there and, as will be seen later in the analysis of pressure and temperature, these quantities change as well. However, their change is not nearly as significant as the transition on the axis at the point where the short perpendicular shock wave passes. The low-pressure variant is different (Figure 14b): no significant oblique shock waves form in the nozzle, and no perpendicular shock wave is created (Figures 15b and 16b). Therefore, there are no steep changes to subsonic velocity, and the whole process is smoother. This is because the ratio of inertial and viscous forces is different at low pressure. Due to the low pressure (and thus the low density), the inertial forces are lower than in the atmospheric pressure variant. At the same time, the selected low-pressure ratio is still above the threshold below which the viscous forces would be affected by decreasing pressure. Consequently, the viscous forces play a more significant role in this ratio.
A comparison can be made using the Reynolds number, which relates inertial and viscous forces; the viscous forces represent the resistance of the medium due to internal friction. The higher the Reynolds number, the lower the influence of the frictional forces between fluid particles on the total resistance. Internal friction depends on the velocity gradient (Equation (31)): τ = η (dv/dy).
where τ is the tangential stress, η is the dynamic viscosity, and dv/dy is the velocity gradient. Kinematic viscosity is related to dynamic viscosity and is determined by Equation (32): ν = η/ρ, where ρ is the density. The Reynolds number Re is then given by Equation (33): Re = v r / ν, where v is the velocity of the flowing fluid, r is the radius of the tube through which the fluid flows, and ν is the kinematic viscosity. This relationship can be used in our case in the aperture and the nozzle. For fluid flow in spaces of a more general shape than a tube, in our case the flow around the sensors, the tube radius r is replaced by a suitable characteristic dimension l; then Re = v l / ν applies (Equation (34)). For a basic comparison, we can use the function (local average mixture density × local average cell velocity × cell dimension)/local viscosity in the Ansys Fluent system, where the characteristic dimension is taken according to the cell size for the Cell Reynolds number (Figure 17). It is evident that in the low-pressure variant, the influence of frictional viscous forces is significantly smaller than that of inertial forces, on average up to 1:15.
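A minimal sketch of the Cell Reynolds number function used for the comparison in Figure 17 (our illustration; the numerical values below are indicative only and are not taken from the analysis):

def cell_reynolds(rho, v, cell_size, eta):
    # Re_cell = (local density * local cell velocity * cell dimension) / dynamic viscosity
    return rho * v * cell_size / eta

# Nitrogen's dynamic viscosity (~1.76e-5 Pa*s near room temperature) is nearly
# pressure-independent in this regime, while the density scales with pressure,
# so a tenfold drop in pressure lowers the cell Reynolds number about tenfold.
eta_n2 = 1.76e-5
print(cell_reynolds(rho=1.1, v=500.0, cell_size=5e-5, eta=eta_n2))   # atmospheric-like
print(cell_reynolds(rho=0.11, v=500.0, cell_size=5e-5, eta=eta_n2))  # low-pressure-like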
Now, we will focus on the case with an inserted sensor with a flat end in the atmospheric pressure variant (pressure ratios Po = 101,325 Pa and output Pv = 10,132 Pa). In our case, it is essential that in the variant where a temperature sensor with a flat end is inserted into the flow axis (Flat Shape), there is a sharp decrease in velocity down to stagnation at the sensor surface. This can be seen in the graphics in Figure 14a. Figure 18a shows a sharp drop in velocity behind the perpendicular detached shock wave, which is shown in Figure 19a with the pressure gradient. Behind this shock wave, the previously mentioned sharp gradients of velocity, pressure, and temperature occur. In the distribution of the Mach number (Figure 18a,b), it is evident that in the atmospheric pressure variant there are Mach number gradients at the oblique shock waves, while these gradients are absent in the low-pressure variant due to the absence of oblique shock waves, and the flow has a gradual character.
Due to the different ratio of inertial and viscous forces in the low-pressure variant, i.e., a lower influence of the inertial forces, there is a slight change: a smaller rounding of the detached shock wave in the low-pressure variant (Figure 19b). Figure 19c shows a comparison of the detached shock waves of both variants.
A completely different process occurs when we insert a temperature sensor with a tip into the flow axis (Angle 30°). Here, there is an incomparably smaller drop in velocity, and it occurs only behind the tip of the sensor, behind the cone shock wave. This can be seen from the graph in Figure 14a, which shows that the Mach number waveform is completely identical to the waveform of the variant without the inserted probe and completely different from the Flat Shape variant (Figure 20). The cone shock wave, which is shown in Figure 21a, is ensured by a correctly selected sensor tip angle, as mentioned in Section 4.5. Since the probe is located at a point 7 mm from the beginning of the nozzle (Figure 13a,b), according to the graphics of the Mach number for the Free Flow variant (Figure 14a), the Mach number value at this point is 1.91. According to the theory (Section 4.5), the cone shock wave detaches at tip angle values greater than 39°. Therefore, the value of 30° was chosen with a reserve, i.e., a sensor tip capable of maintaining an attached cone shock wave down to Mach number values of about 1.5.
According to the Taylor-Maccoll theory, for a Mach number of 1.9 and a cone angle of 30°, the cone shock wave angle is 49.8°. This corresponds exactly to the mathematical-physics analysis in the Ansys Fluent system for the atmospheric pressure variant in Figure 21a. It is interesting to compare this with the low-pressure variant (Figure 21b). Due to the different nature of the flow, according to the mathematical-physics analyses, the velocity at the location of the sensor tip in the free flow is higher than in the previous variant, i.e., a Mach number of 1.97 (Figure 20b). For this velocity and a cone angle of 30°, according to the Taylor-Maccoll theory, the angle of the cone shock wave should be smaller than in the previous variant, i.e., 48.6°. In fact, a larger angle was measured in this variant, 52.8° (Figure 21b), so this is a considerable change. An important observation is that the shock wave in the low-pressure variant is also about 0.02 mm thicker, due to the smaller impact of the inertial forces compared to the constant impact of the viscous forces; this will further affect the evaluation of the pressure and temperature curves of the inserted sensor with an angle of 30° (Figure 21c).
Evaluation of Static Pressure
With the help of the discussed results of the Mach number distribution, it is possible to analyze the results of the static pressure waveform.
In the graphics (Figure 22a) for the atmospheric pressure variant, a sharp decrease in static pressure is first noticeable, mirroring the sharp increase in the Mach number. Then, there is a sharp increase in pressure due to the drop in velocity to a subsonic value caused by the perpendicular shock wave (Figure 21a). As can be seen in the graphics (Figure 22a), in the case of free flow another passage occurs, this time through the crossing of the oblique shock waves at a distance of about 5.2 mm. There is a slight increase in pressure, and then a decrease again due to an increase in the flow velocity.
Figure 23a clearly shows the pressure distribution with all the pressure gradients at the shock wave boundaries. A sharp gradient at the perpendicular shock wave and smaller gradients at the boundaries of the oblique shock waves can be seen.
On the other hand, in the case of the low-pressure variant, the pressure curve appears gradual, without step changes, which corresponds to the fact that there are no oblique shock waves, and thus the course of the Mach number does not show large gradients (Figure 23b).
If a temperature sensor with a flat end is inserted into the flow axis, there is a sharp increase in pressure at the sensor surface up to the moment of stagnation. The pressure at the sensor surface is no longer static but total pressure; this is how the total pressure is sensed by a Pitot tube in the flow. This can be seen in the graphics in Figure 23a. Figure 24a shows a sharp rise in pressure behind the perpendicular detached shock wave, which is shown in Figure 19a using the pressure gradient. Behind this shock wave, there is a sharp increase in pressure, since the shock wave is approximately 0.2 mm away from the sensor surface; this area is visible in Figure 24a.
In the case of the atmospheric pressure variant, the total pressure is slightly lower, as in the low-pressure variant the flow velocity at the given location is higher, as mentioned in the previous subchapter (Figure 24b).
Since the state variables are coupled, a completely different process occurs when we insert the temperature sensor with the already mentioned tip (angle of 30°) into the flow axis. Here, there is a remarkably small increase in the pressure value, but, crucially, it occurs not in front of the sensor but behind the tip of the sensor, behind the cone shock wave. This can be seen from the graphics in Figure 22a. The static pressure waveform is completely identical to that of the variant without the inserted probe and completely different from the Flat Shape variant (Figure 25a).
However, the effect of the different ratio of inertial and viscous forces in the two variants is manifested here. Due to the greater thickness of the shock wave, there is a small increase in static pressure just before the tip, which is not visible in the graphics because it occurs within a distance of 0.007 mm (Figures 21b and 22b). This must be considered when using a temperature sensor with a tip, as this small increase in static pressure is also reflected in an increase in static temperature (Figure 25b).
Evaluation of Static Temperature
The course of the static temperature depends on the courses of the Mach number and the static pressure, and it is very similar to the course of the static pressure.
The graphics (Figure 26a) show a sharp decrease in static temperature following the pressure drop and, conversely, the sharp increase in Mach number. Again, there are gradients caused by the perpendicular and oblique shock waves (Figure 15a). The temperature distribution in the Free Flow variant corresponds to the stated characteristics of the supersonic flow in the nozzle.
In the case when a temperature sensor with a flat end is inserted into the flow axis, the temperature of the sensor surface behind the detached shock wave (Figure 19a) rises sharply until it reaches the stagnation temperature value at the sensor surface (Equation (35)). Therefore, the temperature at the sensor surface is no longer static but stagnant, corresponding to the total temperature. The difference is evident in the graph (Figure 26a), which shows the difference between the static and stagnation temperatures at the location of the inserted sensor, the measurement point.
The analysis is carried out according to the theory given in Section 4.3. From the results in the graphics for a Mach number of 1.91 (Figure 14a), a static temperature T = 171 K (Figure 26a), and Poisson's constant for nitrogen κ = 1.4, it is possible to determine the stagnation temperature T0 from the modified relationship (Equation (35)), T0 = T(1 + ((κ − 1)/2)M²); in the currently examined case, the difference amounts to up to 123.5 K and corresponds to the value obtained by the mathematical-physics analysis in Figure 29a.
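A quick numerical check of this conversion (our sketch, not the authors' script) with the values quoted above:

kappa, M, T_static = 1.4, 1.91, 171.0
T0 = T_static * (1.0 + (kappa - 1.0) / 2.0 * M**2)
print(T0, T0 - T_static)  # ~296 K stagnation temperature, a difference of ~125 K,
                          # consistent with the difference discussed in the text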
Here, as in the course of the static pressure, the graphics (Figure 26a) and Figure 27a show a sharp rise behind the perpendicular detached shock wave, which is shown in Figure 15a using the pressure gradient. Behind this shock wave, there is a sharp increase in pressure and temperature. In Figure 28a, this area is marked by a distinctive red region. Inserting a sensor with a 30° tip into the gas flow causes the same effect as in the previous case when analyzing the static pressure waveform. Even in this case, there is a remarkably small temperature rise, but, crucially, it occurs behind the tip of the sensor, behind the cone shock wave. This can be seen from the graph in Figure 26a, where the static temperature curve is completely identical to the course of the variant without an inserted probe and completely different from the Flat Shape variant (Figure 29a). This fact is of great importance for static temperature scanning using a conventional sensor with a flat end (Flat Shape). As previously mentioned, such a sensor measures the stagnation temperature, and to derive the static temperature it is imperative to ascertain the Mach number. If the Mach number is determined by an additional measurement, for example with a Pitot tube, this is an extra operation and also introduces an error into the measurement.
When using a suitably shaped sensor tip, we directly sense the actual static temperature, and the value of the Mach number can be obtained from the pressures in front of and behind the nozzle according to the relationships in Equations (23)-(30). When determining the angle of the tip, it is sufficient to select the tip angle with an adequate reserve.
A small problem may arise with the low-pressure variant, where the different ratio of inertial and viscous forces takes effect. Due to the greater thickness of the shock wave, a small increase in static pressure occurs just before the tip, which is difficult to see in the graph as it lies within a distance of 0.007 mm (Figures 21b,c and 26b). It has a value of up to 1500 Pa. This must be taken into account when using a temperature sensor with a tip, as this small increase in static pressure is also reflected in a static temperature increase of 78 K (Figures 28b and 29b). In this case, it would be necessary to create the sensor tip with a significantly sharper tip angle, otherwise the results would be distorted. When sensing the temperature, it is necessary to take into account the different shape of the shock wave at low pressure.
This paper serves as the basis for a planned experiment with the Schlieren method, in which the optical setup will be placed at the experimental chamber using the windows visible in Figure 1. The principle of the planned Schlieren method is based on the bending of the light beam trajectory when passing through an inhomogeneous transparent object. Unlike the Shadowgraph method, a filter (optical knife) is used, which is implemented by inserting an aperture at the focal length of the imaging lens. The resulting image of the Schlieren method represents the first derivative of the density of the screened medium. Due to its simplicity and clarity, the Schlieren method is used to visualize heat transfer, momentum, or the flow of matter, and it will be used for experimental verification of the results of the mathematical-physics analyses from Ansys Fluent.
Conclusions
In this paper, a mathematical-physics analysis was carried out in conjunction with theoretical data in the field of supersonic flow behind a nozzle with an inserted temperature sensor. After the presentation of the theoretical background for the problem of a probe inserted into supersonic flow, including the theory of shock waves, comparative analyses of inserted probes with both a flat end and a tip were performed to analyze the formation of detached and cone shock waves. These analyses were carried out in two variants, each with a pressure ratio of 10:1 before and after the nozzle. In the first variant, the pressure in the chamber in front of the nozzle was at atmospheric pressure. These analyses for the atmospheric pressure variant also served, to a certain extent, to fine-tune the system, as they corresponded to theoretical assumptions. The second type of analysis was performed for the low-pressure variant. This choice was deliberate: it is already a low-pressure region, which is a research topic of the team, but still within continuum mechanics, and it is intentionally a pressure range in which the viscosity does not yet depend on the pressure. In these variants, the character of the flow was investigated through the ratio of inertial and viscous forces, with atmospheric pressure conditions compared to low-pressure conditions, where the ratio of inertial and viscous forces shifts towards a significant reduction of the inertial forces.
The results showed a changed character of the flow, with a reduced tendency towards the formation of crossing oblique shock waves in the nozzle and with the formation of shock waves of greater thickness, because the relatively more significant viscous forces smear the shock wave itself. This can affect temperature sensing with a tipped sensor, as the shock wave region can form in a very small area in front of the tip, which can affect the result. Also, due to the reduced inertial forces, the angle of the cone shock wave is a few degrees larger than theory predicts; the wave is not tilted back as much because of the lower inertial forces.
These analyses are the basis for the upcoming experiments in an experimental chamber built for experiments in the given region of low pressures on the boundary of continuum mechanics.
Figure 2. The 2D axisymmetric model of the experimental chamber with boundary conditions rotated by 90°.
Figure 3. Overview of the Pressure-Based Solution Methods.
Figure 4. Structured mesh for the mathematical-physics analysis: (a) basic setting of the mesh, and (b) pressure gradients in the supersonic flow regions in the nozzle.
Figure 5. The velocity profile in the boundary layer (a) in the nozzle and (b) at the tip of the sensor.
Figure 6. Dimensions of the designed nozzle [mm].
Figure 7. Pitot tube sensing method.
Figure 12. Dependence of Mach number on probe cone angle.
Figure 14. Mach number course of (a) atmospheric pressure variant and (b) low-pressure variant.
Figure 15. Distribution of Pressure Gradient of Free flow of (a) atmospheric pressure variant and (b) low-pressure variant.

Figure 16. Distribution of Mach number of Free flow of (a) atmospheric pressure variant and (b) low-pressure variant.
Figure 17. Comparison of Cell Reynolds number of (a) atmospheric pressure variant and (b) low-pressure variant.
Figure 18. Distribution of the Mach number of Flat shape of (a) the atmospheric pressure variant and (b) the low-pressure variant.
Figure 19. Distribution of Pressure Gradient of Flat shape of (a) the atmospheric pressure variant, (b) the low-pressure variant, and (c) comparison of detached shock waves of both variants.
Figure 20. Distribution of Mach number of Angle 30° of (a) the atmospheric pressure variant and (b) the low-pressure variant.
Figure 21. Distribution of Pressure Gradient of Angle 30° of (a) the atmospheric pressure variant, (b) the low-pressure variant, and (c) the comparison of cone shock waves of both variants.
Figure 22. Static pressure course of (a) atmospheric pressure variant and (b) low-pressure variant.
Figure 23. Distribution of Absolute Pressure (Static pressure) of Free flow of (a) atmospheric pressure variant and (b) low-pressure variant.
Figure 24. Distribution of Absolute Pressure (Static pressure) of Flat Shape of (a) the atmospheric pressure variant and (b) the low-pressure variant.
Figure 25. Distribution of Absolute Pressure (Static pressure) of an angle of 30° of (a) atmospheric pressure variant and (b) low-pressure variant.
Figure 26. Static temperature course of (a) atmospheric pressure variant and (b) low-pressure variant.
Figure 27. Distribution of Static temperature of Free flow of (a) the atmospheric pressure variant and (b) the low-pressure variant.
Figure 28. Distribution of Static temperature of Flat shape of (a) the atmospheric pressure variant and (b) the low-pressure variant.
Figure 29 .
Figure 29.Distribution of Static temperature of Angle 30° of (a) atmospheric pressure variant and (b) low-pressure variant.
Figure 29 .
Figure 29.Distribution of Static temperature of Angle 30 • of (a) atmospheric pressure variant and (b) low-pressure variant.
|
2023-12-16T17:21:52.260Z
|
2023-12-01T00:00:00.000
|
{
"year": 2023,
"sha1": "4f419ddc02f63c4bc0e5f075942768426ffe3368",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/24/9765/pdf?version=1702307564",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0ced954f008c9d2585385f48abf3cabbb273d0ef",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
214193145
|
pes2o/s2orc
|
v3-fos-license
|
Silage Fermentation and In Vitro Degradation Characteristics of Orchardgrass and Alfalfa Intercrop Mixtures as Influenced by Forage Ratios and Nitrogen Fertilizing Levels
Intercropping is a globally accepted method of forage production, and its effect on silage quality depends not only on the forage combination but also on the fertilization strategy. In the present study, field intercropping of orchardgrass (Dactylis glomerata) and alfalfa (Medicago sativa) at five seed ratios (100:0, 75:25, 50:50, 25:75, 0:100 in %, based on seed weight) was applied under three N fertilizing levels (0, 50, and 100 kg/ha), and harvested for silage making and in vitro rumen degradation. As a result of intercropping, the actual proportions (based on dry matter) of alfalfa in mixtures were much closer to the seed proportion of alfalfa in the field, except for the 75:25 orchardgrass-alfalfa intercrops with no fertilization. The actual proportions of alfalfa in mixtures decreased by 3–13% with the increase of N level. Increases of the alfalfa proportion in mixtures increased silage quality, nutrient degradability, and CH4 emissions. Increasing N levels increased silage pH and the concentrations of butyric acid and fiber fractions. In summary, inclusion of alfalfa at around 50% in orchardgrass-alfalfa silage mixtures was selected for favorable ensiling and higher forage use efficiency while also limiting CH4 emissions, compared to monocultures. The silage quality and feeding values of mixtures were influenced more by forage ratios than by N levels.
Introduction
Ensiling has been increasingly used for forage preservation across the world, especially where precipitation patterns limit opportunities for dependable hay production. Orchardgrass (Dactylis glomerata) is a highly nutritive grass widely cultivated in North America, Europe, and East Asia [1,2]. Although orchardgrass is suitable for ensiling owing to its high water-soluble carbohydrates (WSC) [3], its protein content is commonly insufficient to satisfy the nutrient requirements of ruminants [4,5]. In contrast, the sole use of legumes (e.g., alfalfa, Medicago sativa, and clover) can increase the protein content of silage but cause undesirable fermentation and proteolysis due to their high buffering capacity, low WSC, and poor nitrogen (N) utilization by ruminants [6,7]. Additionally, previous studies noted that the highly degradable nutrients in alfalfa provided a large amount of substrate to methanogens, resulting in relatively high methane (CH4) and carbon dioxide (CO2) emissions [8,9]. To overcome the imbalance of nutrient supply, intercropping of grass and legumes has been increasingly accepted as a good practice for high-quality forage production in terms of either mixed hay or mixed silage [10,11]. It is important to identify the suitable grass-to-legume ratio that coordinates well with the needs in the field and during the ensiling process, quickly increasing acidity and reducing proteolysis. However, it is presently not clear whether intercrop mixtures of orchardgrass and alfalfa at appropriate ratios would make superior-quality silage with high nutritive values and nutrient degradability and low CH4 emissions.
The effect of intercropping on forage yield and nutritive quality depends not only on the forage ratio but also on the N fertilization strategy, since the N fertilizing level plays a vital role in plant growth processes [12]. The effects of N level on forage nutritive value and digestibility vary, which might result in differences in CH4 emissions. A previous study noted that increasing the N level from 0 to 336 kg/ha increased the protein content but did not affect the fiber content in alfalfa [13]. Orchardgrass has a life cycle that matches well with alfalfa and exhibits consistent growth throughout the growing season [14]. Orchardgrass also has high polyphenol oxidase activity, which reduces protein degradation and lipolysis during the aerobic conditions of ensiling and rumen fermentation [15,16]. We hypothesized that orchardgrass and alfalfa intercrop mixtures would improve silage quality and nutrient digestibility while also reducing CH4 emissions, compared to sole silage crops, and that increases in N levels might compromise silage quality and digestibility. Therefore, the objectives of our study were to: (1) investigate the appropriate ratios of orchardgrass and alfalfa intercrop mixtures for acquiring high silage quality and in vitro degradability and low CH4 emissions and (2) evaluate the effect of N levels on silage fermentation and in vitro degradation characteristics of intercrop mixtures.
Experimental Site
Field experiments were conducted at the Teaching and Research Station of China Agricultural University in Dongchengfang County, Zhuozhou City, Hebei Province. The experimental site was located at 39°21′ N and 115°51′ E at 41 m altitude. The sward soil was a sandy loam with pH 7.89, 12.5 g/kg organic matter (OM), 0.76 g/kg total N, 14.8 mg/kg phosphorus (P), and 79.3 mg/kg available potassium (K). Corn was planted and harvested before conducting the current field experiment. The annual rainfall in the field was about 550 mm in 2016, with rainfall mainly occurring in the summer (July to August).
Field Experimental Design
The field experiment was arranged in a two-factorial randomized complete block design with four replications, in which the two main treatments included five intercropping ratios and three N levels. The intercropping ratios and N levels were randomly assigned to each plot. The five intercropping ratios of orchardgrass (cv. Aba) to alfalfa (cv. WL534) were 100:0, 75:25, 50:50, 25:75, and 0:100 (in %, based on seed weight). The three N levels provided by urea (46% N) were 0, 50, and 100 kg/ha, and fertilization was performed annually via top-dressing soon after regrowth began (4 April 2016). Before seeding, the field was plowed, harrowed, and then divided into four blocks. Compound fertilizer (containing 28, 28, and 28 kg/ha of N, P, and K, respectively) at 187 kg/ha was uniformly incorporated into the top 20 cm of the soil. Each plot had an area of 25 m² (5 m × 5 m) and comprised 17 rows with a 30 cm spacing. All plots were separated by a 1-m-wide discard zone, and the two species in each plot were sown manually and simultaneously in different rows at a depth of 2 cm according to the current experimental design on 13 September 2015. After planting, all plots were uniformly irrigated, and a sprinkler irrigation system was used during the entire experimental period. Swards were weed-free, and no pests were observed throughout the entire growth period.
Whole fresh orchardgrass and alfalfa plants, at the jointing and early bloom stages respectively, were harvested with a reaping hook from the field plots. The cutting dates were 22 May 2016 for the first cut and 30 June 2016 for the second cut. Three square areas (1 × 1 m) in each plot were randomly selected and harvested at a stubble height of 5 cm above the ground. The harvested fresh samples were immediately sorted and separately weighed after cutting. A representative fresh subsample of 500 g of each species from each plot was oven-dried at 65 °C for 48 h and then weighed to determine the actual proportions (based on dry matter, DM) of the two species and the chemical composition of mixtures.
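As a minimal sketch of that proportion arithmetic, fresh weights per species are converted to dry matter via the subsample DM fractions and then normalized (the plot weights and DM fractions below are hypothetical, not study data):

```python
# Sketch of the actual-proportion calculation: fresh weight per species is
# converted to DM using subsample DM fractions (from 65 C / 48 h drying),
# then normalized to DM-based proportions. Values are illustrative only.
fresh_kg = {"orchardgrass": 3.2, "alfalfa": 2.9}
dm_frac = {"orchardgrass": 0.24, "alfalfa": 0.22}

dm_kg = {sp: fresh_kg[sp] * dm_frac[sp] for sp in fresh_kg}
total_dm = sum(dm_kg.values())
proportions = {sp: 100 * m / total_dm for sp, m in dm_kg.items()}
print({sp: f"{p:.1f}%" for sp, p in proportions.items()})
```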
Silage Production and Chemical Analysis
The remaining fresh forages were sun-wilted by placing them on a polyethylene sheet with occasional turning until the DM content increased to 400 g/kg. The DM content was rapidly checked at regular intervals by using a microwave oven. No leaf or stem loss was observed during the wilting process, and the weather conditions were favorable for field drying, with no rainfall during harvest. The wilted orchardgrass and alfalfa intercrop mixtures were chopped to a particle size of 2 cm, and then samples of approximately 750 g were packed into polyethylene silos (1 L capacity), with three silos per plot. All silos were sealed with screw lids to prevent oxygen inflow but to enable escape of gas from the silage, and they were kept in a dark room at 25 °C.
After 40 days of ensiling, the silos were opened, and a subsample of 20 g from the ensiled mixtures was homogenized in a blender with 180 mL of distilled water for 1 min. The content of the blender was filtered through four layers of cheesecloth to make a silage extract for the determination of silage pH, ammonia N, and organic acids, e.g., lactic acid, acetic acid, propionic acid, and butyric acid. Finally, silages were oven-dried at 65 °C for 48 h, ground in a mill, and passed through a 1.0 mm sieve for subsequent chemical analysis and in vitro incubation.
The crude protein (CP) was determined using a Kjeltec TM 2300 (Foss, Hillerod, Denmark). The neutral detergent fiber (NDF), acid detergent fiber (ADF) and acid detergent lignin (ADL) concentrations were determined by the batch procedures outlined by ANKOM Technology Corporation (Fairport, New York, NY, USA). The concentrations of cellulose and hemicellulose were calculated by subtracting ADL from ADF and ADF from NDF, respectively. The concentrations of ammonia N and organic acids in the silage extract were determined according to the method described by Li and Meng [17].
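Since hemicellulose and cellulose are derived purely by subtraction from the detergent-fiber assays, the calculation can be sketched in a few lines (the input values below are illustrative, not study data):

```python
def fiber_fractions(ndf, adf, adl):
    """Derive hemicellulose and cellulose (g/kg DM) from detergent-fiber assays.

    hemicellulose = NDF - ADF; cellulose = ADF - ADL,
    following the subtraction scheme described above.
    """
    return {"hemicellulose": ndf - adf, "cellulose": adf - adl}

# Example with plausible (hypothetical) values in g/kg DM:
print(fiber_fractions(ndf=520.0, adf=310.0, adl=55.0))
# {'hemicellulose': 210.0, 'cellulose': 255.0}
```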
Rumen Fluid Collection
The Institutional Animal Care and Use Committee of the College of Animal Science and Technology of China Agricultural University approved the animal experimental procedures in the current study (CAU20171014-1). Rumen fluid was collected from three lactating Holstein dairy cows (510 ± 20 kg body weight) fitted with permanent rumen cannulas, and the cows were fed a total mixed ration made up of 4.0 kg alfalfa hay, 3.0 kg whole corn silage, and 6.0 kg commercial concentrate ad libitum. The ration was provided twice daily in equal meals at 06:00 and 18:00 h, and fresh water was available to the cows at all times. Rumen contents from the three cows were obtained 1 h before the morning feeding, squeezed and filtered through four layers of gauze, and then mixed in equal volumes to obtain a representative rumen fluid. The fluid was held under CO2 in a water bath at 39 °C for later in vitro inoculation.
In Vitro Batch Culture and Sample Collection
Dried silage samples of each treatment weighing 500 mg each were placed into a total of 180 glass bottles (five mixtures × three nitrogen levels × four field plots × three bottles per plot) with a capacity of 120 mL. Fifty milliliters of fresh buffer solution with a pH of 6.85 [18] and 25 mL of homogeneous rumen fluid were added to each bottle and continuously flushed with N2 to maintain anaerobic conditions. After sealing the bottles with rubber stoppers and screw caps, all bottles were incubated at 39 °C for 48 h. Simultaneously, four bottles without forage samples were incubated as blanks to correct the final values. The harvested forage samples from the first and second cuts were separately arranged for in vitro incubation at different times. All of these batch cultures were repeated in three experimental runs during different weeks.
At the end of incubation, the bottles were uncapped, and the pH value in the cultured fluids was immediately measured by a pH meter (FiveEasy 20 K, Mettler Toledo International Inc., Greifensee, Switzerland). The entire content of each bottle was filtered with pre-weighed nylon bags (8 × 12 cm, 42 µm pore size) to obtain the non-degraded particles. Ten milliliters of filtrate were sampled to measure the concentrations of ammonia N and volatile fatty acids (VFAs) [19]. The nylon bags were then thoroughly rinsed with fresh water and oven-dried at 65 °C for 48 h. The in vitro dry matter disappearance (IVDMD), in vitro neutral detergent fiber disappearance (NDFD), and in vitro acid detergent fiber disappearance (ADFD) were calculated from the difference between the pre-incubated and post-incubated amounts, corrected by the blanks after the incubation.
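A sketch of that disappearance arithmetic (the blank correction and the symbols are our reading of the procedure, not code from the study):

```python
def disappearance(pre_mg, post_mg, blank_residue_mg=0.0):
    """In vitro disappearance (%) as (pre - (post - blank)) / pre * 100.

    pre_mg:  substrate incubated (e.g., 500 mg DM);
    post_mg: residue recovered in the nylon bag after 48 h;
    blank_residue_mg: mean residue of the blank bottles (no forage).
    """
    corrected_post = post_mg - blank_residue_mg
    return (pre_mg - corrected_post) / pre_mg * 100.0

# Illustrative values only:
ivdmd = disappearance(pre_mg=500.0, post_mg=215.0, blank_residue_mg=5.0)
print(f"IVDMD = {ivdmd:.1f}%")  # IVDMD = 58.0%
```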
Statistical Analysis
The field experiment was arranged in a two-factorial randomized complete block design with four replications, and analysis of variance (ANOVA) was conducted to determine the main effects of intercropping ratios and N levels as well as their interactions. Regression analyses were performed to evaluate the effects of intercropping ratios and N fertilizer levels on response variables. The means comparison of each feature was calculated using Tukey's HSD test, and significance was considered to be p < 0.05 unless otherwise noted. Since there were significant interactions between the two main treatments (intercropping ratios and N levels) and cuts on most variables of field yield, the results of the treatment effects were analyzed separately for each cut.
Linear and nonlinear models were used to simulate the regression of chemical composition, ensiling characteristics, and in vitro degradation characteristics of silage mixtures with the actual alfalfa proportions and the N levels, respectively. Models with the least Akaike's information criteria (AIC) were finally selected. In vitro incubation data of each of the three experimental runs within the same treatment were averaged before statistical analysis. All statistical analyses were performed using R software (version 3.2.3), and the figures were plotted using SigmaPlot 12.0. Since there were different in vitro incubation periods between the first and the second cut forage samples, the results of the treatment effects were analyzed separately for each cut.
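For illustration, a hedged Python/NumPy analogue of that AIC-based model choice between a linear and a quadratic regression (the study used R 3.2.3; the data below are invented):

```python
import numpy as np

def ols_aic(x, y, degree):
    """Fit a polynomial of the given degree by least squares and return its AIC.

    For Gaussian OLS, AIC = n*ln(RSS/n) + 2k (additive constants dropped),
    with k = degree + 1 fitted coefficients.
    """
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    n, k = len(y), degree + 1
    return n * np.log(rss / n) + 2 * k

# Hypothetical alfalfa proportions (%) vs. a response variable (e.g., pH):
x = np.array([0, 25, 50, 75, 100], dtype=float)
y = np.array([4.2, 4.4, 4.9, 5.3, 5.9])
best = min((1, 2), key=lambda d: ols_aic(x, y, d))
print(f"Model with lowest AIC: degree {best}")
```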
Forage DM Yield of Orchardgrass and Alfalfa Intercrop Mixtures
Both the increase in the alfalfa seeding rate in intercropping and increasing N levels increased the total DM yield during the two harvests (p ≤ 0.001, Table 1). The actual proportion of alfalfa in intercrop mixtures increased with its respective seeding rate in both harvests (p < 0.001, Table 1). Increases in N levels significantly decreased the actual proportion of alfalfa in the first cut (p < 0.001, Table 1) and marginally decreased it in the second cut (p = 0.091, Table 1). Significant interactions between the intercropping ratios and N levels were observed for the total DM yield and the actual proportions of alfalfa and orchardgrass in both cuts (p < 0.05, Table 1).
Chemical Composition of Orchardgrass and Alfalfa Intercrop Mixtures Prior to Ensiling
The chemical composition of the orchardgrass and alfalfa intercrop mixtures prior to ensiling is shown in Table 2. Compared with orchardgrass, alfalfa was richer in CP and ADL and lower in fiber fractions (e.g., NDF and ADF). Increases in N levels significantly increased the concentrations of ADF and ADL and decreased the concentration of ash in the first cut. The DM concentration was not affected by the actual alfalfa proportion or N levels (p > 0.05).
Chemical Composition of Orchardgrass and Alfalfa Silage Mixtures
The chemical composition of the silage mixtures followed a pattern similar to that observed for the raw forage mixtures. The concentrations of CP and ADL increased (p < 0.001, Table 3), whereas those of NDF, ADF, ash, hemicellulose, and cellulose decreased with the increase in the actual alfalfa proportion of the mixtures (p < 0.05, Figures 1 and 2, Table 3). No effect of N levels on CP concentration was observed in the two harvests (p > 0.05, Table 3). Increases in N levels significantly increased the concentrations of ADF and ADL and decreased the concentration of ash in the two harvests (p < 0.05, Figure 2, Table 3). The N fertilizer level of 100 kg/ha resulted in the highest concentrations of ADF (ranging from 302 to 340 g/kg) and ADL (ranging from 44.3 to 80.2 g/kg). Silage DM concentration was not affected by the actual alfalfa proportion or N levels (p > 0.05, Table 3).
Ensiling Characteristics of Orchardgrass and Alfalfa Silage Mixtures
Quadratic effects were observed on silage pH and the concentrations of ammonia N, lactic acid, acetic acid, and propionic acid (p < 0.05, Table 4), while no effects occurred on the concentration of butyric acid, as the actual alfalfa proportion increased among intercrop mixtures for both harvests (p > 0.05, Table 4). Increasing N level increased the silage pH and the concentrations of ammonia N and butyric acid but decreased the concentration of lactic acid for the two harvests (p < 0.05, Table 4). Silage mixtures with no N fertilization had the lowest silage pH, ammonia N, and the highest lactic acid concentration.
The final pH, ammonia N, and total VFAs increased with an increasing alfalfa proportion in the silage mixtures for the two harvests (p < 0.05; Tables 5 and 6). Regarding the fermentation end-products, there was an increase in the concentrations of acetate, isobutyrate, valerate, and isovalerate as the actual alfalfa proportion increased among the silage mixtures (p < 0.05; Tables 5 and 6). The increase in the alfalfa proportion in the silage mixtures also increased the CO2 and CH4 for both harvests (p < 0.05; Table 6). Compared with sole alfalfa, silage mixtures of orchardgrass and alfalfa caused a great decrease (up to 17.4%) in the estimated CH4 emissions. Nitrogen levels promoted the production of total VFAs and acetate (p < 0.05), although these effects were marginal in the first cut (p < 0.10; Table 5).
Discussion
As we hypothesized, the mixture of orchardgrass and alfalfa at appropriate forage ratios is a good option for making well-preserved silage, as indicated by better ensiling profiles, silage quality, and feeding values, with less proteolysis during ensiling and fewer CH4 emissions during ruminal fermentation (Figure 7). The N levels in this study caused less favorable ensiling fermentation and more fiber fraction accumulation but did not change IVDMD and ME (Figure 7). These results are helpful for the effective production, preservation, and utilization of superior forages in dairy systems, achieving desirable animal performance without causing environmental concerns. Figure 7. A qualitative scheme indicating the effects of forage ratios and nitrogen (N) fertilizer levels on the ensiling characteristics, nutritive values, and in vitro degradation characteristics of orchardgrass and alfalfa silage mixtures. Note: CP, crude protein; ADL, acid detergent lignin; NDF, neutral detergent fiber; ADF, acid detergent fiber; IVDMD, in vitro dry matter disappearance; OMD, organic matter digestibility; ME, metabolizable energy; VFA, volatile fatty acids; CO2, carbon dioxide; CH4, methane; NDFD, neutral detergent fiber disappearance; ADFD, acid detergent fiber disappearance.
Forage Yield of Orchardgrass and Alfalfa Intercrops
Intercropping legumes with grasses increases forage productivity, nutritive value, and resource-use efficiency [26]. In our current study, the total DM yield increased as the seeding rate of alfalfa increased in intercropping, and the intercropping ratios of orchardgrass and alfalfa at 50:50 commonly resulted in higher total DM yield than other ratios for the two harvests. This might be due to the complementarity and facilitation effects of the intercropped orchardgrass and alfalfa, which increased the resource-use efficiency of light, water, and soil nutrients through the role of soil microorganisms in these processes, hence reducing interspecific competition [27]. Consistently, many studies noted that the total forage DM yield was improved by a grass-legume intercropping system [28,29]. Nitrogen is an essential nutrient for growth and development of plants, and N fertilization is usually practiced to improve forage yield.
As expected, in the current study N fertilization increased the total DM yield of orchardgrass and alfalfa intercrops in most cases, compared with the sole crops. In accordance with a previous study [30], application of 78 kg N/ha to bermudagrass, stargrass, and bahiagrass increased the forage mass by an average of 129% over the value observed under no N fertilization. Grasses responded positively to higher soil N levels, but legumes usually saved the soil N pool, which explains the significant interactions between intercropping ratios and N levels on the total forage DM yield in the current study. High N levels in excess of the legumes' needs may interfere with the effect of N-fixing bacteria and reduce the percentage of N derived from the atmosphere and the amount of fixed N from the legume species, resulting in low DM yields [31,32].
Chemical Composition of Silage Mixtures
Generally, grasses tend to increase the fiber fractions, such as NDF and ADF, of mixtures owing to their abundant cell wall materials [33], and legumes are usually richer in CP than grasses due to their substantial biological fixation of N from the atmosphere [34]. Similarly, in the current study, increasing the alfalfa proportion in mixtures increased CP and decreased NDF, ADF, hemicellulose, and cellulose concentrations, suggesting that forage mixtures improved nutritive values and reduced the need for purchased protein supplements in ruminant rations [35]. In the present study, an increase in N levels did not change CP in the two cuts, possibly because of the lack of N accumulation in aboveground biomass and the more developed root biomass [36]. In the present study, the concentrations of ADF and ADL increased as the N level increased, because of the quicker plant growth and greater development [37] as well as more cell wall biosynthesis and fibrous tissue accumulation [38]. Similarly, the increase in N levels was negatively associated with the concentrations of NDF in perennial ryegrass [39]. The decrease in the concentration of ash for orchardgrass and alfalfa intercrop mixtures receiving higher N levels in the current study, similar to the study of Waramit et al. [40], was possibly because a higher plant growth rate increased photosynthetic activity and accumulated more carbon compounds in plants [41].
Ensiling Characteristics of Silage Mixtures
Silage pH is one of the main determinants of the extent of fermentation and the quality of ensiled crops, and well-preserved silage usually has a low pH but a high lactic acid concentration. In our current study, the increase in the alfalfa proportion in mixtures increased the silage pH and the concentrations of ammonia N and acetic acid. This agrees with the previous study by Contreras-Govea et al. [42], which showed that legumes such as alfalfa, pea, and red clover do not easily make good silage because of their high buffering capacity and low WSC content, as well as their high level of proteolysis under the combined effects of plant and silage microbial enzymes, which compromises ensiling quality compared with that of grasses. Higher production of acetic acid facilitates enterobacterial activity, leading to poor fermentation during the early stages and lowering DM and energy, and acetic acid production directly competes with lactic acid bacteria for nutrient use [6], resulting in a shift toward acetate production, especially when sugar concentrations are low.
In our current study, silage from intercrop mixtures of orchardgrass and alfalfa tended to have higher lactic acid than silage from an alfalfa monoculture, probably because the higher content of WSC in orchardgrass provided a more readily available fermentation substrate, e.g., soluble sugars, for lactic acid-producing bacteria, associated with a more rapid decline in pH for successful fermentation compared with sole alfalfa silage [3]. The lower silage pH and ammonia N of silage mixtures than of sole alfalfa crops might also be attributed to the activity of polyphenol oxidase in orchardgrass, which mediated and reduced alfalfa proteolysis, since much higher polyphenol oxidase activity was observed in orchardgrass, at 740.6 U/g (fresh weight), than in perennial ryegrass, timothy, and tall fescue, at 119.0, 16.3, and 6.5 U/g, respectively [15].
The increase in N levels increased silage pH in our current study, which could be explained by the increased ammonia N accumulation acting as a buffer against the pH decline. In a previous study [43], application of an N fertilizer level of 75 kg/ha to alfalfa resulted in a silage pH value of 4.96 and an ammonia N concentration of 11.1 mmol/L, lower than the values of sole alfalfa silage under the 50 kg/ha N level in the current study, which could be explained by the different concentrations of CP and lactic acid of alfalfa silage in response to different N levels. Although the accumulation of acetic acid is helpful for improving the aerobic stability of silage, excessively accumulated acetic acid and ammonia N may decrease silage intake by cows [44,45]. In the current study, increasing N levels resulted in an increase in acetic acid formation accompanying the decrease in lactic acid concentration, which implied a certain degree of hetero-fermentation rather than homo-fermentation during the ensiling process [6]. The decrease in lactic acid concentration might also be owing to substantial lactate acting as the substrate for different fermentation processes, such as acetate fermentation [46]. Similar to the study of Tremblay et al. [47], a higher N application rate for timothy caused less favorable ensiling properties.
In Vitro Degradation Characteristics
In the present study, the increase in the actual alfalfa proportion in the mixtures increased the IVDMD and OMD compared with that from an orchardgrass monoculture for both harvests. In agreement with a previous study [48], IVDMD quadratically increased when the proportion of alfalfa increased in axonopus-alfalfa and tall fescue-alfalfa mixtures. This may be because balanced digestible nutrients from orchardgrass and alfalfa silage mixtures set off a ruminal synergistic effect on the fractional rate of degradation and the extent of fermentation, followed by better nutrient availability and utilization efficiency for rumen microorganisms [49]. The association between fermented nutrients from grass and legume mixtures may also lead to synergistic effects on the dominant microbial populations and shifts in the microbial community composition [50], and different metabolic pathways might be simultaneously driven through niche compartmentalization and functional dominance between abundant bacteria [51]. In the current study, no differences were observed in the IVDMD and OMD for silage mixtures when alfalfa proportion was ≥50%. If a mixture with a high level of IVDMD and OMD is needed, at least 50% alfalfa is required.
Higher fiber content means a lower degradation rate and a longer fermentation time [52], and indigestible fiber, e.g., ADL, in particular, is the main physical barrier interfering with microbial attachment and degradation; it is negatively correlated with fiber digestibility [33]. In the present study, the decrease in NDFD and ADFD with an increasing alfalfa proportion in silage mixtures is due to the higher ADL in the cell wall of alfalfa compared with that of orchardgrass, and the higher degree of NDF lignification and lower level of digestible NDF fractions in alfalfa than in grass [53], corresponding to the diversified populations of the three predominant fibrolytic bacteria, e.g., F. succinogenes, R. albus, and R. flavefaciens, in the rumen [54]. In the current study, no effect of N levels was observed on IVDMD, in parallel with a previous study [55], and the pronounced fiber fraction accumulation with increasing N levels in the second cut explains the decreased NDFD and OMD in the present study.
The end-product gases (e.g., CH4, CO2, and H2) are mainly produced from the process of carbohydrate fermentation rather than protein fermentation during in vitro incubation, and only a small proportion of gas is indirectly produced from the buffering of short-chain VFAs [56]. These gases imply not only lower efficiency and productivity of livestock systems but also considerable threats to the environment [57]. In the current study, the increase of the alfalfa proportion in orchardgrass and alfalfa mixtures increased the CO2 and CH4. Consistent with previous studies [58,59], the amount of CO2 and CH4 produced from grass pastures was less than that from alfalfa pastures, because CH4 production was positively correlated with the concentrations of readily fermentable nitrogen-free extract and IVDMD. In contrast, the CH4 production from orchardgrass hay was higher than that from alfalfa hay, at 15.76 and 20.56 mol/100 mol respectively, which was mainly due to the higher ratio of acetate to propionate in orchardgrass than in alfalfa [60]. Other researchers also confirmed that tannins in forages could suppress methanogenic bacteria to mitigate CH4 emissions [61]. However, N levels did not affect the end-product gases due to the similar responses of IVDMD and the ratio of acetate to propionate to different N levels.
In Vitro Fermentation Characteristics
The final pH and ammonia N in cultured fluid tend to be associated with the proportion of legumes in the fermented forages [62]. In the current study, increasing the alfalfa proportion increased the final pH and ammonia N in the cultured fluid, possibly indicating the rapid release of NH3 from soluble protein degradation and lower NH3 uptake by ruminal bacteria. Similarly, a higher concentration of NH3 was found in sole alfalfa silage, at 32.8 mg/dL, than in sole sweet sorghum silage, at 25.6 mg/dL [54]. Volatile fatty acids from ruminal degradation of feed constituents account for the majority of the energy utilized by the host animal, especially the total VFAs and the relative concentrations of important determinants, e.g., acetate, propionate, and butyrate [63]. Higher levels of soluble carbohydrates provided energy resources for better fermentative environments and ruminal microbe growth, and coincided with higher VFA production, degradation rates, and effective degradability [64]. Our present study also confirmed that increasing alfalfa inclusion increased the total VFA production. Nitrogen fertilization did not change the total VFA production in the current study, in agreement with the study of Peyraud and Astigarraga [55].
Branched-chain VFAs (e.g., isobutyrate, valerate, and isovalerate) are mainly fermented in the rumen as end-products from protein degradation, and they are the essential factors for cellulolytic bacterial growth. The concentrations of branched-chain VFAs produced were mainly dependent on the composition and extent of ruminal deamination of amino acids in feeds, and higher CP was significantly correlated with the higher molar concentrations of valerate and isovalerate production [65], which explained the higher ruminal valerate and isovalerate production with increasing alfalfa proportion in the present study. However, no effects of N levels occurred on the molar concentrations of branched-chain VFAs in the present study because the community of dominant ruminal microbes and fermentation pathways of VFA end-products were not affected or changed [19].
Conclusions
Inclusion of alfalfa at 62%, 57%, and 54% under 0, 50, and 100 kg/ha N levels, respectively, in orchardgrass-alfalfa mixtures is a good option for making well-preserved silage with higher forage quality and feeding values but with lower CH4 emissions than sole crops. The intercrop mixtures possibly compensated for the low WSC content and high buffering capacity of alfalfa, which favored good ensiling, and also alleviated the extensive proteolysis that may adversely affect the utilization of N by ruminants. Compared with unfertilized intercrops, a high N level of 100 kg/ha caused less favorable ensiling fermentation and more fiber fraction accumulation, accompanied by more acetate and total VFA production, although it did not change nutrient degradability. This practice provides guidance and reference for the effective production, preservation, and utilization of superior forages in dairy rations, underpinning desirable animal performance and profitability while reducing estimated CH4 emissions by up to 17.4%.
|
2020-01-30T09:09:49.093Z
|
2020-01-23T00:00:00.000
|
{
"year": 2020,
"sha1": "db522d272875c36b5c805a8e5a5d5a40adfaf14f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/12/3/871/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3c099c34188e4b6427d524e967014a429184b2aa",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
32501038
|
pes2o/s2orc
|
v3-fos-license
|
The Question of Abelian Higgs Hair Expulsion from Extremal Dilaton Black Holes
It has been argued that extremal dilaton black holes exhibit a flux expulsion of Abelian-Higgs vortices. We re-examine the problem carefully and give analytic proofs that the flux expulsion always takes place. We also conduct a numerical analysis of the problem using three initial data sets on the horizon of an extreme dilatonic black hole, namely core, vacuum, and sinusoidal initial conditions. We also show that an Abelian-Higgs vortex can end on the extremal dilaton black hole. Concluding, we calculate the backreaction of the Abelian-Higgs vortex on the geometry of the extremal black hole and draw the conclusion that a straight cosmic string and the extreme dilaton black hole hardly feel each other's presence.
of a Meissner effect. However, the recent works of Bonjour et al. [7] show subtleties in the treatment of the event horizon, showing that a flux expulsion can occur but does not do so in all cases. Analytic proofs for an expulsion and a penetration of flux in the case of the extremal charged black hole were also presented.
Nowadays, it seems that superstring theories are the most promising candidates for a consistent quantum theory of gravity. Numerical studies of the solutions of the low-energy string theory, i.e., the Einstein-dilaton black holes in the presence of a Gauss-Bonnet type term, disclosed that they were endowed with a nontrivial dilaton hair [8]. This dilaton hair is expressed in terms of the ADM mass [9]. The extended moduli and dilaton hair and their associated axions for a Kerr-Newman black hole background were computed in Ref. [10]. On the other hand, a full analysis of a cosmic string in dilaton gravity was given in Ref. [11]. An Abelian-Higgs vortex in the background of a Euclidean electrically charged dilaton black hole was studied in Ref. [12]. It was shown that this kind of Euclidean black hole can support a vortex solution on its horizon. The effect of the vortex was to cut a slice out of the considered black hole geometry. In Ref. [13] the authors argued that an electrically charged dilaton black hole could support the long-range field of a Nielsen-Olesen string. Using both numerical and perturbative techniques, the properties of an Abelian-Higgs vortex in the presence of the considered black hole were investigated. In the case of an extreme dilatonic black hole, the analog of the Meissner effect was revealed.
In this paper we try to provide some continuity with our previous works [12]-[13]. Namely, we re-examine the problem of flux expulsion in the presence of the extremal dilaton black hole in the light of the arguments recently put forward by Bonjour et al. [7]. We provide analytical proofs that vortices will wrap around the extreme dilatonic black hole. Then, we conduct a numerical analysis of the problem.
The paper is organized as follows. In Sec. II we briefly review some results obtained in Ref. [13] that will be needed, so that the paper is self-contained. We also present analytic considerations of the expulsion problem of the Nielsen-Olesen vortex in the presence of an extreme dilatonic black hole. In Sec. III, we give a numerical analysis of the problem, taking into account three initial data sets on the horizon of the black hole, i.e., core, vacuum, and sinusoidal initial conditions. We also pay attention to the problem of a vortex which terminates on the extremal dilaton black hole. Before concluding our considerations, in Sec. IV, we discuss the gravitational backreaction.
II. NIELSEN-OLESEN VORTEX AND A DILATON BLACK HOLE
In this section we review some material published in Ref. [13] in order to establish notation and conventions and to make the paper self-contained. We also give some analytic arguments in favour of a flux expulsion. In our considerations we study an Abelian-Higgs vortex in the presence of a dilaton black hole, assuming a complete separation of the degrees of freedom of each of the objects. Our system is described by an action which is the sum of the action $S_1$ for the low-energy string theory [14], which in the Einstein frame has the standard Einstein-Maxwell-dilaton form, and $S_2$, the action for an Abelian-Higgs system minimally coupled to gravity and subject to spontaneous symmetry breaking. Here $\Phi$ is a complex scalar field and $D_\mu = \nabla_\mu + ieB_\mu$ is the gauge covariant derivative; $B_{\mu\nu}$ is the field strength associated with $B_\mu$, while $F_{\alpha\beta} = 2\nabla_{[\alpha}A_{\beta]}$. As in Ref. [5], one can define the real fields $X$, $P_\mu$, $\chi$, in terms of which the equations of motion derived from $S_2$ are written. The field $\chi$ is not a dynamical quantity. In flat spacetime the Nielsen-Olesen vortices have cylindrically symmetric solutions, where $\rho$ is the cylindrical radial coordinate and $N$ is the winding number.
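The explicit field redefinitions did not survive extraction; for orientation, the standard choice in the Abelian-Higgs vortex literature (our hedged reconstruction, with $\eta$ the symmetry-breaking scale, not quoted from the source) is

$$\Phi(x^\mu) = \eta\, X(x^\mu)\, e^{i\chi(x^\mu)}, \qquad B_\mu = \frac{1}{e}\left[P_\mu - \nabla_\mu \chi\right],$$

so that the cylindrically symmetric Nielsen-Olesen ansatz reads $X = X(\rho)$, $\chi = N\varphi$, $P_\mu = N P(\rho)\nabla_\mu\varphi$.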
As far as a static, spherically symmetric solution of the equations of motion derived from the action $S_1$ is concerned, it is determined by the metric of a charged dilaton black hole [15]. We define $r_+ = 2M$ and $r_-$, which are related to the mass $M$ and charge $Q$ by the relation $Q^2 = \frac{r_+ r_-}{2}\, e^{2\phi_0}$. The charge $Q$ of the dilaton black hole couples to the field $F_{\alpha\beta}$ and is unrelated to the Abelian gauge field $B_{\mu\nu}$ associated with the vortex. The dilaton field is given by $e^{2\phi} = \left(1 - \frac{r_-}{r}\right) e^{-2\phi_0}$, where $\phi_0$ is the dilaton's value at $r \to \infty$. The event horizon is located at $r = r_+$. At $r = r_-$ there is another singularity, which can however be ignored because $r_- < r_+$.
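The explicit line element was also lost in extraction; the standard GHS (Gibbons-Maeda) form consistent with the definitions above (our reconstruction, not quoted from the source) is

$$ds^2 = -\left(1 - \frac{r_+}{r}\right)dt^2 + \left(1 - \frac{r_+}{r}\right)^{-1}dr^2 + r\left(r - r_-\right)\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right).$$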
In what follows, we consistently assume that $X$ and $P_\varphi$ are functions of the $r, \theta$ coordinates. The equations of motion for these fields then involve $\beta = \lambda/2e^2 = m^2_{\rm Higgs}/m^2_{\rm vect}$, the Bogomolnyi parameter. When $\beta \to \infty$, the Higgs field decouples and, as in the Reissner-Nordström case [6], one can study a free Maxwell field in the electrically charged black hole spacetime. The other situation arises when $P = 1$; this is the case of a global string in the presence of the electrically charged dilaton black hole.
One can show [16] that in normal spherically symmetric coordinates $X$ is a function of $\sqrt{g_{33}}$, and one can try the coordinate $R = \sqrt{r\left(r - Q^2/M\right)}\,\sin\theta$, namely $X = X(R)$ and $P_\varphi = P_\varphi(R)$. Taking into account the thin-string limit, i.e., $M \gg 1$, the equations for $X(R)$ and $P(R)$ can be reduced to the Nielsen-Olesen type up to errors of an adequate order (see Ref. [13]).
The main task of our work will be to answer the question about the flux expulsion in the case of the extremal dilaton black hole. Some numerical arguments concerning the so-called Meissner effect were quoted in Ref. [6]. Now, having in mind the arguments of Ref. [7], we examine this problem carefully. We begin with the analytical considerations.
From now on, we shall consider only the extremal dilaton black hole, for which $r_+ = r_-$.
As one can see from Eqs. (10)-(11), the flux expulsion solution $X = 0$, $P = 1$ always solves these equations of motion. Thus, our strategy is to show the nonexistence of a penetration solution. First of all, assume that a piercing solution does exist. This requirement is fulfilled when one has nontrivial solutions $X(\theta)$ and $P(\theta)$ symmetric around $\theta = \pi/2$. For this value of the angle, $X$ has a maximum and $P$ a minimum. Expanding the equation of motion for $P(\theta)$ near the poles indicates that $P_{,\theta} = 0$ at the poles. Thus, there exists a point $\theta_0 \in (0, \pi/2)$ for which the second derivative of $P(\theta)$ vanishes, namely $P_{,\theta\theta}(\theta_0) = 0$, while $P_{,\theta}(\theta_0) < 0$. In the extremal black hole case, Eq. (11) takes a form from which, having in mind that $P_{,\theta\theta}(\theta_0) = 0$ and $\cot\theta_0 \neq 0$ in the considered interval of $\theta$, we reach a contradiction with our starting assumption that $P_{,\theta}(\theta_0) < 0$. This is sufficient to conclude that the flux expulsion must always take place.
Further, after proving that the flux expulsion must take place for a sufficiently thick string, we proceed to the case of a thin string. As was remarked in Ref. [7], in order to consider the case of a thin string one has to look at Eqs. (10)-(11) in the exterior region of the horizon. To begin with, we assume that there is a flux expulsion; this fixes the form of the fields near the horizon of the extremal dilaton black hole. The function $X$ is symmetric around $\pi/2$, having maximum $X_m$. Integrating Eq. (13) on the interval $(\theta, \pi/2)$, for $\theta > \beta$, we arrive at an inequality which must hold over the range $\theta \in (\beta, \pi/2)$ for the expulsion to occur. From the graph of the function $\zeta(\theta) = \frac{1}{\sin\theta}\ln\tan\frac{\theta}{2}$, we deduce that on the interval $\theta \in (\beta, \pi/2)$ one has $\zeta(\theta) < 0$. Then the inequality (15) always holds, and we have expulsion of the vortex for the extremal dilaton black hole.
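The sign claim about $\zeta(\theta)$ is easy to verify numerically; a quick check of our own (not from the paper):

```python
import numpy as np

# Verify that zeta(theta) = ln(tan(theta/2)) / sin(theta) is negative on
# (0, pi/2), as used in the thin-string expulsion argument above.
theta = np.linspace(1e-3, np.pi / 2 - 1e-3, 1000)
zeta = np.log(np.tan(theta / 2)) / np.sin(theta)
assert np.all(zeta < 0)
print(f"max of zeta on (0, pi/2): {zeta.max():.4f}")  # stays below zero
```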
The analytic solution of the equations of motion for a dilaton black hole whose size was small compared to the vortex size was considered by the present authors in Ref. [13]. It was done in the large-$N$ limit. It turned out that inside the core of the vortex the gauge symmetry is unbroken, and the expectation that $X^2\beta \approx 0$ is well justified. This provides the solution of Eq. (11), in which $p$ is an integration constant equal to twice the magnetic field strength at the center of the core [5]. On the other hand, the solution for $X$ involves $b(r)$, given by Eq. (21) in Ref. [13]. The exact form of $X$ ensures its vanishing on the extremal black hole horizon. Concluding, we see that, using analytical arguments, one always has expulsion of the fields from the extreme dilaton black hole horizon. As we show in the next section, these analytical considerations are fully confirmed by numerical investigations.
III. NUMERICAL RESULTS
To confirm the results of the previous section, we performed a number of numerical calculations for extremal dilaton black holes with strings. The numerical method is the same as in our previous article [13] (see also Ref. [17]).
Namely, the fields $X$ and $P$ are replaced with their discretized values on the polar grid $(r, \theta) \to (r_i = 2M + i\Delta r,\ \theta_k = k\Delta\theta)$, according to the finite-difference version of the equations of motion (10) and (11); on the horizon, the appropriate regularity conditions are used. The boundary conditions are imposed at large radii and in the string core. Boundary conditions on the horizon are guessed at the beginning of the calculation and then updated in accordance with Eqs. (20) and (21). The process of relaxation and updating of the fields on the horizon of the black hole is repeated until convergence takes place.
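A schematic of such a relaxation loop (our illustrative sketch: the grid sizes, the averaging stencil, and the tolerance are placeholders, not the authors' actual code or the true discretized Eqs. (10)-(11)):

```python
import numpy as np

# Illustrative Jacobi-style relaxation for the fields X (Higgs) and P (gauge)
# on a polar (r, theta) grid outside the horizon r_i = 2M + i*dr. The simple
# neighbour-averaging stencil is a stand-in for the true discretized equations.
M = 1.0
NR, NTH = 200, 64
X = np.zeros((NR, NTH))
P = np.ones((NR, NTH))
X[-1, :], P[-1, :] = 1.0, 0.0      # outer boundary: crude stand-in for the
                                   # asymptotic Nielsen-Olesen profile
X[:, 0], X[:, -1] = 0.0, 0.0       # string core along theta = 0 and theta = pi
P[:, 0], P[:, -1] = 1.0, 1.0

for sweep in range(10_000):
    X_old = X.copy()
    X[1:-1, 1:-1] = 0.25 * (X[2:, 1:-1] + X[:-2, 1:-1] + X[1:-1, 2:] + X[1:-1, :-2])
    P[1:-1, 1:-1] = 0.25 * (P[2:, 1:-1] + P[:-2, 1:-1] + P[1:-1, 2:] + P[1:-1, :-2])
    # horizon values (i = 0) re-guessed from one row out, mimicking the
    # "guess and update" treatment of the horizon boundary described above:
    X[0, 1:-1], P[0, 1:-1] = X[1, 1:-1], P[1, 1:-1]
    if np.max(np.abs(X - X_old)) < 1e-8:
        break
```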
In all of the above cases we obtained the same final configuration of the fields, always showing string expulsion from the extreme dilatonic black hole. This confirms our theoretical predictions. The results of the numerical calculations are presented in Fig. 1.
Further, we turn our attention to the case of a cosmic string ending on the extreme dilatonic black hole. This is an important configuration as the main phenomenological input to the instantons mediating defect decays (see, e.g., [18]-[20]). Namely, in Ref. [16] it was shown that the Nielsen-Olesen solution could be used to construct regular metrics which represent vortices ending on black holes, either in static equilibrium or accelerating off to infinity. The latter metric depicts a cosmic string which is eaten by accelerating black holes.
To consider numerically a string ending on the black hole we have to slightly modify the boundary conditions (22).
They remain the same at the outer boundary and in the string core for $\theta = 0$. For $\theta = \pi$ we initially set $X = 1$ and $P = 0$. During the calculation, the values of the fields were updated at each step according to $P_{,\theta} = 0$ and $X_{,\theta} = 0$.
IV. GRAVITATIONAL BACKREACTION
The gravitational backreaction of the string on the dilaton black hole geometry was studied in Ref. [13]. The resultant metric we obtain here is the same as the conical metric found in Ref. [13]. However, in our previous approach we did not pay attention to the corrections to the other fields in the theory, and therefore our previous results were not complete. In this section we consider the backreaction problem taking into account all the fields in the theory.
In order to consider the gravitational backreaction of the string superimposed on the dilaton black hole, one needs to consider a general static, axially symmetric solution to the Einstein-Maxwell-dilaton Abelian-Higgs equations of motion. First, we find a coordinate transformation which enables us to write the spherical dilaton black hole metric in axisymmetric form; the dilaton black hole metric will constitute, to zeroth order, our background solution. The relevant equations of motion involve the energy-momentum tensor $T^\alpha{}_\beta$ and the parameter $\epsilon = 8\pi G\eta^2$, which is assumed to be small. This assumption is well justified because, e.g., for a GUT string $\epsilon \leq 10^{-6}$. The first term is the contribution from the string; its mixed component has the explicit form $T^\rho{}_z({\rm string}) = 2e^{-2(\gamma-\psi)}\left[X_{,\rho}X_{,z} + \frac{\beta}{\alpha^2}P_{,\rho}P_{,z}\right]$, where $V(X) = \frac{(X^2-1)^2}{4}$. The electromagnetic-dilaton contribution satisfies the relation following from Eq. (43), which the electromagnetic-dilaton energy-momentum tensor always fulfills. As in Ref. [5], we solve the Einstein-Maxwell-dilaton equations iteratively, i.e., $\alpha = \alpha_0 + \epsilon\alpha_1$, etc. Following Ref. [13], we can use the coordinates $R = \sqrt{r\left(r - Q^2/M\right)}\,\sin\theta = \rho\, e^{-\psi_0}$, which yields that near the core of the string, where $\sin\theta \approx O(M^{-1})$, one gets $R^2_{,\rho} + R^2_{,z} \sim e^{2\gamma_0 - 2\psi_0}$. This relation implies that near the core of the string, to zeroth order, the energy-momentum tensor is purely a function of $R$. As in the Schwarzschild case [5], this strongly suggests looking for the metric perturbations as functions of $R$.
As in Ref. [7], we assume that the first-order perturbed solutions take a specific form. Computing the necessary derivatives, one gets an equation for $a(R)$, from which one can obtain an explicit expression for $a(R)$; Eq. (52) can then be rewritten in a more convenient form. On the other hand, Eq. (30) implies that $f = f_0$ is a constant. The magnetic correction can be obtained either directly or by using the duality transformation [22], which implies $F \to {}^{*}F$, $\phi \to -\phi$, where ${}^{*}F^{\mu\nu} = \frac{1}{2}\, e^{-2\phi}\, \epsilon^{\mu\nu\rho\delta}F_{\rho\delta}$. Turning our attention to Eq. (33) and taking into account the value of $f_0$, one finds the equation for $\psi_1$, whose integration yields Eq. (57). Considering Eq. (36), one gets the expression for $\gamma_1(R)$. Similarly, for $\phi_1$ we arrive at an expression which implies that $\phi_1 = \hat\phi\, \ln\!\left[\sqrt{r\left(r - Q^2/M\right)}\,\sin\theta\right]$, where $\hat\phi$ is a constant. Taking into account the above corrections, one can consistently transform the metric to the $(t, r, \theta, \varphi)$ coordinates, in which the asymptotic form of the metric is expressed with $e^{\epsilon C} = e^{2\epsilon\psi_1}$.
One should notice [5] that the $B$-term is outside the range of applicability of the considered approximation. After rescaling the coordinates $\hat t = e^{\epsilon C/2} t$, $\hat r = e^{\epsilon C/2} r$ and defining the quantities $\hat M = e^{\epsilon C/2} M$ and $\hat Q = e^{\epsilon C/2} Q$, one gets the metric in the thin-string limit, i.e., $M \gg 1$. Now we turn to the deficit angle, which has the form $\delta\theta = 2\pi\epsilon(A + C) = 8\pi G\mu$, where $\mu$ is the total mass of the string per unit length. On the other hand, the ADM mass $M_I$ generalized to an asymptotically flat space is written as in Ref. [21]. The physical charge of the black hole, respectively magnetic or electric, is defined via a surface integral over $dS_{\mu\nu} = l_{[\mu} n_{\nu]}\, dA$, where $dA$ is the element of surface area. The null vector $n^\mu$ is orthogonal to the two-sphere on the horizon, with the normalization condition $l_\alpha n^\alpha = -1$.
Then one can write the first-order corrected solutions in terms of $M_I$ and $Q_{\rm ph}$. In the resultant metric, the angular part contains the factor $r\left(r - \frac{Q^2_{\rm ph}\, e^{4G\mu - 2\epsilon \hat D}}{M_I}\right) e^{-4G\mu} \sin^2\theta\, d\varphi^2$, where $\hat D = 2\phi_1 + f_0$. The corrected inner $\hat r_-$ and outer $\hat r_+$ horizons, the corrected condition for the extremal black hole, and the corrected entropy then follow from the formulas of Ref. [21]. Finally, we discuss another interesting problem that can be explored, i.e., the interaction of a cosmic string with an extreme dilatonic black hole. We consider a vortex which is in perfect alignment with the extremal black hole axis. This assumption enables one to avoid the great complications which arise when the black hole and the vortex are displaced relative to each other. We also do not take into account the details of the core structure of the cosmic string. It turns out that the extreme dilatonic black hole metric can be written in a form [23] in which the function $V$ satisfies $\nabla^2_{(x,y,z)} V = 0$. In the cylindrical coordinates $(\rho, z, \varphi)$ centered on the string, with a conical deficit $0 \leq \varphi \leq \frac{2\pi}{p}$, $p \approx 1 + 4\mu G$, the function $V$ takes the form given in, e.g., Ref. [24], where we set the black hole at $\rho = \rho_0$, $\varphi = 0$, and $z = 0$, while $u_0$ is defined by the relation $\cosh u_0 = \frac{\rho^2 + z^2 + \rho_0^2}{2\rho\rho_0}$. The obtained function is nonsingular away from the conical line and the singularity of the black hole. Then there are no forces between the extremal black hole and the cosmic string. Analogous results have been obtained in the case of a cosmic string and an extremal Reissner-Nordström black hole [7]. Of course, we should be aware that we neglect the effect of the extremal dilaton black hole on the string core. Nevertheless, using our assumption of a complete separation of the degrees of freedom of each of the objects, one concludes that an extremal dilaton black hole and a straight cosmic string will hardly feel each other's presence.
V. CONCLUSIONS
In this work we asked whether or not an Abelian-Higgs vortex is expelled from the extremal dilaton black hole. We gave analytical arguments that, no matter how thick the vortex is, it is always expelled from the considered black hole. In order to confirm our analytic results, we performed numerical calculations in which the boundary conditions on the horizon of the extreme dilatonic black hole were guessed at the beginning of the process and updated according to the adequate equations. We also paid attention to vortices ending on the extremal dilaton black hole.
Finally, we studied the backreaction effect of the vortex on the geometry and the other fields of the theory under consideration. In the thin-string limit we obtain the conical dilaton black hole metric. Concluding, we considered the problem of the interaction between a straight cosmic string and an extremal dilaton black hole situated in perfect alignment with the black hole axis. Under the assumption of a clean separation of the degrees of freedom of these objects, one can conclude that they hardly feel each other's presence.
|
2017-09-15T23:30:40.481Z
|
1999-07-06T00:00:00.000
|
{
"year": 1999,
"sha1": "af835027fec4df158429af9393320ff62340bcad",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9907025",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "241453f670a519032017ac6d86eb6cabcffea645",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
244143591
|
pes2o/s2orc
|
v3-fos-license
|
Detection of mecA and class1 integron in Staphylococcus aureus isolated from Egyptian hospitals
This study highlights the prevalence of mecA and class 1 integrons in multidrug-resistant Staphylococcus aureus. A hundred clinical Staphylococcus aureus (SA) isolates were collected from two Egyptian hospitals (Ain-shams hospital and Abbassia fever hospital). All isolates were multidrug resistant (showing resistance to two or more antibiotic groups). Antimicrobial susceptibility testing showed that all isolates were resistant to methicillin, 46% were resistant to ciprofloxacin, 45% to erythromycin, 37% to vancomycin, 36% to imipenem, and 11% were resistant to all seven tested antibiotic groups. Minimal inhibitory concentration testing showed that 58% of the isolates were resistant to imipenem. The isolates were examined for the presence of mecA, the integrase gene (intI1), and the class 1 integron by PCR amplification. Forty-two percent of the isolates were found to carry a class 1 integron gene cassette with variable amplicon sizes, and 36% of the isolates carried the integrase gene (intI1). Only 80% of the methicillin-resistant S. aureus (MRSA) isolates were shown to harbor the mecA gene.
INTRODUCTION
Staphylococcus aureus (SA) is an opportunistic human pathogen able to cause an extensive variety of diseases (Chang et al., 1997; Moore and Lindsay, 2001). The number of infections caused by MRSA, which is now most frequently multidrug resistant (MDR), is increasing. MDR bacteria are defined as bacteria resistant to more than two antibiotic groups according to Ito et al. (2001). MRSA harbors the staphylococcal cassette chromosome mec (SCCmec), which mediates methicillin resistance (Hotta et al., 2000; Hafez et al., 2009). Integrons have a significant role in the dissemination of MDR via horizontal gene transfer (Mindlin and Petrova, 2017). An integron integrates exogenous open reading frames by recombination, converting them into functional genes (Mazel, 2006).
The aim of this work was to investigate the presence of the mecA gene and class 1 integrons among multidrug-resistant MRSA isolates.
MATERIALS AND METHODS Identification of the bacterial isolates
A total of 100 clinical bacterial isolates were collected from two Egyptian hospitals, Ain-shams hospital and Abbassia fever hospital. The isolates were recovered from urine, pus discharge, and wounds. Isolates were cultured on nutrient agar plates, purified, and then subcultured on plates of blood agar, mannitol salt agar, and Baird-Parker agar medium using the streak-plate method (Mahon and Manusekis, 1995; Chapin and Lauderdale, 2003; Todar, 2005). The plates were incubated at 37 °C for 24–48 h. Gram staining and catalase and coagulase production tests were carried out according to Koneman's color atlas (1992).
Further identification was carried out by Microscan biotyper automated system.
Antibiotic resistance surveillance
Muller Hinton plates were inoculated with 0.5 McFarland standard inocula. Seven different antibiotic groups were tested against the isolates, as shown in Table (1). The antibiotic susceptibility test was carried out for the 100 isolates according to the Kirby-Bauer disk diffusion susceptibility test protocol (Bauer et al., 1966). The antibiotic inhibition zones were measured, and resistance was interpreted as recommended by NCCLS (1997) and CLSI (2006, 2020).
Determination of minimal inhibitory concentration (MIC):
Multidrug-resistant (MDR) S. aureus isolates were further investigated for their resistance to imipenem. The MIC was determined for the MRSA isolates using the Etest (AB BIODISK, Sweden) agar gradient diffusion method according to Cui et al. (2008).
Extraction of DNA and Polymerase chain reaction (PCR)
Genomic DNA was extracted from the bacterial isolates using a Qiagen DNA extraction kit (QIAamp DNA Mini Kit) following the manufacturer's instructions. PCR was carried out for amplification of the mecA gene, which encodes the unique penicillin-binding protein (PBP2a) associated with methicillin resistance in S. aureus, the integrase gene (intI1), a genetic element involved in the spread of antibiotic resistance, and the class 1 integron gene cassette region (CS1) (Moura et al., 2007).
PCR was performed in a thermocycler (Applied Biosystems 2720). The reaction mixture had a total volume of 25 μl and contained 2 μl of DNA suspension, 0.025 μmole of each primer, and 12.5 μl of DreamTaq Green PCR master mix (Thermo Fisher Scientific). The PCR protocol consisted of 4 min of denaturation at 94°C followed by 35 cycles of 1 min at 94°C, 30 s at the annealing temperature for each primer pair (Table 2), and 1 min at 72°C, with a final extension step of 10 min at 72°C. The PCR amplicons produced distinct bands corresponding to their respective molecular sizes, easily recognizable by electrophoresis on a 0.8% TAE agarose gel stained with ethidium bromide. The gel was visualized under an ultraviolet transilluminator (UVItec, Cambridge, UK) to detect the presence of the target genes.
Table (2). Primers used in this study. [Columns: target-gene primer, oligonucleotide sequence, annealing temperature; the table body was not recovered.]

RESULTS
Identification of the isolates
The bacterial isolates were identified as Staphylococcus aureus (SA): they produced golden-yellow colonies on mannitol salt agar and black, shiny, convex colonies with clear zones on Baird-Parker agar, and they were positive for catalase and coagulase. Of the 100 isolates, 47% were recovered from urine, 40% from wounds, and 13% from pus discharges. Moreover, 46% of the isolates were recovered from males and 54% from females.
Antibiotic susceptibility test
The antibiotic susceptibility test showed that all isolates were multidrug resistant, with resistance to the seven tested antibiotic groups to varying extents. The results of the antibiotic sensitivity test are summarized in Table (3).
Determination of minimal inhibitory concentration (MIC):
The MIC of imipenem was determined for the hundred methicillin-resistant S. aureus isolates. Results were interpreted using data from the Clinical and Laboratory Standards Institute (CLSI, 2020); isolates with MIC > 32 µg/ml were considered resistant to imipenem (CLSI, 2020; Huanga et al., 2021). The isolates showed high resistance to imipenem: the MIC value was 64 μg/ml in 58% of the isolates.
Detection of mecA, integrase gene (intl1) and class1 integron gene cassette in S. aureus isolates
The results indicated that the mecA gene was detected in 80% of the MRSA isolates; unexpectedly, twenty percent of the phenotypically MRSA isolates were found to be mecA negative. Furthermore, the intI1 gene was detected in 36% of the isolates, and class 1 integron gene cassettes of variable band sizes were detected in 42% of the isolates. The sizes of the variable regions of the class 1 integron gene cassettes in the positive isolates ranged from 100 to 1000 bp (Fig. 1).
The mecA gene was more prevalent among isolates from urine than among those from pus discharge and wounds. The absence of the mecA gene and its gene product (PBP2a) in phenotypically methicillin-resistant isolates could be attributed to several causes, such as hyperproduction of β-lactamase (Olayinka et al., 2009) or specific alterations in the penicillin-binding proteins (PBPs 1, 2, and 3), including three amino acid substitutions (Ba et al., 2014). Horizontal gene transfer (HGT) could be another factor in the dissemination of antibiotic resistance, by transferring the mecA gene among MRSA strains (Hanssen et al., 2004; Tolba et al., 2013; Liao et al., 2018; El-Baghdady et al., 2020).
DISCUSSION
The SCCmec element carries the mecA gene together with two regulatory genes, mecI and mecR1 (the mec complex); when mecR1 is expressed the organism is resistant, whereas when mecI is expressed the isolate is sensitive (Baig et al., 2018).
The masking of methicillin resistance in S. aureus isolates is also explained by Gallagher et al. (2017), who noted that accurate detection of methicillin resistance can be difficult because of the heteroresistance phenomenon: two subpopulations (one susceptible and the other resistant) may coexist within a culture of staphylococci. All cells in a culture may carry the genetic information for resistance, but only a small number may express resistance in vitro (Wayne, 2005; Figueiredo et al., 2014).
Additional genes may regulate the expression of mecA (Berger-Bachi et al., 2002; Rolo et al., 2017), although this mechanism remains unknown (Barbier et al., 2010). Resistance to several antibiotics is associated with the presence of integrons (Bay and Turner, 2012). In this study, among the hundred MDR S. aureus isolates, 42% were found to carry class 1 integron gene cassettes with variable amplicon sizes ranging from 100 to 1000 bp, and 36% of the isolates carried the intI1 integrase gene with an amplicon size of 430 bp. The typical integron structure comprises the integrase gene (intI1) with its promoter (PintI), an integration site named attI (the attachment site of the integron), and a constitutive promoter (Pc) for the gene cassettes integrated at the attC site. The second component is a cluster of gene cassettes; a cassette is composed of an ORF flanked by two attC recombination sites (Joss et al., 2009; Yohann et al., 2017).
In0 elements have no attC sites but retain the integrase gene with its promoter and the attI site. This indicates that integrons lacking antibiotic resistance determinants are very common in natural populations (Mindlina and Petrovaa, 2017).
The third type of integron structure is a cluster of attC sites lacking an integron integrase (CALIN element), composed of at least two attC sites.
Integrons regularly capture cassettes from CALIN elements; thus genomes that lack the integrase gene but carry CALIN structures might be important reservoirs of novel cassettes (Jean et al., 2016).
In the current study, 16% of the isolates carried a typical integron structure, 26% were CALIN (a cluster of class 1 integron gene cassettes without the integrase gene), 20% were In0 (the integrase gene only, without any gene cassette), and in the remaining 38% of the isolates no integron element was detected. The results thus demonstrate variation in integron structure among the MRSA isolates.
The high prevalence of the mecA gene and integrons in multidrug-resistant isolates highlights the urgent need for effective means of preventing the dissemination of drug-resistant bacteria.
Conclusion
Resistance to methicillin is not necessarily associated with the mecA gene, because mecA was not detected in 20% of the phenotypically methicillin-resistant isolates. The presence of integrons may bring together more extensive sets of resistance determinants than single genes and serve as reservoirs of antimicrobial resistance. The presence of class 1 integrons in MRSA isolates could accelerate the dissemination of MRSA infections.
Approximate construction of rational approximations and the effect of error autocorrection. Applications
Abstract. Several construction methods for rational approximations to functions of one real variable are described in the present paper; the computational results that characterize the comparative accuracy of these methods are presented; an effect of error autocorrection is considered. This effect occurs in efficient methods of rational approximation (e.g., Padé approximations, linear and nonlinear Padé-Chebyshev approximations) where very significant errors in the coefficients do not affect the accuracy of the approximation. The matter of import is that the errors in the numerator and the denominator of a fractional rational approximant compensate each other. This effect is related to the fact that the errors in the coefficients of a rational approximant are not distributed in an arbitrary way but form the coefficients of a new approximant to the approximated function. Understanding of the error autocorrection mechanism allows to decrease this error by varying the approximation procedure depending on the form of the approximant. Some applications are described in the paper. In particular, a method of implementation of basic calculations on decimal computers that uses the technique of rational approximations is described in the Appendix.
To a considerable extent the paper is a survey and the exposition is as elementary as possible.

Whenever he has some money to spare, he goes to a shop and buys some kind of useful book. Once he bought a book that was entitled "Inverse trigonometrical functions and Chebyshev polynomials". N. N. Nosov, "Happy family". Moscow, 1975, p. 91.

§1. Introduction

The author came across the phenomenon of error autocorrection at the end of the seventies while developing nonstandard algorithms for computing elementary functions on small computers. It was required to construct rational approximants of the form

(1)   R(x) = (a_0 + a_1 x + a_2 x^2 + ... + a_n x^n) / (b_0 + b_1 x + b_2 x^2 + ... + b_m x^m)

to certain functions of one variable x defined on finite segments of the real line. For this purpose a simple method (described in [1] and below) was used: the method allows to determine the family of coefficients a_i, b_j of the approximant (1) as the solution of a certain system of linear algebraic equations. These systems turned out to be ill conditioned, i.e., the problem of determining the coefficients of the approximant is, generally speaking, ill-posed in the sense of [2]. Nevertheless, the method ensures a paradoxically high quality of the obtained approximants, whose errors are close to the best possible [1].
For example, for the function cos x the approximant of the form (1) on the segment [−π/4, π/4] obtained by the method mentioned above for m = 4, n = 6 has relative error 0.55·10^{−13}, while the best possible relative error is 0.46·10^{−13} [3]. The corresponding system of linear algebraic equations has condition number of order 10^9. Thus we risk losing 9 accurate decimal digits of the solution because of calculation errors. Computer experiments show that this is a serious risk. The method mentioned above was implemented as a Fortran program. The calculations were carried out with double precision (16 decimal positions) by means of ICL-4-50 and ES-1045 computers. These computers are very similar in their architecture, but when passing from one computer to the other the system of linear equations and the computational process are perturbed because of calculation errors, including round-off errors. As a result, the coefficients of the approximant mentioned above to the function cos x experience a perturbation already in the sixth to ninth decimal digits. But the error of the rational approximant itself remains invariant: it is 0.4·10^{−13} for the absolute error and 0.55·10^{−13} for the relative error. The same thing happens for the approximant of the form (1) to the function arctg x on the segment [−1, 1] obtained by the method mentioned above for m = 8, n = 9: the relative error is 0.5·10^{−11} and does not change when passing from the ICL-4-50 to the ES-1045, although the corresponding system of linear equations has condition number of order 10^{11} and the coefficients of the approximant experience a perturbation with relative error of order 10^{−4}.
Thus the errors in the numerator and the denominator of a rational approximant compensate each other. The effect of error autocorrection is connected with the fact that the errors in the coefficients of a rational approximant are not distributed in an arbitrary way, but form the coefficients of a new approximant to the approximated function. It can be easily understood that the standard methods of interval arithmetic (see, for example, [54]) do not allow one to take this effect into account and, as a result, to estimate the error of the rational approximant accurately.
Note that the application of standard procedures known in the theory of ill-posed problems results in losses of accuracy in this case. For example, if one applies the regularization method, two thirds of the accurate figures are lost [4]; in addition, the amount of calculation increases rapidly. The matter of import is that the exact solution of the system of equations is not the ultimate goal in the present case; the aim is to construct an approximant which is precise enough. This approach allows to "rehabilitate" (i.e., to justify) and to simplify a number of algorithms intended for the construction of approximants, and to obtain (without additional transforms) approximants in a form which is convenient for applications.
Professor Yudell L. Luke kindly drew the author's attention to his papers [5,6] where the effect of error autocorrection for the classical Padé approximants was revealed and was explained at a heuristic level. The method mentioned above leads to the linear Padé-Chebyshev approximants if the calculation errors are ignored.
In the present paper, using heuristic arguments and the results of computer experiments, the error autocorrection mechanism is considered for quite a general situation (linear methods for the construction of rational approximants, nonlinear generalized Padé approximations). The efficiency of the construction algorithms used for rational approximants seems to be due to the error autocorrection effect (at least in the case when the number of coefficients is large enough).
Our new understanding of the error autocorrection mechanism allows us, to some extent, to control calculation errors by changing the construction procedure depending on the form of the approximant.
In the paper the construction algorithm for linear Padé-Chebyshev approximants is considered and the corresponding program is briefly described (see [7]). It is shown that the introduction of a control parameter which takes the error autocorrection mechanism into account ensures a decrease of the calculation errors in some cases. Results of computer calculations that characterize the possibilities of the program and the quality of the approximants obtained, as compared to the best ones, are presented. Some other (linear and nonlinear) construction methods for rational approximants are described. Construction methods for linear and nonlinear Padé-Chebyshev approximants involving the computer algebra system REDUCE (see [8]) are also briefly described, together with computation results characterizing the comparative precision of these methods. In light of the error autocorrection phenomenon, we also analyze the effect described in [9]: a small variation of an approximated function can lead to a sharp decrease in the accuracy of Padé-Chebyshev approximants. Some applications are indicated. In particular, a method of implementation of basic calculations on decimal computers that uses the technique of rational approximations is described in the Appendix.
To a considerable extent the paper is a survey and the exposition is as elementary as possible. In the survey part of the paper we tried to present the required information clearly and consistently, to make it self-contained. But this part does not claim to be complete: the number of papers concerning rational approximation theory and its applications in numerical analysis (including computer calculation of functions, numerical solution of equations, acceleration of convergence of series, and quadratures), in theoretical and experimental physics (including quantum field theory, scattering theory, nuclear and neutron physics), in the theory and practice of experimental data processing, in mechanics, in control theory, and other branches is much too vast; see, in particular, the reviews and reference handbooks [3, 10-16].
The author is grateful to Yudell L. Luke for stimulating conversations and valuable instructions. The author also wishes to express his thanks to I. A. Andreeva, A. Ya. Rodionov and V. N. Fridman, who participated in the programming and organization of computer experiments. This paper would not have been written without their help. A preliminary version of the paper was published in [56].

§2. Best approximants

We shall need some information and results pertaining to ideas of P. L. Chebyshev, see [17]. Let [A, B] be a real line segment (i.e., [A, B] is the set of all real numbers x such that A ≤ x ≤ B) and f(x) be a continuous function defined on this segment. Consider the absolute error function of the approximant of the form (1) to the function f(x), i.e., the quantity

(2)   Δ(x) = f(x) − R(x),

and the absolute error of this approximant, i.e., the quantity

(3)   Δ = max_{A ≤ x ≤ B} |Δ(x)|.

A classical problem of approximation theory is to determine, for fixed degrees m and n in (1), the coefficients in the numerator and the denominator of expression (1) so that (3) is the smallest possible. The corresponding approximant is called best (with respect to the absolute error). An important role is played by the following result.
Generalized de la Vallée-Poussin theorem [17]. Suppose that the polynomials

(4)   P(x) = a_0 + a_1 x + ... + a_{n−ν} x^{n−ν},   Q(x) = b_0 + b_1 x + ... + b_{m−µ} x^{m−µ},

where 0 ≤ µ ≤ m, 0 ≤ ν ≤ n, b_{m−µ} ≠ 0, have no common divisor (i.e., the fraction P(x)/Q(x) is irreducible), that Q(x) does not vanish on the segment [A, B], and that at successive points x_1 < x_2 < ... < x_N of this segment the difference f(x) − P(x)/Q(x) takes the nonzero values λ_1, −λ_2, ..., (−1)^{N−1} λ_N with alternating signs (so that the numbers λ_i are either all positive or all negative), where N = m + n + 2 − d and d is the smallest of the numbers µ, ν. Then the error Δ of any approximant of the form (1) satisfies the inequality

(5)   Δ ≥ min{|λ_1|, ..., |λ_N|}.

Proof. Suppose that there exists an approximant R(x) of the form (1) for which inequality (5) is not satisfied. Consider the difference

ε(x) = (f(x) − P(x)/Q(x)) − (f(x) − R(x)) = R(x) − P(x)/Q(x).

From our assumption it follows that the numbers ε(x_1), ε(x_2), ..., ε(x_N) differ from zero and have alternating signs. This, in its turn, by virtue of the continuity of the function ε(x) on the segment [A, B], implies that the function ε(x) has at least N − 1 zeros on this segment. On the other hand, the definition of the function ε(x) implies the equality ε(x) = U(x)/V(x), where U(x) and V(x) are polynomials and the degree of U(x) does not exceed m + n − d. So the function ε(x) cannot have more than N − 2 = m + n − d zeros. This contradiction proves the theorem.
The quantity d which is mentioned in the theorem is called the defect of the approximant R(x); in practice usually d = µ = ν = 0. The generalized de la Vallée-Poussin theorem gives us a sufficient condition for the approximant R(x) = P(x)/Q(x), where P(x) and Q(x) are polynomials of the form (4), to be best. The points x_1 < x_2 < ... < x_N of the segment [A, B] are called Chebyshev alternation points for the approximant R(x) if the error function Δ(x) = f(x) − R(x) at these points has values which coincide in absolute value with the absolute error Δ of the approximant R and alternate in sign. In other words, at the points x_1, ..., x_N the error function Δ(x) has extrema with alternating signs which coincide with each other in absolute value. From the generalized de la Vallée-Poussin theorem it follows that the presence of Chebyshev alternation points is sufficient for the approximant R(x) to be best.
Chebyshev theorem. The presence of Chebyshev alternation points is a necessary and sufficient condition under which the approximant is best. Such an approximant exists and is unique if two fractions that coincide after cancellation are not regarded as different.
A comparatively simple proof is given in [17]. Note that P. L. Chebyshev and de la Vallée-Poussin considered the case of polynomial approximants. The general case was first considered by N. I. Akhiezer, the results mentioned being valid also in the case when the expression Δ_ρ(x) = (f(x) − R(x))/ρ(x), where the weight ρ is nonzero, is taken for the error function; if the weight satisfies certain additional conditions, then the segment [A, B] need not be assumed finite [17]. Note that for ρ(x) ≡ 1 we obtain the absolute error function (2); and if f(x) has no zeros on the segment [A, B], then for ρ(x) = f(x) we obtain the relative error function

(6)   δ(x) = (f(x) − R(x))/f(x).

Correspondingly, the quantity

(7)   δ = max_{A ≤ x ≤ B} |δ(x)|

is the relative error, and one can speak of the best approximants with respect to the relative error.
Suppose that the segment [A, B] is symmetric with respect to zero, i.e., A = −B. If the function f(x) is even, then it is not difficult to verify that all its best rational approximants on this segment (in the sense of the absolute error or of the relative one) are also even functions, so that one can immediately look for them in the form R(x^2) = P(x^2)/Q(x^2), where P and Q are polynomials. If the function f(x) is odd, then its best approximants are also odd functions and one can immediately look for them in the form xR(x^2) = xP(x^2)/Q(x^2), where P and Q are polynomials. One can speak of the best approximants with respect to the relative error if an odd function f(x) is zero only for x = 0, is continuously differentiable, and f′(0) ≠ 0.
In this case f(x) can be represented in the form xϕ(x), where ϕ(x) is a continuous even function that never vanishes. Then constructing rational approximants to the function f(x) with best relative error reduces to solving the same problem for the even function ϕ(x); indeed,

(f(x) − xR(x^2))/f(x) = (ϕ(x) − R(x^2))/ϕ(x).

§3. Construction methods for best approximants

Suppose that a rational approximant of the form (1) is the best approximant to a continuous function f(x) on the segment [A, B]. For simplicity, in what follows we shall assume that the defect is zero. Let x_1, x_2, ..., x_{m+n+2} be the Chebyshev alternation points. Then the error function Δ_ρ(x) corresponding to the weight ρ (see above) satisfies the following system of equalities:

(8)   Δ_ρ(x_k) = (−1)^k λ,   k = 1, 2, ..., m + n + 2.

For fixed values of x_1, ..., x_{m+n+2}, relations (8) can be regarded as a system of m + n + 2 equations with respect to the unknowns a_i, b_j, λ, where i = 0, ..., n, j = 0, ..., m. Since one can multiply the numerator and the denominator of the fraction R(x) by the same number, one more condition, for example b_0 = 1, can be added to system (8), so that the number of equations coincides with the number of unknowns.
The iteration method of computation of the coefficients of the approximant R(x) (suggested by A. Ya. Remez (see [18]) for polynomial approximants and extended later to the general case) is based on this idea. Different versions of the generalized Remez method were considered in many papers; see, for example, [3, 12, 20-27]. The approximant is constructed as follows. On the first step, the initial approximations x_1 < x_2 < ... < x_{m+n+2} to the Chebyshev alternation points are chosen on the segment [A, B] and the system of equations (8) is solved. As a result we obtain some rational approximant R_1(x) with error function Δ_1(x) = (f(x) − R_1(x))/ρ(x). For this function the extremum points are found, and the information obtained is used to modify the set {x_1, ..., x_{m+n+2}}. Then the procedure is repeated anew, a new approximant R_2(x) is obtained, and so on.
Taking into account the fact that Δ_ρ(x) = (f(x) − R(x))/ρ(x) and that R(x) has the form (1), system (8) can be rewritten in the form

f(x_k) − P(x_k)/Q(x_k) = (−1)^k λ ρ(x_k),   k = 1, ..., m + n + 2,

whence, as the result of elementary transformations, we get the system of equations

(9)   Σ_{i=0}^{n} a_i x_k^i − (f(x_k) − (−1)^k λ ρ(x_k)) Σ_{j=0}^{m} b_j x_k^j = 0,   k = 1, ..., m + n + 2.

Note that for a fixed value of λ (as well as of the alternation points x_1, ..., x_{m+n+2}) the coefficients a_i, b_j of the approximant satisfy the system of linear homogeneous algebraic equations (9). But λ must also be determined; this makes (9) a nonlinear system of equations which is rather difficult to solve. The case when it is necessary to find a polynomial approximant, i.e., the case m = 0, is an exception to what was just noted: in this case the system (9) becomes linear.
The solution of the nonlinear system of equations (9) is usually reduced to the iterated solution of systems of linear equations. The following method is comparatively popular (see, for example, [3, 12, 21, 22, 25, 28]) and was used to compile the well-known tables of rational approximants to elementary and special functions [3]. Let b_0 = 1 (normalization); then (9) takes the following form:

(9′)   Σ_{i=0}^{n} a_i x_k^i − (f(x_k) − (−1)^k λ ρ(x_k)) (1 + Σ_{j=1}^{m} b_j x_k^j) = 0.

Substituting a fixed number λ_0 for λ in the nonlinear terms of system (9′), we get the linear system

(10)   Σ_{i=0}^{n} a_i x_k^i − (f(x_k) − (−1)^k λ_0 ρ(x_k)) Σ_{j=1}^{m} b_j x_k^j + (−1)^k λ ρ(x_k) = f(x_k),   k = 1, ..., m + n + 2.

The iteration process is applied to the initial collection of values of the critical points {x_k}, i.e., of initial approximations to the Chebyshev alternation points, and to the given value λ_0. First, from (10) one determines a new value of λ and substitutes it for λ_0 in the nonlinear terms; then the system of equations (10) is solved again, the next value of λ is determined, and so on. As a result a new value of λ and the collection of coefficients a_i, b_j are defined. The next step is to determine a new collection of critical points {x_k} as extremum points of the error function of the approximant obtained on the previous step. Both steps form one cycle of the iteration process. The calculation is finished when the value of λ coincides, to a precision given in advance, with the maximal absolute value of the error function. A complete text of the corresponding Algol program is given in [22]. Unfortunately the iteration process described above can fail to converge even in the case when the initial approximation differs from the solution of the problem infinitesimally; see [28]. For some versions of the Remez method it is proved that the iteration process converges if the initial approximation is sufficiently good, see [20, 23-25, 29, 12]. Nevertheless, in each particular case it is often difficult to indicate a priori (i.e., before the start of calculations) an initial approximation that ensures the convergence of the iteration process, and for a given initial approximation it is difficult to verify whether the conditions ensuring convergence are satisfied. One of the methods applied in practice is to construct, at first, the best polynomial approximant of degree m + n (in this case no difficulties arise); next, using the Chebyshev alternation points of this approximant as the initial collection of critical points for the iteration process, one constructs the best approximant having the form of a polynomial of degree m + n − 1 divided by a linear function. Finally, in the same manner, the degree of the numerator is successively reduced and the degree of the denominator is successively raised till an approximant of the required form (1) is obtained, see [3, 12].
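To make one cycle of this linearization concrete, here is a minimal sketch in Python with NumPy (not the original Fortran or Algol implementations; the function name, the calling convention, and the default weight ρ ≡ 1 are our assumptions). It assembles and solves system (10) for given critical points x_k and a given λ_0:

```python
import numpy as np

def linearized_remez_step(f, xs, lam0, n, m, rho=lambda x: 1.0):
    """Solve the linear system (10) for the unknowns a_0..a_n, b_1..b_m
    and lambda, with the normalization b_0 = 1; xs must contain the
    m + n + 2 current approximations to the alternation points."""
    N = m + n + 2
    A = np.zeros((N, N))
    rhs = np.zeros(N)
    for k, x in enumerate(xs):
        sgn = (-1.0) ** k
        A[k, : n + 1] = [x ** i for i in range(n + 1)]            # a_i terms
        A[k, n + 1 : n + 1 + m] = [
            -(f(x) - sgn * lam0 * rho(x)) * x ** j for j in range(1, m + 1)
        ]                                                         # b_j terms
        A[k, -1] = sgn * rho(x)                                   # lambda term
        rhs[k] = f(x)
    sol = np.linalg.solve(A, rhs)
    a = sol[: n + 1]
    b = np.concatenate(([1.0], sol[n + 1 : n + 1 + m]))
    lam = sol[-1]
    return a, b, lam
```

One cycle of the iteration would repeat this step with λ_0 replaced by the newly found λ until the two agree, and then move the points x_k to the extrema of the error function, exactly as described above.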
Together with iteration methods for constructing the best rational approximants, methods of linear and convex programming are used, see [18, 30]. Iteration methods, as a rule, are more efficient [27], but cannot be generalized directly to the case of functions of several variables.

§4. The role of approximate methods and an estimate of the quality of approximation

The construction algorithms for the best rational approximants are comparatively complicated, so simpler methods that give an approximate solution of the problem are used on a large scale; see, for example, [1, 5, 7-9, 11-15, 24, 25, 31-37]. Below we describe methods which are easily implemented, use comparatively little computation time, and yield approximants that are close to best. Such an approximant can be used as an initial approximation for an iteration algorithm which gives the exact result. The approximant that is best in the sense of the absolute error is not necessarily best in the sense of the relative error. It is usually important in practice for both the absolute error and the relative one to be small. So, rather than the best approximants, approximants constructed by means of an approximate method and having appropriate absolute and relative errors are often more convenient. Finally, one can also apply methods giving an approximate solution of the rational approximation problem in those cases when the information about the approximated function is incomplete (for example, the values of the function are known only for a finite number of argument values, or only the first terms of the expansion of the function in a series are known, or the initial information contains an error, and so on).
The generalized de la Vallée-Poussin theorem (see §2 above) allows to estimate the proximity of an approximate solution of the approximation problem to the best approximant even in the case when this best approximant is unknown.
For example, suppose we want to estimate the proximity of a given approximant of the form (1) to the approximant of the same form with the best absolute error for a given function f(x). Suppose for simplicity that the defect of the best approximant is zero (in practice this condition usually holds). Then, by virtue of Chebyshev's alternation theorem, in the case when the given approximant R(x) is sufficiently close to the best one, at successive points x_1 < ... < x_{m+n+2} belonging to the interval where the argument x varies, the absolute error function Δ(x) takes nonzero values λ_1, −λ_2, ..., (−1)^{m+n+1} λ_{m+n+2} with alternating signs. In this case we shall say that alternation appears. If |λ_1| = |λ_2| = ... = |λ_{m+n+2}|, then this alternation is Chebyshev's. Denote by Δ_min the best possible absolute error of approximants of the form (1) to the function f(x) (the numbers m and n are fixed). Set λ = min{|λ_1|, ..., |λ_{m+n+2}|}. Then, by the generalized de la Vallée-Poussin theorem, the inequality Δ_min ≥ λ is valid; thus

(11)   Δ ≥ Δ_min ≥ λ.
It is clear that Δ coincides with the greatest (in absolute value) extremum of the error function Δ(x), and one can take the least (in absolute value) extremum of this function for λ (up to a sign). The quantity

(12)   q = λ/Δ

characterizes the proximity of the error of the given approximant to the error of the best approximant. It is clear that 0 < q ≤ 1, and q = 1 if the given approximant is best. The closer the quantity q is to 1, the higher the quality of the approximant. From (11) and (12) it follows that

(13)   Δ ≥ Δ_min ≥ qΔ.

Usually the estimate (13) is rather rough. The appearance of the alternation itself indicates the closeness of the error of the given approximant to the best one, and the quantity Δ_min/Δ is, in general, much greater than the value of q.
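The quantities Δ, λ and q are easy to estimate once the error function can be evaluated on a dense grid. A minimal sketch (Python/NumPy; the grid search is a crude stand-in for the "special standard subroutine" discussed below, and `err` is assumed to be a vectorized callable):

```python
import numpy as np

def approximation_quality(err, A, B, npts=4000):
    """Estimate Delta = max |err|, lambda = the smallest local extremum
    of |err| (endpoints included), and the quality q = lambda / Delta
    from formula (12), on a uniform grid over [A, B]."""
    x = np.linspace(A, B, npts)
    e = err(x)
    # indices of interior local extrema of err, plus the two endpoints
    idx = [0] + [
        i for i in range(1, npts - 1)
        if (e[i] - e[i - 1]) * (e[i + 1] - e[i]) <= 0.0
    ] + [npts - 1]
    vals = np.abs(e[idx])
    delta, lam = vals.max(), vals.min()
    return delta, lam, lam / delta
```

A production routine would additionally check that the signs at the successive extrema actually alternate before reporting q.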
Similarly, the quality of an approximant with respect to the best relative error is evaluated.
If we can calculate the values of the approximated function at all points of the segment [A, B] (or at a sufficiently "dense" set of such points), and if the coefficients of the rational approximant R(x) are already known, then it is not hard to determine the points of local extremum of the error function and to calculate the quantities λ_1, λ_2, ..., λ_{m+n+2}, as well as the quantities λ and q, by means of a special standard subroutine. The same subroutine is also necessary for the construction of the best approximants by means of an iteration method. A good program package for the construction of rational approximants must contain a subroutine of this sort as well as a good subroutine for solving systems of linear algebraic equations, and must have, as a component part, routines which implement both the algorithms for approximate solution of the approximation problem and the construction algorithms for best approximants.

§5. Chebyshev polynomials and polynomial approximations

Chebyshev polynomials play an important role in approximation theory and in computational practice (see, for example, [12, 13, 17, 18, 24, 33, 38]). We shall consider Chebyshev polynomials of the first kind.
These polynomials were defined by P. L. Chebyshev in the form

(14)   T_n(x) = cos(n arccos x),

where n = 0, 1, .... Setting ϕ = arccos x and representing cos nϕ via sin ϕ and cos ϕ, it is not difficult to verify that the right-hand side of formula (14) coincides indeed with a certain polynomial. In particular, T_0(x) = cos 0 = 1, T_1(x) = x, and so on. For the actual computation of T_n(x) the recurrence relation

(15)   T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x)

is usually used. Sometimes it is more convenient to consider the polynomials T̄_n(x) = 2^{1−n} T_n(x), since the coefficient of x^n in the polynomial T̄_n(x) is equal to 1. The polynomials T̄_n satisfy the recurrence relation

T̄_{n+1}(x) = x T̄_n(x) − (1/4) T̄_{n−1}(x).

Consider a particular case of the problem of best approximation: the approximant to the function f(x) = x^n on the segment [−1, 1] is looked for in the form of a polynomial P(x) of degree n − 1. From the de la Vallée-Poussin theorem it follows that the approximant in question has the form P(x) = x^n − T̄_n(x). In this case the error function Δ(x) coincides with T̄_n(x), and one can explicitly exhibit the Chebyshev alternation points: x_k = −cos(kπ/n), where k = 0, 1, ..., n. Indeed,

T̄_n(x_k) = 2^{1−n} cos(n arccos(−cos(kπ/n))) = (−1)^{n+k} 2^{1−n},

i.e., T̄_n(x) takes its maximum absolute value 1/2^{n−1} with alternating signs at the points indicated above. This implies an important consequence: the best polynomial approximant of degree n − 1 to the polynomial a_0 + a_1x + ... + a_nx^n on the segment [−1, 1] has the form a_0 + a_1x + ... + a_nx^n − a_nT̄_n(x). This result allows to reduce the degree of a polynomial (for example, of some polynomial approximant) with a minimum loss of accuracy. The reduction of the polynomial degree by means of successively applying the method indicated above is called economization. The economization method is due to C. Lanczos, see [38].
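A short sketch of both tools (Python/NumPy; the use of `poly2cheb`/`cheb2poly` from numpy.polynomial for the basis change is an implementation convenience of ours, not part of the method described above):

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

def cheb_T(n, x):
    """Evaluate T_n(x) by the recurrence (15): T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t = np.ones_like(x), np.asarray(x, dtype=float)
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def economize(coeffs):
    """One economization step on [-1, 1]: expand a_0 + ... + a_n x^n in
    T_0..T_n and drop the T_n term; the polynomial changes by at most
    |a_n| * 2^(1-n), i.e., by the term a_n * Tbar_n(x)."""
    c = Ch.poly2cheb(coeffs)   # power-basis coefficients -> Chebyshev basis
    return Ch.cheb2poly(c[:-1])
```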
The monomials x^0, x^1, ..., x^m can be expressed via the Chebyshev polynomials T_0, T_1, ..., T_m. For m > 0 the following formula is valid:

(16)   x^m = 2^{1−m} Σ_{k=0}^{[m/2]} a_k (m choose k) T_{m−2k}(x),

where (m choose k) are the binomial coefficients, [m/2] is the integer part of the number m/2, a_k = 1/2 for k = m/2 and a_k = 1 for k ≠ m/2. The expansion of the polynomial T_m in powers of x for m > 0 is given by the formula

(17)   T_m(x) = (m/2) Σ_{k=0}^{[m/2]} (−1)^k ((m − k − 1)!/(k! (m − 2k)!)) (2x)^{m−2k}.

Finally, x^0 = T_0 = 1. It is clear that the set of polynomials of the form Σ_{i=0}^{n} c_i T_i, where the c_i are numerical coefficients, coincides with the set of all polynomials of degree n.
The economization procedure mentioned above can also be described in the following way: the initial polynomial is expanded in the Chebyshev polynomials T_0, ..., T_m by means of (16), and then the term of the highest degree is discarded.

Denote by L²_w the Hilbert space of functions on [−1, 1] that are square integrable with respect to the weight w(x) = 1/√(1 − x²), with the inner product ⟨f, g⟩ = ∫_{−1}^{1} w(x) f(x) g(x) dx. It is not hard to verify that the Chebyshev polynomials T_n form an orthogonal (but not orthonormal) basis in L²_w. The expansion of a function f(x) into a series in Chebyshev polynomials,

(19)   f(x) = Σ_{i=0}^{∞} c_i T_i(x)

(the Fourier-Chebyshev series), is easily reduced to the expansion of the function f(cos x) into the standard Fourier series in cosines. Among the polynomials of degree n, the polynomial P_n(x) = Σ_{i=0}^{n} c_i T_i gives the best approximation to the function f(x) in L²_w. The following result shows that this approximant on the segment [−1, 1] is close to the best one in the sense of the absolute error.

Cheney's theorem. Suppose that a function ϕ(x) belonging to L²_w is orthogonal to all the polynomials T_0, T_1, ..., T_m, i.e.,

(20)   ∫_{−1}^{1} w(x) ϕ(x) T_k(x) dx = 0,   k = 0, 1, ..., m,

and does not vanish almost everywhere. Then ϕ(x) changes sign on [−1, 1] at least m + 1 times.
Proof. Assume that ϕ(x) has exactly m sign changes, at the points x_1 < x_2 < ... < x_m. Since P(x) = (x − x_1)(x − x_2)⋯(x − x_m) is a polynomial of degree m, we see that it can be represented in the form of a linear combination of the polynomials T_0, T_1, ..., T_m. Thus (20) implies

∫_{−1}^{1} w(x) ϕ(x) P(x) dx = 0.

It is clear that the function ϕ(x)P(x) has no sign changes; thus the equality just obtained means that ϕ(x) vanishes almost everywhere. The latter proves the theorem.
In a more general case this result is proved in [39, p. 110]. The proof given above allows a generalization to the case of systems of orthogonal polynomials of sufficiently general form and arbitrary segments of integration (including infinite ones), see [40]. Now let us return to the function (19) and to its approximant P_n(x) = Σ_{i=0}^{n} c_i T_i. The absolute error function

Δ(x) = f(x) − P_n(x) = Σ_{i=n+1}^{∞} c_i T_i(x)

is orthogonal to the polynomials T_0, T_1, ..., T_n. If the function f(x) is continuous, then the error function Δ(x) is also continuous. In this case, from the Cheney theorem it follows that either Δ(x) is identically zero or it has at least n + 1 sign changes. This means that alternation is present, i.e., the approximant P_n(x) is close to the best one, and their proximity can be evaluated by means of relations (11)-(13).
§6. Computation of coefficients and ill-conditioned problems

Consider approximants of the form

(21)   R = (a_0 ϕ_0 + a_1 ϕ_1 + ... + a_n ϕ_n) / (b_0 ψ_0 + b_1 ψ_1 + ... + b_m ψ_m),

where ϕ_k, ψ_k are given functions, to a given function f(x) defined on a set X. If X coincides with a real line segment [A, B] and ϕ_k = x^k, ψ_k = x^k for all k, then the expression (21) turns out to be a rational function of the form (1) (see the Introduction). It is clear that expression (21) also gives a rational function in the case when we take the Chebyshev polynomials T_k or, for example, the Legendre, Laguerre, Hermite, etc. polynomials as ϕ_k and ψ_k. Fix an abstract construction method for an approximant of the form (21) and consider the problem of computing the coefficients a_i, b_j. Quite often this problem is ill-conditioned, i.e., small perturbations of the approximated function f(x) or calculation errors lead to considerable errors in the values of the coefficients. For example, the problem of computing the coefficients of best rational approximants (including polynomial approximants) is ill-conditioned for high degrees of the numerator or the denominator.
The instability with respect to calculation errors can be related both to the abstract construction method of the approximation (i.e., to the formulation of the problem) and to the particular algorithm implementing the method. The fact that the problem of computing the coefficients of the best approximant is ill-conditioned is related to the formulation of this problem. This is also valid for other construction methods for rational approximants with a sufficiently large number of coefficients. But an unfortunate choice of the algorithm implementing a certain method can aggravate the troubles connected with ill-conditioning.
Several construction methods for approximants of the form (21) are connected with solving systems of linear algebraic equations. This procedure can lead to a large error if the corresponding matrix is ill-conditioned. Consider an arbitrary system of linear algebraic equations

(22)   Ay = h,

where A is a given square matrix of order N with components a_ij (i, j = 1, ..., N), h is a given column vector with components h_i, and y is an unknown column vector with components y_i. Define the vector norm by the equality

(23)   ‖y‖ = max_{1≤i≤N} |y_i|

(this norm is more convenient for calculations than √(y_1² + ... + y_N²)). Then the matrix norm is determined by the equality

(24)   ‖A‖ = sup_{y≠0} ‖Ay‖/‖y‖ = max_{1≤i≤N} Σ_{j=1}^{N} |a_ij|.

If a matrix A is nonsingular, then the quantity

(25)   cond(A) = ‖A‖ · ‖A^{−1}‖

is called the condition number of the matrix A (see, for example, [41]). Since y = A^{−1}h, we see that the absolute error Δy of the vector y is connected with the absolute error of the vector h by the relation Δy = A^{−1}Δh, whence ‖Δy‖ ≤ ‖A^{−1}‖·‖Δh‖ and ‖Δy‖/‖y‖ ≤ ‖A^{−1}‖·(‖h‖/‖y‖)·(‖Δh‖/‖h‖). Taking into account the fact that ‖h‖ ≤ ‖A‖·‖y‖, we finally obtain

(26)   ‖Δy‖/‖y‖ ≤ cond(A) · ‖Δh‖/‖h‖,

i.e., the relative error of the solution y is estimated via the relative error of the vector h by means of the condition number. It is clear that (26) can turn into an equality. Thus, if the condition number is of order 10^k, then, because of round-off errors in h, we can lose k decimal digits of y.
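For illustration, the condition number in the norms (23)-(24) can be computed directly. A Python/NumPy sketch follows; the Vandermonde test matrix is our own example, chosen because matrices of this type underlie system (27) below:

```python
import numpy as np

def cond_max_norm(A):
    """Condition number (25) for the norm (24): the maximum absolute
    row sum, i.e., the matrix norm induced by the vector norm (23)."""
    row_sum = lambda M: np.abs(M).sum(axis=1).max()
    return row_sum(A) * row_sum(np.linalg.inv(A))

x = np.linspace(-1.0, 1.0, 10)
V = np.vander(x, increasing=True)            # columns 1, x, x^2, ..., x^9
print("cond(V) = %.2e" % cond_max_norm(V))   # large: many digits at risk
```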
Similarly, the contribution of the error of the matrix A is evaluated. Finally, the dependence of cond(A) on the choice of a norm is weak. A method of rapid estimation of the condition number is described in [41, §3.2]. The analysis of the cases when the condition number gives a much too pessimistic error estimate is given in [42].
As an example, we note that the coefficients of the polynomial P_n(x) which gives the best approximation to the function f(x) in the metric of the Hilbert space L²_w (see §5 above) can be determined from the system of equations

(27)   Σ_{i=0}^{n} a_i ∫_{−1}^{1} w(x) x^{i+k} dx = ∫_{−1}^{1} w(x) f(x) x^k dx,   k = 0, 1, ..., n,

where P_n(x) = a_0 + a_1x + ... + a_nx^n. With respect to the coefficients of the polynomial P_n(x) (in powers of x or in Chebyshev polynomials) these equations are linear and algebraic. But due to the fact that the monomials x^k are "almost linearly dependent", system (27) is very ill-conditioned. The equivalent system

(27′)   Σ_{i=0}^{n} c_i ∫_{−1}^{1} w(x) T_i(x) T_k(x) dx = ∫_{−1}^{1} w(x) f(x) T_k(x) dx,   k = 0, 1, ..., n,

written with respect to the coefficients c_i of the expansion P_n(x) = Σ_{i=0}^{n} c_i T_i, is better conditioned; but in this case it is also preferable to use the economization procedure or to determine the coefficients c_i in (19) by the formulas

(28)   c_0 = (1/π) ∫_{−1}^{1} w(x) f(x) T_0(x) dx,   c_i = (2/π) ∫_{−1}^{1} w(x) f(x) T_i(x) dx,   i ≥ 1.
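In practice the integrals in (28) are themselves computed numerically; the substitution x = cos θ turns them into cosine sums, which can be evaluated with the Gauss-Chebyshev quadrature rule (introduced as formula (46) below). A sketch (Python/NumPy; the function name and interface are our assumptions):

```python
import numpy as np

def cheb_fourier_coeffs(f, N, s=64):
    """Coefficients c_0..c_N of the Fourier-Chebyshev series (19),
    computed by formulas (28): with x = cos(theta) each weighted integral
    becomes (2/pi) * integral of f(cos t) * cos(i t) dt over [0, pi],
    evaluated here at the s nodes of the quadrature (46)."""
    theta = (2.0 * np.arange(1, s + 1) - 1.0) * np.pi / (2.0 * s)
    fx = f(np.cos(theta))
    c = np.array([(2.0 / s) * np.dot(fx, np.cos(i * theta))
                  for i in range(N + 1)])
    c[0] *= 0.5                 # c_0 carries the factor 1/pi, not 2/pi
    return c

c = cheb_fourier_coeffs(np.exp, 10)   # truncation of (19) approximates e^x
```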
§7. The effect of error autocorrection
Fix an abstract construction method (problem) for an approximant of the form (21) to the function f(x). Let the coefficients a_i, b_j give an exact or an approximate solution of this problem, and let ã_i, b̃_j give another approximate solution obtained in the same way. Denote by Δa_i, Δb_j the absolute errors of the coefficients, i.e., Δa_i = ã_i − a_i, Δb_j = b̃_j − b_j; these errors arise due to perturbations of the approximated function f(x) or due to calculation errors. Set

P = Σ_{i=0}^{n} a_i ϕ_i,   Q = Σ_{j=0}^{m} b_j ψ_j,   P̃ = Σ_{i=0}^{n} ã_i ϕ_i,   Q̃ = Σ_{j=0}^{m} b̃_j ψ_j,

ΔP = P̃ − P = Σ_{i=0}^{n} Δa_i ϕ_i,   ΔQ = Q̃ − Q = Σ_{j=0}^{m} Δb_j ψ_j.

It is easy to verify that the following exact equality is valid:

(29)   P̃/Q̃ − P/Q = (ΔP/ΔQ − P/Q) · ΔQ/(Q + ΔQ).

As mentioned in the Introduction, the fact that the problem of calculating the coefficients is ill-conditioned can nevertheless be accompanied by high accuracy of the approximants obtained. This means that the approximants P/Q and P̃/Q̃ are close to the approximated function and, therefore, are close to each other, although the coefficients of these approximants differ greatly. In this case the relative error ΔQ/Q̃ = ΔQ/(Q + ΔQ) of the denominator considerably exceeds in absolute value the left-hand side of equality (29). This is possible only in the case when the difference ΔP/ΔQ − P/Q is small, i.e., the function ΔP/ΔQ is close to P/Q and, hence, to the approximated function. Thus the function ΔP/ΔQ will be called the error approximant. For a special case this concept was actually introduced in [5]. In the sequel we shall see that in many cases the error approximant indeed provides a good approximation for the approximated function, and thus P̃/Q̃ and P/Q differ from each other by the product of small quantities in the right-hand side of (29). The thing is that the errors Δa_i, Δb_j are not arbitrary, but are connected by certain relations.
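Identity (29) is elementary but worth checking numerically. In the sketch below (Python; all polynomials and perturbations are hypothetical toy data, not taken from the experiments of the paper) the two sides agree up to rounding:

```python
# Toy check of identity (29) at a single point x.
P  = lambda x: 1.0 + 0.5 * x             # "exact" numerator
Q  = lambda x: 1.0 - 0.5 * x             # "exact" denominator
dP = lambda x: 1e-3 * (1.0 + 0.4 * x)    # hypothetical errors Delta P
dQ = lambda x: 1e-3 * (1.0 - 0.6 * x)    # hypothetical errors Delta Q

x = 0.3
lhs = (P(x) + dP(x)) / (Q(x) + dQ(x)) - P(x) / Q(x)
rhs = (dP(x) / dQ(x) - P(x) / Q(x)) * dQ(x) / (Q(x) + dQ(x))
print(lhs, rhs)   # the two sides coincide up to rounding
```

The point of the identity is visible here: although ΔQ/Q is of order 10^{−3}, the left-hand side is roughly two orders of magnitude smaller, because ΔP/ΔQ happens to be close to P/Q.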
Let an abstract construction method for the approximant of the form (21) be linear in the sense that the coefficients of the approximant can be determined from a homogeneous system of linear algebraic equations. The homogeneity condition is connected with the fact that, when multiplying the numerator and the denominator of fraction (21) by the same nonzero number, the approximant (21) does not change. Denote by y the vector whose components are the coefficients a_0, a_1, ..., a_n, b_0, b_1, ..., b_m. Assume that the coefficients can be obtained from the homogeneous system of equations

(30)   Hy = 0,

where H is a matrix of dimension (m + n + 1) × (m + n + 2). The vector ỹ is an approximate solution of system (30) if the quantity ‖Hỹ‖ is small. If y and ỹ are approximate solutions of system (30), then the vector Δy = ỹ − y is also an approximate solution of this system, since ‖HΔy‖ = ‖Hỹ − Hy‖ ≤ ‖Hỹ‖ + ‖Hy‖. Thus it is natural to expect that the function ΔP/ΔQ corresponding to the solution Δy is an approximant to f(x). It is clear that the order of the residual of the approximate solution Δy of system (30), i.e., of the quantity ‖HΔy‖, coincides with the order of the largest of the residuals of the approximate solutions y and ỹ. For a fixed order of the residual, the increase of the error ‖Δy‖ is compensated by the fact that Δy satisfies the system of equations (30) with greater "relative" accuracy, and the latter, generally speaking, leads to an increase in the accuracy of the error approximant.
To obtain a specific solution of system (30), one usually adds to this system a normalization condition of the form

(31)   Σ_{i=0}^{n} λ_i a_i + Σ_{j=0}^{m} µ_j b_j = 1,

where λ_i, µ_j are numerical coefficients. As a rule, the equality b_0 = 1 is taken as the normalization condition (but this is not always the best choice with respect to minimizing the calculation errors).
Adding equation (31) to system (30), we obtain a nonhomogeneous system of m + n + 2 linear algebraic equations of type (22). If the approximate solutions y and ỹ of system (30) satisfy condition (31), then the vector Δy satisfies the condition

Σ_{i=0}^{n} λ_i Δa_i + Σ_{j=0}^{m} µ_j Δb_j = 0.

It is clear that the above reasoning is not rigorous; for each specific construction method for approximants it is necessary to carry out some additional analysis. More accurate reasoning is given below, in §8, for the classical Padé approximants, and in §14, for the linear and nonlinear Padé-Chebyshev approximants. The presence of the error autocorrection mechanism described above is also confirmed by numerical experiments (see below).
The effect of error autocorrection reveals itself for certain nonlinear construction methods for rational approximations as well. One of these methods is considered below, in §12-14 (nonlinear Padé-Chebyshev approximation).
It must be emphasized that (as noted in §3) the coefficients of the best Chebyshev approximant satisfy the system of linear algebraic equations (9) and are computed as approximate solutions of this system on the last step of the iteration process in algorithms of Remez type. Thus the construction methods for the best rational approximants can be regarded as linear. At least for some functions (say, for cos((π/4)x), −1 ≤ x ≤ 1) the linear and the nonlinear Padé-Chebyshev approximants are very close to the best ones in the sense of the relative and the absolute errors, respectively. The results that arise when applying calculation algorithms for Padé-Chebyshev approximants can be regarded as approximate solutions of system (9) which determines the best approximants. Thus the presence of the effect of error autocorrection for Padé-Chebyshev approximants gives an additional argument in favor of the conjecture that this effect also takes place for the best approximants.
Finally, note that the basic relation (29) becomes meaningless if one seeks an approximant in the form a_0ϕ_0 + a_1ϕ_1 + ... + a_nϕ_n, i.e., if the denominator in (21) is reduced to 1. However, in this case the effect of error autocorrection (although much weakened) is also possible; this is connected with the fact that the errors Δa_i approximately satisfy certain relations. Such a situation can arise when using the least squares method.

§8. Padé approximations

Let the expansion of a function f(x) into a power series (the Taylor series at zero) be given, i.e.,

(32)   f(x) = Σ_{i=0}^{∞} c_i x^i.

The classical Padé approximant to f(x) is a rational function of the form

(33)   R(x) = P_n(x)/Q_m(x),

where P_n(x) and Q_m(x) are polynomials of degree n and m, respectively,

(35)   P_n(x) = a_0 + a_1x + ... + a_n x^n,   Q_m(x) = b_0 + b_1x + ... + b_m x^m,

satisfying the relation

(34)   f(x) Q_m(x) − P_n(x) = O(x^{m+n+1}),

i.e., the first m + n + 1 terms of the Taylor expansion in powers of x (up to x^{m+n} inclusive) of f(x) and R(x) are the same. The Padé approximation gives the best approximant in a small neighborhood of zero; it is a natural generalization of the expansion of functions into Taylor series and is closely connected with the expansion of functions into continued fractions. Numerous papers are devoted to the Padé approximation; see, for example, [11-16, 5, 6]. One can evaluate the coefficients b_j of the denominator of fraction (33) by solving the homogeneous system of linear equations

(36)   Σ_{j=0}^{m} b_j c_{n+k−j} = 0,   k = 1, ..., m,

where c_l = 0 for l < 0. One can take any nonzero constant as b_0. The coefficients a_i are then given by the formulas

(37)   a_i = Σ_{j=0}^{min(i,m)} b_j c_{i−j},   i = 0, 1, ..., n.

The text of the corresponding Fortran program is given in [11]. For large m the system (36) is ill-conditioned. Moreover, the problem of computing the coefficients of Padé approximants is ill-conditioned independently of the particular algorithm used to solve it; see [6, 43, 44]. In Y. L. Luke's paper [5] the following reasoning is given. Let Δa_i, Δb_j be the errors in the coefficients a_i, b_j which arise when solving system (36) numerically. We shall ignore the errors in the quantities c_i and x, and we shall assume that, according to (37), the errors in the coefficients a_i have the form

(37′)   Δa_i = Σ_{j=0}^{min(i,m)} Δb_j c_{i−j},   i = 0, 1, ..., n.

From (37′) it follows that the coefficients of the polynomial ΔP_n(x) = Σ_{i=0}^{n} Δa_i x^i coincide with the first n + 1 Taylor coefficients of the product f(x)ΔQ_m(x); after a change of indices, the latter yields a relation similar to (34):

(38)   f(x) ΔQ_m(x) − ΔP_n(x) = O(x^{n+1}).

Moreover, since the errors Δb_j, being the difference of two approximate solutions of (36), satisfy (36) with small residuals, the coefficients of f(x)ΔQ_m(x) − ΔP_n(x) at x^{n+1}, ..., x^{m+n} are small as well. Thus there are reasons to expect that the error approximant ΔP_n/ΔQ_m indeed approximates the function f(x), and the effect of error autocorrection takes place. In [5] the corresponding experimental data for the function e^{−x} for x = 2, m = n = 6, 7, ..., 14, and for x = 5 are given, and experiments with the functions x^{−1} ln(1 + x), (1 + x)^{±1/2}, and x e^x ∫_x^∞ t^{−1} e^{−t} dt are briefly described; see also [6]. A natural generalization of the classical Padé approximant is the multipoint Padé approximant (or Padé approximant of the second kind), i.e., a rational function of the form (33) whose values coincide with the values of the approximated function f(x) at some points x_i (i = 1, 2, ..., m + n + 1). This definition is extended to the case of multiple points, and for x_i = 0 for all i it leads to the classical Padé approximation; see [11, 14, 15]. The calculation of the coefficients of the multipoint Padé approximant can also be reduced to solving a system of linear equations, and there are reasons to suppose that in this case the effect of error autocorrection takes place as well.
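A minimal computational sketch of (36)-(37) (Python/NumPy rather than the Fortran program of [11]; the function name and interface are ours) solves (36) with the normalization b_0 = 1 and then applies (37):

```python
import math
import numpy as np

def pade_coeffs(c, n, m):
    """[n/m] Pade coefficients from the Taylor coefficients c[0..n+m]:
    solve system (36) with b_0 = 1, then use formulas (37);
    c_l is taken to be 0 for l < 0."""
    cc = lambda l: c[l] if l >= 0 else 0.0
    M = np.array([[cc(n + k - j) for j in range(1, m + 1)]
                  for k in range(1, m + 1)])
    rhs = -np.array([cc(n + k) for k in range(1, m + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(M, rhs)))
    a = np.array([sum(b[j] * cc(i - j) for j in range(min(i, m) + 1))
                  for i in range(n + 1)])
    return a, b

c = [1.0 / math.factorial(k) for k in range(3)]   # e^x, terms up to x^2
a, b = pade_coeffs(c, 1, 1)                       # gives (1 + x/2)/(1 - x/2)
```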
§9. Linear Padé-Chebyshev approximations and the PADE program

Consider the approximant of the form (33) to the function f(x) on the segment [−1, 1]. The linear Padé-Chebyshev approximant is defined by the requirement that the function f(x)Q_m(x) − P_n(x) be orthogonal to the Chebyshev polynomials T_0, T_1, ..., T_{m+n}, i.e., by the system of equations

(39)   ∫_{−1}^{1} w(x) (f(x) Q_m(x) − P_n(x)) T_k(x) dx = 0,   k = 0, 1, ..., m + n,

where T_k(x) are the Chebyshev polynomials and w(x) = 1/√(1 − x²). This concept (in a different form) was introduced in [45] and allows a generalization to the case of other orthogonal polynomials (see [11, 33, 34, 39, 40]). Approximants of this kind always exist [39]. Reasoning in the same way as in §5 and applying Cheney's theorem to the absolute error function Δ(x) = f(x) − P_n(x)/Q_m(x) = (f(x)Q_m(x) − P_n(x))/Q_m(x), we can find out why the linear Padé-Chebyshev approximants are close to the best ones.
Let P_n(x) and Q_m(x) be represented in the form (35). Then the system of equations (39) is equivalent to the following system of linear algebraic equations with respect to the coefficients a_i, b_j:

(40)   Σ_{j=0}^{m} b_j ∫_{−1}^{1} w(x) f(x) x^j T_k(x) dx − Σ_{i=0}^{n} a_i ∫_{−1}^{1} w(x) x^i T_k(x) dx = 0,   k = 0, 1, ..., m + n.

The homogeneous system (40) can be transformed into a nonhomogeneous one by adding a normalization condition; in particular, any of the following equalities can be taken as this condition:

(41)   b_0 = 1;   (42)   b_m = 1;   (43)   a_n = 1.

In [1, 9] the program PADE (in Fortran, with double precision), which allows to construct rational approximants by solving systems of equations of type (40), is briefly described. The complete text of a version of this program and its detailed description can be found in the Collection of algorithms and programs of the Research Computer Center of the Russian Acad. Sci. [7]. For even functions the approximant is looked for in the form

(44)   R(x) = (a_0 + a_1x² + ... + a_n(x²)^n) / (b_0 + b_1x² + ... + b_m(x²)^m),

and for odd functions it is looked for in the form

(45)   R(x) = x (a_0 + a_1x² + ... + a_n(x²)^n) / (b_0 + b_1x² + ... + b_m(x²)^m),

respectively. The program computes the values of the coefficients of the approximant and the absolute and relative errors, and gives information which allows to estimate the quality of the approximation (see §4 above). In particular, a version of the PADE program implemented on a minicomputer of the SM-4 class constructs the error curve, determines the presence of alternation, and produces the estimate of the quality of the approximation by means of quantity (12). For the calculation of the integrals, the Gauss-Hermite-Chebyshev quadrature formula is used:

(46)   ∫_{−1}^{1} w(x) ϕ(x) dx ≈ (π/s) Σ_{k=1}^{s} ϕ(cos((2k − 1)π/(2s))),

where s is the number of interpolation points; for polynomials of degree 2s − 1 this formula is exact, so the precision of formula (46) increases rapidly as the parameter s increases and depends on the quality of the approximation of the function ϕ(x) by polynomials. To calculate the values of the Chebyshev polynomials, recurrence relation (15) is applied.
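The quadrature (46) is a one-liner, and its exactness for polynomials of degree 2s − 1 is easy to verify (Python/NumPy sketch; the test integrand is our own example):

```python
import numpy as np

def gauss_chebyshev(phi, s):
    """Formula (46): integral over [-1, 1] of w(x) * phi(x) dx with
    w(x) = 1 / sqrt(1 - x^2); exact for polynomials of degree 2s - 1."""
    nodes = np.cos((2.0 * np.arange(1, s + 1) - 1.0) * np.pi / (2.0 * s))
    return (np.pi / s) * phi(nodes).sum()

# integral of w(x) * x^4 over [-1, 1] equals 3*pi/8; s = 3 already suffices
print(gauss_chebyshev(lambda x: x ** 4, 3), 3.0 * np.pi / 8.0)
```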
If the function f(x) is even and an approximant is looked for in the form (44), then system (40) is transformed into the following system of equations:

(47)   Σ_{j=0}^{m} b_j ∫_{−1}^{1} w(x) f(x) x^{2j} T_{2k}(x) dx − Σ_{i=0}^{n} a_i ∫_{−1}^{1} w(x) x^{2i} T_{2k}(x) dx = 0,   k = 0, 1, ..., m + n.

If f(x) is an odd function and an approximant is looked for in the form (45), then, first, by means of the solution of system (47) complemented by one of the normalization conditions, one determines an approximant of the form (44) to the even function f(x)/x, and then the approximant obtained is multiplied by x. This procedure allows to avoid a large relative error at x = 0.
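Putting the pieces together, here is a compact model of the scheme just described (Python/NumPy rather than the original Fortran; `pade_chebyshev` and its interface are our assumptions, and the general system (40) with normalization (41) is used instead of the even/odd variants (44)-(45) and (47)):

```python
import numpy as np

def pade_chebyshev(f, n, m, s=64):
    """Linear Pade-Chebyshev approximant P_n/Q_m on [-1, 1]: solve
    system (40) with b_0 = 1, all weighted integrals evaluated by the
    quadrature (46) with s interpolation points."""
    theta = (2.0 * np.arange(1, s + 1) - 1.0) * np.pi / (2.0 * s)
    x = np.cos(theta)
    fx = f(x)
    T = np.cos(np.outer(np.arange(m + n + 1), theta))   # T_k at the nodes
    ip = lambda g, k: (np.pi / s) * np.dot(g, T[k])     # integral of w*g*T_k
    N = m + n + 1
    A = np.zeros((N, N))
    rhs = np.zeros(N)
    for k in range(N):
        A[k, : n + 1] = [ip(x ** i, k) for i in range(n + 1)]
        A[k, n + 1 :] = [-ip(fx * x ** j, k) for j in range(1, m + 1)]
        rhs[k] = ip(fx, k)                              # the b_0 = 1 term
    sol = np.linalg.solve(A, rhs)
    return sol[: n + 1], np.concatenate(([1.0], sol[n + 1 :]))

a, b = pade_chebyshev(np.exp, 3, 2)
t = np.linspace(-1.0, 1.0, 7)
R = np.polyval(a[::-1], t) / np.polyval(b[::-1], t)
print(np.max(np.abs(R - np.exp(t))))                    # small residual error
```

As the text warns, the matrix A assembled here can be severely ill-conditioned for larger m and n; the point of §10 below is precisely that the resulting coefficient errors largely cancel in the ratio.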
The version of the PADE program described in [7] is implemented for computers of the IBM 360/370 class and requires 60 K bytes of main memory; the size of the program in Fortran (including comments) is 581 lines (cards). The program execution time depends on the type of the computer, on the approximated function, and on the values of the control parameters. For example, the CPU time for determining, by means of the PADE program, an approximant of the form (1) to the function √x on the segment [1/2, 1] for m = n = 2 is 4.4 s. In this case the normalization (43) is applied, and the number of checkpoints used while estimating the error is 1200; the compilation time is not taken into account.
One of the versions of the program gives the estimate of the quality of the approximant obtained according to formula (12) (see §4 above). For example, for the function sin((π/2)x) with m = n = 2 and for the relative error we have q = 0.0625, whence it follows that δ_min ≥ qδ ≈ 0.4·10^{−9}. This estimate is rough; in fact, as is shown in Table 1, δ_min/δ ≈ 0.84. For the absolute error the program gives in this case q = 0.71; the latter indicates the closeness of this error to the best possible one. The version of the program mentioned above allows to carry out the calculations in interactive mode, varying the degrees m and n, the boundary points of the segment [A, B], the branches of the algorithm, the number of checkpoints for calculating the errors, and the number of interpolation points in the quadrature formula (46), and to estimate rapidly the quality of the approximation according to the error curve.
Remark. The program for constructing classical Padé approximants given in [11] is also called PADE, but, of course, here and in [11] different programs are discussed.

§10. The PADE program. Analysis of the algorithm

The quality of an approximant obtained by means of the PADE program mainly depends on the behavior of the denominator of this approximant and on the calculation errors. The fact that the corresponding systems of algebraic equations are ill-conditioned is the most unpleasant source of errors of the method under consideration. Seemingly, methods of this kind are not widely used for this very reason.
The condition numbers of the systems of equations that arise while calculating, by means of the PADE program, the approximants considered above are also very large; for example, while calculating the approximant of the form (45) on the segment [−1, 1] to sin((π/2)x) for m = n = 3, the corresponding condition number is of order 10^{13}. As a result, the coefficients of the approximant are determined with a large error. In particular, a small perturbation of the system of linear equations arising when passing from the ICL 4-50 computer to the ES-1045 (because of calculation errors) gives rise to large perturbations in the coefficients of the approximant. Fortunately, the effect of error autocorrection (see §7 above) improves the situation, and the errors of the approximant do not change substantially under this perturbation. This fact is described in the Introduction, where concrete examples are also given.
Consider some more examples connected with passing from the ICL 4-50 to the ES-1045. The branch of the algorithm corresponding to the normalization condition (41) (i.e., to b_0 = 1) is considered. For arctg x the calculation of an approximant of the form (45) on the segment [−1, 1] for m = n = 5 by means of the ICL-4-50 computer gives an approximation with absolute error Δ = 0.35·10^{−12} and relative error δ = 0.16·10^{−11}. The corresponding system of linear algebraic equations has condition number of order 10^{30}! Passing to the ES-1045 we obtain the following: Δ = 0.5·10^{−14}, δ = 0.16·10^{−12}, the condition number is of order 10^{14}, and the errors Δa_1 and Δb_1 in the coefficients a_1 and b_1 in (45) are greater than 1 in absolute value! This example shows that the problem of computing the condition number of an ill-conditioned system is, in its turn, ill-conditioned. Indeed, the condition number is, roughly speaking, determined by the values of the coefficients of the inverse matrix (see §6 above, eqs. (24) and (25)), and every column of the inverse matrix is the solution of a system of equations with the initial matrix of coefficients, i.e., of an ill-conditioned system.
Consider in more detail the effect of error autocorrection for the approximant of the form (44) on the segment [−1, 1] to the function cos(πx/4) for m = 2, n = 3.
Constructing this approximant both on the ICL 4-50 and on the ES-1045 computer results in an approximation with the absolute error ∆ = 0.4 · 10^{-13} and the relative error δ = 0.55 · 10^{-13}, which are close to the best possible. In both cases the condition number is of order 10^9. The coefficients of the approximants obtained by means of the two computers, and the coefficients of the error approximant (see §7 above), are as follows. [Table of coefficients not reproduced.] The polynomial ∆Q is zero at x = 0, while the polynomial ∆P takes a small but nonzero value at x = 0. Fortunately, equality (29) can be rewritten in the following way:

R(x) − R̃(x) = (∆P(x) − R(x)∆Q(x)) / Q̃(x).

Thus, as ∆Q → 0, the effect of error autocorrection arises because the quantity ∆P is close to zero, and the error of the approximant P/Q is determined by the error in the coefficient a_0. The same situation also takes place when the polynomial ∆Q vanishes at an arbitrary point x_0 belonging to the segment [A, B] on which the function is approximated. It is clear that if one chooses the standard normalization (b_0 = 1), then the error approximant actually has two coefficients fewer than the initial one. Relations (38) and (39) show that in the general case the normalization conditions a_n = 1 or b_m = 1 lead to the following: the coefficients of the error approximant form an approximate solution of the homogeneous system of linear algebraic equations whose exact solution determines the Padé-Chebyshev approximant having one coefficient fewer than the initial one. The effect of error autocorrection in turn improves the accuracy of this error approximant; thus, "the snake bites its own tail". A similar situation also arises when the approximant of the form (44) to an even function is constructed by solving the system of equations (47).

Sometimes it is possible to decrease the error of the approximant by a fortunate choice of the normalization condition. As an example, consider the approximation of the function e^x on the segment [−1, 1] by rational functions of the form (1) for m = 15, n = 0. For the traditionally accepted normalization b_0 = 1, the PADE program yields an approximant with the absolute error ∆ = 1.4 · 10^{-14} and the relative error δ = 0.53 · 10^{-14}. After passing to the normalization condition b_15 = 1, the errors are reduced by nearly one half: ∆ = 0.73 · 10^{-14}, δ = 0.27 · 10^{-14}. Note that the condition number increases: in the first case it is 2 · 10^6, and in the second case it is of order 10^{16}. Thus the error decreases notwithstanding the fact that the system of equations becomes drastically ill-conditioned. This example shows that an increase in the accuracy of the error approximant can be accompanied by an increase of the condition number and, as experiments show, by an increase in the errors of the numerator and the denominator of the approximant. The fortunate choice of the normalization condition depends on the particular situation.
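The effect itself is easy to reproduce with modern tools. The following Python sketch is a schematic stand-in for the PADE program (the grid, the degrees, and the least-squares formulation are illustrative assumptions, not the original algorithm): it fits the linearized conditions f(x_i)Q(x_i) − P(x_i) = 0 with b_0 = 1 in single and in double precision, playing the role of the two computers, and then compares the perturbation of the coefficients with the perturbation of the approximant considered as a function.

```python
import numpy as np

# Fit R(x) = P(x)/Q(x), deg P = 2, deg Q = 3, to f via the linearized
# conditions f(x_i) Q(x_i) - P(x_i) = 0 with b_0 = 1, in two precisions.
f = lambda x: np.cos(np.pi * x / 4)
x = np.linspace(-1.0, 1.0, 200)

def fit(dtype):
    xd = x.astype(dtype)
    fx = f(xd).astype(dtype)
    # unknowns: a0, a1, a2, b1, b2, b3  (b0 = 1)
    A = np.column_stack([-np.ones_like(xd), -xd, -xd**2,
                         fx * xd, fx * xd**2, fx * xd**3]).astype(dtype)
    coef, *_ = np.linalg.lstsq(A, -fx, rcond=None)
    return coef.astype(np.float64)

c32, c64 = fit(np.float32), fit(np.float64)

def R(c, x):
    a0, a1, a2, b1, b2, b3 = c
    return (a0 + a1*x + a2*x**2) / (1 + b1*x + b2*x**2 + b3*x**3)

print("max coefficient difference :", np.max(np.abs(c32 - c64)))
print("max difference of R itself :", np.max(np.abs(R(c32, x) - R(c64, x))))
```

Typically the coefficient difference exceeds the difference of the two approximants as functions by several orders of magnitude, which is exactly the error autocorrection described above.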
A specific situation arises when the degree of the numerator (or of the denominator) of the approximant is equal to zero. In this case an unfortunate choice of the normalization condition results in the error approximant becoming zero or not being well defined. For n = 0 it is expedient to choose condition (42), as was done in the example given above. For m = 0 (the case of polynomial approximation) it is usually expedient to choose condition (43). Otherwise the situation reduces to solving the system of equations (27′) in the case described in §6 above.
Since the double precision regime of the ES-1045 corresponds to 16 decimal digits of mantissa in the computer representation of numbers, on computers of this type it makes sense to vary the normalization condition only if the condition number exceeds δ · 10^{16}, where δ is the relative error of the obtained approximant. The value of the condition number of the corresponding system of linear algebraic equations is output by the PADE program together with the other computation results.
The theoretical error of the method is determined, to a considerable extent, by the behavior of the approximant's denominator. For the analysis it is convenient to set b_0 = 1 by dividing the numerator and the denominator of the fraction by b_0. If the coefficients b_1, b_2, ..., b_m are small in comparison with b_0 = 1, which often happens in computational practice, then the absolute error ∆(x) and its numerator Φ(x) = f(x)Q(x) − P(x) are of the same order, so that the minimization of Φ(x) leads to the minimization of the error ∆(x); see §9 above. Note that the coefficients of the approximant (45) to the function arctg x on the segment [−1, 1] are not small in comparison with b_0. For example, for m = n = 3 the coefficient b_1 is almost one and a half times greater than the coefficient b_0. Correspondingly, as shown in Table 1, the errors of the approximant to arctg x obtained by means of the PADE program are several times greater than the errors of the best approximants. Note that sometimes it is possible to improve the denominator of the approximant, or to reduce the condition number of the corresponding system of equations, by extending the segment [A, B] on which the function is approximated. Such an effect is observed, for example, when approximants to some hyperbolic functions are calculated.
Note that replacing the standard subroutine DGELG for solving systems of linear algebraic equations by another subroutine of the same kind (for example, by the DECOMP program from [41]) does not substantially affect the quality of the approximants obtained by means of the PADE program.
One could seek the numerator and the denominator of the approximant in the form

(50) P(x) = Σ_{i=0}^{n} a_i T_i(x),  Q(x) = Σ_{j=0}^{m} b_j T_j(x),

where the T_i are the Chebyshev polynomials. In this case the system of linear equations determining the coefficients would be better conditioned. But the calculation of the polynomials of the form (50) by, for example, the Clenshaw method lengthens the computation time, although it has a favorable effect on the error of the calculations; see [47, Chapter IV, §9]. The transformation of the polynomials P and Q from the form (50) into the standard form (35) also requires additional effort.
In practice it is more convenient to use approximants represented in the form (1), (44), or (45), and to calculate the fraction's numerator and denominator according to the Horner scheme. In this case the normalization a_n = 1 or b_m = 1 allows one to reduce the number of multiplications. For this reason the PADE program gives the coefficients of the approximant in two forms: with the condition b_0 = 1 and with one of the conditions a_n = 1 or b_m = 1, no matter which of the conditions (41)-(43) is actually used while solving the system of equations of type (39) or (40).
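A minimal Python sketch of this evaluation scheme (an illustration, not the Fortran code of the PADE program): the monic normalization b_m = 1 lets the Horner recurrence for the denominator start from the implicit leading coefficient, saving one multiplication per evaluation.

```python
def horner(coeffs, x):
    """Evaluate c[0] + c[1]*x + ... + c[k]*x**k by the Horner scheme."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def horner_monic(coeffs, x):
    """Same, but for the monic polynomial x**(k+1) + c[k]*x**k + ... + c[0]:
    starting from the implicit leading 1 saves one multiplication."""
    acc = 1.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def rational(a, b, x):
    """R(x) = P(x)/Q(x) with normalization b_m = 1; b lists only b_0..b_{m-1}."""
    return horner(a, x) / horner_monic(b, x)

# example: R(x) = (1 + 2x) / (x**2 + 0.5x + 3)
print(rational([1.0, 2.0], [3.0, 0.5], 0.7))
```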
The PADE program (and the corresponding algorithm) can easily be modified, for example, to handle the case when some coefficients are fixed beforehand. One can vary the systems of equations under consideration by changing the weight w(x), the interval on which the functions are approximated, and the system of orthogonal polynomials. At the cost of a certain increase in the complexity of the system of equations (40), it is possible to minimize the norm of the numerator Φ(x) of the error function ∆(x) in the Hilbert space L²_w (see §5 above). The use of the PADE program does not require that the approximated function be expanded into a series or a continued fraction beforehand. Equations (39) and (40) and the quadrature formula (46) show that the PADE program uses only the values of the approximated function f(x) at the interpolation points of the quadrature formula (which are zeros of a certain Chebyshev polynomial).
On the segment [−1, 1] the linear Padé-Chebyshev approximants give a considerably smaller error than the classical Padé approximants. For example, the Padé approximant of the form (1) to the function e^x for m = n = 2 has the absolute error ∆(1) = 4 · 10^{-3} at the point x = 1, but the PADE program gives an approximant of the same form with the absolute error ∆ = 1.9 · 10^{-4} (on the entire segment), i.e., the latter error is 20 times smaller than the former. The absolute error of the best approximant is 0.87 · 10^{-4}.

§11. The "cross-multiplied" linear Padé-Chebyshev approximation scheme

As a rule, linear Padé-Chebyshev approximants are constructed according to the following scheme [45, 3, 11, 12]. Let the approximated function be decomposed into the series in Chebyshev polynomials

(51) f(x) = Σ′_{i=0}^{∞} c_i T_i(x),

where the notation Σ′_{i=0}^{m} u_i means that the first term u_0 in the sum is replaced by u_0/2. The rational approximant is looked for in the form

(52) R(x) = Σ′_{i=0}^{n} a_i T_i(x) / Σ′_{j=0}^{m} b_j T_j(x);

the coefficients b_j are determined by means of the system of linear algebraic equations

(53) Σ′_{j=0}^{m} b_j (c_{|k−j|} + c_{k+j}) = 0, k = n+1, ..., n+m,

and then the coefficients of the numerator are given by

(54) a_k = (1/2) Σ′_{j=0}^{m} b_j (c_{|k−j|} + c_{k+j}), k = 0, 1, ..., n.

It is not difficult to verify that this algorithm must lead to the same results as the algorithm described in §9 if the calculation errors are not taken into account. The coefficients c_k for k = 0, 1, ..., n + 2m are present in (53) and (54), i.e., it is necessary to have the first n + 2m + 1 terms of series (51). The coefficients c_k are known, as a rule, only approximately. To determine them one can take the truncated expansion of f(x) into the series in powers of x (the Taylor series) and by means of the economization procedure transform it into the form

(55) f(x) ≈ Σ′_{k=0}^{N} c̃_k T_k(x),

where the T_k(x) are the Chebyshev polynomials.

§12. Nonlinear Padé-Chebyshev approximants

The nonlinear Padé-Chebyshev approximant R(x) = P(x)/Q(x) of the function f(x) is determined by the orthogonality relations

(56) ∫_{−1}^{1} w(x) (f(x) − R(x)) T_k(x) dx = 0, k = 0, 1, ..., m + n,

where w(x) = 1/√(1 − x²). Cheney's theorem (see §5 above) shows that the absolute error function ∆(x) = f(x) − R(x) has alternation. Thus, there are reasons to assume that the nonlinear Padé-Chebyshev approximants are close to the best ones in the sense of the absolute error.
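Returning for a moment to the linear scheme of §11: the following numpy sketch implements relations (51)-(54) as reconstructed above. The normalization b_0 = 2 (so that Q has constant term 1), the number of retained coefficients, and the use of chebinterpolate to obtain the c_k are illustrative choices, not the original algorithm.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def pade_chebyshev(f, n, m):
    """Cross-multiplied scheme (51)-(54); primed sums halve the j = 0 term."""
    g = C.chebinterpolate(f, n + 2 * m + 16)   # plain Chebyshev coefficients of f
    c = np.concatenate([[2 * g[0]], g[1:]])    # primed convention: c_0 = 2 g_0

    def d(k, b):  # primed Chebyshev coefficient of f*Q, cf. formula (54)
        return 0.5 * sum((0.5 if j == 0 else 1.0) * b[j]
                         * (c[abs(k - j)] + c[k + j]) for j in range(m + 1))

    # system (53): d_k = 0 for k = n+1, ..., n+m; unknowns b_1, ..., b_m
    A = np.array([[c[abs(k - j)] + c[k + j] for j in range(1, m + 1)]
                  for k in range(n + 1, n + m + 1)])
    rhs = np.array([-2.0 * c[k] for k in range(n + 1, n + m + 1)])
    b = np.concatenate([[2.0], np.linalg.solve(A, rhs)])

    a = np.array([d(k, b) for k in range(n + 1)])
    P = np.concatenate([[a[0] / 2], a[1:]])    # back to plain coefficients
    Q = np.concatenate([[b[0] / 2], b[1:]])
    return lambda x: C.chebval(x, P) / C.chebval(x, Q)

R = pade_chebyshev(np.exp, 2, 2)
x = np.linspace(-1, 1, 1001)
print(np.max(np.abs(R(x) - np.exp(x))))  # about 2e-4, cf. the error quoted above
```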
In the paper [32] the following algorithm for computing the coefficients of the approximant indicated above is given. Let the approximated function f(x) be expanded into series (51) in Chebyshev polynomials. Determine the auxiliary quantities γ_i from the system of linear algebraic equations

(57) Σ_{j=0}^{m} γ_j c_{|k−j|} = 0, k = n+1, n+2, ..., n+m,

assuming that γ_0 = 1. The coefficients of the denominator in expression (52) are then determined from the γ_i by explicit equalities, and finally the coefficients of the numerator are determined by formula (54). It is possible to solve system (57) explicitly and to indicate formulas for computing the quantities γ_i. One can also estimate the absolute error of the approximant explicitly. This algorithm is described in detail in the book [33]; see also [11].
In contrast to the linear Padé-Chebyshev approximants, the nonlinear approximants of this type do not always exist, but it is possible to indicate explicitly verifiable conditions guaranteeing the existence of such approximants [33]. The nonlinear Padé-Chebyshev approximants have, as a rule, somewhat smaller absolute errors than the linear ones, but can have larger relative errors. Consider, as an example, the approximant of the form (1) or (52) to the function e^x on the segment [−1, 1] for m = n = 3. In this case the absolute error for the nonlinear Padé-Chebyshev approximant is ∆ = 0.258 · 10^{-6} and the relative error is δ = 0.252 · 10^{-6}; for the linear Padé-Chebyshev approximant, ∆ = 0.33 · 10^{-6} and δ = 0.20 · 10^{-6}.

§13. Applications of the computer algebra system REDUCE to the construction of rational approximants

The computer algebra system REDUCE [48, 49] makes it possible to handle formulas at the symbolic level and is a convenient tool for implementing algorithms that compute rational approximants. The use of this system allows one to bypass the procedure of devising an algorithm for computing the approximated function, provided this function is given in analytical form, or its Taylor series coefficients are known or can be determined analytically from a differential equation. Round-off errors can be eliminated by using the exact arithmetic of rational numbers represented as ratios of integers.
Within the framework of the REDUCE system, a program package for enhanced precision computations and for the construction of rational approximants has been implemented; see, for example, [8]. In particular, the algorithms from §11 and §12 (which are similar to each other in structure) are implemented: the approximated function is first expanded into the power (Taylor) series f(x) = Σ_{k=0}^{∞} f^{(k)}(0) x^k / k!, and then the truncated series consisting of the first N + 1 terms of the Taylor series (the value N is determined by the user) is transformed into a polynomial of the form (55) by means of the economization procedure. The algorithms implemented by means of the REDUCE system yield approximants in the form (1) or (52), estimates of the absolute and the relative errors, and the error curves. The output includes a Fortran program for computing the corresponding approximant, the constants of rational arithmetic being transformed into the standard floating point form. When computing the values of the obtained approximant, this approximant can be transformed into the form most convenient for the user. For example, one can calculate the values of the numerator and the denominator of a fraction of the form (1) according to the Horner scheme, and of a fraction of the form (52) according to the Clenshaw scheme, or transform the rational expression into a continued fraction or a Jacobi fraction.
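The role played by exact rational arithmetic can be imitated in Python with the standard fractions module. The sketch below (an illustration; REDUCE is not involved) converts a truncated Taylor polynomial exactly into the Chebyshev form (55), after which economization amounts to dropping the small top coefficients.

```python
from fractions import Fraction as F
from math import factorial

def power_to_cheb(p):
    """Exactly convert sum p[k]*x**k into coefficients g with
    poly = sum g[j]*T_j(x), using x*T_j = (T_{j+1} + T_{|j-1|})/2."""
    N = len(p)
    g = [F(0)] * N
    xk = [F(1)] + [F(0)] * (N - 1)   # Chebyshev coefficients of x**k, k = 0
    for k in range(N):
        for j in range(N):
            g[j] += p[k] * xk[j]
        nxt = [F(0)] * N             # multiply the current expansion by x
        for j in range(N):
            if xk[j]:
                if j + 1 < N:
                    nxt[j + 1] += xk[j] / 2
                if abs(j - 1) < N:
                    nxt[abs(j - 1)] += xk[j] / 2
        xk = nxt
    return g

# truncated Taylor series of e^x with N = 8
p = [F(1, factorial(k)) for k in range(9)]
g = power_to_cheb(p)
print(g[8], float(g[8]))   # exact top coefficient; it is tiny and can be dropped
```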
The ALGOL-like input language of the REDUCE system and convenient tools for solving problems of linear algebra guarantee simplicity and compactness of the programs. For example, the program for computing linear Padé-Chebyshev approximants is sixty-two lines long.

§14. The effect of error autocorrection for nonlinear Padé-Chebyshev approximations

Relations (56) can be regarded as a system of equations for the coefficients of the approximant. Let the approximants R(x) = P(x)/Q(x) and R̃(x) = P̃(x)/Q̃(x), where P(x), P̃(x) are polynomials of degree n and Q(x), Q̃(x) are polynomials of degree m, be obtained by approximately solving the indicated system of equations. Consider the error approximant ∆P(x)/∆Q(x), where ∆P(x) = P(x) − P̃(x), ∆Q(x) = Q(x) − Q̃(x). Substituting R(x) and R̃(x) in (56) and subtracting one of the obtained expressions from the other, we see that the following approximate equality holds:

∫_{−1}^{1} w(x) (R̃(x) − R(x)) T_k(x) dx ≈ 0, k = 0, 1, ..., m + n.

This and equality (29) imply the approximate equality

(59) ∫_{−1}^{1} w(x) (∆P(x)/∆Q(x) − P(x)/Q(x)) (∆Q(x)/Q̃(x)) T_k(x) dx ≈ 0,

where k = 0, 1, ..., m + n and w(x) = 1/√(1 − x²). If the quantity ∆Q is not relatively small (this is connected with the fact that the system of equations (57) is ill-conditioned), then, as follows from equality (59), we can naturally expect that the error approximant is close to P/Q and, consequently, to the approximated function f(x).
Owing to the use of the exact arithmetic of rational numbers, the software described in §13 makes it possible to eliminate round-off errors and to estimate the "pure" influence of errors in the approximated function on the coefficients of the nonlinear Padé-Chebyshev approximant. In this case the effect of error autocorrection can be substantiated by more accurate reasoning, valid both for nonlinear Padé-Chebyshev approximants and for linear ones, and even for the linear generalized Padé approximants connected with different systems of orthogonal polynomials. This reasoning is analogous to Y. L. Luke's considerations [5] given in §8 above.
Assume that the function f(x) is expanded into series (51) and that the rational approximant R(x) = P(x)/Q(x) is sought in the form (52).
Let ∆b_j be the errors in the coefficients of the approximant's denominator Q. In the linear case these errors arise when solving the system of equations (53), and in the nonlinear case, when solving the system of equations (57). In both cases the coefficients of the approximant's numerator are determined by equations (54), whence we have

(60) ∆a_k = (1/2) Σ′_{j=0}^{m} ∆b_j (c_{|k−j|} + c_{k+j}), k = 0, 1, ..., n.

This implies the following fact: the error approximant ∆P/∆Q satisfies the relations

(61) ∫_{−1}^{1} w(x) (f(x)∆Q(x) − ∆P(x)) T_k(x) dx = 0, k = 0, 1, ..., n,

which are analogous to relations (39) defining the linear Padé-Chebyshev approximants. Indeed, let us use the well-known multiplication formula for Chebyshev polynomials:

(62) T_i(x) T_j(x) = (1/2) (T_{i+j}(x) + T_{|i−j|}(x)),

where i, j are arbitrary indices; see, for example, [11-13, 33]. Taking (62) into account, the quantity f∆Q − ∆P can be rewritten in the following way:

f(x)∆Q(x) − ∆P(x) = Σ′_{k=0}^{∞} ((1/2) Σ′_{j=0}^{m} ∆b_j (c_{|k−j|} + c_{k+j})) T_k(x) − Σ′_{k=0}^{n} ∆a_k T_k(x).

This formula and (60) imply that

f(x)∆Q(x) − ∆P(x) = Σ_{k=n+1}^{∞} ((1/2) Σ′_{j=0}^{m} ∆b_j (c_{|k−j|} + c_{k+j})) T_k(x),

i.e., in the expansion of the function f∆Q − ∆P into the series in Chebyshev polynomials, the first n + 1 terms are absent; the latter is equivalent to relations (61) by virtue of the fact that the Chebyshev polynomials form an orthogonal system. When actual computations are carried out, the coefficients c_i are known only approximately, and thus the equalities (60) and (61) are also satisfied only approximately.

Consider the results of computer experiments performed by means of the software implemented within the framework of the REDUCE system and briefly described in §13 above. We begin with the example considered in §10 above, where the linear Padé-Chebyshev approximant of the form (44) to the function cos(πx/4) was constructed on the segment [−1, 1] for m = 2, n = 3. To construct the corresponding nonlinear Padé-Chebyshev approximant, it is necessary to specify the value of the parameter N determining the number of terms in the truncated Taylor series (58) of the approximated function. In this case the calculation error is determined, in fact, by the parameter N.
The coefficients of the obtained approximants of the form (44) are as follows. [Table of coefficients not reproduced.] Both approximants have absolute errors ∆ equal to 0.4 · 10^{-13} and relative errors δ equal to 0.6 · 10^{-13}, these values being close to the best possible. The condition number of the system of equations (57) in both cases is 0.4 · 10^8. The denominator ∆Q of the error approximant is zero for x = x_0 ≈ 0.70752...; the point x_0 is also close to a root of the numerator ∆P, which at x = x_0 is of order 10^{-8}. Such a situation was considered in §10 above. Outside a small neighborhood of the point x_0, the absolute and the relative errors have the same order as in the "linear case" considered in §10.

Now consider the nonlinear Padé-Chebyshev approximant of the form (44) on the segment [−1, 1] to the function tg(πx/4) for m = n = 3. In this case the Taylor series converges very slowly, and, as the parameter N increases, the values of the coefficients of the rational approximant undergo substantial (even in the first decimal digits) and intricate changes. The situation is illustrated in Table 2, where the following values are given: the absolute errors ∆, the absolute errors ∆_0 of the error approximants (the approximants are compared for N = 15 and N = 20, for N = 25 and N = 35, and for N = 40 and N = 50), and also the values of the condition number cond of the system of linear algebraic equations (57). In this case the relative errors coincide with the absolute ones. The best possible error is ∆_min = 0.83 · 10^{-17}.

Table 2

  N      15            20            25            35             40             50
  cond   0.76 · 10^7   0.95 · 10^8   0.36 · 10^10  0.12 · 10^12   0.11 · 10^12   0.11 · 10^12
  ∆      0.13 · 10^-4  0.81 · 10^-6  0.13 · 10^-7  0.12 · 10^-10  0.75 · 10^-12  0.73 · 10^-15
  ∆_0    0.7 · 10^-4 (N = 15, 20)    0.7 · 10^-8 (N = 25, 35)     0.2 · 10^-9 (N = 40, 50)

§15. Small deformations of approximated functions and acceleration of convergence of series

Let a function f(x) be expanded into series (51) in Chebyshev polynomials, and let

(63) f̃_N(x) = Σ′_{k=0}^{N} c_k T_k(x)

be a partial sum of this series. Using formula (62), it is easy to verify that the linear Padé-Chebyshev approximant of the form (1) or (52) to the function f(x) coincides with the linear Padé-Chebyshev approximant to polynomial (63) for N = n + 2m, i.e., it depends only on the first n + 2m + 1 terms of the Fourier-Chebyshev series of the function f(x); a similar result is valid for approximants of the form (44) or (45) to even or odd functions, respectively. Note that for N = n + 2m the polynomial f̃_N is the result of applying the algorithm of linear (or nonlinear) Padé-Chebyshev approximation to f(x) with the degrees m and n replaced by 0 and 2m + n.
An interesting effect mentioned in [9] consists in the fact that the error of the polynomial approximant f̃_{n+2m}, which depends on n + 2m + 1 parameters, can exceed the error of the corresponding Padé-Chebyshev approximant of the form (1), which depends on fewer parameters. For example, a suitable small deformation of the approximated function does not affect the first twenty terms of its expansion in Chebyshev polynomials and, consequently, does not affect the coefficients of the corresponding rational Padé-Chebyshev approximant, but leads to an increase of the error of this approximant by several orders of magnitude. Thus, a small deformation of the approximated function can result in a sharp change in the order of the error of a rational approximant.
Moreover, the effect just mentioned means that the algorithm extracts from polynomial (63) additional information concerning the subsequent terms of the Fourier-Chebyshev series. In other words, in this case the transition from the Fourier-Chebyshev series to the Padé-Chebyshev approximant accelerates the convergence of the series. A similar effect of acceleration of convergence of power series by passing to the classical Padé approximant is known (see [11, 14, 15]).
It is easy to see that the nonlinear Padé-Chebyshev approximant of the form (1) to the function f(x) depends only on the first m + n + 1 terms of the Fourier-Chebyshev series for f(x), so that for such approximants an even more pronounced effect of the type indicated above takes place.
Since one can change the "tail" of the Fourier-Chebyshev series in a quite arbitrary way without affecting the rational Padé-Chebyshev approximant, the effect of acceleration of convergence can take place only for series with an especially regular behavior (and for the corresponding "nice" functions).
Note that the effect of error autocorrection indicates that the variation of an approximated function under deformations of a more general type may have little effect on the rational approximant considered as a function (whereas the coefficients of the approximant can change substantially). Accordingly, when one deforms a function for which good rational approximation is possible, the approximant's error can increase rapidly.
There are interesting results distinguishing the classes of functions for which efficient rational approximation is possible, for example, the classes of functions which are approximated by rational fractions considerably better (with a higher rate of convergence) than by polynomials; see, in particular, [10, 50-52]. The reasoning given above indicates that of special interest are "individual" properties of functions which guarantee their effective rational approximation. There are reasons to suppose that solutions of certain functional and differential equations possess properties of this kind. Note that in the papers [16, 37], starting from the fact that elementary functions satisfy simple differential equations, it is shown that these functions are approximated better by rational fractions than by polynomials (we have in mind the best approximation); because of complicated calculations, only the following cases were considered: the denominator of the rational approximant is a linear function or (for even and odd functions) a polynomial of degree 2.

§16. Applications to computer calculations

The computer calculation of function values reduces, in fact, to carrying out a finite set of arithmetic operations with the argument and constants, i.e., to computing the value of a certain rational function. Now we list some typical applications of methods for constructing rational approximants. It often happens that a function f(x) is to be computed many times (for example, when numerically solving a differential equation) and with a given accuracy. In this case the construction of a rational approximant to this function (with the given accuracy) often produces the most economical algorithm for computing the values of f(x). For example, if f(x) is a complicated aggregate of elementary and special functions, each of which can be calculated using the corresponding standard programs, then the values of the function f(x) can, of course, be computed by means of these programs. But such an algorithm is often too slow and produces unnecessary extra precision.
Standard computer programs for elementary and special functions are, in their turn, based, as a rule, on rational approximants. Note that although the same accuracy can be attained by both rational and polynomial approximants to a given function, the computation of the rational approximant usually requires fewer operations, i.e., it is faster; see, for example, [1, 3, 12, 13, 24, 25, 31].
The coefficients of rational approximants to the basic elementary and special functions can be found in reference handbooks; we note especially the fundamental book [3]; see also, for example, [12, 13]. But a particular computer can have specific properties requiring algorithms and approximants (for effective standard function-evaluation programs) which are absent from the reference handbooks. In that case programs for constructing rational approximants, including the PADE program described in §9 above (see also [1, 7, 9]), can be useful.
For example, decimal computers (including calculators) are widely used at present. The reason is that the use of the decimal arithmetic system (instead of the standard binary one) enables the user to avoid a considerable loss of computing time on the transformation of numbers from the decimal representation to the binary one and vice versa. This is especially important if the relative amount of input/output operations is large; the latter situation is characteristic of calculations in the interactive mode. A method of computing elementary functions on decimal computers which uses the technique of rational approximants is described in the Appendix below. The main idea of this method is that the computation of the values of various elementary functions is reduced, by means of simple algorithms, to the computation of a rational function of a fixed form. Roughly speaking, all the basic elementary functions are calculated according to the same formula; only the coefficients of the rational expression depend on the function being calculated.

§17. Nonlinear models and rational approximants

One of the main problems of mathematical modeling is to construct analytic formulas (models) that approximately describe the functional dependence between different quantities according to given "experimental" data concerning the values of these quantities. In particular, let a set of real numbers x_1, ..., x_ν which are values of the "independent" variable x be given, and for every value x_i of this variable let the value y_i of the "dependent" variable y be given. The problem is to construct a function y = F(x) such that the functional dependence is represented by an analytic formula of a certain form, the approximate equality y_i ≈ F(x_i) is valid for all i = 1, 2, ..., ν, and the function F(x) takes "reasonable values" at points x lying between the given points x_i. In practice the values y_i are usually given with errors.
As noted above, the computer calculation of functions finally reduces to the computation of certain rational functions. Thus in many cases it is natural to construct an analytic model in the form of the rational function (1), where the degrees of the numerator and the denominator and also the values of the coefficients are determined in the process of modeling; see [14]. Of course, in this case we have in mind only one-factor models. One can construct multi-factor models by using rational functions of several variables.
If we have a simple program for constructing rational approximants to continuous functions defined on finite segments of the real line, then we can reduce the construction of a model to constructing a rational approximant to a continuous function (although in numerical analysis, as a rule, the goal is the opposite: to reduce continuous problems to discrete ones). The construction of a model is carried out in two steps. At the first step a continuous function f(x) such that f(x_i) = y_i is constructed; a linear or a cubic spline (depending on the user's choice) is used as f(x). The linear spline is the function whose graph is the polygonal line consisting of the straight-line segments connecting the points with coordinates (x_i, y_i); the cubic spline is described, for example, in [41]. At the second step the model is constructed by means of the PADE program. This approach guarantees the regular behavior of the model on the entire range of the argument.
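A minimal numpy sketch of this two-step construction (the data are hypothetical; np.interp plays the role of the linear spline, and a linearized least-squares fit stands in for the PADE program):

```python
import numpy as np
from numpy.polynomial import polynomial as Poly

# hypothetical measurements; xs must be sorted for np.interp
xs = np.array([0.0, 0.5, 1.1, 1.8, 2.6, 3.0])
ys = np.array([1.0, 1.6, 2.9, 5.8, 12.9, 19.5])

# step 1: a linear spline through the data (the polygonal line (x_i, y_i))
f = lambda x: np.interp(x, xs, ys)

# step 2: fit R(x) = P(x)/Q(x), deg P = n, deg Q = m, to the spline on a
# dense grid via the linearized condition f(x)Q(x) - P(x) = 0 with b_0 = 1
n, m = 2, 1
x = np.linspace(xs[0], xs[-1], 400)
fx = f(x)
A = np.column_stack([x**i for i in range(n + 1)] +
                    [-fx * x**j for j in range(1, m + 1)])
coef, *_ = np.linalg.lstsq(A, fx, rcond=None)
a, b = coef[:n + 1], np.concatenate([[1.0], coef[n + 1:]])

model = lambda t: Poly.polyval(t, a) / Poly.polyval(t, b)
print(np.max(np.abs(model(x) - fx)))   # deviation of the model from the spline
```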
If there are reasons to assume that the initial data lie on a sufficiently smooth and regular curve, then it is expedient to use a cubic spline. If, on the other hand, there are reasons to assume that the initial data contain considerable errors or deviations from theoretically admissible values, then it is expedient to use a linear spline: in this case the behavior of a cubic spline at intermediate points would be irregular.
The method for constructing models described above was implemented (together with I. A. Andreeva) as the SPLINE-PADE program. This program prints the graphs of the splines and of the rational approximants (together with the initial data), which facilitates the analysis of the models. Of course, while choosing and analyzing models it is necessary to take into account the theoretical requirements on the model connected with the specific features of the particular problem.
Other approaches to the construction of models in the form of rational functions can be found, for example, in [14].
The above results connected with the effect of error autocorrection show that similar models can have quite different coefficients. Thus the coefficients of models of this kind are, generally speaking, unstable, and one should be very careful when trying to give a substantive interpretation to these coefficients.
APPENDIX
A method of implementation of basic calculations on decimal computers

1. Introduction. A large relative amount of input/output operations is a characteristic feature of modern interactive computer systems. This results in a waste of computing time on systems with binary number representation: numbers must be transformed from the decimal representation to the binary one and vice versa. Therefore, certain computers use the decimal arithmetic system. As a rule, the use of the decimal arithmetic system leads to a decrease in the rate of calculations and to additional memory requirements connected with the specific coding of decimal numbers. The decrease in the rate of calculations is due to the fact that the implementation of decimal operations, as compared with binary ones, is more complicated; moreover, the binary representation is more convenient than the decimal one for implementing algorithms for calculating certain functions. Since the performance rate of floating point arithmetic operations and the rate of calculating elementary functions determine, to a considerable extent, the rate of mathematical data processing, the quality of the corresponding algorithms is of great importance, especially for cheap personal systems.
Here we consider methods of implementing the floating point arithmetic system and of organizing the computation of elementary functions. These methods are convenient for use on decimal computers (this pertains both to software and to hardware implementations). They guarantee a sufficient economy of memory together with a relatively high performance rate of calculations. Examples of effective software implementations of these methods are given in [1, 53]. A hardware implementation is described in the patent [55]. The methods under consideration are also of interest for octal and hexadecimal computers.
2. Floating point arithmetic system. When carrying out arithmetic operations with floating point numbers, the exponents of these numbers undergo only the operations of addition, subtraction, and comparison. Almost all computers have means for these operations, since they are necessary for operations with command and address codes. This fact provides an opportunity to use the binary representation for the exponents when implementing the floating point arithmetic system. Since exponents are integers lying within certain bounds, the transformation of exponents from the binary representation to the decimal one does not encounter serious obstacles. The choice of an appropriate algorithm depends on the structure of the computer and on the method of coding decimal numbers. For the standard coding 8421, where each decimal digit corresponds to a binary tetrad, it is possible to use the fact that the numbers from 0 to 9 have the same coding in the binary and the decimal representations. Therefore the binary representation x_2 of a number x can be converted into the decimal representation x_10 by successively subtracting (in the binary arithmetic system) the numbers from 0 to 9 from x_2 and forming the number x_10 from the sums of these numbers (in the decimal arithmetic system). Similarly, a decimal integer can be converted into a binary one.
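For illustration, here is the 8421 coding in Python (a sketch of the representation discussed above, not a machine-level routine); note that a single digit 0-9 indeed has the same code in both systems:

```python
def to_bcd(n: int) -> int:
    """Pack a nonnegative integer into 8421 BCD, one tetrad per decimal digit.
    For a single digit 0-9 the BCD code coincides with the binary code."""
    bcd, shift = 0, 0
    while True:
        n, digit = divmod(n, 10)
        bcd |= digit << shift
        shift += 4
        if n == 0:
            return bcd

def from_bcd(bcd: int) -> int:
    """Inverse transformation: unpack the tetrads and reassemble the integer."""
    n, base = 0, 1
    while bcd:
        n += (bcd & 0xF) * base
        bcd >>= 4
        base *= 10
    return n

assert to_bcd(7) == 0b0111           # digits 0-9: same code in both systems
assert to_bcd(145) == 0x145          # tetrads read off as hex digits 1, 4, 5
assert from_bcd(to_bcd(145)) == 145
```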
The binary representation of exponents enables one to save memory, and the combination of decimal operations with the more rapid binary operations of addition type enhances the performance rate. As a rule, the software implementation of the floating point arithmetic system results in floating point operations taking about two orders of magnitude more time than fixed point operations. The implementation described in [1] is much more efficient: for seven-digit decimal numbers, the transition from the fixed point to the floating point regime results in a doubling of the computing time for multiplication and division, and in a reduction of the rate of addition and subtraction by one decimal order.

3. Calculation of elementary functions. The computation of the values of the basic elementary functions is reduced, by means of the reduction algorithms described below, to the evaluation of a rational approximant of the fixed form

(1) R(y) = y (a + by²) / (α + βy²) + c,

where y is the reduced argument and the coefficients a, b, c, α, β depend on the approximated function. Thus all the algorithms of computation of the basic elementary functions have the common block (1), and this fact guarantees an economy of memory. This block can be implemented either as a carefully devised part of the software or as a part of the hardware; this can enhance the performance rate. For the reduction algorithms described below, the approximant of the form (1) can guarantee 8-9 accurate decimal digits. Because of the specific features of a particular computer and of the way the common block is implemented, it may be necessary to transform expression (1) into another form, for example, into the form (1′), or into a Jacobi fraction of the form

(1′′) R(y) = y (c + µ/(y² + ν)) + κ.

For enhanced precision one can use the approximant

(2) R(y) = y (a + by² + cy⁴ + dy⁶) / (α + βy² + γy⁴ + y⁶),

which can be transformed into forms similar to (1′) or (1′′), i.e., into

(2′) R(y) = y (a + y²(b + y²(c + dy²))) / (α + y²(β + y²(γ + y²)))

or into the corresponding Jacobi fraction (2′′). The coefficients a, b, c, d, α, β, γ, ξ, η, µ, ν, κ, λ in formulas (1′), (1′′), (2), (2′), (2′′) are constants that depend on the approximated function. The approximants of the form (2), (2′) or (2′′) guarantee 12-13 accurate decimal digits.
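A sketch of evaluating the reconstructed form (2′) by the nested scheme (the coefficient values below are arbitrary placeholders chosen so that the result is easy to check; they are not approximation constants from the tables):

```python
def R_enhanced(y, a, b, c, d, alpha, beta, gamma):
    """Evaluate form (2'): numerator and denominator are polynomials in y**2
    computed by the nested (Horner) scheme; the denominator is monic in y**6,
    so one multiplication is saved."""
    y2 = y * y
    num = y * (a + y2 * (b + y2 * (c + d * y2)))
    den = alpha + y2 * (beta + y2 * (gamma + y2))
    return num / den

# sanity check: with a = alpha, b = beta, c = gamma, d = 1 the fraction is y
print(R_enhanced(0.3, 2.0, 5.0, 7.0, 1.0, 2.0, 5.0, 7.0))   # -> 0.3
```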
The reduction algorithms are uniform; in particular, the same reduction algorithms are used for calculations with ordinary and with enhanced precision. These algorithms are described in section 4 below. The errors of the approximants and the values of the coefficients in expressions (1), (2) and in their modifications are given below. These coefficients are either taken from [3] or calculated by means of the PADE program described in §9 above.
The calculation of the natural logarithm is reduced to the case of the decimal logarithm by means of the relation ln x = (ln 10)(lg x).
6. Analysis of the algorithms. It is easy to see that the algorithms for calculating trigonometric and inverse trigonometric functions do not depend on the arithmetic system of the computer. On the contrary, for the algorithms computing exponentials, logarithms and the functions that are expressed through them (hyperbolic and inverse hyperbolic functions, x^y), the binary arithmetic system has an essential advantage over the decimal one. For example, in the binary arithmetic system the computation of the logarithm is reduced to finding an approximant on the segment [1/2, 1] (and not on the segment [1/10, 1]); since [1/2, 1] is a much shorter segment, farther from the singularity of the logarithm at zero, the rate of approximation increases considerably. When ln x is computed according to the scheme described above on a binary computer, the approximant of the form (1), which depends on five parameters, can be replaced by a more exact approximant (on the smaller segment) depending on only three parameters. A similar situation arises in the calculation of an exponential. But the use of the decimal arithmetic system leads to a certain equilibrium between the difficulty of computing logarithmic and exponential functions, on the one hand, and trigonometric functions, on the other. Thus in this case the use of a single common block of the form (1) or (2) is justified.
7. Implementation of algorithms for calculating elementary functions.
For a software implementation it is expedient to use representations (1′′) and (2′′) of the rational approximants in the form of Jacobi fractions; this minimizes the number of arithmetic operations. The rate of computation of the functions can be increased by implementing the calculation of the Jacobi fractions mentioned above in the fixed point arithmetic system, as described in [1].
A method of hardware implementation of the algorithms under consideration is described in the patent [55]. In this case it is expedient to use representations (1′) and (2′) of the rational approximants and to carry out the computations of the fraction's numerator and denominator in parallel. For example, when expression (2′) is computed, the value y² having been computed beforehand, it is possible to use the adder to compute γ + y² and the multiplier to compute d · y² simultaneously. Then y² is multiplied by (γ + y²) and simultaneously the quantity c is added to d · y², and so on. Under such an implementation the additional hardware requirements are minimal, since almost all computers have an adder and a multiplier.

Table 4. Enhanced precision. [Table of coefficients not reproduced.]
Brauer's problem 21 for principal blocks
Problem 21 of Brauer's list of problems from 1963 asks whether for any positive integer k there are finitely many isomorphism classes of groups that occur as the defect group of a block with k irreducible characters. We solve this problem for principal blocks. Another long-standing open problem (from 1982) in this area asks whether the defect group of a block with 3 irreducible characters is necessarily the cyclic group of order 3. In most cases we reduce this problem to a question on simple groups that is closely related to the recent solution of Brauer's height zero conjecture.
Introduction
In Problem 21 of his famous list of open problems in representation theory, R. Brauer asks whether for any positive integer k there are finitely many isomorphism classes of groups that occur as the defect group of a block with k irreducible characters ([Bra63]). This is equivalent to the question of whether the order of a defect group can be bounded from above in terms of the number of irreducible characters in the block. This conjecture was proved for solvable groups by B. Külshammer [Kul89] in 1989 and then for p-solvable groups [Kul90] in 1990. On the other hand, using E. Zelmanov's solution of the restricted Burnside problem, it was proved by Külshammer and G. R. Robinson that the Alperin-McKay conjecture implies Brauer's Problem 21 [KR96]. Hence, L. Ruhstorfer's recent solution of Alperin-McKay for p = 2 [Ruh] implies that Brauer's Problem 21 holds for this prime. In the main result of this paper, we prove Brauer's Problem 21 for principal blocks.
Theorem A. Brauer's Problem 21 has an affirmative answer for principal blocks for every prime.
Recall that Landau's theorem asserts that the order of a finite group is bounded from above in terms of the number of conjugacy classes. As pointed out by Brauer [Bra63], Landau's argument provides the bound |G| ≤ 2^{2^{k(G)}}. Brauer's Problem 3 asks for substantially better bounds. This problem has also generated a large amount of research. L. Pyber [Pyb92] found an asymptotically substantially better bound, although it is still not known whether there exists a bound of the form |G| ≤ c^{k(G)} for some constant c. We refer the reader to [BMT] for the best known bound as of the writing of this article. Note that Brauer's Problem 21 asks for a blockwise version of Landau's theorem. As Brauer did with Landau's theorem, it also seems interesting to ask for asymptotically good bounds for the order of a defect group in terms of the number of characters in the block. Our proof of Theorem A provides an explicit bound that surely will be far from best possible. For almost simple groups, we obtain a better bound in Theorem 2.1.
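To illustrate the Landau-type argument behind such bounds (an illustration added here, not part of the paper): the class equation gives 1 = Σ_{i=1}^{k} 1/n_i with n_i = |C_G(x_i)| and with |G| itself occurring among the n_i (from the class of the identity), so the largest denominator in such a decomposition bounds |G| for a group with k conjugacy classes. A short exhaustive search in Python:

```python
from fractions import Fraction

def max_denominator(k, remaining=Fraction(1), least=1):
    """Largest n_k over all solutions of 1/n_1 + ... + 1/n_k = remaining
    with least <= n_1 <= ... <= n_k (Landau's induction)."""
    if k == 1:
        n = 1 / remaining
        return int(n) if n.denominator == 1 and n >= least else 0
    best = 0
    n = least
    while Fraction(k, n) >= remaining:        # k terms of size 1/n must suffice
        if Fraction(1, n) < remaining:        # leave a positive remainder
            best = max(best, max_denominator(k - 1, remaining - Fraction(1, n), n))
        n += 1
    return best

for k in range(1, 6):
    print(k, max_denominator(k))   # 1, 2, 6, 42, 1806: |G| is at most this value
```

The rapidly growing denominators (the Sylvester sequence) are what produce the doubly exponential bound quoted above.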
Given a Brauer p-block B of a finite group G with defect group D, we will write k(B) to denote the number of irreducible complex characters in B. R. Brauer himself proved that if k(B) = 1 then D is the trivial group ([Nav98, Theorem 3.18]). More than 40 years later, J. Brandt proved that if k(B) = 2 then D is the cyclic group of order 2. However, despite a large amount of work in the area in recent years, the conjecture remains open when k(B) ≥ 3. It has been speculated since Brandt's 1982 paper [Bra82] that if k(B) = 3 then the defect group is cyclic of order 3. It seems that it was known to Külshammer since 1990 that this follows from the Alperin-McKay conjecture [Kul90]. A proof of this fact appeared in [KNST14], where Külshammer, G. Navarro, B. Sambale and P. H. Tiep formally state Brandt's speculation as a conjecture.
We present a condition on quasisimple groups that would imply the Külshammer-Navarro-Sambale-Tiep conjecture (that is, that k(B) = 3 implies that the defect group has order 3).
Condition B. Let p be an odd prime and let S be a non-abelian simple group of order divisible by p. We say that Condition B holds for (S, p) if the following holds: let K be a quasisimple group of order divisible by p with center Z, a cyclic p′-group, and K/Z = S. Let B be a non-principal faithful p-block of K with |cd(B)| > 1 and let D be a defect group of B, elementary abelian and not cyclic. Then there are at least 4 irreducible characters in B that are pairwise not Aut(K)-conjugate.
Theorem C. Let p be a prime. If p is odd, suppose that Condition B holds for (S, p) for all non-abelian composition factors S of G. Then the Külshammer-Navarro-Sambale-Tiep conjecture holds for G.
We remark that this reduction and Condition B have played an influential role in the recent solution of Brauer's height zero conjecture [MNST]. In fact, the fundamental Theorem B of [MNST] is a slightly weaker version of Condition B: it shows that there always exist 3 irreducible characters in B that are pairwise not Aut(K)-conjugate. We will see in Remark 5.7 that this is tight. Although Condition B seems to hold in many situations, we will see an example of a family of simple groups for which Condition B does not hold for p = 5.
In Section 2, we prove Brauer's Problem 21 for the principal blocks of almost simple groups; this is used in Section 3 to prove Theorem A. In Section 4, we prove Theorem C. We conclude the paper by discussing Condition B in Section 5.
BP21 for almost simple groups
The following is the main result of this section.
Theorem 2.1. Let p be a prime. Let S ≤ A ≤ Aut(S), where S is a finite non-abelian simple group, and p | |S|. Let P ∈ Syl_p(A) and let k := k(B_0(A)) be the number of irreducible complex characters in the principal block of A. Then we have:

Note that in the context of Theorem 2.1, any character in Irr(B_0(S)) lies below some character in Irr(B_0(A)) by [Nav98, Theorem (9.4)], so that k(B_0(A)) ≥ k_{Aut(S)}(B_0(S)), where we write k_{Aut(S)}(B_0(S)) for the number of distinct Aut(S)-orbits intersecting Irr(B_0(S)).
The following is the main result of [HS21], from which we obtain a bound for p in terms of the number of irreducible characters in a given principal block.
Lemma 2.3 (Hung-Schaeffer Fry). Let p be a prime and let G be a finite group with p | |G|. Let B_0 denote the principal p-block of G. Then k(B_0)^2 ≥ 4(p − 1).
Lemma 2.4. Let p be a prime and let G be a finite group with p | |G|. Assume that a Sylow p-subgroup P ∈ Syl_p(G) is cyclic, and let B_0 denote the principal p-block of G. Then |P| < k(B_0)^2.
2.1. Notation and Additional Preliminaries. Let q be a power of a prime. By a group of Lie type, we will mean a finite group obtained as the group G^F of fixed points of a connected reductive algebraic group G over the algebraic closure of F_q under a Steinberg morphism F: G → G endowing G with an F_q-structure. In our situation of finite simple groups, we will often take G to further be simple and simply connected, so that G^F is, with some exceptions dealt with separately, the full Schur covering group of a simple group S = G^F/Z(G^F).
Writing G " G F , we let G˚denote the group pG˚q F , where the pair pG˚, F q is dual to pG, F q, with respect to some maximally split torus T of G. Given a semisimple element s P G˚(that is, an element of order relatively prime to q), we obtain a rational Lusztig series EpG, sq of irreducible characters of G associated to the G˚-conjugacy class of s.
When s " 1, the set EpG, 1q is comprised of the so-called unipotent characters. Each series EpG, sq contains so-called semisimple characters, and if C G˚p sq is connected, there is a unique semisimple character, which we will denote by χ s .
The following lemma will help us obtain many semisimple characters in the principal block. Here, we write Z(G) = Z(G)_p × Z(G)_{p′}, where Z(G)_p ∈ Syl_p(Z(G)).
Lemma 2.5. Let p be a prime and let G := G^F be a group of Lie type defined over F_q with p ∤ q and such that Z(G) is connected, or such that p is good for G and C_{G*}(s) is connected. Let s ∈ G* be a semisimple element whose order is a power of p. Then the corresponding semisimple character χ_s ∈ Irr(G) lies in the principal p-block B_0(G) of G and is trivial on Z(G)_{p′}.
Proof. The first statement is also noted in [HS21, Theorem 5…].

Throughout, for q an integer and p a prime not dividing q, we let d_p(q) denote the order of q modulo p if p is odd, and d_2(q) the order of q modulo 4.
For the remainder of Section 2, we will let G = G^F for G a simple, simply connected reductive group and F: G → G a Steinberg endomorphism such that G/Z(G) is a simple group of Lie type. Further, we will address the cases that S is a simple group with an exceptional Schur multiplier (see [GLS98, Table 6.1.3] for the list of such S), sporadic, or alternating separately in the proof of Theorem 2.1(b) below, and hence until then we assume further that Z(G) is a nonexceptional Schur multiplier for the simple group of Lie type S := G/Z(G). Let S̃ denote the group of inner-diagonal automorphisms of S.
2.2. Exceptional Groups.
We first consider the exceptional groups, by which we mean the groups S = G_2(q), ^2B_2(q^2), ^2G_2(q^2), F_4(q), ^2F_4(q^2), ^3D_4(q), E_6(q), ^2E_6(q), E_7(q), and E_8(q), when p is a prime not dividing q. Let P ∈ Syl_p(S). Then either P may be identified with a Sylow p-subgroup P̂ of G, or (p, G) ∈ {(3, E_6), (2, E_7)} and |P| = |P̂|/p, with P̂ a Sylow p-subgroup of G.
If G is not of Suzuki or Ree type (i.e., G is not one of ^2B_2(q^2), ^2G_2(q^2), or ^2F_4(q^2)), let e := d_p(q) and let Φ_e := Φ_e(q) denote the e-th cyclotomic polynomial in q. If G is a Suzuki or Ree group, instead let Φ_e := Φ_{(p)} as in [Mal07, Section 8]. In either case, let p^b be the highest power of p dividing Φ_e, and let m_e denote the largest positive integer such that Φ_e^{m_e} divides the order polynomial of (G, F). From [GLS98, Theorem 4.10.2], we see that P̂ contains a normal abelian subgroup P_T ⊴ P̂ such that P̂/P_T is isomorphic to a subgroup of the Weyl group W = N_G(T)/T. We also have P̂ = P_T if and only if P̂ is abelian (see [Mal14, Proposition 2.2]). Similarly, a Sylow p-subgroup of the dual group G* contains a group isomorphic to P_T.
Proposition 2.6. Let p " 2 and let S be an exceptional group of Lie type as above with 2 ∤ q. Let P P Syl 2 pSq and write k 0 :" k AutpSq pB 0 pSqq. Then |P | ď 2 14`8k 0 . (In particular, Theorem 2.1(b) holds in this case.) Proof. First consider the case S " 2 G 2 pq 2 q with q 2 " 3 2n`1 ą 3. Then we have |P | " 8, so the statement is clear. Hence we assume that S is not of Suzuki or Ree type. Let H " G˚" pG˚q F and notice that S " rH, Hs and ZpG˚q is connected. Then notice that the semisimple characters χ s P IrrpHq of H for s P H˚" G of 2-power order lie in B 0 pHq by Lemma 2.5. Let 2 b`1 || pq 2´1 q and note that G contains an element of order 2 b . For 1 ď i ď b, let s i P G be of order 2 i , so that the semisimple characters χ s i of H for 1 ď i ď b lie in B 0 pHq. Further, since the |s i | are distinct, these lie in distinct AutpSq-conjugacy classes, using [NTT08, Corollary 2.5]. Then choosing an irreducible constituent χ 1 s i on S for each i, we obtain b characters in B 0 pSq in distinct AutpSq-classes. Considering in addition the trivial character, we obtain k 0 ě b`1.
On the other hand, letting r be the rank of G, we have r ≤ 8 and |P_T| ≤ (2^{b+1})^r ≤ (2^{b+1})^8 by the description of P_T in [GLS98, Theorem 4.10.2]. Further, |P̂/P_T| ≤ |W|_2 ≤ 2^{14}. Hence |P| ≤ |P̂| ≤ 2^{14} · 2^{8(b+1)} ≤ 2^{14} · 2^{8k_0}, as stated. Recalling that k ≥ 7 (see Remark 2.2), in the situation of Theorem 2.1(b) we have |P| ≤ 7^5 · 7^{3k} ≤ k^{5+3k}.

Now, when p is odd, a similar argument can be used. However, we aim for a better bound. In this case, [GLS98, Theorem 4.10.2] further tells us that P_T has a complement P_W in P̂, and we have P_T ≅ C_{p^b}^{m_e} unless (p, G) = (3, ^3D_4(q)), in which case P_T ≅ C_{3^a} × C_{3^{a+1}}. In the following, let W(E_8) denote the Weyl group W obtained in the case G = E_8.
Proposition 2.7. Let S be an exceptional group of Lie type as above, and let P ∈ Syl_p(S) with p an odd prime not dividing q. Let k_0 := k_{Aut(S)}(B_0(S)). Then if P is cyclic, we have |P| < p^{k_0}. Otherwise, we have |P| ≤ C_ex · k_0^2 for some constant C_ex ≤ 36|W(E_8)|^2. In particular, when S ≤ A ≤ Aut(S) with k(B_0(A)) ≥ 5, this yields |P| ≤ k(B_0(A))^{k(B_0(A))^2} in either case.
It should be noted, however, that in the last statement P ∈ Syl_p(S), rather than Syl_p(A).

Proof. Keep the notation above. Suppose first that a Sylow p-subgroup of G is abelian. If P is cyclic, then P = P̂ = P_T = C_{p^b}. Here we may argue similarly to Proposition 2.6 to obtain b < k_0, and hence |P| < p^{k_0}. Lemma 2.4 further yields the last statement in this case. Hence, we may assume that P is not cyclic, so that m_e ≥ 2. By the discussion preceding [HS21, Theorem 5.4], we have

(1) k_0 ≥ p^{b(m_e−1)} / (d g |W_e|),

where g is the size of the subgroup of Out(S) of graph automorphisms, d := [S̃ : S] is the size of the group of diagonal automorphisms, and |W_e| is the so-called relative Weyl group for a Sylow Φ_e-torus of G. Since d ≤ 3, g ≤ 2, and |W_e| is bounded by the size of the largest Weyl group for the types under consideration, namely |W(E_8)|, we have p^{b(m_e−1)} ≤ 6|W(E_8)| k_0. Notice that p^{b(m_e−1)} ≥ p^{b m_e/2} for m_e ≥ 2. Then we have √|P| ≤ √|P̂| ≤ 6|W(E_8)| k_0, and hence the statement holds.
We now assume that P̂ is nonabelian. By considering only the semisimple characters of G corresponding to elements of G* found in a copy of P_T, the exact same arguments as in [HS21, Section 5] yield that the bound (1) still holds in this case.
By considering the degree polynomials, we see that in each case we again have √|P̂| ≤ p^{b(m_e−1)}, except possibly if G = G_2(q), p = 3, m_e = 2, and |P̂| = p^{2b+1}. In that case √|P̂| = √3 · 3^b ≤ √3 |W_e| k_0, where the last inequality holds because d = 1 = g in this case. In all cases, then, we see that the statement holds.
2.3. Classical Groups. We now turn to the case of classical groups. In this section, let G = G^F be a group of Lie type defined over F_q, where q is a power of a prime q_0 and G is a simple, simply connected reductive group of type A_{n−1} with n ≥ 2, C_n with n ≥ 2, B_n with n ≥ 3, or D_n with n ≥ 4 but G ≠ ^3D_4(q), and such that G is a nonexceptional Schur covering group for the simple group S := G/Z(G). That is, G = SL_n^ε(q), Sp_{2n}(q), Spin_{2n+1}(q), or Spin_{2n}^±(q), and S = PSL_n^ε(q), PSp_{2n}(q), PΩ_{2n+1}(q), or PΩ_{2n}^±(q), respectively, for the corresponding values of n. Let H be the related group H := GL_n^ε(q), Sp_{2n}(q), SO_{2n+1}(q), respectively SO_{2n}^±(q). We remark that, taking Ω := O^{q_0′}(H), the group Ω is perfect and S = Ω/Z(Ω) = G/Z(G). We also have Z(Ω) ≤ Z(H), and further H/Ω and Z(H) are both 2-groups if H ≠ GL_n^ε(q). Note that the dual group of H is H* = GL_n^ε(q), SO_{2n+1}(q), Sp_{2n}(q), and SO_{2n}^±(q), respectively. Let p ≠ q_0 be a prime and write P̃ for a Sylow p-subgroup of H*. We remark that if X ∈ {G, S}, then |P| ≤ |P̃| for P ∈ Syl_p(X).
2.3.1. Sylow p-Subgroups of Symmetric Groups. Since the Sylow p-subgroups of classical groups are closely related to those of symmetric groups, we begin with a discussion of the latter. Let w be a positive integer with p-adic expansion

(2) w = a_0 + a_1 p + a_2 p^2 + ⋯ + a_t p^t, 0 ≤ a_i < p, a_t ≠ 0.

Then a Sylow p-subgroup Q of the symmetric group S_w is a direct product Q ≅ Q_0^{a_0} × Q_1^{a_1} × ⋯ × Q_t^{a_t}, where Q_i is a Sylow p-subgroup of S_{p^i} (an iterated wreath product of i copies of C_p, of order p^{(p^i − 1)/(p−1)}). With this, we see |Q| = (w!)_p ≤ p^w.
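As a quick check of these orders (an illustration, not part of the paper), Legendre's formula gives the exact power of p dividing w!, and it agrees with the product of the wreath-product orders attached to the expansion (2):

```python
def vp_factorial(w: int, p: int) -> int:
    """Legendre's formula: v_p(w!) = sum over i >= 1 of floor(w / p**i)."""
    v, q = 0, p
    while q <= w:
        v += w // q
        q *= p
    return v

def vp_from_expansion(w: int, p: int) -> int:
    """Same exponent via the p-adic expansion (2): each factor Q_i has
    order p**((p**i - 1)//(p - 1)) and occurs a_i times."""
    v, i = 0, 0
    while w:
        w, a = divmod(w, p)   # note: divmod peels off the digit a_i
        v += a * ((p**i - 1) // (p - 1))
        i += 1
    return v

for (w, p) in [(10, 3), (25, 5), (100, 2)]:
    assert vp_factorial(w, p) == vp_from_expansion(w, p) <= w  # so (w!)_p <= p**w
print("ok")
```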
2.3.2. Unipotent Characters.
Recall that E(G, 1) is the set of unipotent characters of G.
Since unipotent characters are trivial on Z(G), we may say that χ ∈ Irr(S) is a unipotent character of S if it is the deflation of some unipotent character of G. The following observation will be useful in the cases of defining characteristic and p = 2.
Lemma 2.8. Let S be one of the groups S = PSL_n^ε(q) with n ≥ 2, PSp_{2n}(q) with n ≥ 2, PΩ_{2n+1}(q) with n ≥ 3, PΩ_{2n}^+(q) with n ≥ 4, or PΩ_{2n}^−(q) with n ≥ 4. Then there are at least n unipotent characters of S that are pairwise not Aut(S)-conjugate.
Proof. The unipotent characters of G are described in [Car93, Section 13.8]. From this, the number of unipotent characters in the case PSL_n^ε(q) is the number π(n) of partitions of n. In the remaining cases, the unipotent characters of G lying in the principal series are in bijection with the characters of the Weyl group W(C_n), W(B_n), W(D_n), or W(B_{n−1}), respectively, each of which has a quotient group isomorphic to a symmetric group S_n (resp. S_{n−1} in the case of W(B_{n−1})). In each of these cases, there are also non-principal-series unipotent characters. Hence the number of unipotent characters is more than π(n) (resp. π(n−1)) in these cases. Note that π(n) ≥ n, with strict inequality for n ≥ 4, and that furthermore π(n) ≥ 2n for n ≥ 7.
With the exception of PSp_4(q) with q even and PΩ_{2n}^+(q), all unipotent characters of the groups under consideration are Aut(S)-invariant (see [Mal08, Theorem 2.5]), and we see that there are at least n such characters in each case. For PSp_4(q) with q even, there are six unipotent characters, two of which are interchanged by the exceptional graph automorphism. For PΩ_{2n}^+(q) with n ≥ 5, we see there are at least 2n unipotent characters (explicitly for n = 5, 6, and since π(n) ≥ 2n for n ≥ 7), and the Aut(S)-orbits have size at most 2. The group PΩ_8^+(q) has 14 unipotent characters, and here the Aut(S)-orbits have size at most 3. In all cases, then, we see that there are at least n Aut(S)-orbits of unipotent characters.
2.3.3. Bounds in the Case of Classical Groups for Defining Characteristic or $p = 2$.
Corollary 2.9. Let $S$ be one of the groups as in Lemma 2.8. Assume that $p \mid q$, or that $p = 2$ and $q$ is odd. Then $k_{\mathrm{Aut}(S)}(B_0(S)) \ge n$.
Proof. In defining characteristic, we have $\mathrm{Irr}(B_0(S)) = \mathrm{Irr}(S) \setminus \{\mathrm{St}_S\}$, where $\mathrm{St}_S$ is the Steinberg character (see [CE04, Theorem 6.18]). In the case $p = 2$ and $q$ odd, $B_0(S)$ is the unique block containing unipotent characters, by [CE04, Theorem 21.14]. Then the statement follows from Lemma 2.8 and the fact that $\mathrm{Irr}(S)$ contains non-unipotent characters.
Lemma 2.10. Let $S$ be as in Lemma 2.8 with $q$ odd, and let $2^{b+1}$ be the largest power of 2 dividing $q^2 - 1$. Let $B_0(S)$ be the principal 2-block of $S$. Then $b + 1 \le k_{\mathrm{Aut}(S)}(B_0(S))$.
Proof. As before, let $S = G/Z(G)$ with $G = \mathbf{G}^F$ of simply connected type. Recall that $B_0(G)$ is the unique unipotent block of $G$ by [CE04, Theorem 21.14], and hence $B_0(G)$ is exactly the union of the rational Lusztig series $\mathcal{E}(G, s)$ with $s \in G^*$ having order a power of 2. From the structure of the Sylow 2-subgroup of $G^*$ described in [GLS98, Theorem 4.10.2] (see also [CF64]), we can see that the group $O^{q_0'}(G^*)$ contains an element of order $2^{b-1}$. For $1 \le i \le b-1$, let $s_i \in O^{q_0'}(G^*)$ be of order $2^i$. Then the semisimple characters $\chi_{s_i}$ for $1 \le i \le b-1$ lie in $B_0(G)$ and are trivial on $Z(G)$ by the dual version of [DM20, Proposition 11.4.12 and Remark 11.4.14]. Further, these lie in distinct $\mathrm{Aut}(S)$-conjugacy classes, using [NTT08, Corollary 2.5]. Combining with Lemma 2.8, we see $k_{\mathrm{Aut}(S)}(B_0(S)) \ge b - 1 + n \ge b + 1$.
2.3.4. Bounds in the Case of Classical Groups for Nondefining Characteristic with $p$ Odd. Now let $p$ be an odd prime not dividing $q$ and let $d := d_p(q)$. If $G = \mathrm{SL}_n^\epsilon(q)$, let $e := d_p(\epsilon q)$, and otherwise let $e := d_p(q^2)$. Further, let $b \ge 1$ be the largest integer such that $p^b$ divides $(\epsilon q)^e - 1$, respectively $q^{2e} - 1$.
We begin by discussing the Sylow $p$-subgroups $\widetilde{P}$ of $H^*$, which have been described by Weir [Wei55].
First, consider the case $H = \mathrm{GL}_n^\epsilon(q)$. Let $n = ew + r$, where $w, r$ are integers with $0 \le r < e$ and $w$ is written with $p$-adic expansion as in (2). A Sylow $p$-subgroup of $H = \mathrm{GL}_n^\epsilon(q)$ is then of the form $\widetilde{P} \cong P_0^{a_0} \times \cdots \times P_t^{a_t}$, where each $P_i$ is a Sylow $p$-subgroup of $\mathrm{GL}_{ep^i}^\epsilon(q)$, so that $P_i \cong C_{p^b} \wr Q_i$ with $Q_i$ a Sylow $p$-subgroup of $\mathfrak{S}_{p^i}$. Hence $\widetilde{P}$ contains a subgroup of the form $\widehat{P} \cong C_{p^b}^w$. Further, $\widetilde{P} \cap \mathrm{SL}_n^\epsilon(q)$ is a Sylow $p$-subgroup of $G = \mathrm{SL}_n^\epsilon(q)$. Now consider $H = \mathrm{Sp}_{2n}(q)$, $\mathrm{SO}_{2n+1}(q)$, and $\mathrm{SO}_{2n}^\epsilon(q)$. The structure of $\widetilde{P}$ in these cases builds off of the case of linear groups above. If $H^* \in \{\mathrm{SO}_{2n+1}(q), \mathrm{Sp}_{2n}(q)\}$, then $\widetilde{P}$ is already a Sylow $p$-subgroup of $\mathrm{GL}_{2n+1}(q)$ (and hence of $\mathrm{GL}_{2n}(q)$) when $d$ is even, and is a Sylow $p$-subgroup of the naturally embedded $\mathrm{GL}_n(q)$ if $d$ is odd. In particular, writing $n = ew + r$ with $r, w$ as before, $\widetilde{P}$ is again of the form $\widetilde{P} \cong P_0^{a_0} \times \cdots \times P_t^{a_t}$, where each $P_i$ is a Sylow $p$-subgroup of $\mathrm{GL}_{dp^i}(q)$ (and hence again $P_i \cong C_{p^b} \wr Q_i$ as above). If $H^* = \mathrm{SO}_{2n}^\pm(q)$, then we have embeddings $\mathrm{SO}_{2n-1}(q) \le H^* \le \mathrm{SO}_{2n+1}(q)$, and $\widetilde{P}$ is a Sylow $p$-subgroup of either $\mathrm{SO}_{2n-1}(q)$ or $\mathrm{SO}_{2n+1}(q)$. In this case, letting $m \in \{n, n-1\}$ be such that $\widetilde{P}$ is a Sylow $p$-subgroup of $\mathrm{SO}_{2m+1}(q)$, and now writing $m = ew + r$ with $w$ again written as in (2), $\widetilde{P}$ can again be written $\widetilde{P} \cong P_0^{a_0} \times \cdots \times P_t^{a_t}$ with each $P_i$ a Sylow $p$-subgroup of $\mathrm{GL}_{dp^i}(q)$.
In all cases, we remark that $p^t \le w \le p^{t+1}$ and that $t = 0$ corresponds to the case that a Sylow $p$-subgroup of $H$ is abelian. Further, $\widetilde{P}$ contains a subgroup of the form $\widehat{P} \cong C_{p^b}^w$.
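For concreteness, the parameters $d$, $e$, $b$, $w$, $r$, $t$ can be computed directly from the definitions above. The sketch below (our own, using SymPy; the function name is hypothetical) follows the definitions for the non-linear classical types, where $e = d_p(q^2)$.

```python
# Hypothetical helper: the parameters of Section 2.3.4 for the non-linear
# classical types, where d = d_p(q) is the multiplicative order of q mod p,
# e = d_p(q^2), p^b || q^{2e} - 1, n = e*w + r with 0 <= r < e, and
# t is the largest integer with p^t <= w.
from sympy import n_order, multiplicity

def classical_parameters(n: int, q: int, p: int) -> dict:
    d = n_order(q, p)                      # d_p(q)
    e = n_order(q * q, p)                  # d_p(q^2)
    b = multiplicity(p, q ** (2 * e) - 1)  # largest b with p^b | q^{2e} - 1
    w, r = divmod(n, e)
    t = 0
    while p ** (t + 1) <= w:
        t += 1
    return dict(d=d, e=e, b=b, w=w, r=r, t=t)

print(classical_parameters(n=6, q=3, p=5))  # e.g. for Sp_12(3) at p = 5
```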
Lemma 2.11. With the notation above, we have $b + 1 \le k_{\mathrm{Aut}(S)}(B_0(S))$.

Proof. We will show that there are at least $b$ characters in $\mathrm{Irr}(B_0(S)) \setminus \{1_S\}$ lying in distinct $\mathrm{Aut}(S)$-orbits.
First, let $G = \mathrm{SL}_n^\epsilon(q)$ and let $\widetilde{G} := H = \mathrm{GL}_n^\epsilon(q)$, and note $\widetilde{G}^* \cong \widetilde{G}$. We have $\mathrm{Aut}(S) = \widetilde{S} \rtimes D$, where $D$ is an appropriate group of graph and field automorphisms and $\widetilde{S} := \widetilde{G}/Z(\widetilde{G})$. Recall that a Sylow $p$-subgroup $\widetilde{P}$ of $\widetilde{G}$ contains a subgroup of the form $C_{p^b}^w$. Assume for the moment that $e > 1$, so that $p \nmid |Z(\widetilde{G})|$ and $p \nmid [\widetilde{G} : G]$. Hence, for $1 \le j \le b$, we may let $s_j \in \widetilde{G}^* \cong \widetilde{G}$ be an element of order $p^j$. The corresponding semisimple character $\chi_{s_j}$ of $\widetilde{G}$ is trivial on $Z(\widetilde{G})$ and lies in $B_0(\widetilde{G})$, using Lemma 2.5. Hence, each $\chi_{s_j}$ can be viewed as a character of $B_0(\widetilde{S})$. Further, note that since $p \nmid |Z(\widetilde{G})|$, the elements $s_i$ and $s_j^\alpha z$ cannot be $\widetilde{G}$-conjugate for any $i \ne j$, $\alpha \in D$, and $z \in Z(\widetilde{G})$. If instead $e = 1$, we have $n = w \ge 2$. For $1 \le j \le b$, let $\lambda_j \in C_{p^b} \le \mathbb{F}_{q^2}^\times$ with $|\lambda_j| = p^j$, and let $s_j$ be an element of $C_{p^b}^n \le \widetilde{P}$ of the form $\mathrm{diag}(\lambda_j, \lambda_j^{-1}, 1, \ldots, 1)$, where $1$ appears as an eigenvalue with multiplicity $n - 2$. Then again $\chi_{s_j} \in \mathrm{Irr}(B_0(\widetilde{G}))$ by Lemma 2.5, and $\chi_{s_j}$ is trivial on $Z(\widetilde{G})$ by the dual version of [DM20, Proposition 11.4.12 and Remark 11.4.14], since $s_j \in [\widetilde{G}, \widetilde{G}] = G$. Further, we again see that $s_j$ is not $\widetilde{G}$-conjugate to $s_i^\alpha z$ for any $i \ne j$, $\alpha \in D$, and $z \in Z(\widetilde{G})$, by considering the eigenvalues. In either case, we let $\chi_i$ for $1 \le i \le b$ be a constituent of the restriction of $\chi_{s_i}$ to $S$. Then $\chi_i$ cannot be $\mathrm{Aut}(S)$-conjugate to $\chi_j$ for $i \ne j$, using [NTT08, Corollary 2.5] along with [DM20, Proposition 11.4.12 and Remark 11.4.14]. Hence, we see at least $b$ distinct $\mathrm{Aut}(S)$-orbits represented in $\mathrm{Irr}(B_0(S)) \setminus \{1_S\}$.
Now let $G$ be one of the remaining groups as in the beginning of the section. Then $|Z(G)|$ is a power of 2. In each case, a Sylow $p$-subgroup of $G$ (or, equivalently, of $S$) and of $G^*$ contains a subgroup of the form $C_{p^b}$. Here, we may again, for each $1 \le j \le b$, let $s_j \in G^*$ be a semisimple element of order $p^j$. Then since $(|s_j|, |Z(G)|) = 1$ for each $j$, we have by [MT11, Exercise 20.16] that $C_{G^*}(s_j)$ is connected, since $p \ge 3$ is good for $\mathbf{G}$, and hence the corresponding semisimple character $\chi_{s_j}$ of $G$ lies in $B_0(G)$ and is trivial on $Z(G)$ by Lemma 2.5. That is, we may again view $\chi_{s_j}$ as a character in $\mathrm{Irr}(B_0(S)) \setminus \{1_S\}$. Since $s_i$ cannot be $\mathrm{Aut}(G^*)$-conjugate to $s_j$ for any $i \ne j$, we see $\chi_{s_i}$ and $\chi_{s_j}$ cannot be $\mathrm{Aut}(S)$-conjugate as before, and we again have $k_{\mathrm{Aut}(S)}(B_0(S)) \ge b + 1$.
Lemma 2.12. With the above notation, we have at least $w$ unipotent characters in $B_0(S)$ that are not $\mathrm{Aut}(S)$-conjugate, and hence $w \le k_{\mathrm{Aut}(S)}(B_0(S))$. If $t \ge 1$, this yields $p \le p^t \le k_{\mathrm{Aut}(S)}(B_0(S))$.

Proof. If $H = \mathrm{Sp}_{2n}(q)$ or $\mathrm{SO}_{2n+1}(q)$, we see from [Mal17, Section 5.2] that the number of unipotent characters in $B_0(H)$ is $k(2e, w) > 2w$, and these are again $\mathrm{Aut}(S)$-invariant by [Mal08, Theorem 2.5] unless $H = \mathrm{Sp}_4(q)$ with $q$ even. In the latter case, the unipotent characters are at worst permuted in pairs by $\mathrm{Aut}(S)$, and hence again there are at least $w$ non-$\mathrm{Aut}(S)$-conjugate such characters.
If $H = \mathrm{SO}_{2n}^\pm(q)$, then $B_0(H)$ contains either at least $k(2e, w)$ unipotent characters, or at least $(k(2e, w) + 3k(e, w/2))/2$ when $w$ is even, using [Mal17, Section 5.3 and Lemma 5.6]. One can see that these numbers are again at least $2w$, and by [Mal08, Theorem 2.5], the unipotent characters are again at worst permuted in pairs by $\mathrm{Aut}(S)$ unless $H = \mathrm{SO}_8^+(q)$.
In the latter case, [RSV21, Lemma 3.10] gives the claim.
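Here we read $k(a, w)$ as the number of $a$-tuples of partitions with sizes summing to $w$, the usual label set for the characters of these blocks; under that reading of the notation, the inequality $k(2e, w) > 2w$ for $w \ge 2$ can be sanity-checked numerically, as in the following Python sketch (ours).

```python
# Sanity check, under our reading of the notation: k(a, w) = number of
# a-tuples of partitions with total size w, and k(2e, w) > 2w for w >= 2.
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(n: int, max_part: int) -> int:
    if n == 0:
        return 1
    return sum(partitions(n - k, k) for k in range(1, min(n, max_part) + 1))

@lru_cache(maxsize=None)
def k_tuples(a: int, w: int) -> int:
    """Number of a-tuples of partitions whose sizes sum to w."""
    if a == 1:
        return partitions(w, w)
    return sum(k_tuples(a - 1, w - j) * partitions(j, j) for j in range(w + 1))

for e in (1, 2, 3):
    for w in range(2, 15):
        assert k_tuples(2 * e, w) > 2 * w
print("k(2e, w) > 2w holds in the tested range")
```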
2.4. The Proof of Theorem 2.1. The following will be useful in the proof of Theorem 2.1, as well as in the proof of Theorem A below. Here, for a group $G$, we write $k_p(G)$ to denote the number of conjugacy classes of $p$-elements of $G$.
Lemma 2.13. Let $G$ be a finite group. Then $k_p(G) \le k(B_0(G))$. In particular, the number of chief factors of $G$ of order divisible by $p$ is at most $k(B_0(G))$.
Proof. Let $\{x_1, \ldots, x_t\}$ be a set of representatives of the non-central conjugacy classes of $p$-elements of $G$. By [Nav98, Theorem 4.14], $B_0(C_G(x_i))^G$ is defined for every $i = 1, \ldots, t$. By [Nav98, Theorem 5.12] and Brauer's third main theorem ([Nav98, Theorem 6.7]), we have that $k(B_0(G)) \ge k_p(G)$, as wanted.
Finally, we can prove Theorem 2.1.
Proof of Theorem 2.1. Recall from Remark 2.2 that we may assume that $k := k(B_0(A)) \ge 7$. If $S$ is a sporadic group, the Tits group, a group of Lie type with exceptional Schur multiplier, or an alternating group $\mathfrak{A}_n$ with $n \le 7$, then the result is readily checked using GAP and its Character Table Library [GAP]. We therefore assume that $S$ is not one of these groups. Throughout, let $P \in \mathrm{Syl}_p(A)$ and $P_0 \in \mathrm{Syl}_p(S)$ be such that $P_0 = P \cap S$.
(I) If $S$ is an alternating group with $n \ge 8$, then $A \in \{\mathfrak{A}_n, \mathfrak{S}_n\}$. Note that $2k \ge k(B_0(\mathfrak{S}_n))$. Let $n = pw + r$ with $0 \le r < p$. Then we have $|P| \le (2, p) \cdot |P_0| = (n!)_p = ((pw)!)_p \le p^{pw}$ by (3). Further, by […], the statement holds in this case.

From now on, we assume $S$ is a simple group of Lie type. Let $S = G/Z(G)$, where $G = \mathbf{G}^F$ for a simple, simply connected reductive group $\mathbf{G}$ and a Steinberg endomorphism $F \colon \mathbf{G} \to \mathbf{G}$, and where $G$ is the full, nonexceptional Schur covering group of $S$. Write $k_0 := k_{\mathrm{Aut}(S)}(B_0(S))$, so that $k_0 \le k$.
(II) We will first show (b). First, assume $S$ is defined in characteristic $p$, so that $|P_0| = q^{|\Phi^+|}$, where $\Phi^+$ is the set of positive roots of $\mathbf{G}$ (see [MT11, Proposition 24.3]). We have $\mathrm{Irr}(B_0(S)) = \mathrm{Irr}(S) \setminus \{\mathrm{St}_S\}$, where $\mathrm{St}_S$ is the Steinberg character (see [CE04, Theorem 6.18]), and $k_0 \ge \frac{q^r}{|Z(G)| \cdot |\mathrm{Out}(S)|}$, as in [HS21, Section 2D]. Let $f$ be the integer (or half-integer, in the case of the Suzuki and Ree groups ${}^2G_2(q^2)$, ${}^2F_4(q^2)$, ${}^2B_2(q^2)$) such that $q = p^f$, and note that $\sqrt{q^r} = p^{rf/2} \le p^{rf}/f = q^r/f$, unless $q = 8$ and $r = 1$. In the latter case, $S = \mathrm{PSL}_2(8)$ and $|P_0| = 8$, so the statement holds. So, we assume $(q, r) \ne (8, 1)$.
Here, we include the full argument for the groups $\mathrm{PSL}_n^\epsilon(q)$ ($n \ge 2$), which correspond to $\mathbf{G}$ of type $A_{n-1}$. Table 2.4 gives relevant values for various groups of Lie type, and from this information, the arguments in the other cases are similar. So, let $S = \mathrm{PSL}_n^\epsilon(q)$. Then $|P_0| = q^{n(n-1)/2}$; $r = n - 1$; $|\mathrm{Out}(S)| \le 2f \cdot (n, q - \epsilon) \le 2fn$; and $|Z(G)| = (n, q - \epsilon) \le n$. By Corollary 2.9, we have $k_0 \ge n$. Together, this gives
$$k_0 \ge \frac{q^{n-1}}{2fn^2} \ge \frac{q^{(n-1)/2}}{2n^2} \ge \frac{q^{(n-1)/2}}{2k_0^2}.$$
Then $q^{(n-1)/2} \le 2k_0^3 \le k_0^4$, so $|P_0| = q^{n(n-1)/2} \le k_0^{4n} \le k_0^{4k_0} \le k_0^{2k_0^2}$. Finally, we may assume $S$ is a group of Lie type defined in characteristic different from $p$. If $S$ is of exceptional type, then Propositions 2.6 and 2.7 yield (b).
Hence, we may assume $S$ is of classical type, and we let $H$, $\widetilde{P}$, and $\widehat{P}$ be as in Section 2.3. Recall that we have $|P_0| \le |\widetilde{P}|$. If $p = 2$, we further have $|\widetilde{P}| \le |\mathrm{GL}_n(q^2)|_2 \le 2^{(b+1)n}(n!)_2 \le 2^{(b+2)n}$, where $2^{b+1}$ is the largest power of 2 dividing $q^2 - 1$ and the last inequality is from (3). In particular, using Lemma 2.10 and Corollary 2.9, in this case $|P_0| \le 2^{k_0^2 + k_0} < k_0^{2k_0^2}$. Now we assume $p$ is odd. If $\widetilde{P}$ is abelian, note that $\widetilde{P} = \widehat{P}$ in the notation before. Then $|P_0| \le p^{bw} < k_0^{2k_0^2}$ from Lemmas 2.3, 2.11, and 2.12, and the statement holds. We are left with the case that $S$ is classical and $t \ge 1$. Then by Lemmas 2.11 and 2.12, along with (3), we see that $|P_0| \le p^{bw} \cdot (w!)_p \le p^{bw} \cdot p^w = p^{(b+1)w} \le k_0^{k_0^2}$, which completes the proof of (b).
(III) We now complete the proof of (a). Let $G$ be defined over $\mathbb{F}_q$, where $q = q_0^f$ for some prime $q_0$ and integer $f$. (In the case of Suzuki and Ree groups, we instead let $q^2 := q_0^f$ with $f$ an odd integer.) Further, write $f = p^{f_1} \cdot m$ with $(m, p) = 1$.
From part (b), recall that $|P_0| \le k^{2k^2}$. Note that $|P/P_0| = |A/S|_p$, and this number is at most $p^{f_1 + 1}$, unless $S = D_n(q)$ or ${}^2D_n(q)$ with $p = 2$, in which case $|A/S|_2 \le 2^{f_1 + 3}$, or $S = \mathrm{PSL}_n^\epsilon(q)$ with $n \ge 3$ and $p \mid (n, q - \epsilon)$, in which case $|A/S|_p$ divides $2p^{b + f_1}$, where $p^b \,\|\, (q - \epsilon)$.
Recall that $\mathrm{Aut}(S) = \widetilde{S} \rtimes D$ with $D$ a group of field and graph automorphisms as before. A Sylow $p$-subgroup of $\widetilde{S}A \cap D$ contains a cyclic group of size $p^{f_2}$, where $f_2 \le f_1$ and $|\widetilde{S}A \cap D|_p \le p^{f_2 + 1}$. Then $A$ must also contain an element of order $p^{f_2}$, and hence elements of order $p^i$ for $1 \le i \le f_2$. Then $k_p(A) \ge f_2$, and hence $k \ge f_2$ by Lemma 2.13. Now, if $S$ is not one of the exceptions mentioned above, we have $|A|_p \le |\widetilde{S}A|_p = |\widetilde{S}|_p \cdot |\widetilde{S}A \cap D|_p \le |P_0| \cdot p^{f_2 + 1}$. If $S = D_n(q)$ or ${}^2D_n(q)$ with $p = 2$, we have $|\widetilde{S}A|_2 \le |P_0| \cdot 2^{f_2 + 3}$. If $S = \mathrm{PSL}_n^\epsilon(q)$ with $n \ge 3$ and $p \mid (n, q - \epsilon)$, we have $|\widetilde{S}A|_p \le |P_0| \cdot p^{b + f_2 + 1}$, where $p^b \,\|\, (q - \epsilon)$. Then, using (b) and Lemmas 2.10 and 2.11, we have in each case that $|P| = |A|_p \le k^{2k^2} \cdot p^{f_2 + k}$.
3. Proof of Theorem A
In this section we complete the proof of Theorem A. We begin with some additional general observations that will be useful in the proof.
Lemma 3.1. Let $G$ be a finite group and let $N \trianglelefteq G$. If $b \in \mathrm{Bl}(N)$ is covered by $B \in \mathrm{Bl}(G)$, then $k(b) \le |G : N| \, k(B)$.
Lemma 3.2. Let $G$ be a finite group and suppose that $N = S_1 \times \cdots \times S_n$ is a normal subgroup, where $S_i$ is simple nonabelian and $p$ divides $|S_i|$ for all $i$. Then $n \le k(B_0(G))$.
Proof. Let $1 \ne x_i \in S_i$ be a $p$-element for every $i$. Note that $G$ acts on $\{S_1, \ldots, S_n\}$ by conjugation. Therefore, the elements $(x_1, 1, \ldots, 1), (x_1, x_2, 1, \ldots, 1), \ldots, (x_1, \ldots, x_n)$ are representatives of $n$ different conjugacy classes of $p$-elements of $G$. By Lemma 2.13, $n \le k(B_0(G))$.

Lemma 3.3. Suppose that $S_1, \ldots, S_n$ are nonabelian simple groups of order divisible by a prime number $p$, and let $S_1 \times \cdots \times S_n \le G \le \mathrm{Aut}(S_1) \times \cdots \times \mathrm{Aut}(S_n)$. Let $k = k(B_0(G))$, where $B_0(G)$ is the principal $p$-block of $G$. Then $|G|_p \le k^{4k^3}$.

Proof. Write $A_i = \mathrm{Aut}(S_i)$ and $A = A_1 \times \cdots \times A_n$. Let $\pi_i$ be the restriction to $G$ of the projection from $A$ onto $A_i$ for every $i$. Set $K_i = \mathrm{Ker}\,\pi_i$. Notice that $G/K_i$ is isomorphic to an almost simple group $G_i$ with socle $S_i$. Furthermore, the intersection of the $K_i$'s is trivial, so $G$ embeds into the direct product of the groups $G/K_i$. Furthermore, $B_0(G/K_i) \subseteq B_0(G)$ for every $i$.
By Theorem 2.1, we have that $|G_i|_p \le k(B_0(G_i))^{4k(B_0(G_i))^2} \le k^{4k^2}$ for every $i$, where we use that $k(B_0(G_i)) = k(B_0(G/K_i)) \le k$ since $B_0(G/K_i) \subseteq B_0(G)$. Since $S_i$ is normal in $G$ for all $i$, by Lemma 3.2 we have that $n \le k$, and hence $|G|_p \le \prod_{i=1}^{n} |G_i|_p \le k^{4k^3}$, as desired.
We define the function $f(k) = k^k \cdot (k!\,k)^{4(k!\,k)^3}$.
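To give a sense of how fast this bound grows, the following quick Python computation (ours) prints $\log_{10} f(k)$ for small $k$; already $f(2) \approx 10^{155}$.

```python
# Rough size of the bound f(k) = k^k * (k! * k)^(4 * (k! * k)^3):
# we print log10 f(k) instead of f(k) itself.
from math import factorial, log10

def log10_f(k: int) -> float:
    m = factorial(k) * k
    return k * log10(k) + 4 * m ** 3 * log10(m)

for k in range(2, 7):
    print(k, f"log10 f(k) ~ {log10_f(k):.3e}")
```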
Theorem 3.4. Let $G$ be a finite group and let $R$ be the $p$-solvable radical of $G$. Then $|G : R|_p \le f(k)$, where $k = k(B_0(G))$.
Proof. Without loss of generality, we may assume that $R = 1$. Let $F = F^*(G)$ be the generalized Fitting subgroup, which in this case is a direct product of non-abelian simple groups of order divisible by $p$. Write $F = S_1 \times \cdots \times S_n$. By Lemma 3.2, we obtain that $n \le k$. Since $C_G(F) \le Z(F) = 1$, it follows that $G$ embeds into $\Gamma = \mathrm{Aut}(F)$. Note that $A = \mathrm{Aut}(S_1) \times \cdots \times \mathrm{Aut}(S_n)$ is a normal subgroup of $\Gamma$ and $\Gamma/A$ is isomorphic to a subgroup of $\mathfrak{S}_n$. In particular, $|\Gamma/A| \le n! \le k!$. Put $N = G \cap A$ and note that $|G : N| \le k!$. By the well-known Legendre inequality, we have that $(k!)_p \le p^k$, so $(k!)_p \le k^k$. Write $k_1 = k(B_0(N))$. It follows from Lemma 3.3 that $|G|_p = |G : N|_p \, |N|_p \le k^k \cdot k_1^{4k_1^3}$. Now, Lemma 3.1 implies that $k_1 \le |G : N| \, k \le k!\,k$, so $|G|_p \le k^k \cdot (k!\,k)^{4(k!\,k)^3} = f(k)$, as wanted.
Recall that the socle $\mathrm{Soc}(G)$ of a finite group $G$ is the product of the minimal normal subgroups of $G$. We can write $\mathrm{Soc}(G) = A(G) \times T(G)$, where $A(G)$ is the product of the abelian minimal normal subgroups of $G$ and $T(G)$ is the product of the non-abelian minimal normal subgroups of $G$. Note that $T(G)$ is a direct product of non-abelian simple groups.
Finally, we set $g(k) = 2^{2^k} f(k)^k$. The following completes the proof of Theorem A.
Notice that the $p$-solvable radical of $G/C_i$ is trivial, so by Theorem 3.4 applied to $G/C_i$, we have that $|G/C_i|_p \le f(k(B_0(G/C_i)))$. It follows that […]
Remark 3.6. Arguing in a similar way, we can see that if a finite group $G$ does not have simple groups of Lie type in characteristic different from $p$ as composition factors, then $|G : O_{p'}(G)|$ can be bounded above in terms of $k(B_0(G))$. We sketch the proof. First, we know that if $p$ is a prime and $S$ is a simple group of Lie type in characteristic $p$ or an alternating group, then $|S|$ is bounded from above in terms of $|S|_p$. Therefore, the same happens for almost simple groups with socle of Lie type in characteristic $p$ or alternating. Now, let $R$ be the $p$-solvable radical of a finite group $G$. We can argue as in the proof of Theorem 3.4 to see that $|G : R|$ is bounded from above in terms of $k(B_0(G))$. Using Lemma 3.1, we see that $k(B_0(R))$ is bounded from above in terms of $k(B_0(G))$. Since $R$ is $p$-solvable, $\mathrm{Irr}(B_0(R)) = \mathrm{Irr}(R/O_{p'}(R))$. Using that $O_{p'}(R) = O_{p'}(G)$ and Landau's theorem, we deduce that $|R : O_{p'}(G)|$ is bounded from above in terms of $k(B_0(R))$. The result follows.
Remark 3.7. We have already mentioned that the case $p = 2$ of Brauer's Problem 21 was already known by [KR96] and [Ruh]. However, this relies on Zelmanov's solution of the restricted Burnside problem. As discussed in [VZ99], the bounds that are attainable in this problem are of a magnitude that is incomprehensibly large. The bound that we have obtained for principal blocks, although surely far from best possible, is much better than any bound that relies on the restricted Burnside problem.
Recently, there has been a large interest in studying relations among (principal) blocks for different primes. For instance, what can we say about the set of irreducible characters that belong to some principal block? The groups with the property that all irreducible characters belong to some principal block were determined in [BZ11]. As a consequence of Brauer's Problem 21 for principal blocks, we see that for any integer $k$ there are finitely many groups with at most $k$ irreducible characters in some principal block. Note that this is a strong form of Landau's theorem. In the following corollary, given a prime $p$, we write $B_p(G)$ to denote the principal $p$-block of $G$.
Corollary 3.8. The order of a finite group $G$ is bounded from above in terms of $|\bigcup_p \mathrm{Irr}(B_p(G))|$.

Proof. By Theorem A, we know that for any prime $p$, $|G|_p$ is bounded from above in terms of $k(B_p(G))$. It follows that $|G|_p$ is bounded from above in terms of $|\bigcup_p \mathrm{Irr}(B_p(G))|$. In particular, if $p$ is a prime divisor of $|G|$, then $p$ is bounded from above in terms of $|\bigcup_p \mathrm{Irr}(B_p(G))|$. The result follows.
4. Blocks with three irreducible characters
In this section, we prove Theorem C. As usual, if $B$ is a $p$-block of a finite group $G$, $l(B)$ is the number of irreducible $p$-Brauer characters in $B$. By [Kul84], we know that if $k(B) = 3$ and $l(B) = 1$, then the defect group is cyclic of order 3. So we are left with the case $l(B) = 2$.
Lemma 4.1. Let $N \trianglelefteq G$ and let $B$ be a $p$-block of $G$ with defect group $D$. Suppose that $B$ covers a $G$-invariant block $b$ of $N$ such that $D$ is a defect group of $b$. If $b$ is nilpotent, then $k(B) = k(B_1)$, where $B_1 \in \mathrm{Bl}(N_G(D))$ is the Brauer first main correspondent of $B$.
Proof. Since $D$ is a defect group of $b$, the Harris–Knörr correspondent of $B$ (see [Nav98, Theorem 9.28]) with respect to $b$ is $B_1$, the Brauer first main correspondent of $B$. By the work in [KP90] (see the explanation at the beginning of [Rob02, Section 3], for instance), $B$ and $B_1$ are Morita equivalent, and hence they have the same number of irreducible characters.
We write $\mathrm{cd}(B)$ to denote the set of degrees of the irreducible (ordinary) characters in $B$, and $k_0(B)$ to denote the number of irreducible (ordinary) characters of height zero in $B$. The following is Theorem C.
Theorem 4.2. Let $G$ be a finite group and let $B$ be a $p$-block of $G$. Suppose that Condition B holds for $(S, p)$ for all simple non-abelian composition factors $S$ of $G$. Let $D$ be a defect group of $B$. If $k(B) = 3$, then $|D| = 3$.
Proof. Step 0. We may assume that $p$ is odd. Suppose that $p = 2$. By [Lan81, Corollary 1.3(i)], if $|D| > 2$ we have that 4 divides $k_0(B) \le k(B) = 3$, which is absurd. Hence we have that $|D| = 2$. But in this case we know that $k(B) = 2$, by [Bra82]. This is a contradiction, so $p$ is odd.
Step 2. If $N$ is a normal subgroup of $G$ and $b$ is a $p$-block of $N$ covered by $B$, we may assume that $b$ is $G$-invariant.
Let $G_b$ be the stabilizer of $b$ in $G$. By the Fong–Reynolds correspondence ([Nav98, Theorem 9.14]), if $c$ is the block of $G_b$ covering $b$ such that $c^G = B$, we have that $k(c) = k(B) = 3$, and if $E$ is a defect group of $c$, then $E$ is a defect group of $B$. If $G_b < G$, by induction we are done.
Step 3. We may assume that if $N$ is a normal subgroup of $G$ and $b$ is a $p$-block of defect zero of $N$ covered by $B$, then $N$ is central and cyclic. In particular, we may assume that $Z(G) = O_{p'}(G)$ is cyclic.
Write $b = \{\theta\}$. Since $\theta$ is of defect zero, we have that $(G, N, \theta)$ is an ordinary-modular character triple, and there exists an isomorphic ordinary-modular character triple $(G^*, N^*, \theta^*)$ with $N^*$ a $p'$-group, central in $G^*$ and cyclic (see [Nav98, Problems 8.10 and 8.13]). Notice also that since $G^*/N^* \cong G/N$, the set of non-abelian composition factors of $G^*$ is contained in the set of non-abelian composition factors of $G$, so Condition B holds for all non-abelian composition factors of $G^*$. If ${}^* \colon \mathrm{Irr}(G \mid \theta) \to \mathrm{Irr}(G^* \mid \theta^*)$ is the bijection given by the isomorphism of character triples and $B = \{\chi_1, \chi_2, \chi_3\}$, we have that $B^* = \{\chi_1^*, \chi_2^*, \chi_3^*\}$ is a $p$-block of $G^*$. Now, if $D^*$ is a defect group of $B^*$ and $|D^*| = 3$, we claim that $|D| = 3$ (notice that in this case $p = 3$, so we just need to prove that $|D| = p$). Indeed, let $\chi \in \mathrm{Irr}(B)$ be of height zero. Since isomorphism of character triples preserves ratios of character degrees and all the characters in $B^*$ are of height zero (because $D^*$ has prime order), we have
$$\chi^*(1)_p = |G^* : D^*|_p = \frac{|G^* : N^*|_p}{p} = \frac{|G : N|_p}{p}.$$
Since $b$ is $G$-invariant, we have that $D \cap N$ is a defect group of $b$ by [Nav98, Theorem 9.26], so $D \cap N = 1$ because $b$ has defect zero. Now, we have $|D| = 3$, as claimed. Hence we may assume that $N$ is central and cyclic. In particular, by Step 1 we have that $Z(G) = O_{p'}(G)$ is cyclic.
Step 4. There is a unique G-conjugacy class of non-trivial elements in D.
Let $b$ be the $p$-block of $N$ covered by $B$. Since $b$ is $G$-invariant, we have that $D \cap N$ is a defect group of $b$ by [Nav98, Theorem 9.26]. By a theorem of Brauer ([Nav98, Theorem 5.12]), we have that
$$k(B) = \sum_{(x, b_x)} l(b_x),$$
where $(x, b_x)$ runs over a set of representatives of the $G$-conjugacy classes of $B$-subsections and $\{x_1, x_2, \ldots, x_k\}$ are the representatives of the non-central $G$-conjugacy classes of $p$-elements of $G$. Since $|Z(G)|_p = 1$ by Step 1, we have
$$3 = k(B) = l(B) + \sum_{i=1}^{k} \sum_{\substack{b' \in \mathrm{Bl}(C_G(x_i)) \\ (b')^G = B}} l(b') = 2 + \sum_{i=1}^{k} \sum_{\substack{b' \in \mathrm{Bl}(C_G(x_i)) \\ (b')^G = B}} l(b').$$
By [Nav98, Theorem 4.14], if $x_i \in D$, then there is $b' \in \mathrm{Bl}(C_G(x_i))$ such that $(b')^G = B$. Since $l(B) = 2$ and $k(B) = 3$, we have that there is just one $G$-conjugacy class of nontrivial elements in $D$.
Step 5. If $N$ is a non-central normal subgroup of $G$, then $D \le N$. In particular, if $b$ is the only block of $N$ covered by $B$, then $D$ is a defect group of $b$.
Let $b$ be the $p$-block of $N$ covered by $B$. Again, since $b$ is $G$-invariant, we have that $D \cap N$ is a defect group of $b$ (by [Nav98, Theorem 9.26]). Since $D \cap N > 1$ (otherwise $b$ is of defect zero and $N$ is central by Step 3), there is an element $1 \ne x \in D \cap N$. If $1 \ne y \in D$, then $y$ is $G$-conjugate to $x$ by Step 4, and thus $y \in N$, as wanted.
Step 6. If $N$ is a normal subgroup of $G$, $b$ is the block of $N$ covered by $B$, and all the irreducible characters in $b$ have the same degree, then $N$ is central.
Suppose that $N$ is not central. By Step 5, we have that $D$ is a defect group of $b$. By [OT83, Proposition 1 and Theorem 3], we have that $D$ is abelian and that $b$ has inertial index 1. By [BP80, 1.ex.3], we know that $b$ is nilpotent. Hence, by Lemma 4.1, we have that $k(B_1) = k(B) = 3$, where $B_1$ is the Brauer first main correspondent of $B$ in $N_G(D)$. If $N_G(D) < G$, by induction we are done. Hence we may assume that $D \trianglelefteq G$, but this is a contradiction with Step 1. Therefore $N$ is central.
Step 7. If $N$ is a normal subgroup of $G$ and $b$ is the unique block of $N$ covered by $B$, then $\mathrm{Irr}(b)$ has at most three $G/C_G(N)$-orbits.
Suppose that there are more than three $G/C_G(N)$-orbits in $\mathrm{Irr}(b)$, and let $\theta_i \in \mathrm{Irr}(b)$ be representatives of these orbits (so there are at least four of them). By [Nav98, Theorem 9.4], we can take $\chi_i \in \mathrm{Irr}(B)$ lying over $\theta_i$. By Clifford's theorem, the $\chi_i$ are all different. But this is a contradiction, since $k(B) = 3$.
Step 8. We may assume that $D$ is not cyclic. Otherwise, by Dade's theory of blocks with cyclic defect groups [Dad66], we have that $k(B) = k_0(B) = k_0(B_1) = k(B_1)$, where $B_1 \in \mathrm{Bl}(N_G(D) \mid D)$ is the Brauer correspondent of $B$, and hence we may assume that $D$ is normal in $G$. In this case we are done by Step 1.
Step 9. Write $Z = Z(G)$ and $\overline{G} = G/Z$. Then $\overline{G}$ has a unique minimal normal subgroup $\overline{K} = K/Z$, which is simple.
Let $K/Z$ be a minimal normal subgroup of $G/Z$. Since $Z = O_{p'}(G)$, we have that $K/Z$ is not a $p'$-group. Since $O_p(G) = 1$, $K/Z$ is not a $p$-group. Hence $K/Z$ is semisimple. Notice that $K/Z$ is the unique minimal normal subgroup of $G/Z$. Indeed, if $K_1/Z$ and $K_2/Z$ are distinct minimal normal subgroups of $G/Z$, then by Step 5, $D \subseteq K_1 \cap K_2 = Z = O_{p'}(G)$, and hence $D = 1$, a contradiction.
Write $\overline{K} = K/Z$. Then $\overline{K} = \overline{S_1} \times \cdots \times \overline{S_t}$, where each $\overline{S_i}$ is non-abelian simple and $S_i'$ is a component of $G$, and hence $[S_i', S_j'] = 1$ whenever $i \ne j$ ([Isa08, Theorem 9.4]). Furthermore, $S_i = S_i' Z$, so $[S_i, S_j] = 1$ whenever $i \ne j$. We want to show that $t = 1$. By Step 5, we have that $D$ is a defect group of $b$, the only block of $K$ covered by $B$. If $D \cap S_i = 1$ for all $i = 1, \ldots, t$, then $D = 1$, a contradiction. Hence there is $i$ such that $D \cap S_i > 1$. Without loss of generality, we may assume that $D \cap S_1 > 1$. Let $b_1$ be the only block of $S_1$ covered by $b$ and notice that, since $b_1$ is $K$-invariant, $D \cap S_1$ is a defect group of $b_1$. We claim that $D \not\subseteq S_1$. Suppose otherwise, and for each $i \ne 1$ let $g_i \in G$ with $S_1^{g_i} = S_i$ (such elements exist because $G$ permutes the $\overline{S_j}$ transitively). Notice that $D^{g_i}$ is a defect group of $b^{g_i} = b$, and hence $D^{g_i} = D^k$ for some $k \in K$. Now, $D^{g_i} \le S_1 \cap S_i \le Z$, which is a $p'$-group. This is a contradiction, so $D \not\subseteq S_1$.
Let $1 \ne x \in D \cap S_1$. If $D \cap S_i = 1$ for all $i \ne 1$, then $D = D \cap S_1$, which is a contradiction by the previous paragraph. Hence there is $i \ne 1$ such that $D \cap S_i \ne 1$. Let $1 \ne x_i \in D \cap S_i$. Now $x x_i, x \in D$, and by Step 4 we have that $x$ and $x x_i$ are $G$-conjugate, which is not possible. Hence $t = 1$, as wanted.
Step 10. Final step. Now $K'$ is a quasisimple group with center a cyclic $p'$-group. If $b$ is the unique block of $K'$ covered by $B$, we have that $D$ is a defect group of $b$ by Step 5, and hence $D$ is noncyclic elementary abelian by Step 8. We claim that $b$ is faithful. Let $X = \ker(b)$. By [Nav98, Theorem 6.10], we have that $X \le Z \cap K'$. Now, let $\psi \in \mathrm{Irr}(b)$; then $\psi$ lies over $1_X$, and hence there is $\chi \in \mathrm{Irr}(B)$ lying over $1_X$. Now, by [Nav98, Theorem 9.9(c)], we have that $k(\overline{B}) = k(B) = 3$, where $\overline{B}$ is the block of $G/X$ containing $\chi$. If $X > 1$, by induction we obtain that $|D| = |DX/X| = 3$, and we are done. Hence we may assume that $X = 1$. By Condition B, there are at least four $\mathrm{Aut}(K')$-conjugacy classes of irreducible characters in $b$, which is a contradiction by Step 7.
5. On Condition B
We end the paper with a discussion of Condition B. In [MNST, Theorem B], a statement similar to Condition B, but requiring only 3 distinct orbits, is proven. Unfortunately, for groups of Lie type in non-defining characteristic, the strategy used there is not quite sufficient to obtain 4 orbits. In fact, we will see that this is not always attainable. However, here we address several situations in which we do obtain Condition B.
Proof. This can be seen using the GAP Character Table Library. We note that the groups with exceptional Schur multipliers are listed in [GLS98].

Proposition 5.2. Let $p \ge 3$ be prime. Let $K$ be a quasisimple group with $K/Z(K) \cong \mathfrak{A}_n$, an alternating group with $n > 11$. Let $B$ be a $p$-block of $K$ with noncyclic, positive defect. Then $k_{\mathrm{Aut}(K)}(B) \ge 4$. In particular, Condition B is true if $K$ is a covering group for $S \cong \mathfrak{A}_n$ with $n > 11$.
Proof. The proof here is essentially the same as that of [MNST, Proposition 3.4]. Let $\widehat{\mathfrak{A}}_n$ and $\widehat{\mathfrak{S}}_n$ denote the double covers, respectively, of $\mathfrak{A}_n$ and $\mathfrak{S}_n$. Recall that $\mathrm{Aut}(S) = \mathfrak{S}_n$ and $\mathrm{Aut}(\widehat{\mathfrak{A}}_n) = \widehat{\mathfrak{S}}_n$. Following [Ols93], a $p$-block of $\mathfrak{S}_n$ has $k(p, w)$ ordinary irreducible characters, and a $p$-block of $\widehat{\mathfrak{S}}_n$ lying over the nontrivial character of $Z(\widehat{\mathfrak{S}}_n)$ (a "spin block") has $k^\pm(p, w)$ ordinary irreducible characters, where $w$ is the so-called "weight" of the block. We remark that our assumption that a defect group is noncyclic forces $w \ge 2$.
From [Ols93, (3.11) and Section 13], we see that these numbers are larger than 6 (and hence there are strictly more than 3 $\mathrm{Aut}(K)$-orbits represented in a given block $B$ of $K$) if $p \ge 3$ and $w \ge 2$, except for the case $(p, w) = (3, 2)$ with $B$ a spin block, in which case $k^\pm(3, 2) = 6$. In this case, [Ols93, Proposition 13.19] forces at least one of the characters in the block of $\widehat{\mathfrak{S}}_n$ to restrict to the sum of two characters of $\widehat{\mathfrak{A}}_n$, and hence our block again contains characters from strictly more than 3 $\mathrm{Aut}(K)$-orbits.
Proposition 5.3. Condition B holds for $K$ a quasisimple group with $S := K/Z(K)$ of Lie type defined in characteristic $p$ with a non-exceptional Schur multiplier.
Proof. We may assume that $K$ is not an exceptional cover of $S := K/Z(K)$, as the latter have been discussed in Proposition 5.1. Now, every $p$-block of $K$ has either maximal defect or defect zero, by [Hum71, Theorem]. Hence the defect groups of $B$ are Sylow $p$-subgroups of $K$. Now, the condition that a Sylow $p$-subgroup is abelian and non-cyclic forces $S = \mathrm{PSL}_2(p^a)$ for some integer $a \ge 2$, so we may assume that $K = \mathrm{SL}_2(p^a)$ is the Schur covering group of $S$. In this situation, the blocks of maximal defect are in bijection with the characters of $Z(K)$. Namely, we have $B_0(K)$, which contains all members of $\mathrm{Irr}(K \mid 1_{Z(K)}) \setminus \{\mathrm{St}\}$, and a second block of maximal defect containing all characters of $K$ that are nontrivial on $Z(K)$. (See [Hum71, Section 5].) By inspection (see [GM20, Tab. 2.6]), there are four degrees for characters in $B_0(K)$, and three in the second block of maximal defect.
Hence, it suffices to show that there are two semisimple characters $\chi_s$ of the same degree $q \pm 1$ that are not $\mathrm{Aut}(K)$-conjugate and are nontrivial on $Z(K)$. (The latter is equivalent to $s \notin [K^*, K^*]$, using [DM20, Proposition 11.4.12 and Remark 11.4.14].)
Since $a \ge 2$, $p^a - 1$ must have at least two distinct divisors, so we consider $x_1, x_2 \in C_{p^a - 1}$ with these orders. Let $s_i := \mathrm{diag}(x_i, 1) \in \widetilde{K}^* := \mathrm{GL}_2(p^a)$ for $i = 1, 2$. Note that $s_i \notin [\widetilde{K}, \widetilde{K}] = \mathrm{SL}_2(p^a)$. Further, $s_1^\alpha$ cannot be conjugate to $s_2 z$ for any $z \in Z(\widetilde{K})$ and $\alpha \in \mathrm{Aut}(K)$. Hence the two semisimple characters $\chi_{s_i}$ of $\widetilde{K}$ for $i = 1, 2$ cannot be $\mathrm{Aut}(K)$-conjugate and restrict to distinct characters of $K$. Hence constituents of these restrictions are not $\mathrm{Aut}(K)$-conjugate.
This leaves us to consider groups $S$ of Lie type in non-defining characteristic. Recall that by Proposition 5.1, we may assume that $S$ does not have an exceptional Schur multiplier. Hence the Schur covering group of $S$ is of the form $G = \mathbf{G}^F$, where $\mathbf{G}$ is a simple, simply connected algebraic group and $F \colon \mathbf{G} \to \mathbf{G}$ is a Frobenius endomorphism endowing $\mathbf{G}$ with an $\mathbb{F}_q$-rational structure, where $p \nmid q$.
Given a semisimple $s \in G^*$ of $p'$-order, a fundamental result of Broué–Michel shows that the set $\mathcal{E}_p(G, s)$ is a union of $p$-blocks of $G$, where $\mathcal{E}_p(G, s)$ is obtained as the union of the series $\mathcal{E}(G, st)$ as $t$ runs over the elements of $p$-power order in $C_{G^*}(s)$. (See [CE04, Theorem 9.12].) We first dispense of the Suzuki and Ree groups.
Proposition 5.4. Condition B holds for $K$ a quasisimple group with $K/Z(K)$ one of ${}^2B_2(q^2)$, ${}^2G_2(q^2)$, ${}^2F_4(q^2)$, or ${}^3D_4(q)$.

Proof. Note that the Schur multiplier of $S$ is trivial, or $S$ was considered already in Proposition 5.1. Hence, we let $K = S$. Further, $\mathrm{Aut}(S)/S$ is cyclic, generated by field automorphisms. For $p \ge 3$ a prime not dividing $q^2$, the Sylow $p$-subgroups of $S = {}^2B_2(q^2)$ and $S = {}^2G_2(q^2)$ are cyclic. So, first let $K = {}^2F_4(q^2)$ with $q^2 = 2^{2n+1}$. Note that $K^* \cong K$ is self-dual. In this case, the semisimple classes, centralizers, and maximal tori are given in [Shi75], and the blocks are studied in [Mal90].
First, suppose that $p \mid (q^2 - 1)$. Then $K$ has a unique unipotent block (namely, $B_0(K)$) with noncyclic defect group, and it contains more than 3 characters of distinct degrees. Similarly, there is a unique noncyclic block of positive defect in each series $\mathcal{E}(K, s)$ for $s \in \{t_1, t_2, t_3\}$, with the $t_i$ as in [Shi75], using [Mal90, Bem. 1]. The remaining blocks of positive defect are cyclic. If $s$ is one of the classes of the form $t_1$ or $t_2$, then this noncyclic block contains two characters from $\mathcal{E}(K, s)$ with distinct degrees. The centralizers $C_K(s)$ contain the maximal torus $C_{q^2-1}^2$, from which we may obtain $t, t' \in C_K(s)_p$ that are not $\mathrm{Aut}(K)$-conjugate (taking, for example, $p$-elements from the classes $t_1$ and $t_2$). This yields four characters in the block that are not $\mathrm{Aut}(K)$-conjugate, as desired. For $s$ of the form $t_3$, we have that $C_K(s)$ is the full maximal torus $C_{q^2-1}^2$, and for any $t \in C_K(s)_p$, we have $C_K(st) = C_K(s)$. Hence we see that every irreducible character in this block has the same degree.
When $p \nmid (q^2 - 1)$, each $\mathcal{E}_p(K, s)$ contains at most one block of positive defect (see [Mal90, Bem. 1]). First, assume $p \mid (q^2 + 1)$. Here, the noncyclic blocks correspond to $s \in \{t_4, t_5, t_{14}\}$. The set $\mathcal{E}(K, s)$ contains 3, 2, and 1 distinct character degrees, respectively, in these cases, and each $C_K(s)$ contains the maximal torus $C_{q^2+1}^2$. As before, there is only one character degree in the block in the latter case. In the other cases, we argue analogously to the previous paragraph to obtain four characters in the block that are not $\mathrm{Aut}(K)$-conjugate.
If instead $p \mid (q^4 + 1)$, then there are three distinct character degrees in $\mathcal{E}(K, s)$ with $s \in \{t_7, t_9\}$. Then, considering any character in $\mathcal{E}(K, st)$ with $t \in C_K(s)_p$, we obtain a fourth character in the block that is not $\mathrm{Aut}(K)$-conjugate to these three. If instead $s \in \{t_{12}, t_{13}\}$, we obtain as before that every character in the block has the same degree. The remaining blocks in this case have cyclic defect groups. Now, let $K = {}^3D_4(q)$. In this case, the blocks have been studied in [DM87]. Using the results there, we may argue analogously to the situation above.
In the remaining cases, we would hope to appeal to the strategy employed in [MNST, Section 3]. Namely, with the above results, the results of loc. cit. largely reduce the problem of proving Condition B to the following condition on quasi-isolated blocks (Condition 5.5): […] Indeed, from our above results, we may assume that $S$ is a group of Lie type defined in characteristic distinct from $p$ and that the Schur covering group for $S$ is $G = \mathbf{G}^F$, where $\mathbf{G}$ is a simple, simply connected group with $F$ a Frobenius endomorphism. Note then that $K$ is a quotient of $G$ by some central subgroup and that, from our assumption that $p \nmid |Z|$ in Condition B, [Nav98, Theorem 9.9(c)] tells us that it suffices to prove Condition B when $K = G/Z(G)_p$, where $Z(G)_p$ is the Sylow $p$-subgroup of $Z(G)$. Our assumption that $p \ge 3$ then means that we may assume that $K = G$, unless $S = \mathrm{PSL}_n^\epsilon(q)$ with $p \mid (q - \epsilon)$ or $S = E_6^\epsilon(q)$ with $p = 3 \mid (q - \epsilon)$. Then indeed, when $K = G$, Condition 5.5 implies Condition B by [MNST, Proposition 3.9 and Lemma 3.12] (see also Remarks 3.10 and 3.11 of loc. cit.) and the fact that the Bonnafé–Rouquier correspondence preserves isomorphism types of defect groups when the defect is abelian, by [KM13, Theorem 7.16]. Of course, in the cases of $S = \mathrm{PSL}_n^\epsilon(q)$ with $p \mid (q - \epsilon)$ or $S = E_6^\epsilon(q)$ with $p = 3 \mid (q - \epsilon)$, some additional work is needed, as was the case in [MNST, Prop. 3.16(b) and Theorem 3.21].
However, this method is not quite sufficient for completing the proof of Condition B. Unfortunately, although we see by following the proofs in [MNST, Sections 3.4–3.6] that Condition 5.5 holds in many situations, it turns out that there are indeed cyclic quasi-isolated blocks with $k_{\mathrm{Aut}(G)}(B) = 2$. (This is already pointed out in [MNST] after the statement of Theorem 3.1.) We list some additional situations in the following:

Example 5.6. Let $G = \mathrm{SL}_n^\epsilon(q)$ and let $\widetilde{G} = \mathrm{GL}_n^\epsilon(q)$. Let $p$ be an odd prime not dividing $q$ and let $B$ be a $p$-block of $G$ with positive defect. The following situations lead to exceptions to Condition 5.5:

(I) If $B$ is noncyclic:
• $n = 3$, $p = 5$ with $5 \,\|\, (q - \epsilon)$, and $B$ lies under a block $\widetilde{B}$ of $\widetilde{G} = \mathrm{GL}_3^\epsilon(q)$ indexed by a semisimple $5'$-element $\widetilde{s} \in \widetilde{G}$ with $C_{\widetilde{G}}(\widetilde{s}) \cong C_{q-\epsilon}^3$. In this case, $k_{\mathrm{Aut}(G)}(B) \ge 3$ (and equality can occur) and $|\mathrm{cd}(\widetilde{B})| = 1$. Note: this includes the quasi-isolated block of $\mathrm{SL}_3(q)$ under the block indexed by the semisimple element $\mathrm{diag}(1, \zeta_3, \zeta_3^{-1})$, where $|\zeta_3| = 3$. (See also Remark 5.7.)
• $n = 3$, $p = 3$ with $3 \,\|\, (q - \epsilon)$, and $B$ lies under a block $\widetilde{B}$ of $\widetilde{G} = \mathrm{GL}_3^\epsilon(q)$ indexed by a semisimple $3'$-element $\widetilde{s} \in \widetilde{G}$ with $C_{\widetilde{G}}(\widetilde{s}) \cong C_{q-\epsilon}^3$. In this case, $k_{\mathrm{Aut}(G)}(B) \ge 3$ (and equality can occur), $|Z(G)| = 3$, and the block of $S$ contained in $B$ is cyclic.
• $p = 3$ with $3 \,\|\, (q + \epsilon)$, and $B$ lies under a block $\widetilde{B}$ of $\widetilde{G} = \mathrm{GL}_n^\epsilon(q)$ with defect group $C_3^2 \le C_{q+\epsilon}^2$.

(II) If $B$ is cyclic:
• $n = 2$, $p \,\|\, (q - \epsilon)$, and $B$ lies under a block $\widetilde{B}$ of $\widetilde{G} = \mathrm{GL}_2^\epsilon(q)$ indexed by a semisimple $p'$-element $\widetilde{s} \in \widetilde{G}$ with $C_{\widetilde{G}}(\widetilde{s}) \cong C_{q-\epsilon}^2$. Here $k_{\mathrm{Aut}(G)}(B) \ge 2$ (and equality can occur). Note: this includes the quasi-isolated block of $\mathrm{SL}_2(q)$ under the block indexed by the semisimple element $\mathrm{diag}(-1, 1)$.
• $B$ lies under a block $\widetilde{B}$ of $\widetilde{G}$ indexed by a semisimple $p'$-element $\widetilde{s}$ with $C_{\widetilde{G}}(\widetilde{s}) \cong C_{q^\delta - \eta}$, where $\delta = n$ and $p \,\|\, (q^\delta - \eta)$. Here $k_{\mathrm{Aut}(G)}(B) \ge 2$ (and equality can occur).
We also remark that it is still not known whether there are 3-blocks with 3 irreducible characters and defect group $C_3 \times C_3$. This problem appeared first in [Kiy84] and has come up often; see, for instance, [AS22, p. 677]. Our proof of Theorem C shows that if one could prove Condition B with the additional hypothesis that $D = C_3 \times C_3$, then this problem would have a negative answer. On the other hand, a proof of Condition B when $|D| > C$ for some universal constant $C$ would settle the case $k(B) = 3$ of Brauer's Problem 21 for arbitrary groups.
COVID-19 pandemic and other factors associated with unfavorable tuberculosis treatment outcomes—Almaty, Kazakhstan, 2018–2021
Introduction The COVID-19 pandemic negatively influenced the availability of tuberculosis (TB) services, such as detection, diagnosis and treatment, around the world, including Kazakhstan. We set out to estimate the influence of the COVID-19 pandemic on TB treatment outcomes by comparing outcomes among people starting treatment before the pandemic (2018–2019) and during the pandemic (2020–2021), and to determine risk factors associated with unfavorable outcomes. Methods We conducted a retrospective cohort study among all people newly diagnosed with drug-sensitive pulmonary or extrapulmonary TB at least 18 years old who initiated treatment from 2018 to 2021 in Almaty. We abstracted data from the national electronic TB register. Unfavorable treatment outcomes were ineffective treatment, death, loss to follow-up, results not evaluated, and transfer. We used multivariable Poisson regression to calculate adjusted relative risks (aRR) and 95% confidence intervals (95%CI). Results Among 1548 people newly diagnosed with TB during the study period, average age was 43 years (range 18–93) and 52% were male. The number of people initiating treatment was higher before than during the pandemic (935 vs. 613, respectively). There were significant differences before compared to during the pandemic in the proportions of people diagnosed through routine screening (39% vs. 31%, p < 0.001), 60 years and older (16% vs. 22%, p = 0.005), and with diabetes (5% vs. 8%, p = 0.017). There was no difference in the proportion with HIV (8% in both periods). Unfavorable outcomes increased from 11 to 20% during the pandemic (aRR = 1.83; 95% CI: 1.44–2.31). Case fatality rose from 6 to 9% (p = 0.038). Risk factors for unfavorable TB treatment outcomes among all participants were being male (aRR = 1.44, 95%CI = 1.12–1.85), having HIV (aRR = 2.72, 95%CI = 1.99–3.72), having alcohol use disorder (aRR = 2.58, 95%CI = 1.83–3.62) and experiencing homelessness (aRR = 2.94, 95%CI = 1.80–4.80). Protective factors were being 18–39 years old (aRR = 0.33, 95%CI = 0.24–0.44) and 40–59 years old (aRR = 0.56, 95%CI = 0.41–0.75) compared to 60 years old and up. Conclusion The COVID-19 pandemic was associated with unfavorable treatment outcomes for people newly diagnosed with drug-sensitive TB in Almaty, Kazakhstan. People with comorbidities and social vulnerabilities were at increased risk. Results point to the need to maintain continuity of care for persons on TB treatment, especially those at higher risk for poor outcomes, during periods of healthcare service disruption.
Introduction
On March 11, 2020, the World Health Organization (WHO) declared Coronavirus Disease 2019 (COVID-19) to be a pandemic. In the immediate absence of an effective vaccine, "non-pharmaceutical interventions" (NPIs), such as social distancing, restrictions on travel, and remaining at home, were recommended as some of the main strategies to reduce the likelihood of disease transmission. With the exponential growth in the number of seriously ill people, these NPIs served as some of the main tools to reduce the immediate burden on healthcare system personnel and resources (1). These restrictions and the demands placed on health care personnel (including personnel shortages) led to the postponement of elective health care procedures as well as decreased access to routine care, including the management of people with active tuberculosis (TB). Among countries with a large burden of TB, the reduction in core TB services led to reductions in the detection, diagnosis and treatment of patients with TB (2).
WHO estimates that in many countries with a heavy burden of TB, the number of TB notifications decreased by 18% in 2020 compared to 2019 as COVID-19 pandemic control measures were taken (2). The number of people under active treatment for TB globally also decreased in 2020, totaling 2.8 million people, 1.4 million fewer than in 2019.
After the introduction of the directly observed treatment, short-course (DOTS) strategy in 1999, the TB incidence per 100,000 people in Kazakhstan dropped from 162.5 in 2002 to 49.2 in 2020, an overall average decline of about 8–10% per year (3). Also, national TB mortality per 100,000 population decreased from 39.7 in 1999 to 1.9 in 2020. In Almaty, incidence decreased from 70.1 to 23.1 per 100,000 from 2010 to 2021 (Figure 1). From 2010 to 2019, the proportion of TB patients identified during occupational screening fluctuated between 36.6% and 38.8%; during 2020 and 2021, occupational screening identified only 34.2% and 34.0%, respectively (Figure 2).
The progress made in fighting TB in Kazakhstan, as well as worldwide, has been threatened by the COVID-19 pandemic. In particular, the pandemic led to a decrease in the timely detection of TB in 2020 due to complex factors that resulted in reduced access to services (4). Specific impacts in Kazakhstan include: (1) reduced coverage of the population by preventive TB examinations (44.5% in 2020 compared to 41.9% in 2019), and (2) reduced detection of TB during routine medical check-ups (49.8 to 44.9 per 100,000 population from 2019 to 2020, respectively) (5).
A review of studies on the impact of the COVID-19 pandemic on TB services in various countries revealed that the pandemic negatively affected many aspects of TB control. In India, during the 8-week lockdown due to the COVID-19 pandemic, the detection of TB decreased by 59% (6). In China, the diagnosis of multidrug-resistant (MDR) TB in the first quarter of 2020 decreased by 17% compared to the same period in 2019 (7). A study in Iran also showed a 55.6% decrease in new TB case detection during the March to June 2020 lockdown compared to previous years (8). A recent study in Italy showed that, despite efforts to maintain TB services, there was a sudden increase in service disruption during the COVID-19 outbreak (9). These service interruptions will likely have long-term consequences on TB burden: a modeling study predicts a 4% increase in TB deaths worldwide and 5.7% excess deaths in India over the period from 2020 to 2025 due to the COVID-19 lockdown (10–12). Studies using national data to assess the impact of the COVID-19 pandemic on TB services have not previously been conducted in any Central Asian country. This study examined the potential impact of COVID-19 on TB detection and treatment in Kazakhstan and will help guide recommendations for further planning and policy development of TB control programs in Kazakhstan, as well as other countries with similar economies and health care systems.
The specific aim of the study was to assess the association of the COVID-19 pandemic period and related risk factors with adverse TB treatment outcomes among people newly diagnosed with TB in Almaty, Kazakhstan, 2018-2021.
Study design
We conducted a retrospective cohort study among people with newly diagnosed TB in Almaty; data were abstracted from patient registries between 08/20/2022 and 12/15/2022. Eligibility for this study was restricted to patients at least 18 years old, living in Almaty, with a first-time TB diagnosis, who initiated TB treatment between 2018 and 2021.
Data collection
Patient data were abstracted from Kazakhstan's national electronic database, the "Information System National Electronic Register of Tuberculosis Patients." The database is a longitudinal registry in which all people diagnosed with TB are mandatorily registered and tracked. The system contains demographic and clinical data on all people ever diagnosed with TB in Kazakhstan.
Study participants
From 2018 to 2021, a total of 2,246 patients with TB were registered in Almaty, Kazakhstan. Analysis was restricted to 1,548 adults 18 years old and above who were diagnosed for the first time with drug-sensitive TB. People meeting these criteria without an individual identification number (n = 24) were excluded from the study.
To assess the effect of the COVID-19 pandemic on unfavorable TB treatment outcomes, we included only people who initiated treatment and would have already completed treatment before the study began. We excluded people with drug-resistant TB because, currently in Kazakhstan, the duration of treatment for this group is several years.
Key definitions
We used the WHO categories and reporting framework for TB, 2013 revision (updated Dec 2014; Jan 2020), to classify treatment outcomes as favorable or unfavorable (13). People were classified as having favorable treatment outcomes if they were considered to be cured or completed treatment. Cured was defined as becoming smear or culture negative in the last month of treatment and on at least one previous occasion. People were classified as having unfavorable treatment outcomes if they had any of the following outcomes: treatment failure or switch to second-line treatment, death from any cause, loss to follow-up, or not evaluated. Treatment failure was defined as having completed treatment but remaining smear or culture positive after treatment completion.
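For illustration, the outcome grouping described above can be encoded as a simple lookup; the following Python sketch uses our own label strings, not the registry's actual outcome codes.

```python
# Minimal sketch of the WHO-based outcome grouping described above.
# The label strings are ours, not the registry's actual outcome codes.
FAVORABLE = {"cured", "treatment completed"}
UNFAVORABLE = {
    "treatment failure",
    "switched to second-line treatment",
    "died",
    "lost to follow-up",
    "not evaluated",
}

def classify_outcome(outcome: str) -> str:
    o = outcome.strip().lower()
    if o in FAVORABLE:
        return "favorable"
    if o in UNFAVORABLE:
        return "unfavorable"
    raise ValueError(f"unrecognized outcome: {outcome!r}")

print(classify_outcome("Cured"))              # favorable
print(classify_outcome("Lost to follow-up"))  # unfavorable
```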
Drug-sensitive TB is TB caused by mycobacteria whose strains are sensitive to first-line anti-TB drugs (rifampicin, isoniazid). MDR TB is TB caused by mycobacteria whose strains are resistant to at least rifampicin and isoniazid.
Statistical analysis
We assessed the accuracy and completeness of the data by constructing a line-by-line list of patients in a separate database and sorting them according to the variables under study. The threshold for statistical significance was set at p < 0.05. We analyzed the data and performed statistical calculations using R version 4.2.2 (R Foundation for Statistical Computing, Vienna, Austria).
We calculated crude risk ratios (cRRs) and used the chi-square test to measure the relationship between each individual risk factor, including time of treatment initiation and patient characteristics, and treatment outcome (successful versus unsuccessful). Power to detect a difference in proportions from p1 = 0.11 to p2 = 0.20 with unequal samples (n1 = 935 and n2 = 613) was 0.99. We ran bivariable and multivariable Poisson regression to assess the contribution of treatment period and risk factors to unfavorable treatment outcomes. We checked for multicollinearity and interactions between explanatory variables; none were found. Results are presented as adjusted risk ratios (aRRs) and 95% confidence intervals.
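The power calculation and the risk-ratio model described here can be sketched as follows (in Python rather than the R actually used in the study; the data frame `df` and its column names are hypothetical).

```python
# Sketch of the two computations described above, in Python rather than R.
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Power to detect a difference between p1 = 0.11 and p2 = 0.20 with
# unequal group sizes (n1 = 935, n2 = 613).
es = abs(proportion_effectsize(0.11, 0.20))
power = NormalIndPower().power(es, nobs1=935, alpha=0.05, ratio=613 / 935)
print(f"power ~ {power:.2f}")  # ~0.99, matching the text

# Modified Poisson regression with robust standard errors for risk ratios;
# exponentiated coefficients are the adjusted RRs. Column names are ours.
# model = smf.glm("unfavorable ~ period + age_group + sex + hiv + alcohol",
#                 data=df, family=sm.families.Poisson()).fit(cov_type="HC1")
# print(model.summary())
```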
Ethical considerations
Ethical approval of the study was received from the local ethical commission of the NAO Kazakh National Medical University named after S.D. Asfendiyarov, Kazakhstan. This activity was reviewed by the CDC and was conducted consistently with applicable United States federal law and CDC policy. Permission to conduct the study was granted by the Local Internal Review Board of the Kazakh School of Public Health and the Internal Review Board at the US Centers for Disease Control and Prevention. Patients' informed consent was deemed not necessary because this is a retrospective analysis of program data.
Results
We identified 1,548 people who had been newly diagnosed with drug-sensitive TB and initiated treatment from 2018 to 2021. Of these, 60% did so before the COVID-19 pandemic and 40% during the pandemic. Mean age was 43 years and 50% were 18–39 years old (Table 1). Distribution across age groups differed significantly by period, and a greater proportion of people were 18–39 years old before the pandemic than during the pandemic (52% vs. 46%, respectively). Half (52%) were male, and sex did not differ by period of detection. While 58% of all patients in the study were unemployed, the proportion of patients unemployed was similar in the pre-pandemic and pandemic time periods (58% versus 60%, respectively). More people were detected during routine screening before the pandemic (39%) than during the pandemic (31%). Also, more people were detected due to the presentation of symptoms during the pandemic (68%) compared to the pre-pandemic period (60%). People newly diagnosed with drug-sensitive TB were more likely to have diabetes during the pandemic than before (8% vs. 5%).
The proportion of people completing treatment was lower during the pandemic than before (51% vs. 58%, respectively; Table 2). Also, more people were transferred to second-line treatment during the pandemic than before (7% vs. 2%, respectively). The proportion who died from TB or other causes was also significantly higher during (9%) than before the pandemic (6%). There was no significant difference by period of treatment initiation for the other outcomes.
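For illustration, a two-proportion test on counts approximated from the reported percentages and group sizes (not the study's exact data) shows how a comparison of this kind can be run.

```python
# Two-proportion z-test on illustrative counts: deaths ~6% of 935 before
# vs. ~9% of 613 during the pandemic (approximations, not the exact data).
from statsmodels.stats.proportion import proportions_ztest

deaths = [56, 55]       # ~6% of 935 and ~9% of 613
totals = [935, 613]
stat, pval = proportions_ztest(deaths, totals)
print(f"z = {stat:.2f}, p = {pval:.3f}")
```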
People who were newly diagnosed with drug-sensitive TB and initiated on treatment during the pandemic period were 1.85 times more likely [95% confidence interval (CI) = 1.46 to 2.36] to experience an unfavorable outcome compared to people who started treatment prior to the pandemic period (Table 3). People who were 18 to 39 years of age or 40 to 59 years of age were less likely to have an unfavorable outcome (cRR = 0.36 and 0.74, respectively) compared to people who were 60 years or older at the time of treatment initiation. Males were more likely to have an unfavorable outcome compared to females (cRR = 1.66). People who were living with HIV or who had alcohol use disorder were more likely to have an unfavorable treatment outcome (cRR = 2.49 and 2.99, respectively) compared to people without those conditions.
Five employment-status categories were significantly related to treatment outcome. Compared to all other categories, people who were manual laborers (cRR = 0.58), office workers (cRR = 0.22), or students (cRR = 0.08) were less likely to have an unfavorable outcome, whereas people who were experiencing homelessness (cRR = 2.94) or retired (cRR = 1.46) were more likely to have an unfavorable outcome.
Discussion
Our study found that the COVID-19 pandemic period was associated with unfavorable treatment outcomes among adults newly diagnosed with drug-sensitive TB in Almaty, Kazakhstan. This impact remained even after adjusting for several other risk factors, including age, sex, HIV status, alcohol use disorder, and employment status.
TB detection during the COVID-19 pandemic
The overall number of people diagnosed with TB was substantially lower in the first two years of the pandemic compared to the two years before the pandemic. This is consistent with the annual trends in Almaty, where there has been a decreasing trend in TB incidence over the last decade, from 70.1 per 100,000 in 2010 to 35.1 in 2017 (the year before our study began). While community control measures taken at the onset of the pandemic, such as hand and respiratory hygiene practices and social distancing, may have contributed to reduced transmission of tuberculosis (14), health service delivery disruptions and reduced access to care may also have led to fewer screening opportunities and fewer detected incident TB cases during the pandemic (15, 16).
The proportion of people newly detected with drug-sensitive TB during routine screening was significantly lower during the pandemic than before (2). Systematic screening for TB is a central component of the global strategy to end TB (17). Screening helps detect TB disease early and reduces the risk of unfavorable treatment outcomes. Restrictive lockdowns introduced nationally in Kazakhstan at the onset of the pandemic made it harder for people to leave their houses to receive preventive healthcare services, including TB screening for people at increased risk of developing TB disease. Also, even when people could leave, preventive services were often not available because of disruptions in the provision of primary care services throughout the country, including Almaty, during this time. People may also have been reluctant to obtain preventive services due to the risk of getting COVID-19 in healthcare facilities, because rates of COVID-19 were high among healthcare providers (18). Not surprisingly, the proportion of people detected with TB who were tested because of TB symptoms was higher during the pandemic. Respiratory symptoms of COVID-19 can be similar to those of TB. During the initial phase of the pandemic, and before testing was widely available, all people in Kazakhstan with respiratory symptoms consistent with COVID-19 were hospitalized. TB diagnostic tests would have been performed as part of the differential diagnosis of COVID-19. This is also consistent with our finding that the proportion of people diagnosed with TB increased in groups at higher risk for COVID-19, specifically older populations and people with diabetes, two commonly known risk factors for severe COVID-19 (19, 20).
TB treatment outcomes during the COVID-19 pandemic
As expected, our study showed a decrease in the proportion of people completing TB treatment successfully during the pandemic. In Kazakhstan, as in other countries, some TB hospitals and care facilities were reappropriated to provide inpatient care for COVID-19 patients. Similarly, healthcare providers who usually treat people with TB were often reassigned to care for people with COVID-19 (4). Further amplifying this shortage of services was the increased morbidity of COVID-19 among providers themselves (21). The reassignment of providers away from TB services could have resulted in reduced oversight and continuity of care for directly observed therapy (DOT) services (22).
Our results are consistent with other studies that show the negative impact the COVID-19 pandemic has had on TB treatment outcomes (7, 9, 23). Disruptions in treatment during the pandemic may also have contributed to the increased proportion of people who failed to complete treatment or who were referred to second-line treatment.
Disruptions in treatment may also have contributed to increased mortality, which was 50% higher during the pandemic (9% during vs. 6% before the pandemic). Notably, the proportion whose death was not attributed to TB increased. There is no information on the cause of death in the database, but COVID-19 may have played a role, because patients with active pulmonary TB who acquire COVID-19 have a two times greater risk of COVID-19 mortality (24).
Treatment outcomes
The treatment success rate in our study of 85% was below the 90% target set by WHO, but it is consistent with the global treatment success rate of 86% for new and relapse cases (2). However, it is higher than the success rate for the European region of 72%. Also, the case fatality ratio of 7% in our study is within the WHO target of 10% set for 2020 and in line with the 2025 target of 6.5%. Although case fatality ratios are below targets, there was a significant increase in all-cause mortality among TB patients during the pandemic. The majority of deaths were not attributable to TB. From the data we cannot determine whether COVID-19 was a risk factor for the increased fatality rate; however, studies elsewhere have demonstrated that people with TB are at greater risk of dying from COVID-19 (25, 26).
Consistent with the literature, men were more likely than women to have an unfavorable treatment outcome, as were people 60 years and older compared to young and middle-aged adults (27). Delayed care-seeking behavior and smoking status, which we did not measure in our study, are known to contribute to sex differences in TB outcomes. Also consistent with the literature was the finding that people with health comorbidities and less social stability, such as alcohol use disorder, HIV, and experiencing homelessness, were more likely to have unfavorable treatment outcomes compared to people without these disadvantages (28).
Study limitations
Because of the retrospective design based on available data, we were limited to the information entered into the electronic database. There may also be errors in the entry of information into the database by employees of medical organizations, such as incorrect clinical and demographic data and incomplete medical records. Also, because the data are collected by medical providers, our results are subject to self-report bias for highly stigmatized variables, such as drug and alcohol use. This bias likely results in underestimation of alcohol and drug use disorder in our study. Some variables had few responses and should therefore be interpreted with caution. Our study also did not assess any direct interactions between TB and COVID-19, because information about COVID-19 was absent or inconsistently captured in the database; this information was incorporated into the database only after the study period. Lastly, as an observational study limited to variables that could be found in medical records, we could not control for all factors that may have contributed to differences in TB outcomes before and during the COVID-19 pandemic.
Study results in context
The decrease in the proportion of people newly diagnosed with TB through routine screening, including occupational health screening, points to the need to maintain these essential services during public health emergencies. Service continuity plans that help healthcare facilities minimize disruption, and ultimately increase the resilience of health services during public health emergencies, are needed in preparation for future healthcare crises (29).
Although there was a decrease in successful TB treatment outcomes during the pandemic, several strategies adopted during this time may have mitigated further negative impacts. One strategy was improved triage of patients at primary care and hospital entry. All patients presenting with cough, chest complaints, or fever were immediately separated, given respirators (or surgical masks if respirators were not available), and tested for COVID-19, TB, pneumonia, and acute respiratory viral infections.
Another strategy was the adoption of polymerase chain reaction (PCR) and enzyme-linked immunosorbent assay (ELISA) testing for rapid differential diagnosis of respiratory illnesses. At the beginning of the pandemic, Kazakhstan adopted a modified algorithm for rapid laboratory diagnosis of COVID-19 and TB. Rapid PCR-based diagnosis made it possible to diagnose TB almost immediately and initiate appropriate treatment.
Lastly, the country scaled up video observed therapy (VOT) for TB, in which healthcare providers observe patients taking their anti-TB medications daily via live or recorded video. Studies elsewhere have found that adherence to treatment is higher among patients on VOT than among those on in-person directly observed therapy (30). In 2018, Kazakhstan began to provide TB patients with smartphones to maintain communication with their healthcare providers, and in 2020 it launched a program to provide smartphones to all TB patients throughout the country (31). The use of VOT in Kazakhstan allowed clinical staff to continue TB treatment in outpatient settings without interruption during the COVID-19 pandemic. The use of digital technologies during the pandemic also enabled providers to maintain communication with patients: conducting online consultations and communicating with patients by phone, telemedicine, and mobile messaging.
Conclusion
The COVID-19 pandemic was associated with unfavorable treatment outcomes for people newly diagnosed with drug-sensitive TB in Almaty, Kazakhstan. People with comorbidities (HIV or alcohol use disorder) and those experiencing homelessness were at increased risk of unfavorable outcomes. Detection through routine screening was reduced, and the case fatality rate among people on TB treatment was increased, during the pandemic. These results point to the need to maintain routine TB screening and continuity of care for people on TB treatment, especially those at the highest risk of unfavorable outcomes, during healthcare service disruptions caused by public health emergencies such as COVID-19.
FIGURE 2. Annual proportion of new tuberculosis diagnoses from routine occupational screening in Almaty, Kazakhstan, 2010-2021.
TABLE 1. Socio-demographic and epidemiological characteristics of adults newly diagnosed with drug-sensitive TB, grouped by year of first registration before and during the COVID-19 pandemic, 2018-2021, Almaty, Kazakhstan (n = 1,548). The office worker category captures management, business or financial operations, computer and math, architecture and engineering, sciences, education, sales and related, and office and administrative support. Unknown or missing responses are excluded from the analysis. Bolded numbers represent p-values < 0.05.
TABLE 3. Risk factors associated with unfavorable treatment outcome among adults newly diagnosed with drug-sensitive TB, Almaty, 2018-2021.
Novel Treatment Approaches for Substance Use Disorders: Therapeutic Use of Psychedelics and the Role of Psychotherapy
The use of psychedelics in a therapeutic setting has been reported in recent years for the treatment of various diagnoses. However, as psychedelic substances are still commonly known for their (illicit) recreational use, it may seem counterintuitive to use psychedelic therapy to treat substance use disorders. This review discusses how psychedelics can promote and intensify key psychotherapeutic processes in different approaches, such as psychodynamic and cognitive behavioral therapy, with a spotlight on the treatment of substance use disorders (SUD). There is promising evidence for the feasibility, safety, and efficacy of psychedelic therapy in SUD. Across former and current psychedelic therapy regimens that have been shown to be safe and efficacious, various psychotherapeutic elements, psychodynamic and behavioral as well as others, can be identified, while a substantial part of the assumed mechanism of action, the individual psychedelic experience, cannot be distinctly assigned to just one approach. Psychedelic therapy consists of a complex interaction of pharmacological and psychological processes. When administered under well-defined conditions, psychedelics can serve as an augmentation of different psychotherapy interventions in the treatment of SUD and other mental disorders, regardless of their theoretical origin.
Introduction
Psychedelics, also known as serotonergic hallucinogens, exert their main effects via stimulation of the serotonin 5-HT2A receptor [1]. Not included under the term psychedelics in this article are dissociative anesthetics (e.g., ketamine), empathogen-entactogen stimulants (e.g., MDMA; 3,4-methylenedioxy-methamphetamine), ibogaine, and new psychoactive substances (NPS, "legal highs"). The best-known psychedelics are LSD ((5R,8R)-lysergic acid diethylamide), psilocybin, DMT (N,N-dimethyltryptamine), and mescaline (3,4,5-trimethoxyphenethylamine). These substances cause an altered state of consciousness, usually lasting several hours, with profound changes in perception, including hallucinations, synesthesia, altered experience of time and space, and strong activation of emotions and emotionally formative memories [1]. In Europe, psychedelics are best known for use in recreational settings, while overall lifetime-prevalence levels among young adults have for a number of years been generally low and stable, between 1% and 2% [2]. Although their individual and societal harm ranks lowest among many legal and illegal substances, and their addiction/dependence potential seems negligible [3,4], most psychedelics are controlled under the Convention on Psychotropic Substances of 1971. It might therefore seem counterintuitive to use these substances to treat addiction.
However, in this article, the authors present psychotherapeutic frameworks and possible mechanisms of action in order to show why and how addiction treatment with psychedelics may be beneficial, as has been suggested in a number of epidemiological and observational trials [5][6][7][8][9][10][11], clinical trials [12][13][14][15], meta-analyses [16], and conceptual articles [17][18][19][20][21]. There is growing evidence that psychedelics can be used to intensify psychotherapy processes under certain conditions, and thus serve as a non-specific augmentation of certain psychotherapeutic processes that play a role in different forms of psychotherapy. We focus on psychodynamic therapy and cognitive behavioral therapy (CBT) in this article because of the high availability of empirical data in reported trials, as well as their well-known historical and conceptual differences, which make the similarities concerning their combination with psychedelics all the more striking. High-quality empirical data about other psychotherapies (e.g., hypnotherapy, group therapy) in psychedelic therapy are sparse, though, and would be worth investigating further.
Psychedelic Use, Research, and Therapy
Psychedelics have been used for several millennia in the context of traditional practices and shamanic rituals, for healing physical and mental disorders, for religious reasons and divination [22]. Since the beginning of the twentieth century, psychedelics have aroused the interest of botanists, psychologists, and psychiatrists. This interest was reinforced by the discovery of the psychedelic effects of LSD in 1943 [23]. Along with the associated counterculture in the 1960s, psychedelics became increasingly popular for recreational use. It is estimated that more than 30 million people living in the USA have used LSD, psilocybin, or mescaline in their lifetime [24].
In a clinical and research context, these substances were initially used as an experimental disease model for research on psychotic disorders ("psychotomimetics"), and occasionally for self-exploration among psychiatrists. Further aspects of their therapeutic potential in the treatment of mental disorders were recognized later [25]. In fact, the investigators who first tried LSD as an alcoholism treatment hypothesized that a psychotic-like experience would mimic a safer version of delirium tremens, because delirium tremens often preceded sobriety but was also frequently fatal. However, these investigators found that under interpersonally supportive conditions, instead of psychotic-type experiences, patients often had positively valenced experiences, for example insightful, peak, or mystical-type effects (e.g., feeling at one with the universe), that led to sobriety [26][27][28]. It was the psychiatrist Humphry Osmond who coined the term psychedelic in 1957 to describe the mind-manifesting, revealing properties of these substances, after observing that high doses of LSD in patients with alcohol dependence led to profound experiences and strong effects on alcohol relapse prevention [26]. Between the late 1950s and early 1970s, approximately 40,000 patients worldwide with different mental disorders (substance use disorders, affective disorders, neurosis, etc.) were treated with psychedelics, primarily LSD, mescaline, and psilocybin, sometimes with reports of remarkable therapeutic success described in over 1,000 publications [29]. Following the largely socially and politically motivated international ban on psychedelics in 1971, almost all research activity in this field came to a halt. It was not until the 1990s that interest in psychedelic research resurged. Since about the year 2000, there has been an increasing number of basic science and clinical studies [23,25].
Independent of the diagnosis to be treated, these trials have generally been based on fairly similar study protocols that focused on the careful selection and preparation of patients (usually referred to as set) and the provision of a safe and trusting environment (usually referred to as setting) [23]. "Set" and "setting" are expressions going back to the 1950s/1960s and often used in everyday language. In the context of research, however, these expressions might be replaced in the coming years as trials uncover the complex interplay of environmental, individual, and pharmacological factors. The approach nowadays is similar to the historical model of psychedelic therapy, or psychedelic peak therapy [41,42]. In this model, a moderate to high dose of a psychedelic (e.g., LSD ≥ 200 μg; psilocybin ≥ 20 mg) is applied to provide an overwhelming, transformative experience [43,44]. In current studies, a brief therapeutic intervention phase is conducted over a period of one to three months, of which only a small proportion of sessions is under the influence of psychedelics. Between one and four sessions are conducted with administration of a psychedelic, at intervals of one to several weeks between sessions [23]. In contrast to established pharmacotherapies (e.g., with antidepressants), continuous daily substance administration is not provided.
In recent clinical trials on psychedelic therapy, a common study protocol is applied in most trials, hereafter also referred to as the current standard model. In this carefully refined protocol, with emphasis on safety issues, the treatment is usually divided into three phases: (1) preparation, (2) dosing sessions, and (3) integration of the psychedelic experience [4,23]. In the preparation phase (1), patients are informed about the mechanisms of action of the psychedelics, about possible aversive parts of the experience (fear, sadness, loss of ego boundaries, somatic side effects), and about how to deal with them should they arise. Therapists explore the biographical and motivational background and current stress factors, and begin to establish a therapeutic alliance. During dosing sessions (2), the patient lies down in a comfortably arranged room in the presence of one or two therapists, wearing eyeshades and listening to a playlist of specifically selected emotion-eliciting music. The patient is encouraged to surrender to the inner experience with a mindful, accepting attitude, and to interact as little as possible with the environment during the acute effect of the substance, which usually lasts several hours. In the integration phase (3), the subjective psychedelic experience is discussed with the therapists, with special emphasis on its meaning for the current life situation and the symptoms (e.g., the addictive behavior), helping the patient to integrate the experience into daily life and to find a way out of the addiction and towards mental well-being. In most of the mentioned studies, no or very few specific psychotherapeutic interventions took place according to the current standard model. A focus was put on the deliberate and standardized design of set and setting, whose crucial importance for the psychedelic experience, and thus for the success of the therapy, is becoming increasingly clear and is the subject of current investigations [45][46][47]. However, as discussed below, the studies addressing alcohol and tobacco dependence included a variety of classical psychotherapeutic elements, which took place not in the dosing session itself but rather in the preparation and integration sessions [14,15]. In the mentioned studies, different control conditions were applied, ranging from non-controlled open-label designs [14,15,32] to randomized controlled trials, the latter with low-dose psilocybin or LSD [35], niacin [36][37][38]40], a specially designed placebo [31], or methylphenidate as an active placebo. One trial used a waiting-list control [30], and one study used a double-dummy RCT to compare psilocybin to 6 weeks of an SSRI [34]. Thus, recent trials adopted a fairly standardized therapeutic intervention, while many did not fulfil the criteria of the current gold standard of controlled clinical trials (randomized controlled trials, RCTs), mostly because of difficulties in successfully blinding the psychedelic experience [48]. Another problem is the small sample size of the existing trials and potential bias due to a high rate of self-referred, highly motivated patients, which, in combination with highly motivated researchers and therapists and the lack of effective blinding procedures, could have overstated treatment effects. Some of these methodological challenges are being addressed by upcoming trials such as the EPIsoDE study (ClinicalTrials.gov: NCT04670081).
Risk and Potential Harm of Psychedelic Therapy
As applied in modern clinical trials, the risk of psychedelics in the treatment of mental disorders, including SUD, is considered very low if the crucial points of patient selection (e.g., no personal or family history of psychosis) and control of individual (set) and environmental factors (setting) are carefully addressed [4]. Although challenging reactions, including fear and anxiety, are common even in controlled settings and can be managed appropriately, when these substances are ingested in uncontrolled settings such effects sometimes lead to dangerous behavior that can harm the self or others. In those vulnerable to psychotic disorders, such experiences may be destabilizing and may instigate or worsen such disorders [49]. In the cited clinical trials since 2011, not a single case of persisting psychotic reaction has been reported. The risk with regard to the development of an addiction is very low and considered negligible for classic psychedelics such as psilocybin and LSD [50].
Psychodynamic Approaches
As the first variant of psychedelic therapy, so-called psycholytic (soul- or mind-loosening) therapy developed in the 1950s, especially in Europe [41,42]. Psycholytic therapy involved the repeated use of a low to moderate dose of a psychedelic (e.g., 50-200 μg LSD) as part of months to years of outpatient psychodynamic therapy [51,52]. In substance-assisted sessions, an intensification of emotional experience, an alleviation of neurotic defense mechanisms such as repression, denial, and rationalization, and a marked increase in the number of the patient's free associations were observed regularly [51]. This may allow patients to re-surface long-forgotten, emotionally aversive memories or traumas, often resulting in a cathartic abreaction of the feelings that have come to consciousness. By using psychoanalytic techniques (e.g., confrontation, interpretation) during substance-free sessions to further process the content rising into the patient's consciousness during substance-assisted sessions, patients and therapists reportedly gained deeper insights into repressed neurotic conflicts and unconscious patterns of response and interaction [51,53].
These elements are universal in the psychodynamic treatment of any given mental disorder, be it depression or substance use disorder/addiction. Only a few psychodynamic hypotheses specific to addiction treatment with psychedelics can be found. Early US and Canadian trials on the treatment of alcohol dependence with high-dose LSD from the 1950s-1970s suggested that the peak experience, often framed by therapists from the Alcoholics Anonymous movement and perceived by patients as "spiritual awakening," led to deep insights and instant psychological transformation, changing the patients' dysfunctional patterns, increasing self-esteem, revealing their true personal values, and leading to a phase of profound well-being, so that imagining a return to alcohol consumption triggered feelings of disgust [26]. The proposed psychological factors share some commonalities with the general psychodynamic idea of making the unconscious conscious.
Recently, Moreton and colleagues suggested an increased confrontation with one's own dying and death during the substance-assisted sessions as a psychodynamic mechanism of action, as it can result in a reduction of the mostly repressed death anxiety inherent in humans and thus bring about a decrease in symptoms of various mental disorders [54]. This existential psychotherapy approach is in line with Krupitsky's findings of high recovery rates in heroin and alcohol addicts after ketamine-assisted, existentially oriented psychotherapy, which also provided "positive transformation of nonverbalized (mostly unconscious) self-concept and emotional attitudes to various aspects of self and other people, positive changes in life values and purposes, important insights into the meaning of life and an increase in the level of spiritual development" [55,56]. Although ketamine is not a serotonergic hallucinogen, it nonetheless shows some overlapping effects, and thus Krupitsky's work is likely relevant.
Another psychodynamic element is so-called age regression, as concluded from patients' reports of the psychedelic experience. In addition to an increased ability to remember repressed and long-past events (facilitation of autobiographical memories [57]), this can also include an archaic mode of experience with symbolic imagery, which, depending on the psychoanalytic background of the therapist, was classified in terms of archetypes (C.G. Jung) or primary-process thinking (S. Freud) [51,53]. Further therapeutically useful effects were described as an intensification of the therapeutic relationship and increased transference and countertransference, all factors that can also play an important role in the psychodynamic treatment of drug addiction [58].
Finally, there are some interesting psychoanalytic hypotheses and theories about the development of substance use disorders that have not yet been connected to psychedelic therapy models. Winnicott, building on the framework of object relations theory, argued that addiction could be understood as "regression to the early stage at which the transitional phenomena are unchallenged" [59]; it seems reasonable that age regression during a psychedelic experience could help patients discover and understand early developmental deficits on a cognitive and emotional level, and "work them through" in subsequent therapy sessions, so as to no longer depend on the (pseudo-)transitional object, which is the drug of addiction. The self-medication theory by Khantzian was created from clinical observation of patients with substance use disorders, who showed difficulties in regulating affect, self-esteem, relationships, and self-care, all of which result in strong painful affective states that are in turn regulated in a dysfunctional way with substance abuse [60,61]. The psychedelic state could help the patient find important insights about those supposedly repressed, shame-laden mechanisms and the related structural personality deficits, while providing a positive mood (feelings of unity, bliss, trust in the therapeutic relationship, etc.) to enable corrective emotional experiences with the therapist. Figure 1 provides an overview of the psychodynamic processes and factors that are thought to be intensified and enhanced by psychedelic-assisted therapy sessions.
Most studies of psychedelic-augmented psychodynamic treatment date from the 1950s-1970s, so the scientific standard and control of biases were heterogeneous and rather low. Methodological details cannot be provided, as the principal method was phenomenological observation and narratives on a single-case basis, a major problem in all psychodynamic therapy research. Further trials with more rigorous designs are needed to elucidate the proposed mechanisms.
Although in recent decades some segments of psychological science have deemphasized psychoanalytic theory, out of concern over a framework that is difficult to disprove and largely based on nonexperimental clinical observation, psychodynamic models remain of interest. In fact, psychedelic research might itself constitute an experimental framework for investigating purported psychodynamic mechanisms. In the discussion about the mechanisms of action of psychedelic therapy, psychoanalytic concepts have been revisited and modernized in recent years. One reason for this is the strong phenomenological similarity between the psychedelic experience and the dream experience [62]. Interpretation of dream content has been one of the central therapeutic techniques of psychoanalysis since the beginning of occidental psychotherapy; it was first outlined by Sigmund Freud in 1900 as the "royal road to the unconscious" [63]. Recent neurobiological and phenomenological studies have described the similarities of the psychedelic state with the dream state, e.g., in terms of perception, mental imagery, emotion activation, and self- and body experience [64]. Accordingly, primary-process thinking, which is characterized by fusion and transformations of mental images, bizarre experiential content, and illogical cognitions and emotions, can be understood as an organizing principle for both dreams and psychedelic states [65].

Fig. 1 Psychodynamic factors in psychedelic therapy, as described, e.g., by Leuner, Gasser, and Cohen (for references see text). Colored flashes show factors and processes that are reported to be enhanced by psychedelics. The effect on the patient's dreams has not yet been sufficiently investigated; the therapist's interpretations should not be affected, since the therapist does not ingest any substances.
In summary, the psychedelic experience in a therapeutic setting has been reported to intensify key mechanisms of psychodynamic therapy, such as free association, regression, transference, weakened defenses, deep insights, catharsis, and exposure to existential themes, thus augmenting the therapy process. With regard to SUD, psychedelics addressed the processing of specific affects and thought patterns such as guilt, grief, disgust, relational problems, and self-harm.
From Psychodynamic Theory to Predictive Processing
The abovementioned psychoanalytic perspective on the effects of psychedelics also serves as one conceptual framework in current neurobiological models, especially in the entropic brain theory and the REBUS (relaxed beliefs under psychedelics) model [66][67][68]. The authors describe a shift from rational-logical thinking (psychoanalytically: secondary process, "Ego") to associative-instinctual thinking (primary process, "Id"), which can lead to an awareness of previously imperceptible (unconscious) psychological content.
As the REBUS model is based on the influential predictive processing (PP) theory from cognitive and brain science, it is worth exploring PP models of reward learning to better understand addiction. While some authors suggest that dopaminergic brain signaling coding for aberrant salience is a key mechanism in substance use disorder, others argue that compulsive drug seeking and taking is the behavioral consequence of high precision weighting of priors in the PP account, as a reaction to strong bodily sensations (craving), in order to reduce the reward error signal [69]. One could suppose that psychedelics, by reducing this high prior precision weighting, also reduce craving and drug seeking, not only during the acute experience but also during the "neuroplastic window" of days or weeks after intake, allowing new insights (e.g., about drug consumption, personal values) to be processed if adequate psychotherapy is provided.
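To make "precision weighting of priors" concrete, a minimal textbook sketch of the Gaussian belief update from the PP literature is given below; this is an illustrative formalization, not the specific formulation of the REBUS model or of reference [69]:

\[
\mu_{\text{post}} = \mu_{\text{prior}} + \frac{\pi_s}{\pi_s + \pi_p}\left(x - \mu_{\text{prior}}\right),
\]

where \(\mu_{\text{prior}}\) is the prior belief, \(x\) is the current evidence, and \(\pi_p\) and \(\pi_s\) are the precisions (inverse variances) of the prior and of the evidence. On this reading, a high \(\pi_p\) keeps drug-related beliefs rigid, whereas a psychedelic-induced reduction of \(\pi_p\) lets contradictory evidence shift the posterior further.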
Krähenmann et al. note that, in the psychedelic state, the two cognitive modes described above (Ego/Id) can coexist in parallel, so that a hybrid state of dream and waking consciousness, similar to lucid dreaming, is most likely present [64,65,70,71]. The structural similarities between dreaming and the psychedelic experience suggest that the images, emotions, and thoughts arising in the psychedelic state can be of therapeutic value for the further course of therapy, in a manner similar to dreams and free associations in regular psychodynamic treatment. This theory is supported by work in the field of neuropsychoanalysis demonstrating the compatibility of psychodynamic concepts with the entropic brain theory and predictive processing [72,73].
Cognitive-Behavioral Approaches
Although the current standard model of psychedelic-assisted therapy largely refrains from applying targeted psychotherapeutic interventions, it is often considered that here, too, the therapeutic effect of the psychedelic experience is based on psychological mechanisms such as those generally effective in psychotherapy [74][75][76]. From the perspective of cognitive-behavioral therapy, one might assume that psychedelics can facilitate learning processes that ultimately lead to permanent changes in dysfunctional beliefs.
This assumption is in line with current neurobiological and information-theoretical theories of the acute effects of psychedelics. One currently influential approach is the REBUS model mentioned above [67]. According to this model, activation of 5-HT2A receptors in cortical association areas, such as those of the default mode network, leads to a profound destabilization of assumptions about the world and the self (belief relaxation): assumptions that are rigid and barely modifiable in normal waking consciousness can be temporarily weakened in the psychedelic state, while sensitivity to contradictory information is increased. When such destabilization affects, for instance, certain assumptions of the visual system (e.g., "sounds are not visible" or "light, in case of doubt, comes from above"), typical perceptual psychedelic phenomena such as synesthesia or the impression of wandering shadows and deforming objects may result. From a psychotherapeutic point of view, however, it appears more interesting to consider the destabilization of assumptions at higher levels of information processing, especially of dysfunctional belief structures that negatively affect the patient's self-image and social and emotional experience [77].
According to the cognitive-behavioral model [74] illustrated in Fig. 2, metacognitive assumptions that are temporarily destabilized during the psychedelic state can be permanently and positively shifted by confrontation with previously avoided experiential content, as often appears to occur in psychedelic therapy even without specific intervention by the therapist [78][79][80]. Of central importance here is the observation that during the psychedelic experience, attempts to avoid aversive experiential content often lead to increased aversion, whereas a mindful, accepting attitude typically warrants a less tormenting experience [74,78,[81][82][83]. Hence, a form of operant conditioning takes place, making possible a deep and largely avoidance-free engagement with distressing feelings, thoughts, memories, and body perceptions. It appears likely that the revision of dysfunctional assumptions that may occur in this state is facilitated by psychedelic-induced belief relaxation. In a broader sense, this mechanism might be reflected in the psychedelic-related factors of cessation of substance abuse reported, e.g., by Noorani and colleagues. In this study, psychedelic therapy for smoking cessation and its perceived mechanisms of change were investigated. As essential factors in their efforts to quit smoking, participants reported valuable insights into a better, deeper, or more essential understanding of themselves under the influence of psilocybin. These experiences were held accountable for the feeling of a decreased desire to smoke or the mere senselessness of smoking. The holistic perception of oneself was also described as a mechanism, e.g., insights revealing how anxiety and fear contributed to their smoking [14,84].
In conclusion, there seems to be a meaningful overlap between the mechanisms of action of psychedelic therapy and those of behavioral therapy, particularly with regard to the experiences made within the dosing sessions. Furthermore, embedding psychedelic interventions into behavioral therapy treatments before and after the actual psychedelic experience seems possible and reasonable, and has already been investigated in several studies [15,85]. In the existing studies addressing alcohol and tobacco dependence, several classical CBT elements were used within preparation and integration sessions. Johnson et al. included the assignment of a target quit date, a contract to quit, a smoking diary, and smoking-cessation-specific interventions (e.g., a "NURD program card" to be read each time a cigarette was smoked: "This cigarette is giving me no satisfaction;" "This is an unpleasant experience;" "This cigarette is making me feel rotten; I am losing the desire to smoke."; or a "WEST-D program card" to be read each time participants noticed an urge to smoke: "What's the trigger? Each time I feel like smoking, Stop, Think, Deprogram.") [14]. Bogenschutz et al. used Motivational Enhancement Therapy (MET) during preparatory sessions, a time-limited, usually four-session adaptation of motivational interviewing in which the client is engaged to set goals, detect discrepancies between goals and the current situation, develop an intrinsic motivation to change, and is encouraged in their self-efficacy [15]. In these two clinical trials on SUD treatment, psychedelic sessions can be seen as enhancers of the manualized SUD psychotherapy. The exact interaction between the pharmacological and the psychological effects, and their respective contributions to symptom reduction, are not yet well understood and warrant further investigation.
Against the backdrop of increasing evidence for positive synergistic effects between psychedelic interventions and mindfulness practices, as well as acceptance-promoting effects of psychedelics [78,[86][87][88][89], efforts to combine psychedelic-assisted interventions with the so-called "third-wave procedures" within CBT have recently increased [90]. This is especially true for the combination of psilocybin with acceptance and commitment therapy (ACT), whose basic features and therapeutic goals show numerous overlaps with the phenomenology of psychedelic states [81,91]. Correspondingly, sophisticated therapeutic concepts are currently being tested in clinical trials. Furthermore, the possibility of combining psychedelics with dialectical behavioral therapy (DBT) for the treatment of borderline personality disorder is also now being discussed [92].

Fig. 2 Cognitive-behavioral model (after [74]). Various interlocking aspects of an avoidance-reducing learning process, the promotion of which is the goal of many behavioral therapy interventions, are shown in the circle. The colored arrows represent proposed specific effect factors of psychedelic therapy.
The Subjective Experience
Since the early phase of psychedelic research, the observation has been made repeatedly that the experiences had under the substance's influence affect the course, and also the success, of the therapy. Modern research seems to support this. For example, a recent study identified certain experiential qualities of the substance's effects that were particularly associated with a response to therapy in depressed patients [93]. A similar picture emerged as in studies with healthy subjects, in which the majority of participants had a so-called mystical-type experience after a high dose of a psychedelic, phenomenologically comparable to reports of spiritual awakening experiences [44,94,95]. These experiences appeared to have long-term positive effects on well-being [96] and were rated by most participants, even years later, as one of the five most important events in their lives [97]. In both trials mentioned above with tobacco and alcohol use disorder, the intensity of subjective mystical-type effects was related to the reduction of substance use [15,98]. Another phenomenon associated with long-term positive changes in well-being in the context of acute substance use is the emotional breakthrough experience [99]. This refers to an intense experience of positive feelings that is perceived as cathartic, typically occurring with the successful resolution of emotionally stressful episodes of the psychedelic experience, and often accompanied by valuable personal and interpersonal insights. As described, some authors found an increased feeling of connectedness to the self, to others, and to the world as a main subjective effect of the experience [78]. Others pointed out the importance of different types of insight during the psychedelic experience for treatment outcome [43,100]. However, not only the acute substance effect during the psychedelic session seems to be decisive for the success of the therapy, but also a phenomenon described as the so-called afterglow effect. The term describes a period of 2-4 weeks after the psychedelic experience that is characterized by increased mindfulness and cognitive flexibility, which can improve the effectiveness of psychosocial or psychotherapeutic interventions [42]. The aspect of heightened cognitive and emotional flexibility might in part be a correlate of increased sub-acute neural plasticity after psychedelic intake. Nonhuman animal studies have shown such neuroplasticity [92], consistent with elevated BDNF levels in humans [101]. However, research has yet to directly tie such effects to subjective afterglow or therapeutic effects. As the treatment response in most of the early LSD alcohol trials lasted only for several weeks, the usual length of the afterglow phase, this could be explained by the strong anti-craving properties of the afterglow state [102]. From this perspective, patients might create a permanent afterglow effect by repeated dosing one or several weeks apart, as practiced in traditional religious groups in North and South America such as the Native American Church, Santo Daime, and União do Vegetal, whose communities show an exceptionally low rate of substance use disorders [8,103,104]. Other explanations for this low rate include the protective function of a religious community, or a selection bias of individuals having turned away from unhealthy, addictive behavior towards a personally meaningful lifestyle.
Common Factors of Psychotherapy
Much has been written about the common factors in psychotherapy, and the high context sensitivity and increased suggestibility of the psychedelic state should make these even more important for therapy with psychedelics. Nayak, Johnson, and Gukasyan argue that common factors such as the therapeutic relationship and alliance, a healing setting, a conceptual scheme, and a healing ritual should play a major role in the efficacy of any psychotherapy assisted by a psychedelic substance, especially given the capacity of these compounds to enhance the sense of meaning of everything being processed [76]. The common factors approach is likely to become even more influential in psychedelic therapy and in psychotherapy in general, and will help blur the lines between traditional psychotherapy schools. However, the somewhat dichotomous presentation in this article of the psychodynamic and the cognitive-behavioral perspective served to facilitate understanding and adoption of the described complex interactions in psychedelic therapy from the viewpoint of two major therapy schools, keeping in mind that many therapists adhere to specific therapy schools and probably identify with them.
Conclusions
In the history of psychedelic therapy, several variants developed in the twentieth century, which partly show substantial differences with regard to the dose of psychedelic substances, the practical implementation, and the psychotherapeutic interpretation model. We have focused on only two classical therapy models, psychodynamic and behavioral therapy, and have not included, for example, humanistic and hypnotherapeutic methods in combination with psychedelic substances [105][106][107].
The current standard model applied in most recent clinical trials largely eschews specific psychotherapeutic intervention. The psychodynamic approach offers many compelling theoretical and practical arguments for synergies between the psychedelic state and the psychodynamic therapeutic process, and it has shown promising treatment success in the past. Early psychodynamic studies lacked rigorous methodology, though, which makes it difficult to draw solid conclusions about mechanisms of action and efficacy. Similarly, behavioral therapy and its further developments may provide a suitable conceptual framework for psychedelic therapy. Direct (head-to-head) differences in the efficacy of psychedelic therapy based on different psychotherapies (e.g., psychodynamic therapy vs. behavioral therapy) have not yet been investigated, but would be informative for dismantling specific psychotherapy effects. However, a substantial part of the suggested psychedelic-related factors described above, such as valuable insights into oneself, is not distinctly classifiable to either a psychodynamic or a CBT approach.
Independently of particular psychotherapeutic schools of thought, it seems reasonable to view the therapeutic effects of psychedelics not as purely pharmacological, but essentially as a consequence of complex psychological processes that are enabled and triggered by the pharmacological substance effects. From this perspective, psychedelic therapy would always be substance-assisted psychotherapy. Such a view should certainly be applied not only to the understanding of psychedelics, but to that of all psychotropic drugs. In the case of psychedelics, however, the special importance of complex psychological processes is evident both in the strong dependence of therapeutic success on the subjective experience during the psychedelic session, and in the equally crucial dependence of this experience on the internal and external influencing factors referred to as set and setting. The exceptional mechanisms of action compared with other psychiatric drugs, which seem more similar to those of intensified psychotherapy, could promote further integration of psychopharmacological and psychotherapeutic approaches within psychiatry in the long term, particularly for the treatment of substance use disorders. To further elucidate the many open questions in the promising field of psychedelic therapy of SUD, particularly regarding the exact mechanisms of action, predictors and moderating factors, the right amount of psychotherapeutic intervention, and relapse prevention, more clinical trials with larger sample sizes, better blinding and control conditions, and systematic comparison of different psychotherapy interventions are needed.
EXPLORING THE q-RIEMANN ZETA FUNCTION AND q-BERNOULLI POLYNOMIALS
We study the q-Bernoulli polynomials, which were constructed by Kim, and show that they are analytically continued to $\beta_s(z)$. A new formula for the q-Riemann zeta function $\zeta_q(s)$ due to Kim, in terms of nested series of $\zeta_q(n)$, is derived. The new concept of dynamics of the zeros of analytically continued polynomials is introduced, and an interesting phenomenon of "scattering" of the zeros of $\beta_s(z)$ is observed. Following the idea of the q-zeta function due to Kim, we use "Mathematica" to explore a formula for $\zeta_q(n)$.
Introduction
Throughout this paper, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$ will denote the ring of integers, the field of real numbers, and the field of complex numbers, respectively.
When one talks of a q-extension, q is variously considered as an indeterminate, a complex number, or a p-adic number. In the complex number field, we will assume that $|q| < 1$ or $|q| > 1$. The q-symbol $[x]_q$ denotes $[x]_q = \frac{1 - q^x}{1 - q}$.
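As a consistency check of this definition, the $q$-symbol reduces to the ordinary number $x$ in the classical limit, which can be verified by L'Hôpital's rule:

\[
\lim_{q \to 1} [x]_q = \lim_{q \to 1} \frac{1 - q^x}{1 - q} = \lim_{q \to 1} x q^{x-1} = x.
\]

For a nonnegative integer $x$, this is also visible term by term, e.g., $[3]_q = 1 + q + q^2 \to 3$.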
In this paper, we study the q-Bernoulli polynomials due to Kim (see [2,8]) and their analytic continuation to $\beta_s(z)$. Using those results, we give a new formula for the q-Riemann zeta function due to Kim (cf. [4,6,8]) and investigate the new concept of dynamics of the zeros of analytically continued polynomials. We also observe an interesting phenomenon of "scattering" of the zeros of $\beta_s(z)$. Finally, we use the software package "Mathematica" to explore the dynamics of the zeros arising from the analytic continuation of the q-zeta function due to Kim.
By (2.1), we easily see an identity in which $\binom{n}{j}$ is a binomial coefficient. From (2.1), it is also easy to see a further identity, with the usual convention of replacing $\beta^n(h \mid q)$ by $\beta_n(h \mid q)$. By differentiating both sides of (2.1) with respect to $t$, we obtain an expression involving $q^{hn}[n]_q^m$.
Analytic continuation of q-Bernoulli polynomials
For consistency with the redefinition of $\beta_n = \beta(n)$ in (4.5) and (4.6), we set (5.1). The analytic continuation can then be obtained in terms of the integer part $[s]$ of $s$ and the fractional part $s - [s]$.

Figure caption: Deformation of the curve $\beta(2, w)$ into the curve $\beta(3, w)$ via the real analytic continuation $\beta(s, w)$, $2 \le s \le 3$, $-0.5 \le w \le 0.5$.
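The reference to the integer and fractional parts of $s$ suggests an interpolation between consecutive integer values; a minimal continuous-interpolation sketch consistent with the described deformation (the source's genuinely analytic continuation involves more structure; this sketch only illustrates the roles of $[s]$ and $s - [s]$) is

\[
\beta(s, w) = \beta_{[s]}(w) + \bigl(s - [s]\bigr)\bigl(\beta_{[s]+1}(w) - \beta_{[s]}(w)\bigr), \qquad [s] \le s \le [s] + 1,
\]

which deforms the curve $\beta(2, w)$ continuously into $\beta(3, w)$ as $s$ runs from 2 to 3.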
The role of intra-guild indirect interactions in assembling plant-pollinator networks
Understanding the assembly of plant-pollinator communities has become critical to their conservation given the rise of species invasions, extirpations, and species' range shifts. Over the course of assembly, colonizer establishment produces core interaction patterns, called motifs, which shape the trajectory of assembling network structure. Dynamic assembly models can advance our understanding of this process by linking the transient dynamics of colonizer establishment to long-term network development. In this study, we investigate the role of intra-guild indirect interactions and adaptive foraging in shaping the structure of assembling plant-pollinator networks by developing: 1) an assembly model that includes population dynamics and adaptive foraging, and 2) a motif analysis tracking the intra-guild indirect interactions of colonizing species throughout their establishment. We find that while colonizers leverage indirect competition for shared mutualistic resources to establish, adaptive foraging maintains the persistence of inferior competitors. This produces core motifs in which specialist and generalist species coexist on shared mutualistic resources, which leads to the emergence of nested networks. Further, the persistence of specialists produces richer and less connected networks, which is consistent with empirical data. Our work contributes new understanding and methods to study the effects of species' intra-guild indirect interactions on community assembly.
The authors investigate the role of intra-guild indirect interactions in plant-pollinator network assembly using motif analysis and a colonization model that incorporates adaptive foraging. They find that adaptive foraging promotes the coexistence of specialists and generalists within the same guild and that colonizers tend to form intra-guild indirect interactions with species of the opposite niche breadth (specialist colonizers with generalist incumbents for plants, and generalist colonizers with specialist incumbents for pollinators).
The manuscript is generally well written, especially the discussion, and the core ideas (motif analysis for studying indirect interactions and the effects of adaptive foraging on community assembly) are novel and interesting. My main critique is that the study is heavily theoretical/modelling/simulation-based and would really benefit from additional analyses involving empirical networks. I also have some other suggestions for improving the manuscript.
--Tighter integration of empirical networks in analyses
The authors provide some comparison of their theoretical work to empirical networks (L255, L260, etc.), but these analyses are relatively limited and appear as more of an afterthought. While I like the overall modelling approach and find the theoretical results interesting, it will be important to see whether the findings translate to empirical networks. I appreciate that it is not possible to study the assembly of these empirical networks, but can the authors do something like provide a breakdown of motif frequencies in empirical networks? Right now, the comparison centers on three broad network properties (richness, connectance, and nestedness; see Fig. 6), and the results themselves are not especially clear and convincing and, moreover, do not provide any additional support to the main findings regarding intra-guild competition and higher-level interaction patterns.
--Additional temporal analyses
The authors record motif groups in sets of three subsequent time steps (L124) but then group the motif data for all colonizers, regardless of when they established in a network (L127). It would be very interesting to see an explicit breakdown of the distribution of these three-motif sequences (i.e., similar to the information presented in Table 1). It would also be cool to see if/how motif distributions change during the colonization trajectory, e.g., early vs. middle vs. late stages.
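For what it's worth, such a breakdown could be tabulated directly from the per-colonizer motif labels; a minimal sketch in Python, with a hypothetical data layout (one tuple of motif-group labels per colonizer, in temporal order), is:

from collections import Counter

# Hypothetical input: for each colonizer, the motif-group labels recorded
# at three subsequent time steps after its colonization attempt.
motif_sequences = [
    ("Spc-Gen", "Spc-Gen", "Spc-Spc"),
    ("Gen-Spc", "Gen-Spc", "Gen-Spc"),
    ("Spc-Gen", "Spc-Gen", "Spc-Spc"),
]

# Frequency of each full three-step sequence (the requested breakdown).
sequence_counts = Counter(motif_sequences)

# One-step transitions (motif at t -> motif at t+1), useful for seeing
# how motif distributions change along the colonization trajectory.
transition_counts = Counter(
    (seq[t], seq[t + 1]) for seq in motif_sequences for t in range(len(seq) - 1)
)

for seq, n in sequence_counts.most_common():
    print(" -> ".join(seq), n)
for pair, n in transition_counts.most_common():
    print(f"{pair[0]} -> {pair[1]}: {n}")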
--Remind the reader of the four questions
I like the four research questions (L76) but found it difficult to remember each question when they were referred to simply as Q1, etc. This was especially the case when reading the section "Core motifs produced by plant species" (L142). It would be helpful if sentences could be rewritten to remind readers of each question as the answers are presented.
--Clarify what information is presented in Table 1 and Fig. 4
I did not find the information in Table 1 very easy to interpret. Is the establishment rate over all 121 simulations? Is such information worth presenting by model (W/ AF etc.) rather than broken down by plants vs. pollinators? Which rates are meant to sum to 1? The bottom line is that I did not find Table 1 very effective. Also, it wasn't clear to me if Fig. 4 (which I did find intuitive) presented the same/similar information to Table 1.
--Consider specialists as having more than one interaction
I would be curious to know whether it would be possible to perform the analysis with specialists defined by a diet breadth greater than one (L218). Could it also work if specialists had a potential diet breadth greater than one but only realized one interaction at a time? In a similar vein, do the authors think results would change significantly if species were categorized into specialists, generalists, and supergeneralists (with motifs reflecting this categorization)? Such an extension would help bring the work closer to empirical systems, in which diet breadths follow distributions that are much more complicated than specialists having 1 interaction and generalists having > 1 interactions. Also, in simulations the number of interactions of a generalist is drawn from a uniform distribution (L388); would results change if other distributions were used, particularly ones that more closely match empirical degree distributions? (A small sketch of such a comparison appears after the next comment.)

--Describe how results for plant-focused motifs fit with pollinator-focused motifs
Results for pollinators are by and large considered *separately* from results for plants. I would like to see a discussion on how the results regarding pollinators (e.g., generalist colonizers with specialist incumbents) and plants (e.g., specialist colonizers with generalist incumbents) might emerge *together* from a modelling/technical standpoint; this could go around L253.
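Regarding the degree-distribution question above, a small sketch of how the uniform draw could be swapped for a heavier-tailed rule is given below; the truncated power law and the cap k_max are illustrative assumptions, not the model's actual settings:

import numpy as np

rng = np.random.default_rng(0)
k_max = 20  # illustrative cap on a generalist's number of interactions

# Rule reportedly used in the simulations (L388): uniform over 2..k_max.
uniform_degrees = rng.integers(2, k_max + 1, size=10_000)

# Illustrative alternative: discrete truncated power law, P(k) ~ k**(-gamma),
# which more closely resembles empirical degree distributions.
gamma = 2.0
ks = np.arange(2, k_max + 1)
p = ks.astype(float) ** (-gamma)
p /= p.sum()
powerlaw_degrees = rng.choice(ks, size=10_000, p=p)

print("mean degree, uniform:  ", uniform_degrees.mean())
print("mean degree, power law:", powerlaw_degrees.mean())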
Minor comments
--Fig. 1. I don't know what a "passing" interaction is. Is there a better term that can be used instead? Maybe "transient" or "temporary"? Or perhaps add a definition in the caption.
--L109. Add a short explanation of the "dynamic species pool" and how it works.
--L122. It would be helpful to have an example of how "colonizers' interactions transform from one motif group to another." I had a hard time imagining what this would look like.
--Fig. 2. I think the figure title would be clearer if it was "...each motif group." Also, personally I would find "Spec" easier to read than "Spc".
--Fig. 3. Because most plant-pollinator networks are drawn with plants as the lower guild and pollinators as the upper guild, it would be easier for me to follow if figures reflected this convention.
--Fig. 6. I did not understand what was done based on the sentence (L272) beginning, "We performed a one-sided..." Consider expanding the technical description.
General comment
Dritz et al. developed a dynamic model to describe the assembly of mutualistic networks, a surprisingly overlooked aspect of mutualistic networks. By including adaptive foraging, they provide an innovative theoretical approach to the question, allowing them to account for the plasticity of interactions, which is often neglected. By combining this model with a motif approach, they could track the impact of colonization on the network over the assembly process. This article is definitely a nice piece of science that provides new results about the dynamics of invasion in a network context. The introduction and discussion are well written and clear. I have no major comments about this manuscript. I have two main points of discussion I would like to raise: 1) the use of motifs and possible biases and limits; 2) at the moment, big parts of the methods are quite obscure and require a lot of energy from the reader to be understood, because we need to read a previous article about the model, and the format (methods at the end) does not help. I detail these points in the following paragraphs. Otherwise, figures and results could be a bit clearer, but as I said above, these are not major issues and this article is really worth reading; thanks to the authors for that.
The authors focus on indirect effects that propagate through paths of length 2 only, i.e., the shortest indirect effects possible, but this is never highlighted, and it would be nice to discuss this point. Many studies have shown that in diverse networks like those studied by the authors, an important part of indirect effects propagates through long paths (Higashi & Nakajima 1995; Nakajima & Higashi 1995; Guimarães et al. 2017; Pires et al. 2020). Some of the same studies use the Jacobian matrix of dynamical systems to study indirect effects over all possible paths (Nakajima & Higashi 1995), which, in contrast to the motif approach, allows these effects to be integrated and their strength to be measured, including abundances and interaction strengths to calculate propagation from species to species. Thus, I wonder why the authors preferred the motif approach over this method. About the motif analyses, also see my comment for lines 395-406, which is, I think, of importance.
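For concreteness, the Jacobian-based alternative mentioned here computes net (direct plus indirect) effects by summing influences over paths of every length; a minimal sketch, with a small hypothetical interaction matrix and a convergence check, is:

import numpy as np

# Hypothetical community: A[i, j] is the direct per-capita effect of
# species j on species i (signs and magnitudes are illustrative only).
A = np.array([
    [ 0.0, 0.3, -0.1],
    [ 0.2, 0.0,  0.4],
    [-0.2, 0.3,  0.0],
])

# Net effects over paths of every length: sum_{k>=1} A**k = (I - A)^-1 - I,
# valid when the spectral radius of A is below 1 (checked below).
assert np.max(np.abs(np.linalg.eigvals(A))) < 1, "series does not converge"
I = np.eye(A.shape[0])
net = np.linalg.inv(I - A) - I

# Contribution of indirect paths only (length >= 2).
indirect = net - A
print(net.round(3))
print(indirect.round(3))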
The model is not understandable without referring to other papers, because parameters and variables are not described at all in the main Methods and only partially in Appendix S2, which is, I think, a problem for understanding the study. Some important equations are missing, for example the equation linking certain variables to species abundances. To summarise, in the absence of further explanation and description, the equations presented in the Methods are of little use. Nevertheless, once the reader has made the effort to read the previous paper (Valdovinos et al. 2013), this model is definitely a nice piece of work and this paper a nice piece of science. But more effort is needed in explaining (again) the model in the present study.
Point-by-point comments
Line 38: I do not know this paper, but from quickly reading through it I do not see how ref. 20 (Bogdziewicz et al. 2018) is linked to this point. I might have missed something, but it would be good to check whether it is the right reference.
Lines 75-79: here and in the following parts, I started to have a problem of terminology. In question 1, a colonizer is a species that attempts to colonize the system, regardless of the success of the colonization. But in the following questions, a colonizer becomes a successful colonizer. I think it is important to distinguish both. When the authors study motifs, do they present the results only for successful colonizers or for all colonizers? Edit: the answer to that question is present at lines 407-413, but since the Methods are at the end, it might be good to find a way to clarify it in the main text.
Lines 78-79: at this point, it is not intuitive to me what is behind question 3. I struggle to see where this question goes, which kind of mechanism it aims to explore (competitive exclusion? niche partitioning?) and what the authors' assumptions are.
Lines 111-113: without further explanation, this sentence is very enigmatic. It could be either developed here, or removed from here and developed in the Methods.
Line 116: before entering into the motif description, I really missed basic information on network size at the end of the process. So, it starts with 3x3 networks, and could end with 150x150 networks if all colonizing species survived and there were no extinctions. Knowing the average network size at the end of the simulation would be helpful for understanding the motif analyses.
Edit: After continuing reading, I found the richness and connectance plots later, but I really think some description of the dynamics and end point of the simulation is missing before describing the motifs part; otherwise it is really abstract. Authors could have a first plot at this stage that describes the dynamics of the simulations and the number of successful colonizations and extinctions. Something in the spirit of what is below.
Lines 124-127: I should admit that for me it was not clear all along the article; I had the feeling I had to guess when the authors used the motif counts calculated at the moment of the colonization, after the extinctions, or after the subsequent colonization event. I think it could be labelled more clearly.

Fig. 2: I think that a bit of colour could help to understand the figure better.

Fig. 2 | Schematic representations of each motif. Each motif characterizes the niche breadth (specialist or generalist) of the focal colonizing species (species highlighted by the box) and the niche breadth of species that share its mutualistic partner(s), as follows: 1) "Spc - Spc", specialist colonizer (blue) that interacts indirectly with at least one specialist in its guild (orange): …

Fig. 3: in C, it took me time to guess that the authors mean "No subsequent specialist establishment", as "subsequent" is missing.
It is just a convention that can be changed, but having the plants as the top guild of the bipartite network is not intuitive (they are often represented as the bottom guild, with pollinators as the top guild).
Since the paper is complex, I would stick to classic norms as much as possible to keep the readers focused on the important complexity.
Table 1: so this table corresponds to motif counts at the moment of the colonizer's arrival? It would be good to be very clear about that.
Lines 285-288: Isn't it Table 1, rather than Figure 4, that shows this result?
Lines 293-297: I find the wording "mutualisms mediate species coexistence" a bit weird. I would have said "adaptive foraging mediates species coexistence", as without adaptive foraging mutualism does not maintain high coexistence.
Lines 346-350: About the emergence of nestedness through indirectly connected species, it could be worthwhile to link this result to similar results found when introducing a phenological structure in mutualistic networks (Duchenne et al. 2021). Like adaptive foraging, phenological structure also protects specialists from extinction through indirect effects, and thus promotes nestedness and stable coexistence of a higher diversity (Duchenne et al. 2021).
Lines 376-380: I could not find the meaning and units of …, …, ….
Variable names are not described. Often, for readability, variables are represented with capital letters, while parameters use lowercase letters; I think following this convention would help readers quickly understand the equations, which are here really hard to understand, especially without descriptions.
Basic assumptions of the model (e.g., linear functional response) have to be guessed from the equations but are not explicitly mentioned.
Finally, I have the feeling that too much important information has to be checked in the appendix. In my opinion the model, which is the core of the article, should be described in the Methods, not in the supplementary material. Showing the equations without parameter descriptions is almost useless, as readers do not have any idea of what they represent. The paper should be roughly understandable without the appendix, I would say; here, without the model description, it is not the case.
Equations 2 & 3: I spent a long time trying to understand why the authors divided by … in the functional response. This was not clear for me, even after reading Valdovinos et al. (2013). I guess it is because … is already included in …. This kind of thing could be explained explicitly to the readers, to avoid giving them a headache :)

Equation 3: Why not use a classical type 2 functional response for saturating the production of resources? As the model behaviour is already described with this formulation, I understand that the authors stuck to this choice, but I wonder if there is a specific argument behind it.
Line 385: Why 121? Where does that come from? What are the parameter combinations that are tested?
Lines 395-406: Authors performed motif counts that are not corrected for richness, while they show that species richness differs between cases with or without adaptive foraging. I guess the number of different motifs in each category (Spc-Gen, Gen-Spc, etc.) increases with network size, doesn't it?
The richer the network, the more we expect motifs 1 and 3, because they are based on the logical condition "at least", while we expect fewer motifs 2 and 4, because they are based on the logical condition "only". This is exactly the pattern we observe between simulations with adaptive foraging (more motifs 1 and 3) and simulations without adaptive foraging (fewer motifs 1 and 3, more motifs 2 and 4), which could be explained entirely by the difference in species richness.
Thus, I wonder what the authors think about the possibility that the differences in motif counts are driven mainly by differences in species richness?
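As an illustration of this point, here is a minimal numerical sketch (ours, not from the review or the manuscript; the specialist fraction q and the number of indirect partners m are purely illustrative) of why "at least one specialist" conditions become more frequent, and "only generalists" conditions rarer, as the number of indirect partners grows with richness:

```python
# If each intra-guild indirect partner of a colonizer is a specialist
# independently with probability q, the "at least one specialist" motif
# condition holds with probability 1 - (1 - q)^m, which grows with the
# number m of indirect partners -- and m grows with species richness.
# The "only generalists" condition, (1 - q)^m, shrinks accordingly.
q = 0.3                                   # illustrative specialist fraction
for m in (2, 5, 10, 30):                  # indirect partners ~ richness proxy
    p_at_least_one = 1 - (1 - q) ** m     # e.g. the "at least" motifs 1 and 3
    p_only_gen = (1 - q) ** m             # e.g. the "only" motifs 2 and 4
    print(f"m={m:3d}  at-least-one-specialist={p_at_least_one:.3f}  "
          f"only-generalists={p_only_gen:.3f}")
```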
Lines 414-424: I realized at the end that what the authors call a network is not obvious. Do they mean the interaction matrix defined by … (then including species abundances), or something independent from species abundances? It would be good to clarify it from the beginning.
Discussion: the limits of the model are never highlighted or discussed, while I think it is always a good exercise, and having the potential limits clearly written in the discussion helps the reader to link this work with previous ones.
Table S1: the last column of Table S1 is called "Motif frequency after establishments", where I guess "after establishments" is synonymous with "after the subsequent colonization event" (line 127). I would suggest being very consistent in the terminology used to help the readers, especially maintaining the word "subsequent" everywhere it is needed, to distinguish first colonization events from subsequent ones.
Appendix S2: if mortality is per capita, then it should be in time^-1 individuals^-1, shouldn't it?
Moreover, if parameters are drawn from a Gaussian distribution (which is not indicated), that means that negative values are possible, which does not seem rational. How do the authors deal with that?
REVIEWER COMMENTS
Reviewer #1 (Remarks to the Author): Review of "The role of intra-guild indirect interactions in assembling plant-pollinator networks," by Dritz et al.
The authors investigate the role of intra-guild indirect interactions on plant-pollinator network assembly using motif analysis and a colonization model that incorporates adaptive foraging. They find that adaptive foraging promotes the coexistence of specialists and generalists within the same guild and that colonizers tend to form intra-guild indirect interactions with species of opposite niche breadth (specialist colonizers with generalist incumbents for plants, and generalist colonizers with specialist incumbents for pollinators).
The manuscript is generally well written, especially the discussion, and the core ideas---motif analysis for studying indirect interactions and the effects of adaptive foraging on community assembly---are novel and interesting. My main critique is that the study is heavily theoretical/modelling/simulation-based and would really benefit from additional analyses involving empirical networks. I also have some other suggestions for improving the manuscript.
Response: We thank the reviewer for the constructive criticism and useful suggestions that have greatly improved the quality of our manuscript. We have incorporated them all in the new version of our manuscript except for the additional analysis involving empirical plant-pollinator networks. Our next response explains why our motif analysis cannot be applied to static networks, and why current temporal plant-pollinator network data are not well-suited for our analysis. We connect our results with empirical findings in the discussion section, which is the best we can do in terms of data-theory integration given the limitations of current empirical data. After all, one main reason we use theoretical approaches to investigate important questions in ecology is that empirical research has temporal, spatial, and other scale-related limitations. We elaborate on this in the discussion section (L450-461).
--Tighter integration of empirical networks in analyses
The authors provide some comparison of their theoretical work to empirical networks (L255, L260, etc.), but these analyses are relatively limited and appear as more of an afterthought. While I like the overall modelling approach and find the theoretical results interesting, it will be important to see whether the findings translate to empirical networks. I appreciate that it is not possible to study the assembly of these empirical networks, but can the authors do something like provide a breakdown of motif frequencies in empirical networks? Right now, the comparison centers on three broad network properties (richness, connectance, and nestedness, see Fig. 6) and the results themselves are not especially clear and convincing and, moreover, do not provide any additional support to the main findings regarding intra-guild competition and higher-level interaction patterns.
Response: We agree with the reviewer that the comparison with empirical networks based on broad properties (richness, connectance, and nestedness) does not provide strong support for our main findings of how intra-guild indirect interactions of colonizers contribute to higher-level interaction patterns in assembling pollination networks. For this reason, that figure was moved to the Supplementary Information (see new Figure S1).
However, to the best of our knowledge, there is no empirical data on the assembly of plant-pollinator networks both at the scale of 150 years and at a fine enough temporal resolution (i.e., sampled at least every 3 years) to make a fair comparison with our motif results. Moreover, the motifs developed in this study are not meant to be a metric to characterize the structure of static pollination networks. Rather, they are meant as a tool to investigate the transient dynamics of colonizer establishment via intra-guild indirect effects (L154-157). Specifically, our motifs focus on a focal species that is followed from its colonization to subsequent extinctions and colonizations. Therefore, providing a breakdown of motif frequencies in empirical networks (shown below for all 121 empirical networks used for new Figure S1) would be an erroneous way to use our motif analysis, as there is no assembly information that can be tracked for each focal species. Therefore, we cannot include this analysis in the new version of our manuscript. We explain this limitation of current empirical data, which is also the strength of our study (as we can unveil an understanding of network assembly that is not possible to obtain via empirical research, at least with current data), in L450-461.
--Additional temporal analyses
The authors record motif groups in sets of three subsequent time steps (L124) but then group the motif data for all colonizers, regardless of when they established in a network (L127). It would be very interesting to see an explicit breakdown of the distribution of these three-motif sequences (i.e., similar to information presented in Table 1). It would also be cool to see if/how motif distributions change during the colonization trajectory, e.g., early vs. middle vs. late stages.

Response: Following the reviewer's suggestion, we extended our theoretical analysis to investigate whether motif distributions among attempted and established colonizers varied among early-, mid-, and late-stage colonizers by adding Fig. S2, which more intuitively illustrates the information previously presented in old Table 1 (now new Table S2). We found that the distribution of motifs among attempted colonizers does not vary significantly between these groups. However, the establishment rate across all motif groups decreased over time, meaning that early colonizers established at a higher rate than later colonizers, which is consistent with the empirical evidence we referred to in the discussion. We also expanded new Table S3 (prior Table S1) to more explicitly show the cumulative distribution of motifs (across all colonizers in each assembly model) at each of the three subsequent timesteps, that is: motif frequency after establishment, motif frequency after extinctions, and motif frequency after subsequent establishment.
--Remind the reader of the four questions
I like the four research questions (L76) but found it difficult to remember each question when they were referred to simply by Q1 etc. This was especially the case when reading the section "Core motifs produced by plant species" (L142). It would be helpful if sentences could be rewritten to remind readers of each question as the answers are presented.
Response: Agreed. Sentences were rewritten throughout the results to clarify which question each result addresses. For example, for the section "Core motifs produced by plant species with adaptive foraging" these questions can be found in L193, L196, L200, and L204.
--Clarify what information is presented in Table 1 and Fig. 4
I did not find the information in Table 1 very easy to interpret. Is the establishment rate over all 121 simulations? Is such information worth presenting by model (W/ AF etc.) rather than broken down by plants vs. pollinators? Which rates are meant to sum to 1? The bottom line is that I did not find Table 1 very effective. Also, it wasn't clear to me if Fig. 4 (which I did find intuitive) presented the same/similar information to Table 1.

Response: We agree with the reviewer that Table 1 in the previous version of our manuscript was not easy to interpret, so we moved it to the supplementary information (Table S2 in the revised manuscript) and replaced it with a new figure (Fig. 4 in the revised manuscript) which more intuitively shows the same information. To clarify, Fig. 4 in the previous manuscript showed the distribution of motifs following extinctions and subsequent establishment, while Table 1 in the previous manuscript showed the distribution of motifs prior to extinctions and subsequent establishment. Captions of new Table S2, new Figure 4, and new Figure 7 clarify this difference.

--Consider specialists as having more than one interaction
I would be curious to know if it would be possible to perform the analysis with specialists defined by a diet breadth greater than one (L218)? Could it also work if specialists had a potential diet breadth greater than one but only realized one interaction at a time? In a similar vein, do the authors think results would change significantly if species were categorized into specialists, generalists, and super-generalists (with motifs reflecting this categorization)? Such an extension would help bring the work closer to empirical systems, in which diet breadths follow distributions that are much more complicated than just specialists having 1 interaction and generalists having > 1 interactions. Also, in simulations the number of interactions of a generalist is drawn from a uniform distribution (L388) --- would results change if other distributions were used, particularly ones that more closely match empirical degree distributions?

Response: We thank the reviewer for making these comments, which helped us see that we needed to clarify that our assembly model actually reproduces all the attributes of empirical networks the reviewer points out. We address this set of comments by adding more panels to new Figure S1. We explain how the new panels of Fig. S1 answer each comment in the last paragraph of this response. The definition of specialist plant and pollinator species having only one interaction, while generalist species have more than one, directly connects with qualitatively different behaviors of the Valdovinos et al. (2013) model. On the one hand, specialist plant species offer the most exclusive floral rewards to the pollinator species that visit them, and specialist pollinator species offer the highest quality of visits to the one plant species they visit but cannot adaptively re-arrange their foraging efforts because they only interact with one plant species. On the other hand, the rewards offered by generalist plant species are shared by several pollinator species (causing exploitative competition), and generalist pollinator species visit several plant species, diluting their conspecific pollen. Given these very distinct differences in model behavior between species having one interaction versus having more than one interaction, our motif analysis based on such a distinction is very effective in capturing the effect of intra-guild indirect interactions on community assembly.
There is only a quantitative difference in model behavior between generalist and super-generalist pollinator species (e.g., between 3 and 20 interactions); that is, super-generalist pollinators have the greatest niche flexibility and as a result become the most quantitatively specialized. However, distinguishing between generalists and super-generalists does not increase our ability to detect intra-guild indirect interactions.
In summary, the distinction between specialists and generalists in our analysis is the most effective way to detect intra-guild indirect interactions in a very complex set-up (assembly + population dynamics + adaptive foraging), which is a strength of our work. That is, it is a smart distinction to conduct an analysis that disentangles simple dynamic patterns from a very complex dynamical process. Our assembly model still produces the degree distributions (including the full array of specialist to super-generalist species) observed in the empirical networks that the reviewer is referring to (see new Fig. S1A,B). Moreover, the species that colonize with only one interaction usually end up with more than one interaction throughout the assembly process. Thus, in essence we are already modeling species that have "a potential diet breadth greater than one but only realized one interaction at a time" rather than "true specialists" as the reviewer suggests. Finally, to the best of our knowledge, there is no empirical data on the distribution of colonizers' degrees when they enter a network. Therefore, we believe that the uniform distribution is the most parsimonious.
--Describe how results for plant-focused motifs fit with pollinator-focused motifs
Results for pollinators are by and large considered *separately* from results for plants. I would like to see a discussion on how the results regarding pollinators (e.g., generalist colonizers with specialist incumbents) and plants (e.g., specialist colonizers with generalist incumbents) might emerge *together* from a modelling/technical standpoint; this could go around L253.
Response: We followed the reviewer's suggestion by discussing in L406-414 how the core "Spec-Gen" and "Gen-Spec" motifs for plants and pollinators, respectively, can emerge together. That is, the specialist plant colonizers in motif group "Spec-Gen" can provide exclusive resources to generalist pollinator colonizers in motif group "Gen-Spec", who can in turn perform high quality visits by quantitatively specializing on those plants. This advantageous "mega-motif" combining the plants' "Spec-Gen" and pollinators' "Gen-Spec" motifs is the most likely to emerge from assembly given its positive, direct effects on species of the opposite guild. Moreover, this "mega-motif" would lead to even more nested networks.
--Fig. 1. I don't know what a "passing" interaction is. Is there a better term that can be used instead? Maybe "transient" or "temporary"? Or perhaps add a definition in the caption.
Response: Changed "passing" to "extinct". The dashed line is meant to indicate which interactions disappear as a result of a species becoming extinct.
--L109. Add a short explanation of the "dynamic species pool" and how it works.
Response: This phrase was deleted. The intention was to say that colonizing species (identified by characteristics such as their foraging efficiency or nectar production rate) are randomly generated rather than originating from a finite regional species pool. However, on further investigation, the phrase "dynamic species pool" indicates in prior literature that eco-evolutionary feedbacks are influencing species' characteristics, which is not the case in this model. Thus, we deleted the phrase to avoid any confusion with prior uses of this phrase.
--"...from one motif group to another." I had a hard time imagining what this would look like.
Response: An example was added to L165-167. That is: "For instance, if a specialist colonizer belonging to motif group "Spec-Spec" transformed to motif group "Spec-Gen", this would indicate that all intra-guild indirect specialists went extinct".
--Fig. 2. I think the figure title would be clearer if it was, "...each motif group." Also, personally I would find "Spec" easier to read than "Spc".

Response: Both suggestions were incorporated in what is now Figure 3 (old Fig. 2).

--Fig. 3. Because most plant-pollinator networks are drawn with plants as the lower guild and pollinators as the upper guild, it would be easier for me to follow if figures reflected this convention.

Response: Thank you for the suggestion, we flipped the motifs in the figure to conform with the convention of plants as the lower guild.

--Fig. 6. I did not understand what was done based on the sentence (L272) beginning, "We performed a one-sided..." Consider expanding the technical description.

Response: We added an Appendix S3 to explain which structural metrics we used in our analysis and which statistical tests were performed. We performed Welch tests to statistically evaluate the hypothesis that networks assembled from the model with adaptive foraging are richer, less connected, have a higher pollinator:plant ratio, and are more nested than networks assembled from the model without adaptive foraging. The Welch tests were paired to compare networks populated by specialist plants and specialist pollinators at the same probability. Additionally, we performed Welch tests to statistically evaluate the hypothesis that the connectance, plant:pollinator ratio, and richness of empirical networks are significantly different from simulated networks produced from our assembly model with adaptive foraging.
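For readers unfamiliar with the Welch test mentioned in this response, here is a minimal sketch (ours, not the authors' code; the richness values are synthetic) of a one-sided Welch t-test in Python:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic richness values for networks assembled with / without adaptive
# foraging (unequal variances, hence Welch rather than Student's t-test).
rich_af = rng.normal(80, 10, size=50)
rich_no_af = rng.normal(60, 20, size=50)

# equal_var=False gives the Welch t-test; alternative="greater" tests the
# one-sided hypothesis that adaptive-foraging networks are richer.
t, p = stats.ttest_ind(rich_af, rich_no_af, equal_var=False,
                       alternative="greater")
print(f"Welch t = {t:.2f}, one-sided p = {p:.4f}")
```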
--L327. I found the argument involving A. mellifera and Bombus spp. confusing, since the example with Bombus spp. doesn't seem to support the general thesis of the paragraph.
Response: Agreed, thank you for pointing this out. We edited the text to acknowledge that A. mellifera follows the trend in our model but Bombus spp. do not (L386-389).
General comment
Dritz et al. developed a dynamic model to describe the assembly of mutualistic networks, a surprisingly overlooked aspect of mutualistic networks. By including adaptive foraging in this article, they provide an innovative theoretical approach to the question, allowing them to account for the plasticity of interactions, which is often neglected. By combining this model with a motif approach, they could track the impact of colonization on the network over the assembly process. This article is definitely a nice piece of science that provides new results about the dynamics of invasion in a network context. The introduction and discussion are well written and clear. I have no major comments about this manuscript. I have two main points of discussion I would like to raise: 1) about the use of motifs and possible biases and limits; 2) for the moment, big parts of the methods are quite obscure and require a lot of energy from the reader to be understood, because we need to read a previous article about the model, and the format (methods at the end) does not help. I detail these points in the following paragraphs. Otherwise, figures and results could be a bit clearer, but as I said above, these are no major issues and this article is really worth reading; thanks to the authors for that.
Response: We thank the reviewer for the constructive criticism and useful suggestions that have greatly improved the quality of our work. We have addressed them all in the new version of our manuscript. We paid special attention to making the Methods much easier to follow, and provided all the information needed to understand them. Specifically, we brought all the methods from previous papers and our supplementary material into the new version of our Methods section.
Authors focused on indirect effects that propagate through paths of length 2 only, i.e., the shortest indirect effects possible, but this is never highlighted, while it would be nice to discuss this point. Many studies have shown that in diverse networks like those studied by the authors, an important part of indirect effects propagates through long paths (Higashi & Nakajima 1995; Nakajima & Higashi 1995; Guimarães et al. 2017; Pires et al. 2020). The same studies use the Jacobian matrix of dynamical systems to study indirect effects over all possible paths (Nakajima & Higashi 1995), which, in contrast to the motif approach, allows these effects to be integrated and their strength measured, including abundances and interaction strengths to calculate propagation from species to species. Thus, I wonder why the authors preferred the motif approach over this method? About the motif analyses, also see my comment for lines 395-406, which is, I think, of importance.
Response: Thank you for the suggestion as well as the great references. We added a discussion comparing the relative strengths of each method in L42-49. Previous studies have used mathematical methods to determine the strength of indirect effects through short and long paths by accounting for positive and negative feedbacks. Network motifs function as a complementary tool to those methods by establishing a connection between indirect effects and network structure. The trade-off is that considering indirect effects through longer paths requires larger and more complex motifs, which are more difficult to interpret. We chose to only consider indirect effects among species sharing a direct mutualistic partner (a path of length 2) to detect clear dynamic patterns in a relatively complex set-up (assembly + population dynamics + adaptive foraging). However, defining more complex motifs to capture indirect effects through longer paths is an avenue for future research.
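As an aside to this exchange, here is a minimal sketch (ours, not from the manuscript or the cited papers; the community matrix is an arbitrary toy example) of the Jacobian-based view the reviewer describes: in a linearized system, entries of A^k aggregate effects propagating through paths of length k, and the net-effects matrix (I - A)^-1 = sum_k A^k integrates them over all path lengths when the series converges.

```python
import numpy as np

# Toy 4-species community matrix: A[i, j] is the direct effect of j on i.
# Values are arbitrary and scaled so the spectral radius is < 1, which
# guarantees that sum_k A^k converges to the net-effects matrix.
A = np.array([[0.0, 0.3, 0.0, 0.0],
              [0.3, 0.0, 0.2, 0.0],
              [0.0, 0.2, 0.0, 0.3],
              [0.0, 0.0, 0.3, 0.0]])
assert np.max(np.abs(np.linalg.eigvals(A))) < 1

paths2 = np.linalg.matrix_power(A, 2)   # effects through paths of length 2
paths3 = np.linalg.matrix_power(A, 3)   # ... length 3, and so on
net = np.linalg.inv(np.eye(4) - A)      # (I - A)^-1 = I + A + A^2 + ...

# Species 0 and 3 share no direct link, yet the net effect is non-zero:
print("direct 3->0 :", A[0, 3])
print("length-2    :", paths2[0, 3])    # zero: no 2-step path from 3 to 0
print("length-3    :", paths3[0, 3])    # first non-zero contribution
print("net effect  :", net[0, 3])
```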
The model is not understandable without referring to other papers, because parameters and variables are not described at all in the main Methods and only partially in Appendix S2, which is, I think, a problem for understanding the study. Some important equations are missing, for example the equation linking … and … to species abundances. To summarise, in the absence of further explanations and descriptions, the equations presented in the Methods are useless. Nevertheless, once the reader has made the effort to read the previous paper (Valdovinos et al. 2013), this model is definitely a nice piece of work and this paper a nice piece of science. But more effort is needed in explaining (again) the model in the present study.
Response: Thank you for pointing this out; we brought all the methods from previous papers and our supplementary material into the new version of our Methods section, which greatly improves the readability of our manuscript. We added descriptions of each parameter to the text of the Methods. Additionally, we added equations for V_ij (Eq. 5), sigma_ij (Eq. 6), and gamma_i (Eq. 7). Lastly, we expressed each equation in relation to the functional response to ease model interpretation. The functional response is defined in Eq. 8.
Point-by-point comments
Line 38: I do not know this paper, but from quickly reading through it I do not see how ref. 20 (Bogdziewicz et al. 2018) is linked to this point. I might have missed something, but it would be good to check whether it is the right reference.
Response: Agreed, this reference is related to the reproductive benefits of indirect effects but not to assembly, so this citation was moved to the last line in the paragraph.
Lines 75-79: here and in the following parts, I started to have a problem of terminology to fully …

Reviewer #1 (Remarks to the Author):
Review of revised "The role of intra-guild indirect interactions in assembling plant-pollinator networks," by Dritz et al.
I am Reviewer 1 from the first round of reviews. I have read the revised manuscript and the authors' response to reviews. I appreciate the thoughtful replies to my comments and understand the rationale when changes were made and when they were not. I believe the work to be technically sound and I like the theoretical ideas and methods presented in the manuscript. However, I still would have liked to have seen greater integration with empirical data, although I understand how this is challenging. Perhaps the authors could provide more details on how their ideas could be tested with empirical data that could be collected on time scales of years rather than decades, beyond just the text added in the closing paragraph. In conclusion, while I cannot offer wholehearted support for publication, I would not stand in the way.
Hydraulic Conductivity of the Permeable Asphalt Pavement – Laboratory vs In Situ Test
Permeable asphalt pavements (PAP) are a key measure for mitigating climate change effects in urban areas. Cities are becoming increasingly dense and have large waterproofed areas due to the excessive construction of buildings and highways, which prevents rainwater from draining into the soil. Recently, the study of PAP with a double layer porous asphalt (DLPA) has been an alternative to the use of a single porous asphalt layer (PA), with recognized advantages in increasing water infiltration and, consequently, in decreasing surface runoff. A small-scale PAP was developed in the field to assess its capacity to respond to floods. The purpose of this study is to evaluate the hydraulic conductivity (K) of the DLPA applied on the PAP, both under laboratory conditions and under field conditions, and to verify the representativeness of the laboratory results in relation to the results obtained in situ. In the laboratory, the LCS permeameter was used, which evaluates the vertical and horizontal hydraulic conductivity, both in specimens produced in the laboratory and in cores extracted in situ. In the field, the LCS permeameter and the falling head permeameter were used to measure the hydraulic conductivity and the relative hydraulic conductivity (HC), respectively. The laboratory tests were performed according to Standards EN 12697-19 and NLT 327 and the in situ tests according to Standards EN 12697-40 and NLT 327. It was verified that the laboratory-produced specimens of the two porous layers showed values of K (vertical and horizontal) lower than those obtained in the field cores, both for the individual PA layers and for the DLPA. Thus, it was found that the study in a controlled environment differs in terms of results. This divergence justified the need to perform a field study in order to perceive the actual performance of the PAP surface layer. This performance was characterized by the values of K (m/s) and HC (s⁻¹), from which it was not possible to obtain a relation. From this study it was concluded that the laboratory methods for measuring hydraulic conductivity were close to the in situ behaviour; however, evaluation under real conditions is always essential.
Introduction
Excessive construction and waterproofing of natural soils in cities has been a problem for society. The main consequences associated with this waterproofing have a negative impact through the increase in rainwater volume (flooding), the intensification of the urban heat island effect and the contamination of runoff water [1-3]. Permeable pavements emerged as a mitigation measure for these problems [4-7]. Concretely, permeable asphalt pavements (PAP) have a completely porous structure, which reduces surface flow and allows precipitated water to be mostly infiltrated and sent to the subsoil, if it has drainage capacity, or stored in a reservoir for future uses. The permeable surfaces of these pavements, namely porous asphalt (PA), consist of a group of aggregates, mainly coarse, that are arranged in such a way that the voids between them allow the infiltration of water, obtaining a voids content between 18 and 20 % and thus a high permeability [8]. The clogging of pavements with porous surfaces leads to a loss of permeability, this being the main problem pointed to in the application of PA, as highlighted in several studies [9,10]. This problem is mainly due to the long-term deposition of sediments from vehicle debris and/or transported by surface runoff from adjacent impervious areas. Thus, the use of a double layer porous asphalt (DLPA) on the pavement surface attenuates the unfavourable effects of clogging and, consequently, facilitates water drainage through the filter effect achieved by the two layers [11].
The in situ infiltration capacity of PAP and their hydraulic performance have been widely evaluated by the measurement of surface hydraulic conductivity. The hydraulic conductivity, or permeability, translates the greater or lesser ease with which water moves in a given pavement and depends on the fluid characteristics, the porous medium (texture and structure) and its water content [12,13]. Different devices and methods are known for directly measuring in situ hydraulic conductivity, such as single-ring and double-ring infiltrometers [9,14,15], falling or constant head permeameters and the LCS permeameter (Laboratorio de Caminos de Santander) [15-17]. Several studies have evaluated the surface hydraulic conductivity of permeable pavements with different surfaces, such as porous concrete (PC), permeable interlocking concrete pavements (PICP), precast concrete blocks and plastic grid pavers [16-22]. Sañudo-Fontaneda et al. [23] carried out an in-depth investigation of the hydraulic behaviour of PMPC (polymer-modified porous concrete) permeable pavements and PA, through the LCS permeameter, in a parking lot after 5 years in service. Parking areas with PMPC showed a reduction in infiltration capacity from 0.020 m/s to 0.0041 m/s (79.43 %), while parking areas with porous asphalt showed a reduction from 0.012 m/s to 0.0022 m/s (82.04 %). The study verified that the PA surface did not present significant differences in infiltration capacity between the different points studied in the parking lots, compared to the porous concrete. Using the LCS permeameter in situ, with different surfaces, Fernández-Barrera et al. [16] obtained average discharge times of 21 s for surfaces with precast concrete blocks and porous asphalt under light traffic, while the porous asphalt surface with greater traffic intensity led to irregular values between about 50 s and above 1800 s, indicating that some areas need maintenance to recover the infiltration. In a larger study, Al-Rubaei et al. [9] evaluated, through the double-ring infiltrometer, two porous asphalt pavements which, after 18 and 24 years in service, had a substantially lower infiltration capacity than the initial one (from > 4.8×10⁻³ m/s to about 0.8×10⁻⁵ and 0.4×10⁻⁵ m/s) due to surface obstruction. Kumar et al. [14] also analysed the infiltration capacity of several permeable pavements with the double-ring infiltrometer, including PA, whose hydraulic conductivity decreased from 31.1×10⁻³ m/s to 9.1×10⁻³ m/s at the end of 4 years of service. Regarding the evaluation of infiltration capacity and hydraulic performance under laboratory conditions, the permeameter of the European Standard EN 12697-19 [24] and the Spanish Standard NLT-327 [25] have been used; the latter, although developed for in situ assays, is also suitable for use in laboratory tests. Marchioni et al. [21] showed in the laboratory that the simulated rainfall intensity did not produce significant differences in the discharge time of PC and PA slabs using the permeameter of EN 12697-19, through which they obtained times between 20 and 105 s on PA slabs.
Among the different in situ devices, the falling head permeameter and the LCS permeameter stand out. The falling head permeameter method is described in European Standard EN 12697-40 [26], which expresses the results as a relative hydraulic conductivity (in s⁻¹) resulting from the combination of vertical and horizontal hydraulic conductivity [12]. The LCS permeameter measures the infiltration capacity according to the Spanish Standard NLT-327, which provides a logarithmic equation from which the hydraulic conductivity is determined (in m/s) [16]. However, it is difficult to compare hydraulic conductivity results, both between the different field methods and between these and laboratory methods. Fernández-Barrera et al. [16] state that samples taken from the pavement surface and subsequently tested in a laboratory may not simulate the actual conditions to which the pavement is subjected, and therefore in situ permeameters must also be used. To compare hydraulic conductivity results, it is essential to obtain the measured parameter by the same methods wherever possible [12]. The comparative study performed by Maupin [27] between pavement cores and laboratory-produced test specimens, using falling head tests, showed inconsistency in the hydraulic conductivity values, which were generally higher in field cores due to the existence of excessive voids. Gogula et al. [28] carried out comparative studies of laboratory and field tests, also based on the falling head principle, and obtained hydraulic conductivity values in the field much higher than those of the laboratory. The differences in that study are justified by the lack of horizontal flow limitation in field tests, since the same test principle was used. The results obtained with EN 12697-40 are not directly comparable with the results of other methods, such as those obtained by EN 12697-19. Even the permeameters used in situ, falling head and LCS, are not comparable to each other. The method described in EN 12697-40 only allows changes in infiltration capacity to be established over time, whereas the method described in NLT-327 allows, in addition to the characterization over time, comparison with other methods that follow the falling head principle, obtaining a hydraulic conductivity.
Hydraulic conductivity is one of the most important properties of a water-permeable medium, such as permeable asphalt pavements, and has been the subject of several research studies. However, there is still a need to evaluate it in comparative terms between different methods used in the laboratory and in situ. Jiang et al. [29] report the importance of constructing permeable pavements with different material combinations and layer thicknesses, as well as of obtaining relationships between laboratory and field results. In view of this need, a PAP with a DLPA applied on the surface was constructed in a parking lot. The research objective was to evaluate the PAP behaviour regarding hydraulic conductivity and to compare the results obtained by laboratory and in situ methods. To do this, laboratory-produced test specimens and cores extracted from field slabs were tested in the laboratory and, in situ, a scheme was defined with different test points in three parking spaces, measuring the discharge time through the LCS permeameter and the falling head permeameter.
Methodology
The present investigation was carried out in a car park built in August 2017 in the municipality of Covilhã (Portugal), with 37.5 m², consisting of three spaces with dimensions of 2.5 × 5 m. The PAP characterization was performed 4 and 7 months after the parking lot came into operation for light traffic, without performing any maintenance operation. Since the car park is located in a leisure area, it is expected to be used mainly by light vehicles, which leads to less deterioration of the pavement surface layer.
The PAP structure studied is composed of a reservoir with 15/25 aggregate, 25 cm thick, a regularization layer with 5/15 aggregate, 9 cm thick, and a 7 cm thick DLPA surface layer. Generally, the DLPA is composed of a porous asphalt with fine aggregates at the surface, 3 cm thick (PA1), followed by a porous asphalt with coarse aggregates, 4 cm thick (PA2). The set of PA1 and PA2 mixtures acts as a filter that makes it difficult for sediments to pass to the next layers, leading to the attenuation of the clogging effect in PAP. The specific characteristics of the DLPA can be verified in the study by Afonso et al. [30].
The study focused on the characterization of the hydraulic conductivity of the PAP in the laboratory and in situ. In the laboratory, cores and test specimens were analysed using the falling head permeameter, in accordance with EN 12697-19 and NLT-327. In situ, two permeameters were used, one according to EN 12697-40 and one according to NLT-327. In order to carry out the in situ tests, a scheme was defined with 2 marked points in each parking space, with 3 measurements made at each point. The importance of analysing different points within a set of parking spaces has been highlighted in previous studies in order to obtain the infiltration capacity behaviour of a car park with permeable pavements [23,31]. The methodology used is described in more detail in the following sections.
Laboratory hydraulic conductivity
The characterization of hydraulic conductivity in the laboratory was performed according to the European Standard EN 12697-19 [24] and the Spanish Standard NLT-327 [25], which refer to different test devices, namely a constant head permeameter and a falling head permeameter, respectively. Both are based on Darcy's law. In this sense, a falling head device was used, similar to the LCS permeameter of Standard NLT-327, to measure Kv and Kh in cylindrical compacted specimens produced in the laboratory and in cores extracted from the field PAP.
The test method was based on measuring the discharge time in which a known amount of water (1.735 l) drains through a sample. The Kv test, which represents a unidirectional flow parallel to the compaction direction, consisted of placing the sample inside a rubber sleeve wrapped around its perimeter, leaving only the sample faces free. The Kh test, perpendicular to the compaction direction, which represents the runoff conditions during rainfall, consisted of placing paraffin on the underside of the sample and sealing the junction between the permeameter tube and the upper face. In both situations, K was determined based on Darcy's law, according to equation (1):

K = (a · L) / (A · Δt) · ln(h1/h2)    (1)
where K is the hydraulic conductivity (m/s), a is the cross-sectional area of the permeameter (m²), A is the cross-sectional area of the specimen (m²), L is the average height of the specimen (m), Δt is the time interval for the water level to fall from h1 to h2 (s), h1 is the maximum hydraulic head above the underside of the specimen (m) and h2 is the minimum hydraulic head above the underside of the specimen (m). The EN 12697-19 standard recommends values of K between 0.5 and 3.5×10⁻³ m/s, but does not specify the type of mixtures covered.
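As a quick illustration of equation (1), here is a minimal Python sketch (not part of the original paper); the test quantities are hypothetical, chosen only so the result falls near the EN 12697-19 range:

```python
import math

def falling_head_k(a, A, L, dt, h1, h2):
    """Hydraulic conductivity K (m/s) from a falling-head test, eq. (1):
    K = (a * L) / (A * dt) * ln(h1 / h2)."""
    return (a * L) / (A * dt) * math.log(h1 / h2)

# Hypothetical test: 94 mm permeameter tube, 100 mm diameter core, 60 mm
# high, water level falling from 0.25 m to 0.05 m above the specimen in 30 s.
a = math.pi * 0.047 ** 2      # permeameter cross-section (m^2)
A = math.pi * 0.050 ** 2      # specimen cross-section (m^2)
K = falling_head_k(a, A, L=0.060, dt=30.0, h1=0.25, h2=0.05)
print(f"K = {K:.2e} m/s")     # ~2.8e-3 m/s, inside the 0.5-3.5e-3 band
```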
The samples tested were divided into two groups, corresponding to the test specimens of the PA1 and PA2 mixtures produced in the laboratory and to the cores extracted from the field slabs for the same mixtures. In order to simulate the DLPA in the laboratory, the PA1 cores were overlaid on the PA2 cores, thus constituting the complete double surface layer of the PAP. The samples tested comprised 4 test specimens with dimensions of about 100 mm in diameter and thicknesses of about 60 mm for each of the two mixtures. The four samples extracted in the field had dimensions of about 100 mm in diameter and thicknesses of about 30 and 40 mm for mixtures PA1 and PA2, respectively, totalling as a set (DLPA) a thickness of 70 mm.
In situ hydraulic conductivity
The in situ hydraulic conductivity of the PAP surface was obtained through the use of two variable head test devices: the LCS permeameter developed in Spain (Laboratorio de la Cátedra de Caminos de Santander), in accordance with Spanish Standard NLT-327 [25], and the falling head permeameter, in accordance with European Standard EN 12697-40 [26].
NLT-327 describes the method for obtaining the in situ hydraulic conductivity of porous asphalt used in pavement surfaces. The LCS permeameter described in the standard allows the infiltration capacity to be measured as a function of the time that a certain surface needs to infiltrate a known water height. The procedure consists in timing the discharge time, in seconds, that the water level takes to percolate through the porous surface, corresponding to the volume between the maximum and minimum hydraulic heads of the permeameter tube. The water volume drained by the tube is 1.735 l, with a diameter of 94 mm and a height of 250 mm. The outflow time (t, in s) corresponds to the surface hydraulic conductivity (K, ×10⁻² cm/s), or in situ permeability, determined with the logarithmic expression defined by equation (2). EN 12697-40 describes the method for determining the relative in situ hydraulic conductivity of a permeable pavement surface. The test consists of determining the time required for a fixed volume of water (4 l) to drain through a permeable surface. This outflow time (corrected to a temperature of 20 °C) corresponds to the recording, in seconds, of the measurements obtained between the hydraulic heads corresponding to 5 and 1 l, identified on the equipment itself. The permeameter diameter is 125 mm and the diameter of the water infiltration area in the pavement is 48 mm. The relative hydraulic conductivity (HC, s⁻¹) of each test point was obtained by equation (3) referred to in the standard:

HC = 1 / (t − r)    (3)
where t is the average outflow time (s) and r is the series resistance outflow time (s). The parameter r was determined as the average time recorded to drain the 4 l of water (10 replicates) when the permeameter was positioned with the exit orifice free, from which r = 1.1 was obtained. The HC results obtained cannot be compared with other methods that use the hydraulic conductivity as the standard measure, since the norm does not allow K to be obtained through Darcy's law, so the results obtained must be compared with each other and over time. An important requirement for measuring in situ hydraulic conductivity is to ensure that there is no water between the pavement surface and the permeameter base. Both devices must be fixed to the study surface, which must be clean and free from detritus. These are falling (variable) head tests, whose flow is considered laminar.
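To make the EN 12697-40 computation concrete, the following is a small Python sketch (ours, not from the paper) that turns measured outflow times into HC values using equation (3); the sample times are placeholders chosen to span the range reported later in this study:

```python
def relative_hc(t_out, r=1.1):
    """Relative hydraulic conductivity HC (1/s) from eq. (3): HC = 1/(t - r),
    with t the outflow time corrected to 20 degC and r the series resistance
    outflow time of the permeameter (both in seconds)."""
    if t_out <= r:
        raise ValueError("outflow time must exceed the series resistance time")
    return 1.0 / (t_out - r)

# Hypothetical 20 degC-corrected outflow times spanning the range reported
# in this paper (about 10.0-23.2 s):
for t in (10.0, 15.0, 23.2):
    print(f"t = {t:5.1f} s  ->  HC = {relative_hc(t):.3f} 1/s")
```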
Results and discussion
The results of Kv and Kh obtained in each laboratory test of the specimens and cores are presented in figures 1 and 2, respectively. The figures show the values for the PA1 and PA2 mixtures individually, according to the voids content obtained, as indicated in EN 12697-19. It can be seen in figure 1 that the samples of the PA1 mixture have lower K values than the PA2 mixture, whose voids content is clearly higher, as expected due to its composition with mostly coarser aggregates. Analysing the samples of the PA1 mixture individually, the values obtained are very close to each other, with a slight increase of Kv and Kh with the voids content. With respect to the PA2 mixture, the results show a wider spread, where the increase of K with the voids content is more clearly distinguished. Considering the results obtained and the voids content range of the two mixtures (from 16.1 to 23.2 %), it was possible to obtain two equations that relate the hydraulic conductivity (Kv and Kh) to the voids content (Vm). These equations show strong correlations, above 95 %, as can be observed in the same figure. In this way, the results show that the mixtures exhibit anisotropic behaviour, presenting different hydraulic conductivity values in the two directions. In figure 2, in the case of the in situ slabs, higher values of voids content (from 21.0 to 29.1 %) are found than in the specimens, leading to K values that are also higher. This occurred due to the difficulty of extracting the cores from the slabs with excessive voids and disaggregation of the base peripheries, which led to the existence of a greater number of openings through which the water could percolate more readily, as observed by Maupin [27]. This justifies the exceedance of the range indicated by EN 12697-19 (0.5 to 3.5×10⁻³ m/s) for the K of the cores, while the specimens practically meet these limits. The PA2 mixture, due to the higher voids content, shows higher results than the PA1 mixture. The results obtained allowed two equations to be defined that determine the hydraulic conductivity (Kv and Kh) as a function of the voids content (Vm). In this case, the correlation was also strong, above 90 %, as can be observed in the figure. In the set of PA1 cores, a decrease is noticed in the conductivity of the core with the greatest voids content, which would not be expected. This isolated point was due to a possible accumulation of binder located at the core base that hindered the passage of water, and for that reason the hydraulic conductivity decreased. Thus, this point was not considered in the definition of the equation. It should be noted that the 4 cores tested from each mixture were extracted from one PA1 slab and one PA2 slab produced in an asphalt plant; however, the results obtained in the 4 cores of the two slabs were different. This meets what Montes [32] described, noting that although the studied samples are taken from the same slab they can lead to different hydraulic conductivities. Therefore, the behaviour of the tested cores, like that of the specimens, presents anisotropy of K, as the values of Kh are higher than the corresponding values of Kv.
Figure 2. Kv and Kh of cores extracted from field slabs
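Since the fitted K-Vm relations are reported here only through their correlation strength, the following Python sketch (ours; the data points are hypothetical, not the paper's measurements, and the exponential form K = α·exp(β·Vm) is an assumption) shows one way such equations can be fitted:

```python
import numpy as np

# Hypothetical (voids content %, K in m/s) pairs in the ranges reported
# for laboratory specimens; NOT the paper's data.
Vm = np.array([16.1, 18.0, 20.5, 23.2])
K = np.array([0.7e-3, 1.1e-3, 1.9e-3, 3.3e-3])

# Fit K = alpha * exp(beta * Vm) by linear regression on ln(K).
beta, ln_alpha = np.polyfit(Vm, np.log(K), 1)

# Correlation of voids content with ln(K), analogous to the >95 % reported.
r = np.corrcoef(Vm, np.log(K))[0, 1]
print(f"K = {np.exp(ln_alpha):.2e} * exp({beta:.3f} * Vm),  r = {r:.3f}")
```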
The results of K obtained for each sample, according to the production method, are shown in table 1. Differences between cores and specimens are observed, with K values significantly higher in the in situ cores, with mean percentage differences of 25 and 30 % for the PA1 mixture and of 34 and 39 % for the PA2 mixture, for Kv and Kh, respectively. In addition to the individual analysis of the PA mixtures, the DLPA applied in situ was also simulated by overlaying the field cores. As would be expected, the results obtained for the DLPA are close to those obtained with the PA1 and PA2 cores, and the behaviour in the vertical and horizontal directions corresponds to that obtained in the PA1 and PA2 cores, respectively. In order to assess the in situ behaviour of the DLPA in relation to the results obtained in the laboratory, the applied PAP was characterized with two permeameters, which measure the hydraulic conductivity as a combination of the flow in the vertical and horizontal directions. The results obtained by the two methods at each test point are shown in figure 3a). The values presented refer to the two evaluations carried out 4 and 7 months after the parking lot construction. In both curves it is possible to observe a regular and coincident behaviour of the results obtained for K and HC. Due to this behaviour, and taking into account the short time between measurements relative to the pavement lifetime, the time effect was not considered in the analysis of the results. Thus, the discharge times recorded are between 27.0 and 41.8 s, with K varying between 2.4 and 1.3×10⁻³ m/s. K in the PAP decreases with increasing discharge time, and differences between test sites are not significant, showing consistency in the results. The values of K in permeable PA pavements vary according to the composition of the mixtures, the type of aggregates, binders and additives used, as well as their application in situ, namely the compaction type and temperatures. Some studies obtained higher K in permeable pavements with PA, of 12.0 and 31.1×10⁻³ m/s according to Sañudo-Fontaneda et al. [23] and Kumar et al. [14], while others presented values in the order of 4.8×10⁻³ m/s [9]. It should be noted that the values of the mentioned studies correspond to the initial infiltration capacity and present a great variability of results. Therefore, the K values obtained in this study, despite being smaller than in other studies, are shown to be feasible given the short pavement age.
The HC results were obtained with water temperatures of 6 and 10 °C, which led to the correction of the discharge times to a temperature of 20 °C. Thus, HC presents values between 0.045 and 0.129 s⁻¹, with discharge times between 23.2 and 10.0 s, again showing an overlay of results. Since the falling head permeameter has characteristics distinct from the LCS permeameter, the discharge times were lower, due to the higher hydraulic head considered in this test. It is also seen that increasing the discharge time leads to a decrease of HC, as with K. In view of the impossibility, noted previously, of relating HC to K, it was considered fundamental to analyse the possible relation between them. Figure 3b) shows this analysis and does not allow any tendency of K in relation to HC to be identified, suggesting that K = 1.87×10⁻³ ± 0.0003 m/s is independent of HC. Comparing the laboratory and in situ results, and considering K a combination of Kv and Kh, it is verified that the PA1 and PA2 specimens obtained hydraulic conductivities very similar to those obtained in the in situ evaluation. The results obtained with the PA1, PA2 and DLPA cores led to the higher values already mentioned. Taking into account the differences in results, the need for on-site testing in order to characterize the PAP under real conditions is evident.
Conclusions
The use of permeable pavements, such as SuDS systems, has become a more sustainable alternative for rainwater management, reducing the effects of soil sealing to pre-urbanization levels. This study evaluated and compared the DLPA performance in terms of hydraulic conductivity under laboratory conditions and its application in a PAP under field conditions. The hydraulic conductivity values obtained in the laboratory are evidently higher in the samples extracted from the field than in the samples produced in the laboratory. The Kv of the PA1 cores determines the DLPA behaviour in the vertical direction of the flow, while the Kh of the PA2 cores determines its behaviour in the horizontal direction. Given the differences in results in a controlled environment, the in situ study sought a better understanding of the PAP performance under real conditions. This characterization led to lower discharge times with the falling head permeameter, due to its higher hydraulic head compared to the LCS permeameter. The analysis between K and HC led to a mean value of hydraulic conductivity characterizing the PAP, corresponding to the HC values obtained. From the research carried out, it is concluded that there is a significant difference between the hydraulic conductivity values obtained in laboratory and in situ tests using the same measurement principles (variable and falling head permeameters). Since the study was carried out in a restricted area of a PAP, it is fundamental to extend the study to the evaluation of the behaviour of different permeable pavements under real conditions, both at the beginning of construction and throughout the years of service. The tests carried out in this research may contribute to future developments and improvements of hydraulic conductivity specifications for porous asphalt.
Role of Foreign Aid in Employment Generation; A Case Study of Pakistan
Article History: Received: August 30, 2019; Revised: October 27, 2019; Accepted: November 24, 2019; Available Online: December 31, 2019

Foreign aid is very important for the development process of a country. The main objective of this study was to empirically investigate the relationship between foreign aid and employment generation in Pakistan. Secondary time-series data were collected for 1980-2015; the World Development Indicators and the Pakistan Economic Survey were the main sources of data (World Bank, 2020). An ARDL model was used for the analysis. Employment level was the dependent variable, and foreign aid, economic growth (GDP), gross fixed capital formation (GFCF), government expenditure, participation rate, and exports were the independent variables. The results show that foreign aid and employment level have a positive and significant association in Pakistan. The study concludes that foreign aid has contributed positively to employment generation.
Background of the Study

The economy of Pakistan relies in large part on foreign capital inflows, which are extremely useful for employment generation. The contribution of foreign aid to employment is a very powerful factor for a country like Pakistan. Since independence, Pakistan has faced shocks of many kinds, including war, earthquakes, floods, political instability, and terrorism. Over time, these shocks have disturbed the country's economic performance. As a result, Pakistan faces many economic problems: low growth, a high inflation rate, and a lack of foreign investment. To fill the saving and investment gap in developing economies, foreign aid has been a major source of external financing over the past several decades.
Foreign aid is generally a contentious topic, especially when its impact on employment generation in the recipient country is discussed. The purposes for which it is assigned, the terms and conditions under which it is transferred, the stock of foreign aid, and the time lag between foreign assistance and its effect are all important, and different kinds of foreign aid affect employment generation over different periods. When Pakistan signed mutual defence aid agreements, foreign aid increased many fold during the Cold War decade. The aid inflow of the 1980s may likewise be viewed in the context of the Afghan war. Aid inflow to Pakistan dropped further after the 1998 nuclear tests and the 1999 military takeover, while the most recent aid inflows result from Pak-US ties after 9/11. Pakistan still lags in social indicators, and many people lack health facilities. Foreign aid and government programs have contributed to general employment generation and to the promotion of social and political indicators. Since its inception, Pakistan has depended on foreign assistance to strengthen its development projects. Developing countries face macroeconomic imbalances, but investment increases the pace of growth; recently, high investment has helped Pakistan achieve economic growth and improve the current account deficit. Several development programs have been introduced through donor and multilateral investment agencies to improve financial activity and reach the highest growth rates in recipient nations (Ellahi & Ahmad, 2011). Foreign aid helps Pakistan build the capacity of its institutions and meet public expenditures.
Every country values infrastructure building and stimulating employment generation. The inflow of foreign aid to Pakistan has always been substantial, but the story is the same as in other developing countries: Pakistan has become more aid-dependent because of the absence of major reforms in its economic model. Foreign aid has also harmed Pakistan, making it difficult for the government to fulfil its financial responsibilities (Butt & Javid, 2013). Bhattacharjee (2015) discussed the Pakistan-China relationship, especially CPEC and how it could end the energy crisis and joblessness in Pakistan. The author investigated aid effectiveness and focused on the debate over aid policy and its effect on financial development. The labour force is growing every year, yet employment opportunities are declining at a fast rate. Male unemployment has grown over the last 20 years, whereas female unemployment has decreased.
Pakistan has mostly used aid to finance discrete investment projects such as schools, roads, dams, power projects, and motorways. Chenery and MacEwan (1966) concluded that foreign aid played a role in domestic investment and savings, giving the economy a chance eventually to become economically sovereign. Foreign aid has reduced poverty, but it has contributed to GDP growth only in a limited way (Ishfaq, 2004). Due to CPEC investment in power and infrastructure projects, the economy has performed well (SBP, 2018), although the gap between rich and poor has not yet narrowed. In the future, CPEC is expected to boost Pakistan's economy and the livelihood of its people.
This study was conducted to examine the effect of foreign assistance on employment generation and to provide policy suggestions based on the findings. More specifically, the paper focuses on the core question of how, and how far, foreign assistance has affected employment, GDP, public expenditure, participation rate, exports, and GFCF in Pakistan. The study provides a useful understanding of stable economic policies and foreign aid, and it should help policymakers address the issue of aid efficacy.
LITERATURE REVIEW
In the literature, the association between foreign aid and employment has been studied in the context of Pakistan and of other countries' policies. Pohwani, Khoso, and Ahmed (2019) found, in their research on FDI and sustainable development, that FDI is slightly positively related to sustainable development and its components. They maintained that FDI has a significant impact on economic growth in the short run but an insignificant relationship with growth in the long run. Shah, Hasnat, Cottrell, and Ahmad (2020) examined the relationship between electricity consumption and FDI, explaining that electricity consumption and foreign aid are significantly related; electricity consumption increases when industry and the production sector use it, and it plays an important role in a country's economy. Salman and Feng (2010) examined three decades of foreign aid in Pakistan (1987-2007), finding a good relationship between foreign aid and GDP, with the economy having a direct impact on this relationship. A study by Malik, Chaudhry, and Javed (2011) examined the relationship between globalization and employment. They found that FDI, remittances, and economic size created employment in the short run, but that globalization negatively affected employment because of the imbalance between internal and external sectors arising from the political dimension. Siddiqui (2006) stated that East Asia gained the benefits of aid while South Asia lagged behind; government spending financed by foreign aid helped East Asia. Anwar (2004) found that capital inflows, foreign aid, and remittances played a very important role in Pakistan's economy. A paper by Sarsour, Naser, and Atallah (2011) discussed the monetary and societal outcomes of foreign aid in Palestine, concluding that sustainable growth is out of reach because international aid is consumed rather than invested; all aid was provided on the basis of political considerations, not for harmony and production. Bakare (2011) inspected the macroeconomic effects of foreign aid in Sub-Saharan Africa, showing that foreign aid has not been used effectively to promote growth and investment; corruption was the main reason, crowding out investment and capital formation. Research by Hye, Shahbaz, and Hye (2010) revealed that in developing countries FDI has played an important role in stimulating the economy. Mullick (2004) investigated economic growth during the war on terrorism and US financial aid to Pakistan, using a log-log ordinary least squares method. The results showed that the economic cost of becoming a pioneer in the battle against terrorism exceeded the benefits, that additional monetary compensation in the form of financial assistance from the US was essential to ignite higher economic growth in Pakistan, and that Pakistan's economic growth is in the best interest of both nations.
Ali and Ahmad (2013) examined the effects of income disparities and foreign aid across different sectors: income inequality and foreign aid have a negative association, while FDI has a positive impact. Shirazi, Mannap, and Ali (2009) analysed the effectiveness of international aid for human development, using education, life expectancy, and human development index variables in a vector error correction model. The results showed that economic growth induced official development assistance as far as the education, human development, and life expectancy indices were concerned. Brecher and Abbas (2005) discussed foreign aid, employment, external debt, and payments, and Mahmood (1997) discussed the aid and growth periods of Pakistan. A study by Javid and Qayyum (2011) investigated macroeconomic policies; its findings supported an unfavourable relationship for real gross domestic product, while the aid-policy interaction term and real gross domestic product had a significant positive relationship. Khan and Ahmed (2007) analysed whether foreign aid is a curse or a blessing; their results showed that investment and aid enhanced economic activity in Pakistan. Mallik (2008) investigated the effect of international support on economic growth, using per capita real gross domestic product, aid, investment, and trade openness. The findings showed that a long-run relationship exists between per capita real gross domestic product and aid, investment, and openness, each measured as a share of gross domestic product.
Nowak-Lehmann, Martínez-Zarzoso, Herzer, Klasen, and Cardozo (2013) investigated the stabilizing effects of migrant remittances to Pakistan, using remittances, official development aid, and foreign direct investment in a structural vector autoregressive model. The results confirmed that remittance flows to Pakistan have positive consequences for the economy.
Shahzad, Ahmed, Khiliji, and Ahmed (2011) explored the connection between expenditure and loans taken by the government of Pakistan. Faridi, Chauhdhry, and Ansari (2012) studied fiscal decentralization and the employment opportunities it can generate as an alternative option, using ordinary least squares. They found that a decentralized expenditure system supports employment growth, whereas decentralization of revenue is not appropriate for employment generation. Poverty and inequality are lowering employment in Pakistan, so fiscal autonomy is essential for creating more employment options. Rashid, Anwar, and Torre (2014) evaluated the effect of foreign aid, socioeconomic factors, and family planning program inputs on fertility in Pakistan.
Foreign aid for health per capita, crude birth rate, consumer price index, female literacy rate, and the number of LHVs as family planning program inputs were the variables used in that study. The results showed that foreign aid and literacy are negatively associated with fertility in the short run. Khan et al. (2019) assessed donor agencies' interventions in the education sector and concluded that a proactive role of the agencies is important for the success of projects.
DATA AND METHODOLOGY
3.1. Data Source and Explanation of Variables
This chapter presents a detailed discussion of employment generation and foreign aid and checks their long- and short-run effects. The study covers data for the period 1980-2015, taken from the World Development Indicators, the State Bank of Pakistan, and the Pakistan Economic Survey (World Bank, 2020).
Employment is the dependent variable, while the explanatory variables are foreign aid (current US$), GDP, government expenditure, participation rate (% of the total population aged 15+, national estimate), gross fixed capital formation (% of GDP), and exports (current US$).
Functional Form
A double-log functional form is used in this study: the econometric technique was applied to the data after taking natural logarithms.
Econometric Methodology
There are many techniques for co-integration analysis; the econometric literature offers plentiful methods for investigating co-integration relationships among economic variables. The popular approaches are the well-known residual-based approach proposed by Engle and Granger in 1987, cited by Kanioura and Turner (2005), and the maximum likelihood-based approach proposed by Johansen and Juselius in 1990, cited by Silk and Joutz (1997). In applying the co-integration method, we need to determine the order of integration of every variable. When there are more than two I(1) variables in the system, the maximum likelihood approach of Johansen and Juselius has an advantage over the residual-based approach of Engle and Granger. However, this approach requires that the variables have the same order of integration. To overcome this drawback, Pesaran and Smith (1995) proposed a different method, the ARDL approach to co-integration. Researchers can apply the ARDL approach whether the variables are purely I(0), purely I(1), or a mixture of the two, as long as they may mutually co-integrate.
Unit Root Test
Time series data tend to be non-stationary, which raises the problem of spurious regression. To check stationarity, the ADF test is applied to the data in levels.
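As a hedged illustration of this step (not the authors' original code), the ADF classification of each logged series as I(0) or I(1) could be scripted as follows; the file name and column labels are placeholders for whichever the dataset uses.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def integration_order(series, alpha=0.05):
    """Classify a series with the ADF test: 0 if the level is
    stationary, 1 if the first difference is, else None."""
    if adfuller(series.dropna(), autolag="AIC")[1] < alpha:
        return 0
    if adfuller(series.diff().dropna(), autolag="AIC")[1] < alpha:
        return 1
    return None

# Hypothetical double-log data frame with the study's variables
df = pd.read_csv("pakistan_1980_2015.csv")          # placeholder file
logged = np.log(df[["EMP", "AID", "GDP", "GFCF",
                    "GOVT_EXP", "LF_PARTI", "EXPR"]])
for name, col in logged.items():
    print(name, "-> I(%s)" % integration_order(col))
```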
Cointegration
We examined co-integration via the ARDL bounds test, which is valid when the variables are a mixture of I(0) and I(1), provided no variable is I(2). Pesaran, Shin, and Smith (2001) explained that in the ARDL framework the level coefficients describe the long-run equilibrium, whereas the ECM captures the short-run dynamics and the adjustment towards the long-run equilibrium.
In the unrestricted error correction equation, the coefficients α1, ..., α5 capture the short-run dynamics of the model, while the parameters on the lagged levels capture the long-run relationship. The null hypothesis H0 is that the long-run level coefficients are jointly zero (no co-integration); rejection of H0 indicates the existence of co-integration. Once co-integration is established, we move to the next step, the estimation of the long- and short-run equilibrium.
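A compact sketch of such an ARDL bounds workflow in Python is shown below, continuing the previous snippet. It relies on the ARDL tools added to statsmodels 0.13 (ardl_select_order, UECM, and the bounds_test method of the fitted UECM); the exact argument names should be verified against the installed version, and the case choice for the bounds test is an assumption.

```python
from statsmodels.tsa.ardl import ardl_select_order, UECM

y = logged["EMP"]
X = logged[["AID", "GDP", "GFCF", "GOVT_EXP", "LF_PARTI", "EXPR"]]

# Pick lag orders by AIC (maximum of 2 lags, as in the study)
sel = ardl_select_order(y, maxlag=2, exog=X, maxorder=2, ic="aic")
ardl_res = sel.model.fit()

# Re-estimate as an unrestricted ECM and run the bounds test:
# an F-statistic above the upper bound rejects H0 of no co-integration.
uecm_res = UECM.from_ardl(sel.model).fit()
print(uecm_res.bounds_test(case=3))
print(uecm_res.summary())
```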
3.4. Dependent Variable
3.4.1. Employment Level
The employment level is the proportion of the labour force that is employed. It is one of the economic indicators that economists examine to help understand the state of the economy. Employment is a contract in which one person, the employee, agrees to perform work for another, the employer.
3.5. Independent Variables
3.5.1. Foreign Aid
Foreign aid comprises capital goods, money, and other gifts sent from around the world for the welfare of the recipient country. Aid takes many forms: it can be economic, military, or emergency assistance.
Gross Domestic Product
Gross domestic product is defined as the "total value of all finished goods and services produced in the country in a stipulated period of time" (usually a year).
Gross fixed capital formation
"Gross fixed capital formation (formerly gross domestic fixed) improvements (fences, ditches, drains, and so on); plant machinery and equipment purchases; includes lands of roads, railways, and the like, including schools, offices, hospitals, private residential dwellings, and commercial and industrial buildings".
Participation Rate
It defines the "Labour Force Participation Rate" as the proportion of the nation's population 16 and over operating or searching for paintings. It is made up of our minds via demography, maximum notably the share of the grownup population of top operating age, normally 25 to 54.
Public Expenditure
"Public Expenditure" is defined as "spending made by the government of a country on collective needs and wants such as a pension, provision, infrastructure, etc.".
Exports
An export is "a function of international trade whereby goods produced in one country are shipped to another country for future sale or trade. The sale of such goods adds to the producing nation's gross output. If used for trade, exports are exchanged for other products or services".
RESULTS AND DISCUSSION
This chapter presents the estimation results for the association between foreign aid and employment generation. The first section evaluates the unit root tests, and the subsequent section discusses the ARDL results for co-integration.
Unit Root Analysis
Although the ARDL bounds test does not strictly require pre-testing the series for stationarity, Table 1 reports the ADF test results. Note: the signs ** and * indicate that a coefficient is significantly different from zero at the 1% and 5% levels, respectively.
The ARDL bounds test involves three steps. In the first step, the existence of a long-run co-integrating relationship is checked. If the estimated F-statistic exceeds the upper bound critical value, the null hypothesis of no co-integration is rejected; if it falls below the lower critical value, the null of no co-integration cannot be rejected, indicating that the variables are not co-integrated; and the F-statistic is inconclusive when it falls between the lower and upper bounds. The second step estimates the long-run relationship, and the third step estimates the short-run association between the variables. In Table 1, the variables are found to be stationary at different levels. No variable is I(2), so there is no risk of spurious results. AID and EXPR are stationary in levels, whereas EMP, LF PARTI, GOVT EXP, GFCF, and GDP are stationary at first difference. This justifies the ARDL method of co-integration analysis to estimate the model.
Lag length Criteria
The lag length should be determined properly, because estimation of the long-run co-integration model requires it for efficient results. We determined the optimal lag length for the estimation process: a lag length of 2 was selected using the Akaike Information Criterion and the Hannan-Quinn Information Criterion, respectively. In the ARDL model, the F-statistic is higher than the upper bound at the 1%, 2.5%, 5%, and 10% significance levels, so the null hypothesis of no co-integration is rejected: the variables are co-integrated. Table 3 shows the existence of co-integration between the variables.
Model Estimated
Model (EMP, AID, EXPR, GDP, GFCF, GOVT-EXPN, LLF-PARTI). The long-run coefficients reported by the ARDL model for foreign aid, labour force participation, employment, exports, government expenditures, and gross fixed capital formation are comparable. The results show that government expenditures, GDP, gross fixed capital formation, and labour force participation are essential for generating employment. On this basis we reject the null hypothesis and move to the next step, the ECM; its results are shown in Table 4.
From the tables we conclude that foreign aid, exports, gross domestic product, gross fixed capital formation, government expenditures, and labour force participation all have a large positive long-run impact on employment generation. Fixed capital formation shows the strongest relationship with the dependent variable, while the weakest relationship is found between government expenditure and employment. Likewise, the short-run coefficients for all the independent variables exhibit a significant positive relation with employment generation. The empirical results support the hypotheses developed for the variables. The error correction term is statistically significant and carries the expected negative sign, a desirable outcome that satisfies the short-run equilibrium condition of the model, with stable and consistent variables; the ECM coefficient is -0.645776. The p-value of the Jarque-Bera normality test is large, so we accept the null hypothesis that the residuals are normally distributed around the mean, i.e. the data are normal for this research. The probability value of the serial correlation test likewise indicates that the model is free of serial correlation. All the diagnostic tests are therefore satisfied.
CONCLUSION AND RECOMMENDATIONS
5.1. Conclusion
This research focused on the impact of foreign aid on employment generation over the period 1980-2015. The LFS uses four categories of employment: employees, the self-employed, unpaid family workers, and participants in government-funded training schemes. We applied the ADF (Augmented Dickey-Fuller) test to check the behaviour of the variables, and the autoregressive distributed lag model was used to estimate the error correction terms. The analysis used one dependent variable, EMP. Our study makes clear that employment has a positive association with foreign aid.
We also conclude from the results that fiscal decentralization has a beneficial effect on employment generation, while revenue decentralization is not suitable for it. Poverty and inequality are decreasing employment in Pakistan, so fiscal autonomy is essential for creating additional employment options. The findings suggest that domestic investment and export expansion are essential factors in improving economic growth in Pakistan, and there is a positive relationship between employment and per capita GDP.
Recommendations
• There should be policies that increase male and female education so that people can obtain more employment, which is positively associated with per capita GDP. • When more inhabitants are employed, per capita GDP increases substantially.
• The Government of Pakistan should encourage foreign aid; a friendly environment and good incentives are much needed to attract investors. • The Ministry of Overseas Employment should arrange to create jobs for our skilled and trained workers in international labour markets in order to raise workers' remittances.
Effective Conductivity of Spiral and other Radial Symmetric Assemblages
Assemblies of circular inclusions with a spiraling laminate structure inside them are studied: spirals with inner inclusions, spirals with shells, assemblies of "wheels" (structures built from laminates with radially dependent volume fractions), and complex axisymmetric three-dimensional micro-geometries called Connected Hubs and Spiky Balls. The described assemblages model structures met in rock mechanics, biology, etc. The classical effective medium theory coupled with hierarchical homogenization is used. It is found that fields in spiral assemblages satisfy a coupled system of two second-order differential equations, rather than a single differential equation; a homogeneous external field applied to the assembly is transformed into a rotated homogeneous field inside the inclusions. The effective conductivity of the two-dimensional Star assembly is equivalent to that of Hashin-Shtrikman coated circles, but the conductivity of the analogous three-dimensional Spiky Ball differs from that of the coated-sphere geometry.
Introduction
Structures with explicitly computable effective properties play a special role in the theory of composites. They allow for testing, optimizing, and demonstrating dependences on the structural parameters and material properties. These structures are also used for hierarchical modeling of more complicated structures, and they permit explicitly computing the fields inside the structure and tracking their dependence on structural parameters. There are several known classes of such structures: the Hashin-Shtrikman coated spheres [6] and Schulgasser's structures [11] are probably the most investigated composite geometries. The scheme has been generalized to multiscale multi-coated spheres (see the discussion in [4,8]), coated ellipsoids [3], and the "wheel assembly" studied in [1]. Another popular class is laminate structures and their derivatives, the laminates of a rank, which exploit a multiscale scheme in which a coarse-scale laminate is made from smaller-scale laminates in an iterative process. The limit of iterations of these structures yields a differential scheme in which an infinitesimal layer is added at each step, see [4,8].
In the present paper, we combine the ideas of multi-rank laminates and coated spheres, introducing assemblies of circular inclusions with a spiraling laminate structure inside them. Namely, we study assemblages of spirals with inner inclusions, spirals with shells, and assemblies of "wheels", structures made from laminates with radially dependent volume fractions. We also derive the effective conductivities of complex three-dimensional microgeometries, which we call Connected Hubs and Spiky Balls. The described structures model inhomogeneous materials met in rock mechanics, biology, etc. For calculating effective properties, we use the classical effective medium theory (see [6,8,9,10]) coupled with hierarchical homogenization. We study spiral assemblies with inclusions and observe an interesting phenomenon: a homogeneous external field applied to the assembly is transformed into a rotated homogeneous field inside the inclusions. The fields in such structures satisfy a coupled system of two second-order differential equations, rather than the single differential equation that is satisfied in the Hashin-Shtrikman and Schulgasser structures. We show that the effective conductivity of the two-dimensional Star is equivalent [1] to the conductivity of Hashin-Shtrikman coated circles, but the conductivity of the analogous three-dimensional Spiky Ball differs from that of the coated-spheres geometry.
Composite circular inclusion
Consider an infinite conducting plane with coordinates (x1, x2) and assume that a unit homogeneous electrical field e = (1, 0)ᵀ is applied to the plane at infinity, inducing a potential u (so that u → x1 as ||x|| → ∞). A structured circular inclusion of unit radius ||x|| ≤ 1 is inserted in the plane. It consists of a core inner circle of radius r0 and an enveloping annulus. The inner circle (nucleus) Ωi = {x : ||x|| < r0 ≤ 1} is filled with an isotropic material of conductivity σi. The annulus Ωa = {x : r0 < ||x|| < 1} is filled with an anisotropic material whose conductivity tensor Se(r) depends only on the radius. The plane outside the inclusion, denoted Ω*, is filled with an isotropic material of conductivity σ*.
The effective conductivity of such an assembly is computed by effective medium theory. Given the inclusion, we find the conductivity σ* for which the inclusion is cloaked: the field outside the inclusion remains the unperturbed homogeneous field. If this is the case, the outer conductivity σ* is called the effective conductivity of the inclusion. The inclusion is not seen by an outside observer, therefore the entire plane can be filled with such inclusions, according to effective medium theory [6,8,9,10]. We show that the field inside the nucleus Ωi corresponds to a potential of the form u = ρ r cos(θ − ψ), where ρ, ψ are constants. An observer inside Ωi records a homogeneous field similar to the outside field, but rotated by an angle ψ.
The current j, electric potential u, and electric field e in a conducting medium are related by the equations j = K e, e = ∇u, ∇·j = 0, where K is a positive definite symmetric conductivity tensor that represents the material's properties. In our assemblage, K = σi Id in Ωi, K = Se(r) in Ωa, and K = σ* Id in Ω*; here, Id is the identity matrix. Equations (4) combine into the conventional second-order conductivity equation ∇·(K ∇u) = 0. On the boundaries between the different regions, the tensor K is discontinuous, but the potential u, the normal current j·n, and the tangential field e·t are continuous; in particular, u(r⁺, θ) = u(r⁻, θ).
Fields in a spiral assemblage
In this section we find the fields, the currents, and the effective conductivity of the described assemblage, assuming that the eigenvectors of K in Ωa form a family of logarithmic spirals. We call the resulting structure the Spiral with Core. Single Inclusion. Rewrite the problem in polar coordinates r, θ. Solving the conductivity equation (5), we find that the potential in the inner and outer isotropic regions is u = A r cos θ + B r sin θ in Ωi and u = r cos θ in Ω*, where A, B are constants. The form of the solution in the outer domain reflects the effective medium condition: the inclusion is invisible, the field is unperturbed, and it agrees with condition (2).
Figure 1: Spiral with Core
Assume that the angle φ between the principal axes of K in Ωa and the radius is constant, so that the eigendirections form a family of logarithmic spirals. In Ωa, K has the form K = R(θ + φ) diag(σ1, σ2) R(θ + φ)ᵀ, where σ1 and σ2 are the positive constant eigenvalues of Se, φ is the angle of the spiral, and R is the rotation matrix; the entries of K follow from this representation. Equation (5) in the annulus then becomes a second-order equation in r and θ. We separate variables and account for the boundary conditions. The solution has the form u(r, θ) = U(r) cos(θ) + V(r) sin(θ), where U and V satisfy a coupled system of ordinary differential equations (9), L1(U, V) = 0 and L2(U, V) = 0, with L1 and L2 linear differential operators. Notice that the equations in (9) are, respectively, the real and imaginary parts of a single complex-valued differential equation. Write the current in the spiral as j = Ju cos(θ) + Jv sin(θ); the components Ju, Jv are computed from (4). The jump conditions (6) at r = r0 and r = 1 couple the regions. The potentials U, V in the annulus Ωa are found by solving (9); the general solution contains four constants C1, C2, C3, C4. Determination of Constants. The jump conditions (13) at the outer boundary (r = 1) yield C2 = 0, C4 = 0. There remain five unknown constants, σ*, C1, C3, A, and B. They appear to be overdetermined by the remaining six boundary conditions. However, the four boundary conditions (13) on the inner boundary at r0 can be reduced to just three, allowing the system to be solved. The potentials U and V in the spiral material are linked together, and knowing either one is sufficient to determine the other. Indeed, upon substituting C2 = 0, C4 = 0, we find a fixed proportionality between U and V. Substituting this into (12), we see that the same relationship holds for the normal components of the current in the spiral. Using (20), the current conditions in (13) are redundant, as the two conditions on the current are simultaneously satisfied, and the four boundary conditions can be rewritten as three. With this observation, the linear system can be solved for the remaining unknowns. Among them is the effective conductivity of the outer region, σ*, which is treated as an unknown. Notice that the annulus with spiraling material does not perturb the outside field, and it contains a homogeneous inner field directed differently from the outer homogeneous field.
The value of σ*, the effective conductivity of the structure, is obtained by solving this system; it is given by an explicit formula (22) in terms of σi, σ1, σ2, φ, and r0.
Figure 2: Spiral Assemblage
Effective Medium Theory. The Spiral with Core structure, as described above, extends from the origin to a radius of one. But this radius is arbitrary in an infinite plane: the spiral can be scaled up or down and still solves the same problem. As the spiral leaves the outside field unperturbed, placing multiple spirals side by side renders every placed object invisible to the field, each with an inner homogeneous field directed differently from the applied field. While none of the myriad inclusions is detectable through perturbations of the outside field, the field is differently directed almost everywhere.
Extreme Spiral Geometries
Extremal Spiral Angle. By design, the core of the spiral object contains a uniform electric field directed differently from the outside field. The potential in the inner region (nucleus) has the representation u(r, θ) = A r cos(θ) + B r sin(θ).
The field inside makes an angle Υ = tan⁻¹(B/A) with the uniform field outside the spiral. A and B depend on the spiral angle φ; a substitution shows that Υ = γ |σ2 − σ1|, where the factor γ depends on the spiral angle and the structural parameters. Let us find the angle φ which maximizes the rotation Υ. A straightforward calculation shows that, for a given σ1 and σ2, Υ is maximized by choosing φ0 = arctan(√(σ1/σ2)).

Figure 3: Changing Direction of the Electric Field
The maximal angle Υmax for given σ1, σ2, and r0 follows from this choice; here r0 ≤ 1 is the radius of the inclusion core. It is clear that the more anisotropic the spiral and the smaller the inner radius r0, the larger the resulting twist inside the spiral's core.
Laminates. The conductivities σ1, σ2 in (25) describe the outer spiral material in the Spiral with Core structure. If this anisotropic material is a laminate made of materials with conductivities k1, k2 in volume fractions m1, m2, then σ1, σ2 are the harmonic and arithmetic means of the conductivities [4]: σ1 = (m1/k1 + m2/k2)⁻¹ and σ2 = m1 k1 + m2 k2. For given materials with conductivities k1, k2, we can optimize Υ with respect to m1, m2; the maximal angle of rotation is obtained for the optimal fractions given in (27). In particular, Υ = π (and the current inside goes in the opposite direction) for a particular value of r0. For instance, if k1 = 1 and k2 = 100, the inner field is directed opposite to the external field when r0 = 0.274. Values of Υ greater than 2π are also possible; the relative angle between the incident current and the inner current is the value of Υ taken mod 2π.
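A small numerical sketch of this laminate construction is given below. It assumes the standard laminate mixing rules (harmonic mean across the layers, arithmetic mean along them) and the optimal spiral angle φ0 = arctan(√(σ1/σ2)) quoted above; the specific numbers echo the k1 = 1, k2 = 100 example from the text.

```python
import math

def laminate_eigenvalues(k1, k2, m1):
    """Eigenvalues of a two-material laminate: harmonic mean across
    the layers, arithmetic mean along them (standard mixing rules)."""
    m2 = 1.0 - m1
    sigma_across = 1.0 / (m1 / k1 + m2 / k2)
    sigma_along = m1 * k1 + m2 * k2
    return sigma_across, sigma_along

k1, k2 = 1.0, 100.0
for m1 in (0.25, 0.5, 0.75):
    s1, s2 = laminate_eigenvalues(k1, k2, m1)
    phi0 = math.atan(math.sqrt(s1 / s2))   # angle maximizing rotation
    print(f"m1={m1:.2f}: sigma1={s1:7.3f}, sigma2={s2:7.3f}, "
          f"phi0={math.degrees(phi0):5.1f} deg")
```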
Derivative Assemblages
The parameters of the Spiral with Core structure can be modified to obtain the effective conductivity of similar conducting assemblages.
Hashin-Shtrikman Coated Circles. The classical example of the Hashin-Shtrikman geometry [3] is an isotropic circle surrounded by an isotropic annulus. It is obtained from the Spiral with Core by setting σ1 = σ2: the outer spiral layer becomes an isotropic shell, and the effective conductivity coincides with the Hashin-Shtrikman result. Notice that m = r0² is the volume fraction of the core material. Schulgasser Structure. Schulgasser [11] suggested another classical symmetric geometry, a radial laminate of two materials. Schulgasser's structure is obtained from the Spiral with Core by setting r0 = 0, φ = 0: the inner isotropic circle disappears and the logarithmic spiral laminate is straightened into a radial laminate. The effective conductivity agrees with [11].
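Since the displayed formulas were lost in extraction, the following sketch evaluates these two classical geometries using the standard textbook expressions (the 2D Hashin-Shtrikman coated-circles formula and the Schulgasser geometric mean); treat it as an illustration rather than the paper's own equations.

```python
import math

def hashin_shtrikman_2d(sigma_core, sigma_shell, m):
    """Effective conductivity of 2D coated circles (standard
    Hashin-Shtrikman formula); m is the core volume fraction."""
    num = sigma_shell + sigma_core + m * (sigma_core - sigma_shell)
    den = sigma_shell + sigma_core - m * (sigma_core - sigma_shell)
    return sigma_shell * num / den

def schulgasser(sigma_r, sigma_theta):
    """Effective conductivity of Schulgasser's radial laminate."""
    return math.sqrt(sigma_r * sigma_theta)

# Example: core radius r0 = 0.5 gives core fraction m = r0**2
print(hashin_shtrikman_2d(sigma_core=10.0, sigma_shell=1.0, m=0.5**2))
print(schulgasser(sigma_r=10.0, sigma_theta=1.0))
```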
Orange with Core. This object has an inner circle of isotropic material surrounded by an annulus made up of a radial laminate.
The Orange with Core is obtained from the Spiral with Core by setting φ = 0: the logarithmic spiral laminate is straightened into a radial laminate. Its effective conductivity follows from (22); note that the formula can be written in terms of volume fractions by observing that the volume fraction mi of the inner material is mi = r0². Orange with Shell. This object contains an inner radial laminate surrounded by an isotropic material. Denote the conductivity of the outer isotropic shell by σ1, and the conductivity tensor (in polar coordinates) of the inner laminate by diag(σr, σθ). The effective conductivity can be found by a direct calculation. Indeed, the material in the inner circle is a Schulgasser radial laminate, which has the effective property k* = √(σr σθ); thus this inner material can be treated as an isotropic material with conductivity √(σr σθ). The effective property of the Orange with Shell is then obtained by substituting σi = √(σr σθ) into the formula for the effective property of Hashin-Shtrikman coated circles.
Basic Spiral. This object is simply the spiral material centered around the origin; it is obtained from the Spiral with Core by setting r0 = 0. The effective conductivity of the spiral is k* = √(σ1 σ2), which, as one might expect, coincides with the effective conductivity of Schulgasser's geometry. Spirals with Shell and Core. The Spiral with Shell contains a circle of spiral laminate surrounded by an isotropic shell. As mentioned, no more work is needed to calculate its effective property: since the effective property of the Basic Spiral is the same as that of Schulgasser's structure, the effective property of the Spiral with Shell is the same as that of the Orange with Shell, ks (33). For completeness, we also mention again the Spiral with Core discussed in Section 3, an isotropic circle surrounded by a spiral laminate. It is a field rotator, and its effective property is given by (22).
Insulated Geometries. Consider the asymptotic case: assume that the spiral is a laminate made from two isotropic materials, and the conductivity of one material approaches zero.
If one of the conducting materials is replaced by an insulator, the conductivities of both the Schulgasser structure and the spiral are zero. The Spiral with Shell structure can be insulated by replacing one of the materials in the spiral laminate with an insulator. The resulting structure behaves as an annulus: the inner spiral acts as an isotropic insulator, and all current passes through the isotropic shell. Indeed, the resulting effective property is that of an annulus with an insulated nucleus, as expected.
If the conductivity of one of the materials in the spiral laminate is zero, the Spiral with Core structure has the effective property obtained from (22) in this limit.
The effective property of the insulated Orange with Core is obtained from this one by setting φ = 0. Wheel. Insulators can be used in laminates to control the variation of the conducting material in the layers. Consider the following Wheel geometry. A central core is surrounded by an annulus that consists of conducting radial spikes surrounded by an insulator. The thickness of an individual spike is constant, and the volume fraction of conducting material in the annulus decreases with radius, because the circumference grows linearly with it. In polar coordinates, the conductivity tensor of this material is diagonal, with only the radial entry nonzero: a spike conducts in the radial direction but does not conduct in the circumferential direction. Call this material a spoke material. To relate σ1 to the conductivity σ of the spoke material, we write σµr0 = σ1, where the parameter µ ∈ [0, 1] is the relative thickness of the spikes at the radius r0. This time we are dealing with a material whose anisotropic conductivity varies with radius. The potential inside the spoke material satisfies (5); since the current is constant inside each spoke, the potential is u = (Ar + B) cos(θ) + (Cr + D) sin(θ). (36) The Wheel inclusion has an inner isotropic core and an outer isotropic shell, connected by spokes, so its conductivity depends on the radius. Since the potentials in each region are known, a procedure similar to that in Section 3 gives the effective conductivity of the Wheel, formula (38). When r1 → 1, the outer conducting layer disappears and we arrive at the structure that we call the Star, whose effective conductivity is given by (39). Star assemblages can model structures of hubs connected by conducting strands in an insulating space. If the conductor in the inner circle and in the spokes is the same, σi = σ, we can write the effective property σstar in terms of the volume fraction m of the conducting material σ, where m(r0) = r0² + 2µr0(1 − r0). (40) If µ = 1/2, then r0 = m and kstar coincides with the effective conductivity of coated circles; it is therefore an optimal structure for the resistivity minimization problem [1] along with the coated circles [6]. The value µ = 1/2 makes intuitive sense: if the spokes in the Star cover only half the circumference of the inner circle at r0, then the current density in the isotropic region of the Star is half that in the spoke region. If two orthogonal currents are separately applied to the Star structure, the sum of their energy densities is constant throughout the structure, which satisfies a necessary and sufficient condition for minimization of resistivity [4].

3D radially symmetric assemblages

The same approach can be applied to three-dimensional radially symmetric inclusions. Consider spherical inclusions embedded in an infinite three-dimensional isotropic conducting material with undetermined conductivity, with a uniform electric field applied at infinity. Equation (5) is used to solve for the potential in each material, and the boundary conditions (6) are similar to the two-dimensional case. Applying a uniform current along the z-axis determines the electric potential outside the inclusion; this solution is obtained by separation of variables of equation (5) in spherical coordinates. Hub. We construct an analogue of the two-dimensional spoke material, consisting of radial spokes of constant thickness.
The relative proportion of conducting material is inversely proportional to the square of the radius, and the conductivity tensor in spherical coordinates is diagonal, with radial component σµρ0²/ρ² and zero angular components. We choose µ ∈ [0, 1] so that µρ0² is the surface area covered by the spikes at the inner radius ρ0.
The solution for the potential inside this material is found as before. Using this material, we construct a Hub structure, the three-dimensional analogue of the two-dimensional Star. It models inclusions spread through an insulating space and connected by conducting strands. The Hub consists of an inner isotropic core of conductivity σi out to a radius ρ0, enveloped by the spoke material. The effective conductivity of the Hub is computed similarly to that of the Star; it depends on the two structural parameters ρ0 and µ and is given by (44). For σi = σ, the volume fraction of conducting material in the Hub is m(ρ0) = ρ0³ + 3µρ0²(1 − ρ0). If, by analogy with the Star, we take µ = 1/3, this relation gives ρ0² = m, and the effective conductivity simplifies accordingly. Unlike the two-dimensional case, this conductivity differs from the conductivity of coated spheres. Spiky Ball. The Spiky Ball structure is a generalized Hub in which the spikes have variable cross-section; the tensor of this generalized material has radial component σµ(ρ0/ρ)^(n−1) and zero angular components, so that n = 3 recovers the Hub. The effective conductivity of this assemblage is

k_sb = σi σ µ (n − 1) ρ0^n / [σi (1 − ρ0^(n−1)) + σ µ (n − 1) ρ0^(n−1)].
Notice that the Spiky Ball, Hub, and Star require the current to pass sequentially through insulated spokes and a homogeneous central circle (or sphere). Because of this, their effective conductivities are of harmonic-mean type, k = (a/σi + b/σ)⁻¹, where a and b are constants depending only on the geometry.
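A quick numerical check of this harmonic-mean structure, using the Spiky Ball formula as reconstructed above (the exponent convention is recovered from the flattened equation and should be read as an assumption), can be written as follows; note that ρ0 → 1 returns σi and ρ0 → 0 kills conduction, as expected.

```python
def spiky_ball_k(sigma_i, sigma, mu, rho0, n):
    """Effective conductivity of the Spiky Ball; n is the exponent
    of the generalized spoke material (n = 3 should correspond to
    the constant-thickness Hub)."""
    num = sigma_i * sigma * mu * (n - 1) * rho0**n
    den = (sigma_i * (1 - rho0**(n - 1))
           + sigma * mu * (n - 1) * rho0**(n - 1))
    return num / den

# Sanity checks for the limiting cases mentioned above
print(spiky_ball_k(sigma_i=5.0, sigma=1.0, mu=1/3, rho0=0.999, n=3))
print(spiky_ball_k(sigma_i=5.0, sigma=1.0, mu=1/3, rho0=0.01, n=3))
```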
Conclusion
In constructing the Spiral with Core structure, we constructed a field rotator, i.e. a conducting structure that rotates an incident uniform electric field while remaining undetectable. Optimization results show how to maximize the angle of rotation induced by this field rotator. By modifying the Spiral with Core, a large number of other conducting structures can be constructed and their effective conductivities computed. The analogous 3D Hub and 2D Star structures behave differently in the sense that the effective conductivity of the Star coincides with that of Hashin-Shtrikman coated circles, while the effective conductivity of the Hub differs from that of coated spheres.
Direct measurement of vagal tone in rats does not show correlation to HRV
The vagus nerve is the largest autonomic nerve, innervating nearly every organ in the body. “Vagal tone” is a clinical measure believed to indicate overall levels of vagal activity, but is measured indirectly through the heart rate variability (HRV). Abnormal HRV has been associated with many severe conditions such as diabetes, heart failure, and hypertension. However, vagal tone has never been directly measured, leading to disagreements in its interpretation and influencing the effectiveness of vagal therapies. Using custom carbon nanotube yarn electrodes, we were able to chronically record neural activity from the left cervical vagus in both anesthetized and non-anesthetized rats. Here we show that tonic vagal activity does not correlate with common HRV metrics with or without anesthesia. Although we found that average vagal activity is increased during inspiration compared to expiration, this respiratory-linked signal was not correlated with HRV either. These results represent a clear advance in neural recording technology but also point to the need for a re-interpretation of the link between HRV and “vagal tone”.
Results
Chronic recording in the vagus nerve. Chronic recording in the rat vagus nerve has proven challenging, with the majority of vagus nerve studies being conducted under acute or semi-chronic conditions. Signals have been recorded acutely from the vagus nerve using extraneural cuff electrodes; however, these electrodes tend to have low SNR because the insulating perineurium attenuates the neural signals and restricts recordings to the outer layers of axons. Additionally, vagal recordings have previously been carried out only under the influence of anesthesia, which is known to alter autonomic activity. We therefore aimed to record intraneural vagal signals chronically in the rat using CNTY electrodes, both with and without anesthesia. We have previously described implantation of these electrodes by winding the CNTY around a tungsten microneurography needle for anesthetized recording; here, we describe a novel method which simplifies electrode preparation and implantation while increasing the rate of successful surgical implants. The stability of these electrodes, paired with an improved recording setup, also allows the first recordings of vagal activity from awake, behaving rats. Figure 1 shows implantation of the electrodes using this novel suture method, and Fig. 2a shows the chronic awake recording system.

Figure 1: Detailed schematic of the de-insulated CNTY and the fisherman's knot which secures the electrode to the suture. (c) CNTY, suture, and de-insulated portion of the electrode; scale bar, 100 µm. A small section of parylene-C is removed by laser; the arrow shows the fisherman's knot, which secures the CNTY to the suture and helps keep the electrode in place after implantation. (d) Inset showing the removal of ~200 µm of parylene-C insulation. (e) Implanted CNTY; scale bar, 500 µm. The arrow shows the fisherman's knot. After implantation of two electrodes, the electrodes and nerve are covered with fibrin glue to secure the electrodes in place.
Vagal activity was recorded in anesthetized and awake, behaving rats for up to 11 weeks using custom ultra-low-noise electronics and a connector headcap (see Fig. 2a). The recorded activity varied in both awake and anesthetized rats, and included large spontaneous spiking activity (individual spikes as well as bursting). Figure 2b shows an example of baseline activity; the red box represents the estimated peak-to-peak noise based on electrode impedance and input amplifier noise [34]. Spiking activity occurred both with and without anesthesia, but was more common in awake animals. Figure 2c,d show examples of short and long spike bursts, while Fig. 2e shows large spiking activity, including individual spikes and short bursts with >100 µV peak-to-peak voltage. Spikes were detected via thresholding (6 times the baseline RMS), and peak-to-peak spike amplitude was compared to baseline RMS to calculate SNR. Average spike SNR was 16.9 ± 5.4. Figure 2f shows an example of the simultaneously recorded ECG, with Fig. 2g showing the associated heart rate. All of these activity types were observed both with and without anesthesia, though spiking, spike bursting, and large spikes were more common without anesthesia, while quiet baseline was more common with anesthesia. Because of the many organs innervated by the vagus nerve, decoding the function of individual spikes or groups of spikes is a significant challenge. However, average vagal activity can be calculated to directly compare measured vagal tone to HRV.
Tonic vagal activity is not correlated with HRV. While there is some controversy surrounding the physiological significance of HRV measurements, they are accepted by many as a viable clinical method to estimate parasympathetic (or vagal) and/or sympathetic tone. However, there is very little understanding of how well HRV actually represents overall vagal activity. Using intraneural CNTY electrodes, we measured tonic vagal activity in the left cervical vagus chronically. Vagal ENG and ECG were recorded simultaneously for 10 min while animals were maintained at 2% isoflurane. Electrode noise was estimated for each recording from electrode impedance (Johnson noise) and input amplifier noise, and was subtracted from the average RMS of each 10-min recording to obtain a measurement of vagal tone. Across all animals, vagal tone under isoflurane anesthesia was 1.03 ± 0.48 µV RMS (63 recordings from 6 animals). Commonly used HRV metrics were calculated from the ECG recording using ADInstruments' LabChart HRV module: the standard deviation of the R-R interval (SDRR), the standard deviation of the heart rate (SD Rate), the root mean square of successive differences (RMSSD), high-frequency (HF) power, HF power as a percentage of total power (HF%), and the low-frequency to high-frequency power ratio (LF/HF). The high-frequency range was defined as 0.5-2 Hz, and the low-frequency range as 0.2-0.5 Hz. These 6 metrics were chosen because they are the most common measures in both the time and frequency domains; other candidate metrics (such as nonlinear metrics) were not used because of their high correlation with one of the 6 measures used. Measurements were taken from 1-11 weeks after implantation in 6 rats, and the Pearson correlation and its significance between tonic neural activity and each HRV metric were calculated.

Figure 2: Awake recording setup and sample data. Red bars show electrode noise levels (calculated from electrode impedance and input amplifier noise). (a) Chronic non-anesthetized recording setup. The rat has a connector mounted to the skull headcap, where a custom amplifier board (inset) is connected. The cable is routed to a commutator which allows the rat to move around without the cable becoming tangled or twisted, and a metal spring protects the cable from being chewed. This setup can be used for continuous 24/7 recording (the purple glow is glare from an infrared-equipped camera). (b) Quiet baseline, with few or no spikes significantly above the baseline; this is the most common type of activity observed, especially under anesthesia. (c) Intermittent spiking, where spikes appear and disappear quickly. (d) Spike bursting, where long bursts of spikes (of varying amplitude and firing rate) occur for periods of 30 s or longer. (e) Large intermittent spiking, where very large spikes (>100 µV pk-pk) occur in short bursts; this is most common without anesthesia but occurs with anesthesia as well. (f) Simultaneously recorded ECG. (g) Heart rate calculated from ECG.

Supplemental Table 1 shows the correlation coefficients and correlation p values for each animal separately, and Supplemental Fig. 1 shows a scatterplot of tonic vagal activity against each HRV metric investigated for one animal. None of the HRV metrics was found to have a correlation significantly different from zero across all 6 animals. Anesthesia is known to significantly change HRV, with isoflurane generally causing a decrease in the HRV metrics associated with vagal activity.
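For readers who want to reproduce these time- and frequency-domain metrics, a minimal sketch is given below (not the LabChart implementation). The rat-specific HF band of 0.5-2 Hz and LF band of 0.2-0.5 Hz follow the definitions above, while the 10 Hz resampling rate for the tachogram is an assumption.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid
from scipy.interpolate import interp1d

def hrv_metrics(rr_s):
    """Time- and frequency-domain HRV metrics from R-R intervals (s)."""
    rr_ms = 1000.0 * np.asarray(rr_s)
    sdrr = np.std(rr_ms, ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))
    # Evenly resample the RR tachogram before Welch's method
    t = np.cumsum(rr_s)
    fs = 10.0                                  # assumed resampling rate
    ti = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = interp1d(t, rr_ms)(ti)
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
    lf_band = (f >= 0.2) & (f < 0.5)           # rat LF band from the text
    hf_band = (f >= 0.5) & (f < 2.0)           # rat HF band from the text
    lf = trapezoid(pxx[lf_band], f[lf_band])
    hf = trapezoid(pxx[hf_band], f[hf_band])
    total = trapezoid(pxx, f)
    return {"SDRR": sdrr, "RMSSD": rmssd, "HF": hf,
            "HF%": 100.0 * hf / total, "LF/HF": lf / hf}
```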
Previous studies have shown that generalized vagal activity is decreased by anesthesia; at the same time, individual fiber types display varied behavior, with both increases and decreases in firing. Because of the ambiguity of the relationship between vagal activity, HRV, and isoflurane anesthesia, we investigated the correlation between HRV and tonic vagal activity without anesthesia. Non-anesthetized recordings were collected from 4 animals, repeating the measurements described above to determine whether tonic vagal activity was correlated with HRV under these conditions. In this case, vagal RMS was averaged over 3-10 min periods, with recordings conducted for 1-4 h. Across all animals, vagal tone without anesthesia was 2.38 ± 2.08 µV RMS (358 recordings from 4 animals), a significant increase over anesthetized vagal tone (two-sample t-test, p = 5.3E−7). Hedge's g was used to measure the effect size of the increased vagal tone without isoflurane, yielding a medium effect size of g = 0.70. P values and average correlation coefficients between vagal tone and HRV are shown in Table 2. Supplemental Table 2 shows the correlation coefficients and correlation p values for each animal separately, and Supplemental Fig. 2 shows a scatterplot of tonic vagal activity against each HRV metric investigated for one animal. Once again, none of the HRV metrics was found to have a correlation significantly different from zero across all 4 animals, though rat #6 did have significant correlations for several HRV measures (positive for SDRR, CVRR, and LF/HF; negative for HF%). From these measurements, we conclude that tonic vagal activity in both awake and anesthetized animals has no consistent correlation with any HRV metric, suggesting that HRV is not a valid estimate of cervical vagal tone, with or without anesthesia.
Average vagal activity is increased during inspiration compared to expiration. The vagal/parasympathetic components of HRV are thought to arise from respiratory sinus arrhythmia (RSA), whereby inspiration and expiration cause natural changes in heart rate (heart rate typically increases during inspiration and decreases during expiration). Thus, HRV may not be a measure of tonic vagal activity, but rather of phasic vagal activity that is modulated with respiration. Coherent averaging is a technique which increases the SNR of a periodic signal by averaging the recording power on a specific trigger; here, we use a respiration-based trigger to detect changes in vagal activity during different phases of respiration. An accelerometer attached to the torso of the animal was used to measure respiration under anesthesia while simultaneously recording ENG and ECG. Figure 3a shows an example of average vagal RMS (50 ms bin size) alongside the average respiratory trace recorded by the accelerometer for a 10-min recording period (497 breaths). Under isoflurane anesthesia, we observed a reversal of normal RSA: as shown in Fig. 3b, heart rate decreases during inspiration and increases during expiration. Additionally, average vagal RMS was significantly increased during inspiration compared to expiration (Fig. 3c); a paired t-test of 61 recordings from 4 animals yielded p = 8.4E−7, with a medium effect size of Cohen's d = 0.67. The average RMS during expiration was subtracted from the average during inspiration to obtain a quantitative measure of the phasic respiratory signal. This respiratory vagal difference (RVD) is consistent with predicted and measured vagal activity for lung afferents, and may also include the effect of respiration on other vagal fiber types. Respiratory vagal difference can be estimated from the ECG. Because of respiratory sinus arrhythmia, it is possible to obtain a measure of respiration from the ECG, so we investigated whether RVD could be measured using only the ECG and vagal ENG. Under normal conditions, HR decreases during expiration and increases during inspiration; this is reversed with isoflurane anesthesia. Figure 4 shows how heart rate changes were used to average the ENG signal during respiration and estimate RVD both with and without anesthesia. Vagal RMS was calculated in 50 ms bins, and the bins were averaged during periods of heart rate increase (at least 0.25 s long) and heart rate decrease (at least 0.25 s long). Sample data are shown in Fig. 4g,h, while Fig. 5 shows that during inspiration there was an overall increase in vagal activity compared to expiration, both with and without anesthesia (paired t-tests: p = 1.8E−5, 63 recordings in 6 animals with anesthesia; p = 5.4E−6, 358 recordings in 4 animals without anesthesia). The paired-sample effect size, Cohen's d, was measured for RVD with and without anesthesia: under isoflurane anesthesia, RVD has a medium effect size (d = 0.59), while without anesthesia there is only a small effect size (d = 0.24). RVD measured with both the accelerometer and heart rate methods (4 animals, 46 total recordings) showed high correlation between the two methods (R = 0.96, p = 2.3E−26; see Supplementary Fig. 3). Thus, RVD can be estimated from the heart rate.
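A hedged sketch of this heart-rate-based RVD estimate follows; the bin width (50 ms) and the 0.25 s minimum run length come from the text, while the input arrays and run-detection details are assumptions for illustration.

```python
import numpy as np

def rvd_from_heart_rate(vagal_rms_bins, hr_bins, bin_s=0.05, min_run_s=0.25):
    """Respiratory vagal difference (RVD) from binned vagal RMS and
    heart rate: mean RMS during HR-increase runs minus mean RMS
    during HR-decrease runs, keeping only runs >= min_run_s."""
    min_bins = int(round(min_run_s / bin_s))
    slope = np.sign(np.diff(hr_bins))          # +1 rising HR, -1 falling
    up, down = [], []
    start = 0
    for i in range(1, len(slope) + 1):
        # Close a run when the slope sign changes or the data end
        if i == len(slope) or slope[i] != slope[start]:
            if i - start >= min_bins and slope[start] != 0:
                run_mean = vagal_rms_bins[start:i].mean()
                (up if slope[start] > 0 else down).append(run_mean)
            start = i
    return np.mean(up) - np.mean(down)
```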
Respiratory vagal difference does not correlate with HRV. Variability in heart rate is closely related to respiration, with certain HRV metrics specifically measuring respiratory variations. The high-frequency components (HF and HF%) specifically measure the power of HRV at respiratory frequencies and are thought to be an accurate representation of vagal tone. Because of this connection between the respiratory cycle and HRV, we hypothesized that the magnitude of RVD (which measures the change in vagal activity during respiration) could be a better predictor of HRV, the putative measure of "vagal tone", than tonic activity. RVD, both with and without anesthesia, was compared to average HRV metrics. Tables 3 and 4 show p values and average correlations for all animals. Supplemental Tables 3 and 4 show correlations and correlation p values for each animal separately, and Supplemental Figs. 4 and 5 show sample scatterplots of the RVD with each HRV metric investigated with and without anesthesia. Only two animals showed a significant correlation with an individual metric under anesthesia: rat #3, which was negatively correlated with RMSSD, and rat #4, which was positively correlated with HF%. No animals showed a significant correlation with any HRV measures without anesthesia. These data show that there is a clear change in vagal activity during respiration, as indicated by the RVD measurements. However, RVD is not correlated with any HRV metrics, suggesting that HRV is not a valid measure of overall phasic respiratory activity in the vagus.

Table 3. Correlations of anesthetized respiratory vagal difference with HRV. Data shown are the averages for six animals (63 total recordings). None of the six HRV metrics have correlations significantly different from zero (Bonferroni-corrected significance level of p = 0.0083). Supplemental Table 3 shows individual correlations and p values for each animal.

Figure 5. (a) With anesthesia: average RMS and 95% confidence interval in blue, RVD and 95% confidence interval in grey. Average RVD, calculated for 63 total recordings from 6 animals, was 0.0828 ± 0.14 µV RMS, which is significantly different from zero (p = 1.8E−5). (b) Without anesthesia: average RMS and 95% confidence interval in blue, RVD and 95% confidence interval in grey. Average RVD, calculated for 358 total recordings from 4 animals, was 0.15 ± 0.60 µV RMS, which is significantly different from zero (p = 5.4E−6).
Discussion
Bioelectronic medicine and autonomic therapy are rapidly growing fields, and technologies to interface with small autonomic nerves will be crucial to further the research in these disciplines. While previous approaches have had limited success with long-term recording, CNTY electrodes provide a stable, high-SNR interface for chronic peripheral nerve interfacing 33 . With the novel techniques for electrode manufacture and preparation described here, CNTYs are now easier to use and more reliable to implant. Anesthesia is also known to alter HRV and individual fiber dynamics 35-43 ; yet, vagal recordings had not previously been obtained under chronic conditions without anesthesia. For the first time, we have shown that vagal tone can be recorded using CNTY electrodes in awake, behaving rats, greatly expanding the scope of possible autonomic studies. Of particular interest to physiologists is the relationship between vagal activity, respiratory sinus arrhythmia, and HRV. Respiratory sinus arrhythmia is a well-known phenomenon present in mammals and other animals that links HRV to respiration 44 . The firing rate of cardiac vagal efferents has been shown to be inversely correlated with heart rate in acute experiments, and can be used to predict changes in heart rate associated with respiratory sinus arrhythmia 45,46 . Vagal activity is also linked to HRV by a correlation between the parasympathetic control of heart rate (measured by the degree of HR increase occurring after vagal blockade) and the peak-to-peak variations in heart rate caused by respiration 25,47 . Initially, studies focused on the cardiac component of the vagus nerve, but since then, measurements of HRV have been used to represent overall vagal or parasympathetic activity. This concept was likely strengthened by studies linking altered HRV to non-cardiac diseases. For example, patients with diabetes have lowered high-frequency HRV 48 . Another study showed decreases in SDRR and normalized high-frequency HRV for patients with COPD 20 and concluded that an increase in vagal activity and a lack of sympathetic response to stimulus may contribute to airway obstruction. Multiple studies have reported decreased HRV in patients in critical care, and the recovery of normal HRV is associated with survival and general improvement in children and adults 21 . On the other hand, epileptic patients show an increase in respiratory HRV during interictal periods 12 , demonstrating a potential link to hyperactivity in the brain. As a result, HRV is thought to be representative of overall vagal activity 19 . While it is possible that changes in cardiac vagal activity correspond to changes in the activity of other fibers, our data show that HRV is not representative of total vagal activity, even when averaging across the respiratory cycle.
Vagus nerve stimulation (VNS) is an electronic stimulation therapy used for a variety of diseases, most famously for the treatment of drug-resistant (refractory) epilepsy 49 . Despite the assorted successes of VNS, the mechanisms of action are largely unknown. One concept is that stimulating the vagus nerve could increase vagal tone, thus offsetting some negative effects seen in patients. However, conflicting results on the efficacy of VNS in both the right and left vagus to increase HRV have emerged 9,50 . Recently, three major clinical trials were completed with the goal of utilizing VNS to correct autonomic imbalance and improve patient outcomes in heart failure. NECTAR-HF (Boston Scientific), INOVATE-HF (BioControl Medical), and ANTHEM-HF (Cyberonics) all discussed the importance of altered HRV in the study rationale 51-53 , and ANTHEM-HF and NECTAR-HF both measured the effect of VNS on HRV. While both of these studies observed an increase in some of the HRV metrics examined, all three of the clinical trials failed to show clear benefits to patients with VNS treatment 50,54,55 , despite previously promising results in dogs, rats, and humans 56-58 . The ANTHEM-HF trial is the only study which yielded overall positive results, such as improvements in left ventricular ejection fraction. Even so, the authors concede that the placebo effect may have affected their results, and further investigation is necessary. There is a great unmet need to increase our knowledge of vagus nerve signals and to answer questions such as: "What is vagal tone?", "How does vagal activity, or true vagal tone, relate to the clinical measures of HRV?", and "How can VNS be used to restore autonomic function?". Direct recording of vagal tone is therefore crucial to improving our ability to study autonomic therapies and to evaluate their effectiveness. The vagus contains a variety of fiber types, targets, and sources, with most being afferent fibers 2 . Thus, tonic vagal activity is likely to contain both afferent and efferent firing, which would vary greatly depending on physiological status. We have shown that direct measurement of vagal tone is not correlated with common measures of HRV in rats with and without anesthesia. HRV is used as a predictor for vagal tone in both rodents and humans 59 ; therefore, this result could be extended to humans and suggests that clinical measures of HRV are not representative of overall vagal activity, and/or that the term "vagal tone" is misleading. Since the effects of vagal blockade and vagotomy on HRV are well established, it is likely that HRV is associated with the activity of a subset of vagal fibers which modulate their activity in response to physiological changes. Variations in HRV could also occur due to changes in downstream receptors or neurotransmitters, independent of changes in neural activity. Recording signals from the cardiac branches of the vagus would allow for a more thorough investigation of the relationship between cardiac vagal activity and HRV.
However, the cardiac branches in the rat are prohibitively small and difficult to access, and such a study would likely need to utilize a larger mammalian model. These results do provide a potential explanation for the relative lack of success and consistency of VNS for heart failure, since stimulation in these studies was delivered on a fixed on/off cycle, based on the assumption that changing overall vagal activity was equivalent to increasing HRV. Our results do not explain why VNS has been effective in treating a wide array of other chronic conditions, but they do suggest that these actions likely take place through the activation of afferent fibers in the vagus nerve.
Single-unit recordings show that cardiac vagal efferents are highly correlated with respiratory sinus arrhythmia 45-47 . Additionally, pulmonary vagal afferents are correlated with respiration 60 . However, such experiments inherently capture only a small subset of vagal activity, whereas the data presented in this study come from many different locations within the nerve. Acute whole-nerve recordings in the mouse vagus have also shown phasic activity synchronized with respiration 61-63 , a trend which is clearly replicated in these data from the rat vagus. Using coherent averaging to increase the SNR revealed that vagal activity is increased during inspiration relative to expiration. This technique has previously been used with R-peak-triggered averaging (based on the ECG) to detect changes in vagal activity occurring before acutely induced seizures 64 . The respiratory vagal difference, RVD, reflects the average activity of many fibers and indicates a strong coupling between vagal firing and respiration. However, RVD did not significantly correlate with HRV, which further demonstrates that HRV is not an accurate measure of vagal tone. RVD signals originate from many types of vagal fibers, such as cardiac vagal efferents, lung afferents (which may be part of the HRV reflex), and fibers that detect changes in blood pressure. Thus, while phasic activity is the likely driver of HRV, the RVD signal also includes fibers that are unrelated to cardiac control, which may explain why RVD does not correlate directly with HRV.
As the first direct study of vagal tone, this work is somewhat limited in its scope. First, HRV is often measured under non-normal physiological conditions, such as chronic illness. The experiments described here could be repeated using models of chronic disease, such as heart failure, high blood pressure, or diabetes, to determine if vagal tone is altered under those conditions. Physiological interventions, such as exercise, could also be used to elicit known changes in HRV that could be compared to vagal recordings. Pharmacological and electrical stimulation-based therapies should also be investigated, providing new information on how these treatments affect vagal activity both acutely and long-term. Although many drugs known to alter vagal tone, such as atropine, primarily act on receptors rather than on the vagal fibers themselves, they may have secondary effects on vagal activity. Furthermore, heart rate is controlled not only by the vagus nerve but by the sympathetic nervous system as well. There is some evidence that sympathetic tone can be measured by low-frequency HRV, and sympathovagal balance by the LF/HF ratio 65 , though this is a controversial topic 66,67 . The application of chronic recording with CNTY electrodes for both sympathetic and vagal tone could greatly improve our understanding of how the sympathetic and vagal systems interact to control HRV, and how these systems are affected by pathophysiological conditions and stimuli.
Overall, our results indicate that it is now possible to study vagal activity in a chronic animal model and to ask new questions about autonomic physiology. By directly measuring vagal activity, we investigated two measures of "true vagal tone": (1) average vagal activity, which is the most fitting measure of "tone", and (2) phasic respiratory activity, which is more closely related to the variations in heart rate. Since neither measure showed a significant correlation with HRV metrics, regardless of the presence of isoflurane, we conclude that HRV is not an accurate measure of vagal tone in rats. These results can pave the way for future studies on the exact nature of HRV, and can serve as a basis for investigations of novel autonomic therapy paradigms. Additionally, the improvements to the CNTY electrodes and recording setup greatly broaden the research that can be conducted in the peripheral nervous system and can lead to new breakthroughs in the study of the vagus nerve.
Methods
Electrode manufacture. CNT yarns were manufactured at Case Western Reserve University as described previously 33 . CNTYs were then mated to stainless steel 35NLT-DFT wire (Fort Wayne Metals) with conductive epoxy resin (H20E, EPO-TEK). Dacron mesh and silicone elastomer (MED-4211/MED-4011, NuSil Silicone Technology) were added to further secure and insulate the CNT-DFT junction. The free end of the CNTY was tied to the end of an 11-0 nylon suture (S&T 5V33) using a fisherman's knot. The DFT-CNTY-suture assembly was coated with 5 µm of parylene C (vapor deposition coating, SMART Microsystems), and then a small section (~200 µm long) of insulation was removed using a laser welder (Kelanc Laser). Laser settings were 1 A current, 0.3 ms pulse width, and 300 µm spot diameter. Parylene-C removal was confirmed by measuring the impedance of the recording site in saline before and after de-insulation (typically ~10 MΩ before de-insulation and ~10 kΩ after). Figure 1a,b show a schematic of the electrode assembly and the de-insulated recording site. Figure 1c,d show close-ups of the CNTY-suture knot and the de-insulated recording site, and Fig. 1e shows an electrode implanted in the rat vagus nerve.
Surgery. All surgical and experimental procedures were done with the approval and oversight of the Case Western Reserve University Institutional Animal Care and Use Committee (protocol number 2016-0328) to ensure compliance with all federal, state, and local animal welfare laws and regulations. Electrodes were implanted in male Sprague-Dawley rats between 7 and 12 weeks of age. The left cervical vagus nerve was exposed through a midline incision along the neck. Muscles on the left side of the neck were separated to expose the vagus nerve and carotid artery, which normally lie joined together. The vagus nerve was separated from the artery and held in slight tension on a glass hook. Two CNTY electrodes were implanted in the nerve by sewing the suture-CNTY electrode through ~1-2 mm of the nerve, as shown in Fig. 1e. Electrodes were implanted approximately 2 mm apart. The knot was pulled through the nerve, and then pulled back such that the knot rested against the epineurium and the recording site remained inside the nerve. After implantation, the nerve and surrounding tissue were covered with ~1 mL of fibrin glue (Tisseel, Baxter International Inc.) to secure the electrodes in place. The DFT ends of the electrode were tunneled to the top of the skull and then soldered to a connector (Omnetics Connector Corporation MCP-5-SS), which was fixed to the top of the skull using dental cement. The amplifier ground was connected to a screw placed in the skull. Electrodes were implanted for chronic recording in 8 animals, with implant duration varying from two to eleven weeks. Early termination of the experiment sometimes occurred as a result of skin erosion exposing tunneled leads, or from damage to the headcap connector causing pain or discomfort to the animal.
Implantation of the ECG telemeters was performed as described by Kaha Sciences 68 . Telemeters were implanted in the abdomen and fixed to the abdominal muscle, and ECG leads were tunneled to the chest, with one lead placed on the xyphoid process, and the other placed near the bottom of the sternohyoid muscle. Vagus nerve CNTY implants and ECG telemeter implants were performed during a single surgery to minimize impact on the animal.
Recording. Recordings were carried out in awake, behaving animals and in animals anesthetized with isoflurane gas. Four animals underwent recordings under anesthesia only, two animals were recorded without anesthesia only, and two animals were recorded both with and without anesthesia (a total of 6 animals with anesthesia and 4 without). For anesthetized recording, animals were induced with 4% isoflurane and maintained at 2% isoflurane in 100% oxygen. A quarter-sized mini-board (shown inset in Fig. 2a) was connected to the headcap. This board amplifies and digitizes the signal (via an Intan RHD2216 recording chip) before sending it to be displayed and saved on a laptop computer. Eight-channel hardware averaging was utilized to further increase the SNR 34 . Neural recordings were sampled at 20 kHz with a 5 kHz low-pass filter. ECG was simultaneously recorded at a 1 kHz sampling rate using a TR50B Biopotential telemeter (Kaha Sciences) via an ADInstruments PowerLab and LabChart software. In some cases, an accelerometer (Adafruit ADXL335) was fixed to the animal's torso to detect movement that occurred during respiration. For awake recordings, the amplifier board was secured to the headcap connector using a custom-made 3D-printed pin-locked mechanism (Supplemental Fig. 6a) and attached to a PlasticsOne commutator, which allowed the rat to move freely around the cage, as shown in Fig. 2a. This recording setup is also robust against motion of the animal and the board, as shown in Supplemental Fig. 6b. During recordings, the telemeter charging field was turned off as much as possible due to significant interference with the ENG signal. Periods where the telemeter charging field was active, or where the ECG signal was poor (due to packet loss from the telemetry or EMG contamination), were excluded from the analysis.
Signal processing. ENG, ECG, and accelerometer data were imported into MATLAB, where they were further processed. ENG was band-pass filtered from 800 to 5000 Hz to minimize interference from EMG, ECG, or other possible sources; ECG was low-pass filtered with a cutoff of 300 Hz. Electrode noise was estimated for each recording from electrode impedance (Johnson noise) and input amplifier noise, as described previously 34 , and this noise estimate was subtracted from the average recorded RMS to estimate the vagal tone RMS. Instantaneous heart rate was calculated by taking the inverse of successive R-R intervals and was interpolated to generate a signal with a fixed sampling rate of 1 kHz. HRV metrics were calculated using the LabChart HRV module. Beat detection typically included beats with 150-250 ms RR intervals and 0-1.8 complexity (though these values were adjusted slightly between animals and recording days as needed). Ectopic beats were excluded from analysis, and recordings with more than 10% ectopic beats or artifacts were excluded, with artifacts and high noise (present only in non-anesthetized recordings) being the most common reasons for exclusion. Data were analyzed in continuous sections between 3 and 10 min long, or approximately 1000-3000 RR intervals.
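A minimal Python equivalent of these preprocessing steps might look as follows (the authors used MATLAB and the LabChart HRV module; the filter design and function names below are assumptions for illustration):

```python
# Illustrative preprocessing sketch (not the authors' MATLAB code).
import numpy as np
from scipy import signal, interpolate

def preprocess(eng, fs_eng, r_peak_times):
    """eng: raw ENG sampled at fs_eng (20 kHz in this study);
    r_peak_times: ECG R-peak times in seconds."""
    # Band-pass the ENG 800-5000 Hz to suppress EMG/ECG contamination.
    sos = signal.butter(4, [800, 5000], btype="bandpass",
                        fs=fs_eng, output="sos")
    eng_filt = signal.sosfiltfilt(sos, eng)
    # Instantaneous heart rate: inverse of successive R-R intervals,
    # interpolated onto a fixed 1 kHz time base.
    rr = np.diff(r_peak_times)              # R-R intervals in seconds
    hr = 60.0 / rr                          # beats per minute
    t_hr = r_peak_times[1:]
    grid = np.arange(t_hr[0], t_hr[-1], 0.001)
    hr_1khz = interpolate.interp1d(t_hr, hr)(grid)
    return eng_filt, grid, hr_1khz
```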
Statistical methods. Results in the text are reported as mean ± standard deviation, while error bars and shaded areas represent the 95% confidence interval (mean ± 1.96 × SEM). T-tests were used to measure the significance of correlations, with a Bonferroni-corrected significance level of 0.0083; paired t-tests were used to measure the significance of the respiratory vagal difference (comparing inspiration to expiration), and a two-sample t-test was used to compare average vagal tone between anesthetized and non-anesthetized recordings. All reported p values are two-tailed.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request. Some custom analysis code was created to calculate and measure RVD and is available from the corresponding author upon reasonable request.
Transverse vs torsional ultrasound: prospective randomized contralaterally controlled study comparing two phacoemulsification-system handpieces
Purpose To compare surgical efficiency and multiple early clinical outcome variables in eyes undergoing phacoemulsification using either a transversal or a torsional ultrasound system. Setting Assil Eye Institute, Beverly Hills, CA, USA. Design Prospective, randomized, clinician-masked, contralaterally controlled single-center evaluation. Patients and methods Patients seeking cataract removal in both eyes with implantation of multifocal intraocular lenses were randomly assigned to one of two treatment rooms for phacoemulsification with either a transverse ultrasound system or a torsional handpiece system. The contralateral eye was treated at a later date with the alternate device. A total of 54 eyes of 27 patients having similar degrees of cataract, astigmatism, and visual potential were included. All operative data were collected for analysis, and patients were followed for 3 months after surgery. Results Similar visual acuity was reported at all postoperative visits between the two groups. Mean phacoemulsification time and total power required were both significantly lower with the transverse system than with the torsional technique (P<0.05 for both). Similarly, mean total balanced salt solution used was significantly less with the transverse system vs torsional (P<0.05). Postoperative safety data demonstrated significantly lower endothelial cell loss at 1 day and 1 month (P<0.05) with transverse vs torsional. Macular swelling was less at 1 week, 1 month, and 3 months with transverse vs torsional, although the difference did not achieve significance (P=0.1) at any single time point. Clinically detectable corneal edema was reported less frequently at all postoperative time points with the transverse system. Conclusion The transverse ultrasound system appeared to be associated with less balanced salt-solution use, less phacoemulsification time, and less power required than the torsional phaco system. Postoperative data suggested that improved phaco efficiency may translate to a better overall safety profile for the patient.
Introduction
Over 20 million Americans 40 years of age or older are affected by cataracts. Removal of the cataract by phacoemulsification, together with implantation of an intraocular lens (IOL), continues to be the most commonly performed surgical procedure in the USA, with the number of cataract procedures performed each year having grown to over 1,500,000. 1 Improvements in surgical technique, phacoemulsification technology, viscoelastic substances, and IOLs now enable over 97% of patients to recover from cataract surgery with no major complications. 2 Despite these promising results, enhanced efficiency and safety remain the top priority for continued research and product development. Surgeons continue to seek advances in phacoemulsification cutting efficiency, reduced time, and lower power settings, in order to further prevent complications such as incision burn, endothelial cell loss, and pseudophakic cystoid macular edema. 3,4 Among the technological innovations undergoing research and development in cataract extraction, overall reduction of phacoemulsification energy and time continues to be a major objective for future improvements. 5 Continued research and development across all aspects of phacoemulsification equipment and procedure has resulted in enhancements of vacuum and flow settings, reduction in the amount of energy delivered, and reduced phacoemulsification time. 6,7 With conventional phacoemulsification, the ultrasound power used to emulsify the lens originates from the longitudinal movement of the phacoemulsification needle, with the tip of the handpiece moving at a high frequency in an advancing and retreating motion. 8 This ultrasound mode can produce a repulsion effect, because the phaco tip pushes the nucleus away with each stroke as it advances. As a result, the ultrasonic impact upon the lens fragments can be interrupted and efficacy compromised. Transverse ultrasound-power modulation is a technological development that minimizes chatter from the nuclear fragments being emulsified. 9 This type of modulation enables a "side-to-side" movement that may reduce the frictional heat generation noted with traditional longitudinal-only movement. Torsional oscillation is another developmental improvement, noted to reduce the amount of phacoemulsification energy required and to increase the efficiency of removing the cataractous nucleus by fragmenting the cataract via shearing in place of the conventional jackhammer effect. 9 These developments in oscillation, together with the advancement of micropulse-energy delivery, have resulted in reduced repulsion of the lens material during aspiration and improved "followability", which is the beneficial tendency of nuclear fragments to be aspirated in a relatively feathery and continuous fashion. 6 Two of the newer phacoemulsification systems incorporating these advances are a transverse ultrasound system (WhiteStar Signature™ system with Ellips ® FX Transversal Ultrasound; Abbott Medical Optics Inc, Santa Ana, CA, USA) and a torsional system (Infiniti with the Ozil ® torsional handpiece; Alcon Laboratories Inc, Fort Worth, TX, USA). The Infiniti Ozil uses torsional tip technology integrated into the Infiniti phacoemulsification system, producing an oscillatory movement around the longitudinal axis of the tip (eg, side-to-side movement), as opposed to the to-and-fro movement of traditional phaco tips.
Torsional movement keeps the tip in contact with the cataract, reducing the chance of nuclear material being repelled from the tip and decreasing heat production. 4 With side-to-side torsional movement, emulsification is optimized when using a bent phaco tip. The WhiteStar Signature Ellips FX uses a combination of longitudinal and transversal phaco, which can be delivered with either a straight or a bent phaco tip. Micropulses of power are delivered, generating side-to-side or longitudinal movement. The system also incorporates both peristaltic and Venturi fluidics. The fusion of different technologies into one system is purported to offer flexibility for the surgeon and possibly to improve efficiency.
A prospective, randomized study was conducted comparing the two phaco systems, the WhiteStar Signature system with Ellips FX transverse ultrasound handpiece and the Infiniti with Ozil torsional handpiece, to evaluate and compare intraoperative efficiency and postoperative outcomes, primarily with regard to visual acuity, ocular (corneal and macular) edema, and inflammation. The outcome variables were measured preoperatively and postoperatively.
Patients and methods
This single-site study was designed as a prospective, clinician-masked, contralaterally controlled evaluation of 54 eyes of 27 patients. For their first eye, patients scheduled for bilateral phacoemulsification with IOL implantation were randomly assigned to one of two treatment rooms according to a randomization schedule. One treatment room was equipped with the Ellips FX transverse system, and the second room was equipped with the Infiniti with Ozil torsional system. Table 1 presents the specific settings used by the surgeon on each phacoemulsification machine with regard to sculpt, chop, and epinucleus parameters. Within 6 weeks following the first-eye surgery, the contralateral (second) eye underwent cataract surgery using the alternate phacoemulsification device. Patients in the study were implanted with either the Sensar ® monofocal IOL with OptiEdge™ or the Tecnis ® multifocal IOL (both Abbott Medical Optics). Patients received the same IOL in both eyes. The investigator and surgeon (KA) is an established board-certified ophthalmologist who is experienced in using both phacoemulsification systems and handpieces. KA had used the phaco systems interchangeably for the 6 months prior to initiation of the study, in order to optimize the settings for each system.
Patients were enrolled in the trial if they were at least 21 years of age, anticipated to undergo phacoemulsification in both eyes with IOL implantation, and were willing, available, and of sufficient cognitive awareness to comply with the examination procedures. Patients were offered the option of a monofocal or multifocal IOL, and were informed that the same IOL would be implanted in each eye. Both eyes were required to have similar degrees of cataract and astigmatism, including topographic cylinder difference of less than 2.0 D and globe axial length difference of less than 0.5 mm, as measured by the IOLMaster ® (Carl Zeiss Meditec AG, Jena, Germany). Patients were excluded if the anticipated visual potential of either eye was worse than 20/25, were undergoing asymmetrical use of ocular medications, or were noted to have the presence of glaucoma, macular disease, or corneal disease in either eye.
Following institutional review board approval and acceptance of the written informed consent by each subject, patients underwent a standard preoperative exam. All steps in this study were conducted in an ethical manner in accordance with the Declaration of Helsinki. Manifest refraction, visual acuity, and grading of the cataract (according to the nuclear color and opalescence grades of the Lens Opacities Classification System III) was carried out and recorded. Also, the standard tests of intraocular pressure, biometry, and slit-lamp evaluation were performed. In order to assess comparative outcomes in both randomized groups, additional testing was performed, including corneal pachymetry (Orbscan II scanning slit topography; Bausch & Lomb Inc, Bridgewater, NJ, USA), specular microscopy (CellChek; Konan Medical Inc, Nishinomiya, Japan) and assessment of macular thickness with optical coherence tomography (OCT; Cirrus high-definition SD-OCT; Carl Zeiss). All of these tests were performed at the preoperative and postoperative visits.
Phacoemulsification and implantation of the IOL were performed using the surgeon's standard procedure, including a perilimbal clear corneal temporal incision measuring 2.5 mm. The surgeon performed central grooving with horizontal nuclear chopping using a bimanual technique, followed by coaxial irrigation/aspiration and capsular polishing. Operative outcome measures included notation of total phacoemulsification time, total volume of balanced salt solution (BSS; Alcon) used, time required for hydration-technique wound closure, and phacoemulsification-power settings. The occurrence of any operative complications or adverse events was also noted.
Postoperatively, patients returned for evaluation at 1 day, 1 week, 1 month, and 3 months. At these visits, measurements included uncorrected visual acuity (LogMAR), specular microscopy, corneal pachymetry, intraocular pressure, and macular OCT. A comprehensive slit-lamp exam was also performed at all visits to grade corneal epithelial punctate staining and the degree of stromal edema; edema was graded using a 5-point scale. The clinical evaluator was trained by the investigator for consistency, and was masked with respect to the specific phacoemulsification device used. At the 3-month visit, mandatory measurements consisted only of specular microscopy and macular thickness. Adverse postoperative events were defined as any untoward medical occurrence, whether or not it had a causal relationship with the treatment, including a worsening of any previously identified condition.
Patient demographics
Cataract surgery was performed by a single surgeon (KA) on 54 eyes of 27 patients between July and September 2011. Baseline characteristics between the two randomly assigned groups (Ellips FX and Infiniti Ozil) were comparable. The mean age of the patient population at the time of surgery was 69±7.8 years (standard deviation); 65% of patients were female. Mean preoperative manifest refraction spherical equivalent for the overall population was -0.16 D. All patients completed the prospective study follow-up.
Operative outcomes
The surgeon's standard phacoemulsification technique was performed in all cases in both the Ellips FX and Infiniti Ozil groups. With regard to surgical efficiency, the effective phaco time with Ellips FX was 31.97 seconds shorter than with Infiniti Ozil, a statistically significant difference (P=0.0019). The confidence interval (-50.43 to -13.52) indicated that at best, Ozil took 13.52 seconds longer than Ellips, and at worst, effective phaco time was 50.43 seconds longer with Ozil than with Ellips. The mean required phacoemulsification-power setting was lower with the Ellips FX system (45.2 mJ) than with the Infiniti Ozil (62.5 mJ; P=0.208) (Figure 1). The mean difference in power was 14.17 mJ (higher for Infiniti), which was not statistically significant.
The mean volume of BSS used during surgery was significantly less with the Ellips FX (313 cc) than with the Infiniti Ozil handpiece (350 cc) (P=0.037) (Figure 2).
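Because each patient contributed one eye to each device, these operative comparisons are paired within patients. The exact test is not stated for every outcome; the sketch below (hypothetical data layout, Python) illustrates how a paired mean difference and the kind of 95% confidence interval reported above can be computed under a paired t-test assumption.

```python
# Illustrative paired analysis for the contralateral-eye design (assumed
# to be a paired t-test; data arrays are hypothetical).
import numpy as np
from scipy import stats

def paired_comparison(ellips_sec, ozil_sec):
    """Effective phaco time (s) per patient, one eye per device.
    A negative mean difference favors Ellips FX (shorter phaco time)."""
    diff = np.asarray(ellips_sec) - np.asarray(ozil_sec)
    t_stat, p = stats.ttest_rel(ellips_sec, ozil_sec)
    half = stats.t.ppf(0.975, len(diff) - 1) * stats.sem(diff)
    ci = (diff.mean() - half, diff.mean() + half)   # 95% CI
    return diff.mean(), ci, p
```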
Postoperative outcomes
Statistically significant differences in postoperative findings were noted at various visits between the two treatment groups for endothelial cell loss and change in corneal pachymetry. Postoperative endothelial cell loss was significantly less at both the 1-day and 1-month follow-up periods in the Ellips FX group. In the FX group, the mean reduction in endothelial cells measured 45.1 cells at 1 day and 127.3 cells at 1 month, compared to Ozil, in which the reduced cell counts measured 170.7 at 1 day and 382.3 at 1 month (P<0.05 for both time points). This difference diminished and did not achieve significance at 3 months (Figure 3).
The mean change in corneal pachymetry for each visit during the 3-month follow-up is illustrated in Figure 4. The change in corneal thickness at 1 day postsurgery was statistically significantly less with the Ellips FX (P=0.046) than with the Infiniti Ozil. At day 1, the mean change in corneal pachymetry was 30.3 μm with FX vs 51.0 μm with Ozil.
Slit-lamp exam observations confirmed less frequent corneal stromal edema in the Ellips FX group at postoperative day 1; no stromal edema was noted in 77.7% of Ellips FX eyes compared to only 30.3% of Infiniti Ozil eyes. No moderate or severe corneal edema was seen in the Ellips FX eyes, whereas three eyes (11%) in the Infiniti Ozil group had moderate or severe corneal edema (Figure 5). No corneal stromal edema was observed in either group at 1 week or later after surgery. Macular swelling, as measured by the mean change in OCT-measured central macular thickness after cataract removal, was observed to be less in the Ellips FX group compared to the Infiniti Ozil group. These data demonstrated a significant difference only at the 1-week time point (Figure 6). However, when all time points were pooled, there was also a significant difference between the two groups. The difference in mean pooled macular edema between the two groups was -17.70 (P=0.016). No postoperative adverse events were reported in any eyes enrolled in the study at any time during the follow-up period.
Uncorrected visual acuity (LogMAR) outcomes were similar between the two groups at all postoperative visits. There was no significant difference in uncorrected visual acuity between the Ellips FX-treated and Infiniti Ozil-treated groups at any follow-up visit throughout the 3-month period (Figure 7).
At the conclusion of each surgical procedure, the surgeon completed a standardized questionnaire to assess overall cutting efficiency, chamber stability, clogging of the phaco tip, and overall satisfaction with the performance of the handpiece and phaco machine used. In general, the analysis of the questionnaire data indicated similar outcomes with regard to handpiece performance and satisfaction with the equipment. The surgeon noted, however, that in two cases with the Infiniti Ozil, the tip clogged during the procedure, whereas there were no instances of clogging found with the Ellips FX.
Discussion
Numerous studies have confirmed that advances in phacoemulsification technology in recent years have ultimately provided a reduction in the overall ultrasound energy used and the time needed to remove the cataract, translating into improved efficiency and safety for the patient. We conducted this study to compare two such systems. The Infiniti Ozil utilizes a torsional handpiece, with a side-to-side, nonlongitudinal movement of the phaco tip. The Ellips FX handpiece blends two movements, incorporating both lateral and longitudinal components. The evaluation of these two devices was undertaken via a match-paired, randomized controlled clinical trial, as this approach is believed to minimize allocation bias in the assignment of treatments. In addition to randomization of patients' eyes into one of two treatment groups, we sought to further diminish outcome bias by having all treatments performed by the same surgeon, utilizing corresponding equipment in each eye, conducting postoperative examinations with uniform testing, and having all patient examinations performed by a single masked observer. The match-paired approach of comparing the two eyes of the same patient was further intended to minimize the impacts of demographic and interindividual variability.
Results demonstrated excellent intraoperative safety, efficiency, and overall performance with both systems. The transverse Ellips FX handpiece group, however, demonstrated a significantly lower phaco time (P<0.05) and significantly less usage of BSS (P<0.05) during the procedure vs the Infiniti Ozil. This may represent a clinically significant difference, as it has previously been reported that the single most important predictor of good vision and a clear cornea postoperatively is the total amount of phaco energy delivered into the eye. 6 The lower BSS usage during the procedure may reflect better surgeon control of the handpiece and of the nuclear fragments, and may also translate into a clinical reduction of corneal edema. Clinically observable corneal edema had resolved in all patients in both groups by the 1-week visit, although a difference in endothelial cell loss (favoring Ellips FX) was still observed at 3 months. Less phaco time and power may also correlate with the difference in corneal thickness measured by corneal pachymetry at 1 day, which was statistically significantly less with the Ellips FX, thus possibly suggesting a more rapid early postoperative recovery.
Reducing endothelial trauma is also an anticipated positive outcome of less phaco energy expended during cataract extraction. Minimizing endothelial damage is important during phaco treatment, as iatrogenic cell loss may occur with excessive intraocular manipulation, ultrasonic vibration, and heat generated by the phaco tip. 10 In our study, postoperative endothelial cell loss was less at all follow-up time points, and statistically significantly less at both the 1-day and 1-month follow-up periods in the Ellips FX group vs Infiniti Ozil. These findings may suggest a correlation between the statistically significant lower phaco time and power setting noted with the Ellips FX vs Infiniti Ozil and the consequent differences in endothelial cell loss.
The study also reported findings of less macular swelling with FX than with Ozil at 1 week, 1 month, and 3 months. Macular edema as measured by mean change in OCT central macular thickness was reported to be less in the Ellips FX group at all postoperative visits. A larger study with longer follow-up might determine if these reduced macular edema findings would translate into clinically significant outcome differences, such as reduction of early cystoid macular edema or the reduction of subsequent epiretinal membrane formation in this group of patients.
While the focus of this study was to evaluate the outcomes as a function of different handpieces, it is possible that the difference in endothelial cell loss and macular edema between the two groups can be explained by other factors. We note that the differential bottle height, aspiration, and vacuum settings needed to optimize chamber stability between the two groups may also have had an effect on outcomes. These differences may have led to differential intraocular pressure fluctuation and fluidics with associated turbulence, which may have influenced intraocular dynamics, resulting in endothelial cell loss or macular swelling.
Two cases in which Ozil was used were noted to have "clogging" of the phaco tip during removal of the cataract. Clogging of the tip raises the concern of heat buildup in and around the area of removal. This was further noted by Schmutz and Olson, who described that metal stress in the proximal phaco needle shaft can create substantial heat buildup with transverse ultrasound, especially with Ozil. 11 In their paper, Ellips was found to be significantly less hot than Ozil in all test scenarios, especially in the clinically relevant scenario where the tip is occluded and there is little or no cooling flow through the tip.
Conclusion
In summary, this prospective, match-paired, randomized, observer-masked study of two of the newer, advanced phacoemulsification machines available demonstrated excellent operative efficiency, with patients experiencing fast visual rehabilitation with no surgical or postoperative complications. The Ellips FX transverse ultrasound system allowed for a faster, possibly more efficient surgical procedure, with less BSS, less phacoemulsification time, and less power required than the Infiniti with the Ozil torsional handpiece. The improved surgical efficiency noted with FX may translate into improved safety, as supported by the postoperative results in this study: less corneal edema, less endothelial cell loss, and less macular edema.

A limitation of this study was the small subject population. A larger, multicenter study may be beneficial in establishing a stronger correlation between enhanced surgical efficiency and postoperative recovery and safety.
The Clinical Implications of Death Domain-Associated Protein (DAXX) Expression
Background Death domain-associated protein (DAXX), originally identified as a pro-apoptotic protein, is now understood to be either a pro-apoptotic or an anti-apoptotic factor, as well as a chromatin remodeler, depending on the cell type and context. This study evaluated DAXX expression and its clinical implications in squamous cell carcinoma of the esophagus. Methods Paraffin-embedded tissues from 60 cases of esophageal squamous carcinoma were analyzed immunohistochemically. An immune reaction in more than 10% of tumor cells was interpreted as positive. Positive reactions were sorted into 2 groups: reactions in 11%-50% of tumor cells and reactions in more than 51% of tumor cells, and the correlations between expression and survival and clinical prognosticators were analyzed. Results Forty-three of the 60 cases (71.7%) showed strong nuclear DAXX expression, among which 19 cases (31.7%) showed a positive reaction in 11%-50% of tumor cells, and 24 cases (40.0%) showed a positive reaction in more than 51% of tumor cells. A negative reaction was found in 17 cases (28.3%). These patterns of immunostaining were significantly associated with the N stage (p=0.005) and American Joint Committee on Cancer stage (p=0.001), but overall survival showed no significant difference. There were no correlations of DAXX expression with age, gender, or T stage. However, in stage IIB (p=0.046) and stage IV (p=0.014) disease, DAXX expression was significantly correlated with survival. Conclusion This investigation found upregulation of DAXX in esophageal cancer, with a 71.7% expression rate. DAXX immunostaining could be used in clinical practice to predict aggressive tumors with lymph node metastasis in advanced-stage disease, especially in stages IIB and IV.
Introduction
Esophageal cancer is uncommon; however, its prognosis is poor [1]. The 5-year overall survival rate for esophageal squamous cell carcinoma ranges from 20% to 30% [1]. Early metastasis and late diagnosis are responsible for the poor prognosis of esophageal cancer [1,2]. Surgical resection has traditionally been the mainstay treatment for localized esophageal cancers [2]. However, survival after surgery alone for advanced esophageal cancer is not satisfactory. Therefore, the identification of biomarkers that enable early diagnosis, identify disease aggressiveness, and predict the prognosis of patients undergoing treatment is important.
In some tumors, several candidates for biological markers of aggressiveness have been suggested, including p53 [3,4]. However, the results are inconclusive, and the clinical impact of these candidate markers remains low. Gene-expression patterns associated with various clinicopathological parameters during tumor development and progression have been proposed [3,4]. To enable these kinds of studies, clinics routinely preserve and archive tissue samples, mainly through formalin fixation and paraffin embedding. To retrieve information at the protein level from these invaluable materials, which are associated with long and extensive patient follow-up data, immunohistochemistry is an ideal approach [5].
Death domain-associated protein (DAXX), a predominantly nuclear protein, has been reported to play important roles in carcinogenesis, transcriptional regulation, resistance to viral infection, and apoptosis [6-9]. DAXX can shuttle between the nucleus and cytoplasm or other cellular substructures, suggesting that it might have different functions in different cellular compartments and stages of the cell cycle [6-9]. The aim of this study was to evaluate DAXX expression and its clinical implications as a biological regulator of aggressiveness in esophageal carcinoma.
1) Patient characteristics
This study considered 60 cases of esophageal squamous cell carcinoma. Formalin-fixed and paraffin-embedded tissues from 60 patients who underwent esophagectomy from 2003 to 2006 were obtained from the Pathology Services Department of Kosin University Gospel Hospital (Table 1). Enrolled patients were followed up at 3-month intervals for 60 months, starting at 3 months after surgery. The mean follow-up duration was 32 months, and 2 cases were lost to follow-up for unknown reasons.
Radical lymphadenectomy, consisting of regional lymph node resections, was performed over the middle or lower mediastinal, superior mediastinal, perigastric, and celiac axis areas. On average, 15-20 regional lymph nodes were dissected. Seventeen patients with stage IV disease received palliative surgery due to severe dysphagia, which was possible because their cancer was not unresectable and they did not have advanced comorbidities.
Before processing for this study, all tissues were fixed in 10% neutral buffered formalin for approximately 24 hours. The microscopic slides with routine hematoxylin and eosin staining were retrieved from the archives and reviewed by a specialized pathologist. The original diagnosis was confirmed in all cases.
The Institutional Review Board of Kosin University Gospel Hospital approved the present study (IRB approval no. 2014-10-143), and the requirement for informed patient consent was waived due to the retrospective nature of the study.
2) Immunohistochemistry
The paraffin blocks were cut at a 4-micron thickness, dewaxed in xylene, and rehydrated through graded percentages of ethanol. Microwave treatment for antigen retrieval was performed for 20 minutes at 98°C using 0.01 M citric acid buffer (pH 6.0). Endogenous peroxidase activity was quenched by incubating the sections in 3% hydrogen peroxide for 10 minutes at room temperature. To block non-specific binding sites, the slides were pre-incubated with 5% normal goat serum in phosphate-buffered saline for 10 minutes at room temperature. DAXX immunostaining was performed using a polyclonal antibody (1:200; Sigma-Aldrich, St. Louis, MO, USA). Subsequently, the antigen-antibody complex was visualized using a peroxidase/DAB Envision Detection System kit (DAKO, Glostrup, Denmark) and counterstained with hematoxylin. Negative controls were used for the tested antibodies; the primary antibody was replaced by either mouse or rabbit non-immune serum, as appropriate.
An appropriate slide including the deepest invasion of the tumor was selected in each case, and immunostaining was performed on the paraffin-embedded tissue. The microscopic slide with immunostaining was interpreted by a specialized molecular and tumor pathologist using consistent criteria. All stained sections were evaluated in a blinded manner without prior knowledge of patient data.
3) Immunohistochemistry evaluation
An immune reaction in more than 10% of tumor cells was interpreted as positive, and positive reactions were classified into 2 groups: a reaction in 11%-50% of tumor cells and a reaction in more than 51% of tumor cells. The correlations between expression and clinical prognosticators and survival were analyzed. The intensity of positive staining was initially scored according to mean optical density into 4 groups: 0, no staining; 1, weak staining (light yellow); 2, moderate staining (yellow brown); and 3, strong staining (brown). Staining intensity was found not to be discriminative in a preliminary study. Therefore, positive tumor cell staining was assigned a score using a semiquantitative 3-category grading system: 0, <10% positive cells; 1, 11%-50% positive cells; 2, >51% positive cells.
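For illustration only, the semiquantitative grading above can be expressed as a simple thresholding function (Python; handling of values falling exactly between the published category boundaries is an assumption here):

```python
def daxx_score(pct_positive):
    """Map the percentage of DAXX-positive tumor cells to the 3-category
    grade: 0 (negative, <=10%), 1 (11%-50%), 2 (>51%, treated here as >50%)."""
    if pct_positive <= 10:
        return 0
    return 1 if pct_positive <= 50 else 2
```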
4) Statistical analysis
For analyses of esophageal cancer-specific mortality, death as a result of cancer was the primary end point, and deaths from other causes were censored. To adjust for potential confounding, age and year of diagnosis were used as continuous variables, and all other covariates were used as categorical variables. The association between each variable's expression category and survival was assessed using the Kaplan-Meier method and the log-rank test. The chi-square test was used to examine associations between categorical variables. The t-test assuming unequal variance was performed to compare mean ages. All analyses used SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA), and all p-values were 2-sided. Significance was assigned at p<0.05.
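For readers reproducing this analysis outside SPSS, a minimal sketch using the lifelines Python package shows the Kaplan-Meier and log-rank comparison described above (column names are hypothetical):

```python
# Illustrative survival analysis sketch (lifelines instead of SPSS).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

def survival_by_daxx(df: pd.DataFrame) -> float:
    """df columns (assumed): 'months' = follow-up time; 'cancer_death' =
    1 for death from esophageal cancer, 0 for censored (including deaths
    from other causes); 'daxx_score' = semiquantitative grade (0/1/2)."""
    km = KaplanMeierFitter()
    for score, grp in df.groupby("daxx_score"):
        km.fit(grp["months"], grp["cancer_death"], label=f"DAXX score {score}")
        km.plot_survival_function()          # Kaplan-Meier curve per group
    res = multivariate_logrank_test(df["months"], df["daxx_score"],
                                    df["cancer_death"])
    return res.p_value                       # log-rank p across the groups
```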
Results

1) Clinical characteristics
The mean age of patients was 62.5±12.3 years. Six patients (10.0%) were female, and 54 patients (90.0%) were male. By tumor invasion depth, 9 cases (15.0%) were T1a, 6 cases (10.0%) were T1b, 16 cases (26.7%) were T2, and 29 cases (48.3%) were T3. By pN stage, 30 cases (50.0%) were N0, 25 cases (41.7%) were N1, and 5 cases (8.3%) were N2. Seventeen of the 60 cases (28.3%) showed distant metastasis. Using the seventh edition of the American Joint Committee on Cancer (AJCC) staging system, 11 cases (18.3%) were stage IA, 3 cases (5.0%) were stage IB, 12 cases (20.0%) were stage IIA, 7 cases (11.7%) were stage IIB, 10 cases (16.7%) were stage III, and 17 cases (28.3%) were stage IV. The 5-year survival rate was 33.3% (20 of 60). Forty-three patients without distant metastasis received adjuvant chemotherapy, whereas the 17 patients with stage IV disease with distant metastasis received only palliative surgery and postoperative care. Patients diagnosed with pathologic stage I, II, or III disease were treated with adjuvant chemotherapy, which was administered following assessment by independent clinicians. These patients received adjuvant chemotherapy within 2 months after surgery. Patients received vinorelbine ditartrate (25 mg/m²/day, administered for only 1 day per month) plus cisplatin (25 mg/m²/day, administered for only 1 day per month). Patients with stage I or II disease received 3 cycles of chemotherapy, after which a follow-up study was performed; 3 further cycles of chemotherapy were added if there was no evidence of metastatic lesions. Patients with stage III disease received chemotherapy with the same regimen for 9 cycles. No patients received neoadjuvant chemotherapy or radiation therapy, and no patients received postoperative radiation therapy. The survival period ranged from 2 months to 128 months, with a mean of 45.2±5.2 months.
2) Results from immunostaining and correlations with clinicopathological prognostic factors

Non-neoplastic squamous epithelium adjacent to the cancer showed negative or weak staining. A negative reaction to DAXX (score of 0) was found in 17 (28.3%) of the 60 cases of esophageal squamous cell carcinoma (Table 2, Fig. 1). In this study, strong nuclear staining was predominant (Table 2, Fig. 2).
Forty-three of the 60 cases (71.7%) showed strong DAXX expression. Of those, 19 cases (31.7%) showed a positive reaction in 11%-50% of tumor cells (score of 1), and 24 cases (40.0%) showed a positive reaction in more than 51% of tumor cells (score of 2). Twelve of the 30 cases (40.0%) without lymph node metastasis showed a negative reaction, whereas only 5 of the 30 cases (16.7%) with lymph node metastasis were negative.
Of the 17 cases with distant metastasis, 16 (94.1%) showed significant DAXX expression, including 11 cases classified with a score of 2 (positive reaction in more than 51% of tumor cells). By AJCC stage, DAXX expression was found in 5 of the 11 stage IA cases (45.5%), 2 of the 3 stage IB cases (66.7%), 7 of the 13 stage IIA cases (61.5%), 4 of the 7 stage IIB cases (57.1%), 9 of the 10 stage III cases (90.0%), and 16 of the 17 stage IV cases (94.1%). DAXX was expressed in 31 (77.5%) of the 40 patients who died within 5 years of disease occurrence. Among them, 17 cases showed a positive reaction in more than 51% of tumor cells.
These immunostaining patterns were significantly associated with N stage (p=0.005), AJCC stage (p=0.001), and distant metastasis (p=0.004), but the overall survival analysis showed no significant correlations. There were no correlations of DAXX expression with age, gender, or T stage (Table 2).
The Kaplan-Meier survival analysis found no significant difference in survival according to DAXX expression (Fig. 3). However, a significant correlation of DAXX expression with survival was demonstrated in patients with stage IIB (p=0.046) or IV (p=0.014) disease using the log-rank Mantel-Cox test (Fig. 4).
Discussion
DAXX, an enigmatic protein, was originally suggested to be a pro-apoptotic protein, and is now known to be a multifunctional protein that regulates a wide range of cellular signaling pathways involved in both cell survival and apoptosis [6-9]. Because of these characteristics, DAXX has been considered to play important roles in the onset and progression of cancer. Several authors have investigated potential relationships between DAXX expression and various cancers, such as bladder cancer, ovarian cancer, and pancreatic neuroendocrine tumors [4,10,11].
Very few studies have examined the relationship between DAXX and esophageal cancer or its role in this disease. We therefore analyzed the role of DAXX as a biomarker using paraffin-embedded tissue from 60 esophageal cancer patients.
In this study, DAXX was expressed in 71.7% of esophageal squamous cell carcinoma patients and was significantly correlated with lymph node metastasis and stage. Its expression increased as stage advanced.
Under multiple influences, the subcellular localization of DAXX can be changed by modification or by interaction with other proteins [7-9,12,13]. Tang et al. [9] studied the distribution and location of DAXX in cervical epithelial cells positive for high-risk human papillomavirus. They reported that DAXX was distributed in the nuclei of normal cervical epithelial cells but intensively distributed in the cytoplasm and cell membranes in cervical intraepithelial neoplasia (CIN) II, CIN III, and cervical cancer cells [9]. In the present study, by contrast, DAXX was expressed predominantly in the nuclei and only barely and heterogeneously expressed in hyperplastic and dysplastic squamous epithelium. Changes in the location of DAXX can affect the cell cycle, including antiviral, pro-apoptotic, and anti-apoptotic activities, and might play a role in transcriptional regulation [7-9]. A recent study of brain tissue infected by Reoviridae found that upregulation of DAXX through the type I interferon mechanism might depend on whether DAXX is positioned in the cytoplasm or nucleus and could play a role in cell apoptosis [14]. It also showed that DAXX orientation was related to apoptosis of the host cell after viral infection. Key evidence of a possible anti-apoptotic function of DAXX is derived from DAXX-knockout mouse embryos, which displayed increased global apoptosis [15]. In contrast, evidence for the pro-apoptotic functions of DAXX has been obtained from tumor cells or transformed cells treated with various stimuli, including ultraviolet light, transforming growth factor beta, hydrogen peroxide, interferon gamma, and arsenic trioxide [13]. Thus, it has been suggested that DAXX exerts an anti-apoptotic influence in unstressed primary cells, whereas it is a pro-apoptotic factor in tumor cells or transformed cells exposed to various types of stress. Pan et al. [10] provided new evidence that, in both normal ovarian cells and highly transformed ovarian cancer cells, DAXX promotes cell proliferation and represses DNA damage responses during X-ray irradiation and chemotherapy, meaning that it might function as an anti-apoptotic factor [12,13]. Consistent with such a role, in this study DAXX was expressed in 71.7% of esophageal squamous cell carcinoma cases and was significantly correlated with lymph node metastasis and stage, with expression increasing at advanced stages.
The esophageal carcinomas in this study may therefore not have been associated with viral infection. This result suggests that DAXX acts as an oncoprotein or anti-apoptotic factor, rather than as a pro-apoptotic factor, in esophageal squamous carcinoma. In this study, DAXX expression was relatively low in normal cells, whereas DAXX overexpression accompanied tumorigenic transformation in this tumor cell type. This result is supported by the report of Hollenbach et al. [8] that DAXX might promote chromosomal instability during cancer development.
Tsourlakis et al. [16] reported that 80.6% of prostate cancers showed DAXX expression, with predominant nuclear staining, and that stronger DAXX staining was associated with a higher Gleason grade and advanced T stage. They suggested DAXX as a novel independent prognosticator. The results of this study are consistent with those of Tsourlakis et al. [16], although no evidence of DAXX as an independent prognostic factor was found here. A limitation of this study is the small number of cases. Because of the small sample size, the results may have reached statistical significance only in certain stages of the disease, and different findings may be obtained if a larger patient group is analyzed. Therefore, further larger-scale studies are needed to verify DAXX as an independent prognostic factor.
In conclusion, these results for DAXX expression in esophageal carcinoma have clinical and therapeutic implications. DAXX was expressed in 71.7% of these esophageal cancer tissues and was associated with lymph node metastasis and advanced stages. Therefore, DAXX immunostaining could be used in clinical practice as a biologic marker indicating tumor aggressiveness, especially in stages IIB and IV. Inhibiting DAXX activity or DAXX accumulation could be a promising therapeutic strategy to enhance the effects of radiotherapy and chemotherapy in esophageal cancer patients.
Nivolumab versus everolimus in advanced renal cell carcinoma: Japanese subgroup analysis from the CheckMate 025 study
With >2 years of follow-up, Japanese patients from the international phase III CheckMate 025 study had a higher response rate with nivolumab versus everolimus and a favorable safety profile.
Observed differences in the efficacy and safety of therapies for advanced renal cell carcinoma (aRCC) in Asian patients may be the result of environmental and/or genetic differences that necessitate specific investigation of agents in this population (3-6). Additionally, treatment patterns in Asian countries differ from those in Western countries; for example, cytokine therapy is still widely used for first-line treatment in Japan (6,7). These different treatment patterns potentially add a confounding factor in clinical trials of second-line agents. Here, we present efficacy and safety data from the global population as well as the Japanese subgroup of patients treated with nivolumab or everolimus in CheckMate 025, with a minimum follow-up of 26 months.
Study design and treatment
This was a Phase III, randomized, open-label study of nivolumab versus everolimus. The detailed study design was described previously (2). Patients were randomized 1:1 to receive nivolumab 3 mg/kg intravenously over 60 min every 2 weeks or a 10-mg everolimus tablet orally once daily. Randomization was stratified according to region (United States or Canada; Western Europe; and the rest of the world), Memorial Sloan Kettering Cancer Center prognostic risk group, and number of prior anti-angiogenic therapies (one or two) for aRCC. Japanese patients were included as part of the 'rest of the world' stratification group.
Patients
Adults with histological confirmation of aRCC with a clear-cell component were eligible. Patients had to have received one or two prior anti-angiogenic therapies, to have experienced progression within 6 months before study enrollment, and to have a Karnofsky performance status (KPS) of at least 70 at study entry. Additional eligibility criteria were reported previously (2). Analyses are based on data collected with the use of a case report form.
Endpoints and assessments
The primary endpoint was OS, defined as time from randomization to death. The key secondary endpoints were investigator-assessed ORR, defined as the number of patients with complete response or partial response divided by the number of randomized patients, and progression-free survival (PFS). Disease assessments (per Response Evaluation Criteria in Solid Tumors [RECIST] v1.1) (8) were performed using computed tomography or magnetic resonance imaging at baseline and every 8 weeks after randomization for the first year, then every 12 weeks until progression or treatment discontinuation. Safety was assessed at each clinic visit. Quality of life was assessed using the Functional Assessment of Cancer Therapy Kidney Symptom Index-Disease-Related Symptoms (FKSI-DRS) scoring algorithm (9). The questionnaire consisted of nine symptom-specific questions, as previously reported (10). The summary score ranged from 0 to 36, with 36 as the best possible score (9,10). A change of at least 2 points was considered a clinically meaningful change.
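As a concrete illustration of these endpoint definitions, the following sketch computes an ORR and flags a clinically meaningful FKSI-DRS change; the response counts and scores are hypothetical, not CheckMate 025 data.

```python
# Illustrative sketch of the endpoint definitions above; the CR/PR counts
# and FKSI-DRS scores are hypothetical, not CheckMate 025 data.
def orr(n_complete: int, n_partial: int, n_randomized: int) -> float:
    """ORR = (complete + partial responses) / all randomized patients."""
    return (n_complete + n_partial) / n_randomized

def meaningful_fksi_change(baseline: float, followup: float) -> bool:
    """FKSI-DRS ranges 0-36; a change of at least 2 points is clinically meaningful."""
    return abs(followup - baseline) >= 2

print(f"ORR = {orr(2, 10, 37):.1%}")        # hypothetical: 2 CR + 10 PR of 37
print(meaningful_fksi_change(33.5, 31.0))   # True: a 2.5-point decline
```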
Study oversight
This study was approved by the institutional review board or independent ethics committee at each center and conducted in accordance with Good Clinical Practice guidelines defined by the International Conference on Harmonisation. All patients provided written informed consent to participate based on the principles of the Declaration of Helsinki.
Statistical analyses
OS, PFS and duration of response were estimated using Kaplan-Meier methodology. Medians for OS and their corresponding 95% CIs were determined using the Brookmeyer and Crowley methodology (11); the 95% CIs were constructed using a log-log transformation. A stratified log-rank test was performed for the global population only. Hazard ratios and CIs for OS and PFS with nivolumab versus everolimus were obtained by fitting an unstratified Cox model (stratified for the global population) with the group variable as a single covariate. ORRs and the corresponding 95% CIs were based on the Clopper and Pearson method (12).
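The exact (Clopper-Pearson) binomial interval used for the ORR estimates can be computed from beta-distribution quantiles, as in this sketch; the responder counts shown are placeholders rather than trial data.

```python
# Sketch of the Clopper-Pearson exact 95% CI used for the ORR estimates,
# derived from beta-distribution quantiles; counts below are placeholders.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

lo, hi = clopper_pearson(k=12, n=37)
print(f"ORR 95% CI: {lo:.1%} to {hi:.1%}")
```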
Patients
Of the 410 and 411 patients randomized to nivolumab and everolimus, respectively (hereafter the global population), 96 and 98, respectively, were stratified to the 'rest of the world' region, which included Japan. Thirty-seven of 410 patients (9%) and 26 of 411 patients (6%), respectively, were Japanese (hereafter the Japanese population). All Japanese patients who were randomized received treatment. Demographic and baseline characteristics of the global and Japanese populations were generally similar, except that a higher proportion of Japanese patients overall had a baseline KPS of 100, and lower proportions of Japanese patients in the everolimus arm had ≥2 sites of metastases, liver metastases, and PD-1 ligand 1 (PD-L1) expression ≥1% (Table 1). The distribution of prior treatment regimens in the metastatic setting differed between the global and Japanese populations; higher proportions of Japanese patients versus the global population had prior treatment with axitinib (20/ ). At a minimum of 26 and 28 months of follow-up for the global and Japanese populations, respectively (median follow-up: 33.6 and 33.2 months), 11% of the global population and 16% of the Japanese population continued to receive nivolumab (2% and 4% in the everolimus arm, respectively). The primary reason for discontinuation was disease progression with nivolumab or everolimus in both the global (74% versus 72%, respectively) and Japanese populations (62% versus 65%, respectively).
Safety
In the Japanese population, any-grade treatment-related AEs occurred in 78% of patients treated with nivolumab and 100% of patients treated with everolimus (Table 3). Results were similar in the global population (79% versus 88%, respectively). The most common treatment-related AE in Japanese patients treated with nivolumab was diarrhea (19%), and the most common with everolimus was stomatitis (77%) (Table 3). The most common treatment-related AE in the global population was fatigue for both nivolumab and everolimus (34% versus 34%). Grade 3 or 4 treatment-related AEs occurred in 19% of Japanese patients treated with nivolumab and 58% of Japanese patients treated with everolimus (Table 3). The most common Grade 3 or 4 treatment-related AE in Japanese patients treated with nivolumab was anemia (5%) and with everolimus was hypertriglyceridemia (12%). Grade 3 or 4 treatment-related AEs were experienced by 20% versus 37% of patients in the global population, respectively. The most common in the global population were fatigue (2%) and anemia (2%) with nivolumab and anemia (9%) with everolimus. Any-grade treatment-related AEs leading to discontinuation were observed in 16% (Grade 3 or 4, 3%) and 23% (Grade 3 or 4, 12%) of Japanese patients in the nivolumab and everolimus arms, respectively. Among Japanese patients, 15 (41%) and 11
Quality of life
Among Japanese patients, the FKSI-DRS quality-of-life survey completion rate exceeded 90% through 1 year of the study, with the exception of weeks 8 (76%) and 36 (83%) in the everolimus arm (similar to the global population completion rate, as previously reported (2)). The median FKSI-DRS score at baseline was 34.0 in the nivolumab arm and 33.5 in the everolimus arm, higher than the median for the global population (31.0 for both arms, as previously reported (2)). The mean change in FKSI-DRS scores, assessed every 4 weeks, was generally equal to or slightly above baseline values at every assessment in the nivolumab arm (Fig. 4). Scores in the everolimus arm were lower at every assessment, with 5 assessments at least 2 points below baseline, considered a meaningful change in score (Fig. 4). These results are generally consistent with the 1-year results for the global population, as previously reported (2), except that in the global population the decrease in scores with everolimus was not as striking (2,10).
Discussion
CheckMate 025 continued to demonstrate superior OS and a higher ORR with nivolumab versus everolimus in the global study population with more than 2 years of follow-up. OS was higher with nivolumab in the Japanese population than in the global population and was similar between nivolumab and everolimus in Japanese patients, with medians not reached. The higher KPS and differences in prior and potentially subsequent therapies in Japanese patients compared with the global population may have contributed to this result. Importantly, the small sample size of the Japanese population limits interpretation of OS. Additionally, the imbalance in prognostic factors at baseline between the nivolumab and everolimus arms in Japanese patients, such as fewer patients with ≥1% PD-L1 expression, ≥2 sites of metastases and liver metastases in the everolimus arm, may have contributed to the similar OS noted in both arms. Consistent with the global study findings at 14 months, ORR was higher for nivolumab versus everolimus in the Japanese population. ORR was substantially higher for nivolumab in the Japanese population than in the global population, and the difference between arms was more notable in the Japanese population. Differences in prior therapies and the higher baseline KPS in Japanese patients versus the global population may have contributed to this result as well.
In previous studies of targeted therapies, the safety profile has in some cases differed in Japanese patients compared with Western patients (13-15). An understanding of whether differences are observed in Japanese patients, a historically under-represented population in clinical trials, and what these differences are may improve management of AEs, which in turn may improve overall outcomes in these patients. In the current study, safety in Japanese patients was generally consistent with the global population, with the exception of a decreased incidence of Grade 3 or 4 treatment-related AEs with nivolumab in Japanese patients and an increased incidence of Grade 3 or 4 treatment-related AEs with everolimus. Consistent with prior reports, the incidence of treatment-related AEs with nivolumab was lower than with everolimus, including treatment-related select AEs with immune-mediated etiology, except for endocrine and gastrointestinal AEs. The incidence of stomatitis was higher in Japanese patients in both arms compared with previous reports from the global population in this study (2) and consistent with an independent study of everolimus in Japanese and non-Japanese patients (16). In the everolimus arm but not the nivolumab arm, the incidence of rash, thrombocytopenia and proteinuria was high in Japanese patients, a result also observed in studies of axitinib and everolimus (16,17). Quality of life among Japanese patients was assessed only through 1 year, given the small sample size in the second year of the study. Compared with the global population, Japanese patients overall had higher baseline FKSI-DRS quality-of-life scores and modest improvement with nivolumab, which is not surprising given that the baseline score was only 2 points lower than the best possible score. Conversely, the decrease in quality of life over time with everolimus was more pronounced compared with the global population (2,10).
To our knowledge, this is the first report of an analysis of Japanese patients treated with nivolumab for RCC. A number of large global trials in recent years have performed subgroup analyses of the efficacy and/or safety of TKIs and mTOR inhibitors in Japanese patients (16,18-21). Notably, improved efficacy in Japanese patients compared with the global population was seen in most studies, although Japanese patients generally had more favorable baseline disease characteristics, as was the case in this study (16,18-21). Randomization and stratification of Japanese patients specifically, not as part of a larger group, may help to circumvent this issue. In Japanese patients with metastatic RCC treated with sunitinib, there was a trend toward greater antitumor activity and a higher incidence of hematological AEs compared with historical results observed in Caucasian patients (20). In a Phase III study that compared pazopanib and sunitinib in treatment-naïve patients, PFS was similar among Asian, North American and European populations, with varying incidences of some AEs across groups (21). In a small Japanese subgroup analysis from the Phase III RECORD-1 study of everolimus versus placebo in previously treated patients with metastatic RCC, Japanese patients experienced similar or better efficacy than the overall study population, with similar types and higher incidences of AEs (16). In Phase II and III studies of axitinib in patients with RCC, efficacy, particularly in patients with prior cytokine therapy, was higher in Japanese patients compared with the global population, though Japanese patients had more favorable baseline disease characteristics than non-Japanese patients and much higher rates of cytokine pretreatment (18,19).
There are several limitations to this analysis. The small sample size of Japanese patients and the different sample size between arms due to stratification as part of a larger regional group (that included non-Japanese patients) may have affected outcomes. Additionally, there is an unknown effect of prior therapies on the efficacy of nivolumab. Japanese patients are often treated with different therapies than are Western patients, so comparisons with the global CheckMate 025 population should be made with caution. In a retrospective analysis of 110 Japanese patients treated with sorafenib, those who had previous cytokine treatment had significantly higher OS (P = 0.002) and PFS (P = 0.017) than did patients without prior cytokine treatment (22).
The results from this study support the recent approval of nivolumab for previously treated patients in Japan. A multinational study examining the combination of nivolumab with ipilimumab in first-line RCC is ongoing and includes Japanese patients (CheckMate 214). Given that Asian patients with RCC have, in some cases, had different outcomes than patients in Western countries, future global studies should include additional Asian patients and include univariate and multivariate analyses of potential predictive factors to better examine the efficacy and safety of novel therapies in this patient population.
Authors' contributions
Yoshihiko Tomita had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Collection and assembly of data: McHenry, Berghorn.
Data analysis and interpretation: All authors.
Drafting of the manuscript: Tomita.
Critical revision of the manuscript for important intellectual content: All authors.
Final approval of manuscript: All authors.
Funding
This work was sponsored by Bristol-Myers Squibb and Ono Pharmaceutical Company Limited. Authors received no financial support or compensation for publication of this manuscript. The funders contributed to the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, and approval of the manuscript in collaboration with the investigators and authors of this report.
Small RNAs Asserting Big Roles in Mycobacteria
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis (Mtb), with 10.4 million new cases per year reported in the human population. Recent studies on the Mtb transcriptome have revealed the abundance of noncoding RNAs expressed at various phases of mycobacteria growth, in culture, in infected mammalian cells, and in patients. Among these noncoding RNAs are both small RNAs (sRNAs) between 50 and 350 nts in length and smaller RNAs (sncRNA) < 50 nts. In this review, we provide an up-to-date synopsis of the identification, designation, and function of these Mtb-encoded sRNAs and sncRNAs. The methodological advances including RNA sequencing strategies, small RNA antagonists, and locked nucleic acid sequence-specific RNA probes advancing the studies on these small RNA are described. Initial insights into the regulation of the small RNA expression and putative processing enzymes required for their synthesis and function are discussed. There are many open questions remaining about the biological and pathogenic roles of these small non-coding RNAs, and potential research directions needed to define the role of these mycobacterial noncoding RNAs are summarized.
Introduction
Mycobacterium tuberculosis (Mtb) remains one of the leading infectious causes of human mortality, supplanted only in 2020 by the COVID-19 pandemic triggered by the SARS-CoV-2 virus. Mtb evolved from an ancestral smooth tubercule bacillus (e.g., M. canettii, M. pseudotuberculosis), acquiring virulence elements to attain its preferred pathogenicity towards humans [1]. The acquisition of these virulence elements coincided with Mtb undergoing a genomic downsizing relative to the 100 different smooth tubercule bacilli species characterized [1,2]. Despite this downsizing, a core genome is evident among the pathogenic strains of mycobacteria. Several decades of research efforts have been devoted to understanding how the ~4000 protein-coding elements evident in the Mtb genome contribute to growth, survival, and pathogenic processes [3-7]. Recent technical advances in deciphering the complex nature of Mtb and related mycobacterial genomes, including improved large-scale RNA-sequencing strategies, have revealed an abundance of small RNAs (sRNAs). First described as ranging in size from 50 to 350 nucleotides (nts) [8-12], these small RNAs now include some as small as 18 nts [13]. The sRNAs, originally selected with sequences >100 nts in length, were found to represent ~11% of the intergenic transcripts (IGRs) identified from exponential phase cultures. In addition to the sRNAs, IGRs include 5′ and 3′ UTRs, tRNAs, and antisense RNAs. Based on the normalized read counts for sense, antisense, and intergenic noncoding RNAs, the antisense and intergenic noncoding RNAs made up roughly 25% of the transcripts mapping outside of ribosomal RNA genes [10]. The sRNAs are detected in exponential and/or stationary phase cultures, in infected eukaryotic cells, and in patients with tuberculosis (TB), suggesting roles in infection and pathogenesis.
While most of the RNA screens with Mtb focused on RNAs >50 nts, many smaller RNAs <50 nts have been characterized in non-mycobacterial species [33-36]. For example, Salmonella expresses a small RNA called Sal-1, which is generated from the 5′ end of a ribosomal RNA by the eukaryotic miRNA processing enzymes [33]. Sal-1 targets the inducible nitric oxide synthase, with the pathogenic role for this sRNA established by the increased killing of Sal-1-deficient Salmonella in infected epithelial cells [33]. Sal-1 resembles eukaryotic miRNAs, which are small noncoding RNAs (20-22 nts) that use 6-7 nucleotide seed sequences to mediate the degradation of mRNAs [37,38]. The first screen for such miRNA-like sRNAs in mycobacteria was undertaken with Mycobacterium marinum [39]. In this screen, a single 23-nt RNA was discovered, with features characteristic of a eukaryotic miRNA including the requisite interaction with the Argonaute protein, part of the eukaryotic RNA-induced silencing complex [39]. To date, the M. marinum sRNA has no ascribed functions or targets. In a screen for miRNA-like sRNAs in TB-infected patients, six distinct Mtb-encoded miRNA-like sRNAs were discovered in the serum [40]. While all of these had 22-nt lengths consistent with the size of miRNAs, their extremely high GC content was unusual (86%-100%). In a broader screen for miRNA-like sequences using comprehensive miRNA selection criteria with the annotations from miRBase, Rfam, and Repbase, where plant small RNAs are also considered, a set of 35 smaller RNAs (<50 nts) were identified in Mtb-infected macrophages [13]. Except for one of these sRNAs, most were only detected in infected macrophages, with their levels increasing over a 6-day infection period. Termed smaller noncoding RNAs (sncRNAs), these ranged in size from 18 to 30 nts, and the 35 sncRNAs had an average GC content of 50%. In a technical advance to determine the levels of these sncRNAs, a miRNA-based quantitative RT-PCR was developed. This assay incorporates locked nucleic acid technologies to provide extremely high specificity and selectivity for short RNAs. The expression changes of three of these Mtb-encoded sncRNAs, sncRNA-1, sncRNA-6, and sncRNA-8, were verified with this technique [13].
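A quick way to see the GC-content contrast noted above is to compute it directly; the sketch below uses an arbitrary GC-rich 22-mer, not a published Mtb sequence.

```python
# Direct check of the GC-content contrast described above; the 22-mer is
# an arbitrary GC-rich example, not a published Mtb sRNA sequence.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(f"{gc_content('GCGGCCGCCGGGCGGCGGCCGG'):.0%}")  # -> 100%
```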
To summarize, diverse mycobacterial species produce small RNA transcripts ranging in size from 18 to 350 nts. The secondary structures of representative examples of such sRNAs are shown in Figure 1. The mycobacterial sRNAs have diverse sizes and extensive predicted secondary structures that lack commonality. Only a handful of these sRNAs have been functionally characterized. Some are more abundant in infected cell lines and in patients, implying roles in pathogenesis. Scientists are beginning to explore their targets, production and processing requirements, and contributions to pathogenesis. We describe next the current state of knowledge of some of these sRNAs. (Figure 1 legend: the structures were obtained using the RNAfold web server; in predicting the structure of ncRv11846/MrsI, 6 nts at the 5′ end were omitted for simplicity.)
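As a hedged illustration of the structure prediction mentioned in the Figure 1 legend, the following sketch calls the ViennaRNA Python bindings (assuming the ViennaRNA package and its `RNA` module are installed); the input sequence is an arbitrary placeholder, not an Mtb sRNA.

```python
# Sketch of RNA secondary-structure prediction via the ViennaRNA Python
# bindings (assumes the ViennaRNA package and its `RNA` module are installed).
# The sequence is an arbitrary placeholder, not an Mtb sRNA.
import RNA

seq = "GGGCUAUUAGCUCAGUUGGUUAGAGCGCACCCCUGAUAAGGGUG"
structure, mfe = RNA.fold(seq)   # minimum free energy structure
print(structure)                 # dot-bracket notation
print(f"MFE = {mfe:.2f} kcal/mol")
```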
Functional Roles of Mycobacterial sRNAs and sncRNAs
A key step in identifying putative biological roles for the sRNAs relates to the stages of the mycobacterial growth cycle at which they are expressed [41]. Additional insights have come from the environmental conditions that affect sRNA expression. Among these conditions are oxidative stress, nutrient deprivation, DNA damage, antibiotic exposure, and/or acidic environments, the latter occurring in the phagolysosome formed in macrophages and dendritic cells. Putative functional roles for the numerous sRNAs also need to consider the stability of the sRNA, affected by both the relative GC content and secondary RNA structures. Examples of several better characterized sRNAs are B11/6C, MTS1338/DrrS, Ms1, MTS0997/Mcr11, ncRv11846/MrsI, Mcr7, and sncRNA-1 (Figure 2, Table 1). (Figure 2 legend, panels B-G: (B) MTS1338/DrrS is generated by post-transcriptional processing and promotes the expression of three operons (rv0079-rv0081, rv0082-rv0087, and rv1620c-rv1622c), which cause defects in Mtb growth and promote persistence; the mechanism of this MTS1338/DrrS-mediated regulation has not been characterized. (C) The expression of MTS0997/Mcr11 is regulated by AbmR, an ATP-bound transcription factor; after transcription, MTS0997/Mcr11 undergoes processing at the 3′ end and then regulates the expression of genes (lipB, fadA3, and accD5) involved in fatty acid production in a site-specific manner, a process negatively regulated by fatty acids. (D) In iron-restricted environments, the iron-responsive transcription factor IdeR induces the expression of ncRv11846/MrsI, which in turn hinders the translation of nonessential iron-storing proteins (hypF, bfrA, and fprA), increasing the level of free iron available for essential functions. (E) sncRNA-1 is induced in infected macrophages and is processed to yield a 25-nt RNA that enhances the expression of rv0242c and rv1094, two genes involved in oleic acid production; this regulatory network promotes Mtb growth and survival inside macrophages. (F) Ms1 sequesters RNA polymerase (RNAP) in the stationary phase; upon entry into the outgrowth phase, Ms1 is degraded by PNPase and other as-yet-unidentified RNases, releasing RNAP to promote global transcription. (G) PhoP induces the expression of Mcr7, which abrogates the translation of tatC; tatC encodes TatC, which is involved in the protein secretion pathway.)
sRNA B11 (93 nts), later named 6C owing to its similarity to a small RNA found in other bacterial species, forms two stem-loops via six conserved cytosines [42,43]. Target sequence searches have suggested that B11/6C regulates Mtb transcripts coupled to DNA replication and protein secretion. In mechanistic studies in M. smegmatis, 6C was found to interact with two mRNA targets, panD and dnaB (Figure 2A, Table 1). Moreover, overexpression of 6C inhibited M. smegmatis growth. Several groups have used mycobacterial RNA over-expression vectors to further understand how the various sRNAs function. Over-expression of MTS1338 (117 nts) prevents Mtb replication, suggesting that it targets key genes needed for mycobacterial growth (Figure 2B, Table 1) [19]. Later named DosR-regulated sRNA (DrrS), MTS1338 is induced by DosR. High levels of MTS2823 (300 nts) also inhibit Mtb growth, with transcriptome analysis using microarrays revealing that many transcripts involved in metabolism are downregulated (Figure 2B, Table 1) [10].
MTS0997 (131 nts), later named Mcr11, upregulates several genes required for Mtb fatty acid production (Figure 2C, Table 1) [30]. This sRNA positively regulates rv3282, fadA3, and lipB translation by binding a 7-11 nucleotide region upstream of the start codon. Supplementing fatty acids in the mycobacterial cultures overrides this regulatory process, revealing a feedback loop that controls metabolic functions in Mtb. The regulatory role of sRNAs in Mtb metabolism is also revealed with ncRv11846 (106 nts). An ortholog of the E. coli sRNA RyhB, ncRv11846 is termed mycobacterial regulatory sRNA in iron (MrsI) (Figure 2D, Table 1) [12]. ncRv11846/MrsI is expressed following iron starvation [16]. This sRNA contains a six-nucleotide seed sequence that targets and negatively regulates the transcripts hypF and bfrA, which encode nonessential iron-containing proteins. This translational roadblock increases the levels of free iron available. Additional sRNAs have been identified whose expression is reduced in response to iron starvation [12]. The role of these sRNAs remains an open question.
Among the diverse sncRNAs, sncRNA-1 remains the best characterized (Figure 2E, Table 1) [13]. This non-coding RNA is present in the RD1 pathogenicity locus, between esxA and espI. Over-expressing sncRNA-1 alters the Mtb transcriptome, with multiple genes required for fatty acid biogenesis increased in expression. Screening putative targets of sncRNA-1 by seed-sequence complementarity searches reveals two targets of this sRNA, rv0242c and rv1094. These encode two proteins involved in the oleic acid biogenesis pathway. Both genes have putative sncRNA-1 binding sites within their 5′ UTRs. Substituting selected nucleotides involved in Watson-Crick base pairing, either within the 5′ UTR or in the sncRNA seed sequence, eliminated the positive regulation. One novel approach for studying microRNA functions is the use of locked nucleic acid power inhibitors (LNA-PIs). These have modified RNA sequences that prevent their cleavage by RNA processing enzymes. They also have chemical modifications that enable uptake into cells without any transfection or liposome-based carriers [44]. They hybridize with target miRNAs with extremely high specificity, antagonizing the function of the miRNA. These LNA-PIs were tested in mycobacteria, which are inherently difficult to electroporate or transfect with liposome-based technologies. Notably, such LNAs are easily incorporated into mycobacteria and can antagonize sncRNAs in Mtb [13,45]. Incubation of Mtb with an LNA-PI selectively targeting sncRNA-1 abolished the upregulation of rv0242c [13]. This LNA treatment reduced Mtb survival in infected macrophages, revealing a key pathogenic contribution of this sncRNA. The functions of sncRNA-6 and sncRNA-8 remain unexplored [13].
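A minimal sketch of such a seed-complementarity screen is shown below: it searches candidate 5′ UTRs for the reverse complement of an sRNA seed. Both the seed and the UTR fragment are made-up placeholders, not the actual sncRNA-1 sequences.

```python
# Minimal sketch of a seed-complementarity screen: find where the reverse
# complement of an sRNA seed occurs in candidate 5' UTRs. The seed and UTR
# below are hypothetical placeholders, not the real sncRNA-1 sequences.
COMPLEMENT = str.maketrans("ACGU", "UGCA")

def reverse_complement(rna: str) -> str:
    return rna.translate(COMPLEMENT)[::-1]

def seed_matches(seed: str, utr: str) -> list[int]:
    site = reverse_complement(seed)
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

seed = "GCAGCC"                        # hypothetical 6-nt seed
utr = "AUGGCUGCAAUGGCUGCCAUCG"         # hypothetical 5' UTR fragment
print(seed_matches(seed, utr))         # -> [2, 11]
```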
MTS2823 is termed Ms1 as it was functionally characterized in M. smegmatis and has homology to the 6S sRNA [26]. Best defined in E. coli, the 6S sRNA has a secondary structure that resembles an open promoter. The sigma factor-bound RNA polymerase (RNAP) holoenzyme has a high affinity for this RNA structure [46]. The 6S sRNA complexes with RNAP, competitively reducing transcriptional activity [46]. Studies in M. smegmatis suggest that Ms1 competes with the sigma factor for binding to RNAP, hence suppressing transcriptional activity. Given the complexity of defining RNA-protein complexes, a revised model has been proposed in which Ms1 sequesters the RNAP (Figure 2F, Table 1) [27]. Another negative regulatory sRNA that has been characterized is Mcr7 [31]. This sRNA interferes with the translation of tatC mRNA, which encodes twin-arginine translocation C (TatC) (Figure 2G, Table 1). TatC is part of a protein export pathway that is also involved in Mtb pathogenesis [47]. All told, accumulating findings reveal a critical role for sRNAs and sncRNAs in Mtb pathogenicity.
Regulation of Mycobacterial sRNA/sncRNA Expression
As more sRNAs/sncRNAs are discovered in mycobacteria, the regulatory elements controlling their expression and processing are slowly being identified and characterized. This includes the identification of key cis- and trans-regulatory factors. In mycobacteria, sigA encodes the primary sigma factor, a member of the sigma70 family [48]. SigA recognizes the consensus cis-regulatory sequence TTGCGA-N18-TANNNT, whose hexamers occupy the −35 and −10 regions upstream of the transcription start site (Figure 3A) [49,50]. SigA binding enables RNA polymerase to transcribe at promoter sites responsible for the expression of housekeeping regulons and for mycobacterial growth [48,51]. Miotto et al. developed computational predictions to identify SigA-regulated sRNAs [52]. Of the sRNAs identified in the screen, 46.9% had the consensus SigA promoter sequence upstream of the 5′ end, and 8.5% contained an intrinsic or factor-independent terminator sequence downstream of the 3′ end. While 13.6% of the genes encoding sRNAs had both 5′ and 3′ motifs, their presence and impact on transcription require further study. The remaining 31.0% of the sRNA-encoding genes had neither defined motif, suggesting the involvement of other regulatory factors. For example, the gene encoding Ms1 contains a −10 element starting five nucleotides upstream of the +1 position, along with a distinct −35 element, suggesting that a distinct sigma factor regulates its expression (Figure 3B). Ms1 contains different regulatory elements (the −491/+9 region) that contribute to its expression [27].
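As an illustration, the consensus SigA promoter described above can be located with a simple regular-expression scan; the upstream sequence below is synthetic, used only to show the motif structure.

```python
# Illustration only: locating the consensus SigA promoter quoted above
# (TTGCGA, 18 unconstrained nucleotides, then TANNNT) with a regex scan.
# The upstream sequence is synthetic.
import re

SIGA = re.compile(r"TTGCGA[ACGT]{18}TA[ACGT]{3}T")

upstream = "AC" + "TTGCGA" + "A" * 18 + "TACGGT" + "GATTACA"
for m in SIGA.finditer(upstream):
    print(m.start(), m.group())   # -> 2 TTGCGAAAAAAAAAAAAAAAAAAATACGGT
```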
Coupled with the cis-regulatory elements, novel trans-regulatory elements are being identified that control sRNA/sncRNA expression. Among these are alternate transcription factors or sigma factors. For instance, the sRNA ncRv11846/MrsI has an IdeR binding site in its promoter region. IdeR is an iron-responsive master regulator of genes coupled to iron metabolism, including the sRNA ncRv11846/MrsI (Figure 3C) [12,53]. Mcr7 expression is regulated by PhoP, which is part of the two-component system PhoP/PhoR [31,54]. Direct binding assays with chromatin immunoprecipitation of PhoP revealed that it binds to the promoter region of Mcr7 to induce its expression in exponential phase Mtb cultures (Figure 3D). The sRNA MTS0997/Mcr11 resides between two protein-coding genes, rv1264 and rv1265, whose protein products are involved in the metabolism of cAMP [14]. rv1264 encodes an adenylyl cyclase, which catalyzes the conversion of ATP to cAMP. rv1265 encodes a transcription factor that binds both ATP and DNA. DNA binding studies have shown that Rv1265 induces MTS0997/Mcr11 expression (Figure 3E). rv1265 is now termed AbmR, for ATP-binding Mcr11 regulator. Mapping studies of the 5′ end of MTS0997/Mcr11 revealed that its −35 element coincides with the promoter region of AbmR, which is oriented in the opposite direction (Figure 3E) [10]. MTS1338/DrrS is also transcribed in the opposite direction to its neighboring gene, rv1733c, but mapping of the TSS of rv1733c revealed that it is separated by 190 nucleotides from the TSS of MTS1338/DrrS [55]. rv1733c encodes a protein involved in cell wall biogenesis and is a component of the DosR regulon [10]. The DosR regulon, induced by nitric oxide (NO), is the primary mediator of the hypoxic stress response [56]. MTS1338/DrrS is also upregulated in response to NO, and activation of the MTS1338/DrrS promoter by DosR was established with β-galactosidase reporter assays (Figure 3F) [28].
In summary, identification of the cis- and trans-acting factors is revealing many diverse types of regulatory elements involved in sRNA expression. Little is known about the regulation of the sncRNAs.
Processing of Mycobacterial sRNAs and sncRNAs
Many sRNAs are generated as full-length mature transcripts with no obvious processing steps. Yet, several of the smaller species do undergo some form of processing from larger single-stranded RNA (ssRNA) precursors [27,28,30]. Among these are Ms1, MTS0997/Mcr11, MTS1338/DrrS, sncRNA-1, and sncRNA-6. Ms1 is a 300 nt transcript detected in both exponential and stationary phase cultures. Notably, it also exists as a 250 nt transcript in stationary phase, suggesting some form of processing [10]. MTS1338/DrrS is transcribed as a precursor transcript of >400 nts (referred to as DrrS+) that is cleaved at the 3′ end to yield the mature 108 nt form [28]. MTS0997/Mcr11 has a 3′ end that varies in size by 3-14 nts, implying that 3′ RNA processing occurs like that for MTS1338/DrrS [30].
Both sncRNA-1 and sncRNA-6, which have final sizes of 25 nts and 21 nts, respectively, require processing enzymes for their generation [13]. These sncRNAs were predicted to exist as precursor transcripts >115 nts that have defined RNA structures involving double-stranded RNA (dsRNA) segments that form hairpin loops. To identify the putative processing requirements needed for the generation of sncRNA-1, nucleotide substitutions were created within the hairpin loop and the antisense complementary strand of the precursor form of this sncRNA. This caused the formation of multiple intermediate-sized transcripts (40-115 nts), detected by Northern blotting [13]. Thus, the processing of the longer RNA transcript depends on both the formation of the hairpin loop and the specific nucleotides at a putative cleavage site needed to form sncRNA-1 [13]. Notably, the precursor sncRNA-1 transcript, containing sncRNA-1 that was no longer processed into the 25 nt species because of the introduced nucleotide substitutions, was unable to regulate gene expression. SncRNA-6 also undergoes sequence-specific processing from a longer RNA transcript. Like sncRNA-1, mutations that disrupt the hairpin loop in which sncRNA-6 resides or mutations at the cleavage site of sncRNA-6 prevent its processing. Taken together, multiple experiments establish the existence of a small RNA processing system in mycobacteria. These findings do not exclude the possibility that some of the Mtb sRNAs could be generated by miRNA processing enzymes when the mycobacteria are propagating in eukaryotic cells during infections [57].
Several candidate RNA processing enzymes have been reported to date. Among these are ribonuclease E (RNase E), polynucleotide phosphorylase (PNPase or GpsI), ribonuclease J (RNase J), and the ATP-dependent RNA helicase RhlE (Figure 4) [58]. All are components of the RNA degradosome. Except RhlE, all are essential for in vitro growth, determined by identifying key genes through a transposon mutagenesis screen (Himar1 transposon libraries) [7]. Mechanistically, RNase E recognizes the 5 phosphate of the transcript and then cuts at an A/U rich sequence of the ssRNA [59]. PNPase and RNase J are 3 and 5 specific exonucleases, respectively, that stop upon the presence of a dsRNA sequence [60]. Many research teams have made use of CRISPR interference mediated knock down of the RNA processing enzymes to study their role in the generation of specific sRNAs [27,58,61]. Sikova et al. has investigated the contribution of the core RNase enzymes in the processing of Ms1 [27]. Knockdown of PNPase increased the levels of Ms1~30%, while the targeting of RNase E and RNase J had no effect on this sRNA, revealing some target specificity. These findings further suggest that the processing of Ms1 likely involves additional RNA processing enzymes. Another possibility is that residual protein levels of PNPase were still resulting in some processing of the longer RNA transcript. Taken together, the limited number of studies on the RNA processing enzymes leave open many questions about how Mtb produces sRNAs from longer transcripts. longer processed into the 25 nt species because of the introduction of nucleotide substitutions, was unable to regulate gene expression. SncRNA-6 also undergoes a sequence-specific processing from a longer RNA transcript. Like sncRNA-1, mutations that disrupt the hairpin loop in which sncRNA-6 resides or the mutations at the cleavage site of sncRNA-6 prevent its processing. Taken together, multiple experiments establish the existence of a small RNA processing system in mycobacteria. These findings do not exclude the possibility that some of the Mtb sRNAs could be generated by miRNA processing enzymes when the mycobacteria are propagating in eukaryotic cells during infections [57].
Several candidate RNA processing enzymes have been reported to date. Among these are ribonuclease E (RNase E), polynucleotide phosphorylase (PNPase or GpsI), ribonuclease J (RNase J), and the ATP-dependent RNA helicase RhlE (Figure 4) [58]. All are components of the RNA degradosome. Except for RhlE, all are essential for in vitro growth, as determined by identifying key genes through a transposon mutagenesis screen (Himar1 transposon libraries) [7]. Mechanistically, RNase E recognizes the 5′ phosphate of the transcript and then cuts at an A/U-rich sequence of the ssRNA [59]. PNPase and RNase J are 3′- and 5′-specific exonucleases, respectively, that stop upon the presence of a dsRNA sequence [60]. Many research teams have made use of CRISPR interference-mediated knockdown of the RNA processing enzymes to study their roles in the generation of specific sRNAs [27,58,61]. Sikova et al. investigated the contribution of the core RNase enzymes to the processing of Ms1 [27]. Knockdown of PNPase increased the levels of Ms1 by ~30%, while the targeting of RNase E and RNase J had no effect on this sRNA, revealing some target specificity. These findings further suggest that the processing of Ms1 likely involves additional RNA processing enzymes. Another possibility is that residual protein levels of PNPase still resulted in some processing of the longer RNA transcript. Taken together, the limited number of studies on the RNA processing enzymes leave open many questions about how Mtb produces sRNAs from longer transcripts.
tRNA Processing Enzymes as Potential Players for sRNA Maturation
Transfer RNAs (tRNAs) share some common features with small RNAs, being relatively short and highly structured non-coding RNA molecules. tRNA maturation involves several steps, with both the 3′ and 5′ ends being extensively processed in an orchestrated, sequential order. Besides the core RNA degradosome components, tRNA processing enzymes likely play roles in the maturation and turnover of certain sRNA species. In eukaryotes, many RNP complexes involved in tRNA biology participate in the generation and subcellular trafficking of other small structured RNAs such as snRNA, snoRNA, and 5S RNA, among others [62]. In many organisms, transcripts encoding tRNAs are also a source of regulatory small RNAs, namely tRNA-derived small RNAs [63]. The mechanisms of tRNA maturation in Mtb are not well characterized and require future study. As in E. coli and B. subtilis, used as model bacteria for RNA processing, Mtb encodes RNase P, which is involved in the initial processing of the 5′ end of tRNA molecules [64]. The suite of 3′ end processing enzymes includes RNase PH, RNase Z, the oligoribonuclease [65], RNase D (rv2681), and a divergent functional and structural ortholog of RNase T (rv2179c) [66].
As a large proportion of mycobacterial tRNAs require an enzymatic addition of the CCA sequence at their 3′ end, they are likely additionally processed by the poly(A) polymerase and/or PNPase. All the ribonucleases described above have the potential to be involved in the processing of small non-coding RNAs other than tRNA. In E. coli, RNase PH is implicated in the degradation of structured RNAs, which accumulate in the mutant lacking this RNase [67]. In the same model organism, 3′ exoribonucleolytic trimming is required for the final maturation of multiple small, stable RNA species, and this is carried out primarily by RNase PH and RNase T [68].
tRNA cleavage is seen in all kingdoms of life as a regulatory mechanism, adding another layer to the complexity of gene regulation. This has been observed in Streptomyces coelicolor, another actinomycete related to mycobacteria [69]. tRNA cleavage has recently been reported in Mtb [70]. Mtb encodes numerous toxin-antitoxin (TA) systems, many of which require a ribonuclease component. The VapC11 ribonuclease of the virulence-associated TA system VapBC specifically cleaves two tRNA species, tRNA-Gln32-CUG and tRNA-Leu3-CAG [70]. Mtb encodes about 50 VapC ribonuclease toxins, which have the potential to directly target noncoding RNAs. It is also likely that these could cleave tRNAs to yield tRNA-derived functional sRNAs in Mtb, as they seem to do in higher eukaryotes [71]. Direct or indirect interplay between VapC toxins and sRNAs is inevitable. In fact, overexpression of MTS2823 restricts the expression of at least five VapC homologues [10], but the exact mechanism remains unexplored.
The Hunt for the Mycobacterial Hfq Equivalent
In the majority of bacterial species, trans-encoded sRNAs require RNA chaperones, either Hfq or ProQ, to ensure appropriate sRNA/mRNA base pairing [72]. Taking advantage of the fact that the 6C sRNA inhibits M. smegmatis growth when overexpressed, a screen was developed for RNA chaperones that mediate the interaction between 6C and its targets [32]. In experiments where M. smegmatis clones overexpressing 6C were exposed to saturation mutagenesis, there was some growth; however, the few colonies that recovered were those that had mutations in the overexpression cassette, indicating the lethality of 6C. No other genetic mutations were observed in the colonies from the saturation mutagenesis library. Because no mutation presumed to disrupt a chaperone overcame the growth inhibition by B11/6C, a chaperone protein appears not to be involved [43]. It remains possible, however, that such a chaperone protein has other essential functions, leaving open a role for as-yet unidentified chaperones.
CsrA, a conserved small RNA-binding protein, was recently shown to assist in complex formation between an sRNA and its mRNA targets in Bacillus subtilis [73]. RNA chaperones closely cooperate and interact with the core RNA degradosome to ensure efficient regulation of gene expression [74]. However, efforts to identify orthologues of the Hfq, ProQ, or CsrA chaperones in Mtb have failed, suggesting that Mtb must exploit alternative proteins or mechanisms for efficient sRNA/mRNA interactions. Such interactions in Mtb have been proposed to involve direct Watson-Crick base pairing between GC-rich sequences of the sRNA and the target [75]. While this may apply to certain sRNAs, it is likely that most Mtb sRNAs target their mRNAs or DNA through unidentified accessory proteins.
Recent studies in E. coli and S. aureus have revealed that cold shock domain-containing proteins (CSPs), involved in binding and melting RNA species [76,77], also interact with several sRNAs. CSPs typically respond to stress and could aid in a coordinated response to external stimuli that is not limited to cold sensing [78]. Hence, these proteins may be involved in sRNA-mediated regulation of gene expression in Mtb. Corroborating this notion, CspA and CspB both associate with the core RNA degradosome in Mtb [58]. Future studies will likely reveal their relevance to the functionality of the RNA-degrading machinery. Interestingly, the mycobacterial cspA gene itself is co-expressed with the sRNA molecule ncRv3648c. Exploiting active RNA structure unwinding, with the help of ATP-dependent RNA helicases, could theoretically support sRNA folding in the absence of the passive unwinding mechanisms provided by Hfq-like chaperones. A previous study in E. coli reported the requirement of the CsdA DEAD-box helicase for low-temperature riboregulation of rpoS mRNA via an sRNA-mediated mechanism, where the activity of Hfq was not sufficient for translational activation of rpoS expression [79].
Intriguingly, M. smegmatis, M. dioxanotrophicus, and M. goodii encode a eukaryotic-like protein with a full-length TROVE domain (KEGG database search, www.genome.jp (accessed on 27 May 2021)), sharing over 35% identity with the human 60 kDa SS-A/Ro ribonucleoprotein ortholog (SIM analysis results [80]). The 60 kDa SS-A/Ro ribonucleoprotein binds to misfolded small RNAs and pre-5S rRNA in eukaryotes [81]. It is thought to function as an RNA chaperone that stabilizes small RNAs of the Y family and protects them from enzymatic degradation [82]. In mycobacteria, the protein-coding element was likely acquired by horizontal gene transfer from a mycobacteriophage, with a similar gene identified in the mycobacteriophage Sparky (KEGG database ortholog search, www.genome.jp (accessed on 27 May 2021)). It remains unknown whether the TROVE protein has acquired functions related to sRNA metabolism in fast-growing mycobacterial species or whether it is simply a nonfunctional remnant of a previous bacteriophage infection.
Concluding Remarks
Mtb-encoded small RNAs are emerging as new regulators of mycobacterial growth, survival, and pathogenesis. To date, the functions of only a handful of these sRNAs have been described. Given their distinct sizes, the functional contributions of those <50 nts likely differ from those between 50 and 350 nts. The recently developed CRISPR interference-based assays hold great potential for studying the function of sRNAs and sncRNAs [12,26,61]. In addition, the LNA power inhibitors recently validated on sncRNA-1 are a very tractable system for blocking sRNA functions [13]. Another question that remains unexplored is the putative role of such sRNAs in enhancing mycobacterial resistance to antibiotics. Future studies may explore whether these sRNAs are induced in response to antibiotics, which also opens another question about the type and sequence specificity of the regulatory proteins controlling sRNA expression. The lack of a comprehensive study to identify the regulatory factors has limited the identification of additional sRNAs. Lastly, many studies suggest that the assorted sRNAs are processed after transcription, and this adds another complication owing to the lack of elaborate techniques to define exact processing events. The processing may be growth phase- or stress-dependent, altering the function of the RNA under selected physiological conditions. Moreover, the mycobacterial proteome involved in RNA processing is still poorly annotated. Using Mycobrowser, candidate genes involved in RNA processing in Mtb are evident (Table 2). However, more studies are needed to investigate the roles of these putative RNA-binding/processing proteins, as most have been identified via computational predictions. As summarized, the last decade has identified many distinct Mtb-encoded RNAs. The next decade will likely address the open questions raised throughout this review. Identification of new sRNAs/sncRNAs involved in pathogenesis and of their regulatory mechanisms will enhance our understanding of the tools that Mtb utilizes to escape macrophage killing, which will eventually help eradicate TB.
Agent-Based Modeling of a Multiagent Multilayer Endogenous Financial Network and Numerical Simulations
Based on a realistic correlated behavior mechanism and connected balance-sheet relationships among firms, banks, households, and the government, we construct a multiagent multilayer endogenous financial network that includes an interbank network, an investment network, a deposit network, a business credit network, and a loan network. During the construction process, behaviors are endogenized such that an endogenous financial network is obtained. The simulation results show that the interbank network, the investment network, and the business credit network all obey power-law degree distributions; in the deposit network and the loan network, large banks tend to have larger degrees and small banks smaller degrees.
Introduction
With the continuous development of the modern financial industry, the financial market has become increasingly sophisticated, gradually evolving into a complex financial system that includes participation by multiple actors, including governments, financial institutions, enterprises, and households. In such a complex financial system, financial risks not only lead to dysfunction in the financial system but also have a serious impact on the real economy [1]. Additionally, the global economy has been negatively affected by COVID-19, which has led to serious debt crises and an increase in bad bank loans. Therefore, there is an urgent need to study the interconnections between real economic risks and financial risks.
However, most existing studies of financial risk contagion have focused only on interbank networks, which is far from sufficient for studying financial contagion as a whole, given the reality of a complex, multisubject financial system network. Additionally, existing studies of complex multisubject networks have been more inclined to use endogenous macroeconomic network models to construct artificial microeconomic networks for investigating topics such as economic growth, income distribution, and the effects of government policy [2-4], so the focus of these network models has deviated from financial interconnection. Therefore, this paper systematically reviews and summarizes previous studies with the aim of constructing a multiagent multilayer endogenous financial network model, comprising an interbank network, an investment network, a deposit network, a business credit network, and a loan network among banks, firms, households, and the government, to lay the foundation for further study of financial risk contagion within a more comprehensive framework.
In empirical research on the network structure of real financial systems, many widely accepted regularities have been found. Among empirical studies of interbank networks, there are primarily three: interbank networks exhibit small-world characteristics [5-7], they exhibit scale-free characteristics [8,9], and they have core-periphery structures around money centers [10-12]. Furthermore, a crisis in the banking sector impacts the real economy through the financial accelerator [13], so scholars have also studied the structure of economic networks. In existing research on networks composed of the banking sector and real-economy enterprises, most scholars have found that the degree distribution follows a heavy-tailed power law [14-16].
Based on the rules of real financial networks mentioned above, many studies have attempted to construct financial networks using three different methods: network construction based on empirical data [17][18][19], exogenous network construction [20][21][22][23][24][25], and endogenous network construction [30][31][32][33][34][35][36][37][38][39][40][41][42][43]. However, for network construction based on empirical data, it is difficult to obtain all the data pertaining to the actual correlations among subjects in reality. Additionally, in terms of exogenous network construction, there is no consensus regarding exactly what network structure should be used to portray the associations among subjects in reality [26]. Furthermore, exogenous networks are static and homogeneous with respect to individuals, and the dynamic evolution of the network and heterogeneous behavior among individuals cannot be studied.
In recent years, with the rapid development of interdisciplinary research, certain methods from physics and engineering have been applied in economics, such as chaotic systems for understanding the complex behavior of real financial markets [27,28] and agent-based models (ABMs) for characterizing the complex behavior of real financial agents. ABMs offer an easier way to achieve greater heterogeneity, allowing researchers to study heterogeneity, networks, and crisis dynamics in a macroeconomic context [29]. Therefore, for endogenous network construction, scholars have mainly used agent-based models and computational simulations to portray individual behavioral mechanisms and form complex networks. More research has addressed single-agent, single-layer endogenous networks, such as interbank endogenous networks [30], supply chain endogenous networks [31,32], and credit endogenous networks [33], and the models constructed in this context are relatively mature. However, for systemic financial risk contagion, considering only a single agent and a single network layer cannot accurately reflect systemic financial risk and can significantly underestimate it [34,35]. Therefore, scholars have invested more effort in constructing multiagent, multilayer endogenous financial networks based on the real business behavior mechanisms among institutions. These models can be broadly divided into two categories. The first category focuses only on the impact of the banking sector on the real economy; such models can better reflect the movement of the economic cycle, but they ignore the interbank market as an important risk contagion channel [36-39]. The other category focuses mainly on the banking sector and constructs models mainly to simulate monetary policy effects, thus lacking an interlinkage mechanism to the real economy [2,40-43].
Most current ABM-based modeling in finance has focused on a single market. The modeling of economic system networks with multiple agents has likewise focused on endogenizing interfirm relationships, interbank lending, household consumption, and government policies. Few studies have incorporated the endogenization of investment relationship networks among subjects, which in reality is an important channel for financial risk transmission and is the focus of this paper. In this paper, we integrate different financial agents at the micro- and macrolevels into the same organic whole from a financial system perspective, and we establish an endogenous, complex financial network consisting of different markets among agents, which makes the model highly relevant to the real financial system, allows it to serve as a basis for studying the formation and evolution of the financial system, and lays a foundation for the subsequent study of financial risk contagion. It should be noted that the model constructed in this paper is an endogenous financial network model in a general sense and is not specific to any one country or region. The contributions of this paper are as follows. First, we construct a four-sector model of financial system agent behavior mechanisms, including the government, banks, firms, and households, and generate a complex financial system network (including an interbank network, an investment network, a deposit network, a business credit network, and a loan network). Second, based on ABMs, the individual behavioral mechanisms fully account for behavioral differences among individuals and allow them to interact and evolve in different financial networks. Third, the complex network can evolve toward a relatively stable state with few periodic fluctuations, and its characteristics are consistent with the findings of empirical research. The simulation results show that the model converges to a stable state after endogenous evolutionary adjustment. In the steady state, the interbank network, investment network, and business credit network all obey power-law distributions; the deposit network and loan network exhibit a tendency for large banks to have larger degrees and small banks to have smaller degrees. The model can still evolve toward a stable state within 100 periods after changing certain parameter values. After changing the main parameters, namely the counterparty replacement parameter, the firm production target profit rate, and the initial firm financing multiplier, the robustness tests show that the financial network evolves to the same network characteristics with different stable values. The remainder of the paper is structured as follows. Section 2 describes the model. The results of the simulation experiments are reported in Section 3. Robustness tests of the model are performed in Section 4. Finally, Section 5 concludes the paper.
Overview.
In the endogenous financial network model, we consider four types of agents: banks, firms, households, and the government. We build a complex, dynamic endogenous financial system network based on the behaviors (e.g., firm production and operation, household consumption and investment, or bank lending) and linkages among various agents. As shown in Figure 1, in the interbank market, banks lend or borrow money according to their liquidity. In the investment market, firms and households hold equity in firms, which constitutes the equity investment market; banks and households hold firm bonds and government bonds, which constitute the bond investment market. Households deposit money in banks in the deposit market. Business credit relations among firms constitute the business credit market. Banks lend money to firms and households in the loan market.
Firms are divided into supplier firms and terminal firms. Supplier firms and terminal firms form supply chain relationships with each other, and terminal firms form merchandising relationships with households. Firms develop a production plan, budget their capital, borrow from banks, finance themselves via the investment market, buy raw materials and pay their wage bills, hire workers, and sell their output in the supply chain market and the commodity market. If firms have sufficient funds, they invest in the equity investment market, and the profit is used to pay dividends.
Banks provide loans to firms and households. Each bank has to forecast its liquidity needs and take in or take out funds via the interbank market to meet its liquidity needs and to ensure that sufficient funds flow to the real economy. If banks have sufficient liquidity, they hold firm bonds and government bonds. Banks' liquidity is mainly the result of deposits in the household sector.
Households provide labor, receive income from wages paid by firms and financial assets, and use their income to buy consumer goods, invest, and save. If households' income does not cover their current consumption needs, they take out loans from banks.
In this paper, the government is assumed to act only as an issuer of government bonds. Since the focus of our model is on the financial correlations among individuals in each sector, the role of the government is simplified here.
Event Timeline.
The various types of agents are linked together by the multiple nonlinear feedbacks described above and evolve over a finite time horizon indexed by t = 1, ..., T. In each period t, the following sequence of events occurs: (1) Firms produce and operate, and each firm determines the current period's production based on its own equity and past sales. (2) Firms purchase raw materials and hire labor: firms purchase raw materials from upstream firms and hire labor from the household sector according to their output. This step establishes links among firms and between the firm and household sectors. (3) Firm investment and financing: if a firm faces a capital shortage, it takes out a bank loan or issues bonds to finance itself; if a firm has sufficient funds, it invests in the stocks of other firms. This step establishes lending relationships between firms and banks and investment relationships among firms. (4) Households receive income from wages and financial investments. (5) Household consumption: households make consumption decisions based on their own consumption propensities and purchase products from terminal firms. This step establishes a purchasing link between the household and firm sectors. (6) Household loans and investments: after a household has consumed, any surplus of funds is distributed among bank deposits, firm securities, firm bonds, and government bonds; if there is a shortage of funds, deposits are withdrawn from the bank, investments are recovered, and a loan is taken from a bank. This step creates deposit links between households and banks, investment links between households and firms and between households and the government, and loan links between households and banks.
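This per-period sequence maps naturally onto a simulation loop. The sketch below is a minimal illustration of steps (1) through (6); all class and method names are hypothetical stand-ins, since the paper does not publish reference code.

```python
# Minimal sketch of one simulation period, following steps (1)-(6) above.
# Firm, Bank, Household, Government and their methods are hypothetical.

def simulate_period(firms, banks, households, government):
    # (1) Each firm sets current output from its equity and past sales.
    for f in firms:
        f.plan_production()

    # (2) Firms buy raw materials upstream and hire labor from households,
    # creating firm-firm (supply chain) and firm-household links.
    for f in firms:
        f.purchase_inputs_and_hire()

    # (3) Firms with a funding gap borrow from banks or issue bonds;
    # firms with surplus funds buy equity in other firms.
    for f in firms:
        f.finance_or_invest(banks)

    # (4) Households collect wages and returns on financial assets.
    for h in households:
        h.receive_income()

    # (5) Households consume, buying from terminal firms.
    for h in households:
        h.consume()

    # (6) Surplus household funds are split across deposits, firm equity,
    # firm bonds, and government bonds; deficits are covered by bank loans.
    for h in households:
        h.invest_or_borrow(banks, government)
```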
Firm Agents.
Suppose that there are N_SF supplier firms and N_TF terminal firms. Each supplier firm purchases production factors from upstream firms, hires labor, and sells its products to downstream firms after production; each terminal firm purchases raw materials from supplier firms, hires labor, and sells its products to the household sector after production. For each firm, if it has insufficient capital, it finances itself by borrowing from banks or issuing bonds; if it has sufficient liquidity, it holds equity in other firms as equity investment. The behavioral mechanisms of firms mainly include the following.
First, the firm produces and operates. Referring to Ishikawa et al. [44], a Cobb-Douglas production function is chosen such that, in period t, the desired output of firm j, product^f_{j,t}, depends on equity E^f_{j,t} and labor lab^f_{j,t}:

product^f_{j,t} = A (E^f_{j,t})^α (lab^f_{j,t})^β.    (1)
Under the constraint that the firm's equity is determined, the desired output then depends on the level of labor, so it is further assumed that product^f_{j,t} = lab^f_{j,t}/δ, where δ > 0. Then, the labor required for the desired output under the equity constraint satisfies

lab^f_{j,t} = δ · product^f_{j,t}.    (2)

In reality, there is more than one supplier for a core manufacturer, and a supplier does not serve only a single core manufacturer, so the supply chain relationship is actually a network [45]. Each supplier firm SF_i randomly selects a certain percentage of all supplier firms as upstream firms and allocates its purchase volume according to the size of each upstream firm. Each terminal firm TF_j likewise randomly selects a certain percentage of all supplier firms as upstream firms and allocates its purchase volume according to the size of each upstream firm. In period t, at firm j's desired output and assuming a target profit rate r^f_j, the amount of product it needs to buy from upstream firms, BUY^{ff}_{j,t}, follows from the desired output and the wage bill (Equation (3)), where wage denotes the unit price of labor. In the course of trading, a firm whose supply exceeds demand is in a buyer's market, so the buying firm has a high market position and forms payables YL, while the selling firm simultaneously forms receivables YA. In contrast, when a firm's demand exceeds supply, it is in a seller's market, so the seller has a high market position and forms deposits received PYL, and the buyer forms prepayments PYA. It is assumed that a firm prefers to switch to a supplier with abundant supply to reduce purchase costs. Therefore, in each period, for reasons related to transaction costs, there is a certain probability that the connection to the old counterparty is severed and a new counterparty is established [37]. In addition, we account for product heterogeneity: a new target supplier is chosen from the other suppliers of firms downstream of the original supplier, which ensures that similar products can be purchased.
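The random selection of upstream partners and the size-proportional allocation of purchase volume described above can be made concrete as follows. The snippet is a minimal sketch in which supplier "size" is proxied by equity, an assumption, since the text does not pin down the size measure.

```python
import random

def choose_suppliers(all_suppliers, fraction, rng=random):
    """Randomly select a fraction of all supplier firms as upstream partners."""
    k = max(1, int(fraction * len(all_suppliers)))
    return rng.sample(all_suppliers, k)

def allocate_purchases(total_purchase, suppliers, size):
    """Split a firm's purchase volume across its chosen suppliers in
    proportion to each supplier's size (proxied here by equity)."""
    total_size = sum(size(s) for s in suppliers)
    return {s["id"]: total_purchase * size(s) / total_size for s in suppliers}

# Hypothetical example: a terminal firm selects 40% of five suppliers and
# allocates 1,000 units of purchases among them by equity.
suppliers = [{"id": i, "equity": e} for i, e in enumerate([50, 120, 80, 200, 30])]
chosen = choose_suppliers(suppliers, 0.4)
orders = allocate_purchases(1000.0, chosen, size=lambda s: s["equity"])
print(orders)
```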
This probability P_j (Equation (4)) is increasing in the supply-demand gap between the new and old counterparties, where λ > 0; Ds_new and Ds_old are the supply-demand situations of the new and old potential counterparties, respectively, with Ds_i = Sales^{t-1}_i / Supply^{t-1}_i, where Sales^{t-1}_i represents the total sales of firm i in period t-1 and Supply^{t-1}_i represents the total supply of firm i in period t-1.
The second mechanism is firm financing. Firms need to purchase raw materials and hire labor to achieve their desired output and thus may face a funding gap. Firms meet this gap mainly by issuing bonds and by bank loans. In this paper, it is assumed that only the top 10% of firms by size can meet part of the funding gap by issuing bonds, with financing demand split equally between bond financing and bank borrowing, while the funding gaps of the remaining firms are met entirely by bank loans. In period t, the financing demand of firm j, FD^f_{j,t}, is determined accordingly, where IN^f_{j,t-1} represents the inventory of firm j in period t-1. See subsequent sections for the bank loan matching mechanism and the bond market matching mechanism. The third mechanism is firm outbound investment. In period t, firm j invests externally if it has surplus liquidity after production and operation, and its investment amount I^f_{j,t} is bounded by that surplus. Since downstream companies holding shares in upstream companies in the supply chain is a relatively common form of strategic equity alliance [46], this paper assumes that firm j prioritizes a certain percentage of its upstream firms for equity investment and randomly selects other firms if excess liquidity remains. Funds are invested in each target firm on a priority basis until the holding reaches 50% or more, so as to control the investee; subsequently, other downstream firms are randomly selected and invested in up to the same 50% threshold; then, other firms are selected for investment. In summary, under ideal production conditions, if downstream demand for the product exceeds ideal production, supply is less than demand, so the firm sells all its products, occupies a strong market position, and earns a profit rate above the target profit rate. If downstream demand falls short of ideal production, supply exceeds demand, and the firm accumulates inventory; if an excessive inventory backlog emerges, the firm goes out of business. After earning profits from production and investment income and paying interest on bonds and loans, the firm obtains its net profit. All net profits are distributed to shareholders as dividends.
Bank Agents.
The number of banks is denoted by N_B. The main source of funds for banks is household deposits. Banks regulate liquidity via the interbank lending market under the funding-supply constraint, mainly by providing loans to firms and households, while investing in firm bonds and government bonds for income if there is surplus liquidity. The behaviors involved in the use of funds mainly include the following. The first is interbank lending. In period t, banks borrow or lend funds via the interbank market according to their liquidity levels, where SLIQ^b_{i,t} and LLIQ^b_{i,t} denote the short-term and long-term liquidity levels of bank i in period t, respectively. Each potential debtor bank j observes the interbank lending rates offered to it by all banks; following Zhang et al. [47], the short-term and long-term interbank lending rates consist of the risk-free interest rate r_0 plus a risk premium, where αs^{bb} and αl^{bb} denote the sensitivity of the short-term and long-term interbank lending rates to interbank risk (higher values indicate higher risk premiums), SLIQ^b_{i,t}/OL_{i,t} is the short-term liquidity ratio of potential creditor bank i, LLIQ^b_{i,t}/OL_{i,t} is the long-term liquidity ratio of potential creditor bank i, and (LIL^b_{j,t} + SIL^b_{j,t})/E^b_{j,t} is the debt leverage ratio of potential debtor bank j, that is, the ratio of interbank borrowing to equity. A potential debtor bank sends a borrowing request to a potential creditor bank; if sufficient funds are not available from the first creditor bank, borrowing requests continue to be sent to other potential creditor banks until funding needs are met or no excess liquidity remains in the banking system. If the creditor bank has sufficient liquidity, all interbank borrowing requests are approved, and the potential debtor bank becomes a debtor bank. If the creditor bank does not have sufficient liquidity to meet all borrowing requests, liquidity is allocated in descending order of the potential debtor banks' equity until no excess liquidity is available.
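The request-and-allocate procedure just described is essentially a greedy matching. A minimal sketch, with hypothetical attribute names, might look as follows.

```python
from dataclasses import dataclass

@dataclass
class Bank:
    name: str
    equity: float
    excess_liquidity: float = 0.0  # positive for potential creditors
    need: float = 0.0              # positive for potential debtors

def match_interbank(borrowers, lenders):
    """Greedy sketch of the matching rule above: each borrower's request is
    met lender by lender until its need is filled or system liquidity is
    exhausted; a lender that cannot satisfy every request serves borrowers
    in descending order of equity."""
    loans = []
    for lender in lenders:
        for borrower in sorted(borrowers, key=lambda b: b.equity, reverse=True):
            if lender.excess_liquidity <= 0:
                break
            if borrower.need <= 0:
                continue
            amount = min(borrower.need, lender.excess_liquidity)
            loans.append((lender.name, borrower.name, amount))
            borrower.need -= amount
            lender.excess_liquidity -= amount
    return loans

# Hypothetical example: two creditors, three debtors.
lenders = [Bank("L1", 80, excess_liquidity=50), Bank("L2", 40, excess_liquidity=20)]
borrowers = [Bank("B1", 100, need=30), Bank("B2", 60, need=25), Bank("B3", 20, need=25)]
print(match_interbank(borrowers, lenders))
# [('L1', 'B1', 30), ('L1', 'B2', 20), ('L2', 'B2', 5), ('L2', 'B3', 15)]
```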
Second, banks extend loans. The issuance of loans by banks to firms and households is similar to interbank lending: when a potential debtor firm or household j applies to bank i for a loan, the interest rate that bank i can offer consists of the risk-free interest rate r_0 plus a risk premium, where α^{bf} and α^{bh} denote the sensitivity of bank loan rates to risk (higher values indicate higher risk premiums), LIQ^b_{i,t} is the bank's new liquidity, equal to its new deposits, LIQ^b_{i,t}/OL_{i,t} is the liquidity ratio of potential creditor bank i, and DL^f_{j,t}/E^f_{j,t} or DL^h_{j,t}/E^h_{j,t} is the debt leverage ratio of the potential debtor firm or household j, that is, the ratio of firm or household loans to firm equity or household net assets. Potential debtor firms and households send loan applications to potential creditor banks; if sufficient funds are not available from the first creditor bank, applications continue to be sent to other potential creditor banks until funding needs are met or no excess liquidity remains in the banking system. If the creditor bank has sufficient liquidity, all loan applications are approved, and the potential debtor firm (household) becomes a debtor firm (household). If the creditor bank does not have sufficient liquidity to satisfy all loan applications, liquidity is allocated in descending order of the potential debtor firm's equity (or the household's net assets) until no excess liquidity is available.
Third, the bank invests. In period t, if bank i has excess liquidity after satisfying all loan requests, it makes bond investments, and its investment amount I^b_{i,t} is bounded by that excess liquidity, where DA^b_{i,t} represents the total loans held by bank i in period t. Banks allocate their investments between firm bonds and government bonds. Specifically, it is assumed that the investment amount is first split between firm and government bonds according to the relative proportions of firm-issued and government-issued bonds. Next, a certain number of firm bonds are randomly selected, and the investment amount in firm bonds is allocated in proportion to the size of each bond issue.
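The two-stage bond allocation rule can be sketched as follows; the function name, its arguments, and the choice of k selected firm bonds are hypothetical.

```python
import random

def allocate_bank_investment(amount, firm_bonds, gov_bonds_total, k=10, rng=random):
    """Two-stage sketch: split between firm and government bonds by relative
    issuance, then spread the firm share over k randomly chosen firm bonds
    in proportion to issue size."""
    firm_total = sum(firm_bonds.values())
    to_firms = amount * firm_total / (firm_total + gov_bonds_total)
    to_gov = amount - to_firms
    chosen = rng.sample(list(firm_bonds), min(k, len(firm_bonds)))
    chosen_total = sum(firm_bonds[f] for f in chosen)
    per_firm = {f: to_firms * firm_bonds[f] / chosen_total for f in chosen}
    return to_gov, per_firm

# Hypothetical example: 90 units of excess liquidity, three firm bond issues
# (total 450) and 450 of government bonds outstanding.
to_gov, per_firm = allocate_bank_investment(
    90.0, {"F1": 100.0, "F2": 300.0, "F3": 50.0}, gov_bonds_total=450.0, k=2)
print(to_gov, per_firm)
```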
In summary, the bank obtains interest on loans, interest on interbank loans, and investment income in each period and obtains net profit after paying interest on deposits and interbank loans.
Household Agents.
There are N_H households in the network. Differences in the levels of wealth, income, and propensity to consume of these households lead to different levels of consumption. If current consumption exceeds income, the household takes out a loan from a bank to meet current consumption needs, and if current income exceeds consumption, the household provides the surplus funds to banks via the deposit market or to firms or the government via the financial investment market to earn interest in the future. Household income comes from wages paid by firms and from income generated by investing in financial assets, which constitutes real income in the current period.
First, following Popoyan et al. [2], the consumption of household i in period t depends on its expected lasting income and wealth (net assets). The expected lasting income PIC^h_{i,t} is adjusted according to actual income IC^h_{i,t} in the current period:

PIC^h_{i,t} = PIC^h_{i,t-1} + λ_h (IC^h_{i,t} - PIC^h_{i,t-1}),

where λ_h is the adjustment speed parameter. Actual income IC^h_{i,t} includes both interest income from financial assets and labor income. Household i's consumption CP^h_{i,t} in period t is a fraction of expected lasting income:

CP^h_{i,t} = v_i · PIC^h_{i,t},

where v_i represents the propensity to consume. In this paper, household consumption mainly refers to purchasing products from terminal firms. Similarly, household i randomly selects a certain percentage of terminal firms from which to purchase products, and the consumption amount CP^h_{i,t} is allocated across the selected terminal firms according to their size. Households likewise have a certain probability of changing the terminal firm used for consumption in each period, with the switching probability again given by Equation (4).
If actual income for the period is less than consumption, a bank loan DL^h_{i,t} is required in the amount of

DL^h_{i,t} = CP^h_{i,t} - IC^h_{i,t}.

If actual income for the period is greater than consumption, the remaining income is invested. The investment amount FA^h_{i,t} can be expressed as

FA^h_{i,t} = IC^h_{i,t} - CP^h_{i,t},  with  FA^h_{i,t} = SA^h_{i,t} + CA^h_{i,t} + OA^h_{i,t}.

Next, households allocate the financial investment FA^h_{i,t} among equity SA^h_{i,t}, bonds CA^h_{i,t}, and deposits OA^h_{i,t} based on their risk attitudes; equity and bonds are risky assets, and deposits are risk-free assets, which simplifies the household's portfolio selection between risk-free and risky assets. For simplicity, a household's risk attitude is assumed to be related to the size of its net worth: the larger the net worth, the more risk-loving the household and the larger the proportion of risky assets it holds.
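Putting the household mechanisms together, one period of a household's decision might be sketched as below. The adaptive-expectations update and the consumption rule follow the equations above; the concrete risk-attitude rule and the even split of the risky share between equity and bonds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class HouseholdState:
    expected_income: float   # PIC in the text
    propensity: float        # v_i, propensity to consume
    net_worth: float
    bank_loan: float = 0.0

def household_step(h, income, lam_h=0.5):
    """One household period: adaptive update of expected lasting income,
    consumption as a fraction of it, then loan or portfolio allocation.
    lam_h and the risk rule are illustrative parameters."""
    h.expected_income += lam_h * (income - h.expected_income)
    consumption = h.propensity * h.expected_income
    surplus = income - consumption
    if surplus < 0:
        h.bank_loan += -surplus          # DL: borrow to cover the shortfall
        return consumption, {}
    # Assumed rule: the risky share rises with net worth, capped at 0.9,
    # and is split evenly between equity (SA) and bonds (CA).
    risky = min(0.9, h.net_worth / (h.net_worth + 100.0))
    return consumption, {
        "equity": 0.5 * risky * surplus,
        "bonds": 0.5 * risky * surplus,
        "deposits": (1.0 - risky) * surplus,   # OA
    }

h = HouseholdState(expected_income=10.0, propensity=0.8, net_worth=50.0)
print(household_step(h, income=12.0))
```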
Balance Sheet.
The total balance sheet of the endogenous financial network is shown in Table 1. Each row of associated terms sums to zero, and each column also sums to zero. Assets are denoted by "+," and equity and liabilities are denoted by "-." Households are denoted by H, firms by F, banks by B, and the government by G. E_B, E_F, and E_H denote the equity of the bank, the equity of the firm, and the net worth of the household, respectively. OA_H and OL_B denote the deposit association between household and bank. DA_B, DL_H, and DL_F denote loans issued by banks, loans borrowed by households from banks, and loans borrowed by firms from banks, respectively. CA_H and CA_B denote the total bonds held by households and banks, respectively. CL_F and CL_G denote the total bonds issued by firms and by the government, respectively. SA_H and SA_F denote the total stock holdings of households and firms, respectively. YA_F and YL_F represent a firm's accounts receivable and accounts payable, respectively. SIA_B and LIA_B denote a bank's short-term and long-term interbank assets, respectively. SIL_B and LIL_B denote a bank's short-term and long-term interbank liabilities, respectively. CH_B and CH_F denote cash assets held by banks and firms, respectively. IN_F indicates the inventory held by a firm. I_G denotes government investment by government departments.
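The zero-sum structure of Table 1 gives a cheap stock-flow consistency check for any simulation run. The sketch below uses illustrative numbers, not the paper's, and checks rows whose entries are all listed; each column likewise nets to zero once the sector's equity or net-worth entry is included.

```python
import numpy as np

# Rows are instruments, columns are sectors (H, F, B, G), as in Table 1.
# Assets enter with "+", liabilities with "-"; entries are illustrative.
balance_sheet = np.array([
    #   H      F      B      G
    [ 100.0,   0.0, -100.0,   0.0],  # deposits: OA_H vs OL_B
    [ -30.0, -50.0,   80.0,   0.0],  # loans: DL_H, DL_F vs DA_B
    [  40.0, -60.0,   45.0, -25.0],  # bonds: CA_H, CL_F, CA_B, CL_G
])

# Every instrument row must net to zero across sectors.
assert np.allclose(balance_sheet.sum(axis=1), 0.0)
```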
Simulations
Before the formal evolutionary simulation, we initialized the model; the initialization phase was not counted as part of the evolutionary period. At the initial moment, firm equity E^f_i is assumed to follow a Pareto distribution (parameter α = 3.5, with draws multiplied by 250 to adjust the order of magnitude). Household net worth E^h_j is assumed to follow a lognormal distribution (parameters μ = 1.5, σ = 0.7). The interbank network at the initial moment is generated exogenously according to the Barrat-Barthelemy-Vespignani-based (BBV-based) directed weighted network evolution rules discussed in Li et al. [25], after which bank asset size is determined from the numerical scaling relationship. In the formal evolution phase, the interbank network evolves endogenously alongside the other networks. The main parameter settings follow the suggestions of Gatti et al. [37], Georg [22], Li et al. [48], and Ma et al. [49] and are shown in Table 2.
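The initialization can be reproduced with standard samplers. Note that "multiplied by 250" is read here as a scale (minimum) on the Pareto draws, an assumption given the terse phrasing, and the population sizes are placeholders for the values in Table 2.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population sizes; the paper's N_SF, N_TF, N_H are in Table 2.
n_firms, n_households = 500, 2000

# Firm equity: Pareto with alpha = 3.5, scaled so the minimum is 250
# (numpy's pareto draws are Lomax, hence the 1 + ... shift).
firm_equity = 250.0 * (1.0 + rng.pareto(3.5, size=n_firms))

# Household net worth: lognormal with mu = 1.5, sigma = 0.7.
household_net_worth = rng.lognormal(mean=1.5, sigma=0.7, size=n_households)
```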
3.1. Network Structure. Based on the above parameter settings, a 100-period simulation is conducted, and we record the degree distribution of each network in the system at t = 100. Figure 2 is a log-log plot of the cumulative degree distribution of each network.
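For reference, the log-log cumulative degree plots of Figure 2 follow the standard complementary-CDF construction sketched below, shown here on synthetic Zipf-distributed degrees rather than the simulated networks.

```python
import numpy as np
import matplotlib.pyplot as plt

def loglog_ccdf(degrees, label):
    """Plot P(K >= k) against k on log-log axes; an approximately straight
    line is the visual signature of a power-law degree distribution."""
    d = np.sort(np.asarray(degrees))
    ccdf = 1.0 - np.arange(len(d)) / len(d)   # P(K >= k) for each sorted k
    plt.loglog(d, ccdf, marker="o", linestyle="none", label=label)

# Synthetic stand-in for degrees recorded from a simulated network at t = 100.
degrees = np.random.default_rng(1).zipf(2.5, size=300)
loglog_ccdf(degrees, "interbank (synthetic)")
plt.xlabel("degree k"); plt.ylabel("P(K >= k)"); plt.legend(); plt.show()
```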
As shown in Figure 2(a), in the supply chain network, only a few firms have a large degree and many connections to other firms, whereas most firms have a smaller degree and are less connected. This finding is consistent with the fact that, in the supply chain network, only a few core firms have a large number of upstream and downstream firms, while most firms have relatively few connections. Our simulation results reflect the tendency of supply chain networks to obey a power-law distribution that has been observed in empirical research. Additionally, Figure 2(b) shows that the interfirm business credit network has a similar tendency: only a few firms have a large degree, and most firms have a small degree.
This result is due to the fact that accounts receivable and accounts payable under business credit are directly determined by the purchasing relationships in the supply chain network, so a core firm node in the supply chain network is also a core node in the business credit network. Thus, both the interfirm business credit network and the supply chain network obey a power-law distribution. Figure 2(c) shows the indegree distribution of firms in the equity investment network: most firms have small indegrees, while a few firms have a large indegree, indicating that they receive a large amount of equity investment. There are only a few core firms in the network that are large and well run, and investors mostly choose these firms, so these firms receive a large amount of equity investment. Similarly, in Figure 2(d), only the few firms with access to bond financing have a higher indegree. The investment network in our results thus also exhibits a power-law distribution.
Additionally, in Figures 2(e) and 2(f), the degree distribution of the interbank network is characterized by a power law, which is consistent with the findings of empirical studies. In contrast, the deposit network in Figure 2(g) shows that larger banks have larger degrees and smaller banks have smaller degrees. This result conforms with the fact that larger banks have a stronger capacity to absorb deposits, while smaller banks have a weaker capacity to do so. Additionally, in the loan network in Figure 2(h), larger banks are more liquid; therefore, their outdegrees are larger, and they engage in more external lending. In contrast, smaller banks are less liquid and therefore have smaller outdegrees and make fewer external loans.
Model Evolution.
The purpose of this paper is to establish a robust and stable endogenous financial network to lay a foundation for subsequent risk contagion research. In this section, we therefore document key indicators of the model, particularly the cross-sector association items in the balance sheet. Our results show that the model evolution stabilizes within 100 periods.
As shown in Figure 3, the trends of total bonds held, total loans issued, and total deposits received by the banking sector as a whole are recorded over time. As the figure shows, each indicator undergoes an adjustment process and tends to stabilize after a series of adjustments. The three indicators reach a stable state at approximately t = 60.
As shown in Figure 4, the indicators for the firm sector also reach a steady state at t = 60 and show small periodic fluctuations, a result similar to the findings of Gurgone et al. [36]. Each indicator undergoes a substantial adjustment process from t = 0 to 10, after which it gradually tends toward a stable state.
For the household sector in Figure 5, again, all indicators level off at t = 60 after an initial adjustment, and the figure also shows small cyclical fluctuations. As the firm sector adjusts its planned output to meet actual market demand, output is reduced and less labor is required. Then, the household sector's wage income decreases, leading to decreases in household consumption, financial assets held, and deposits. To meet excess consumer demand, bank loans in the household sector increase. Therefore, the adjustment of household bank loans moves inversely to the adjustment of several other items.
As shown in Figure 6, the total amount of bonds issued by the government exhibits a trend of falling, then rising, then falling again, and finally leveling off. The trend is broadly in line with that of bond assets held by banks, which show an adjustment process inverse to that of loans, as banks' liquidity is mainly used to issue loans and invest in bonds. Additionally, total corporate debt and total bonds held by households level off after the adjustment, with only minor fluctuations. The total amount of bonds issued by the government is therefore affected by the amounts of bonds held and loans granted by banks.
Robustness Test
In this section, we verify the robustness of the constructed model by modifying some of its main parameters. We change the values of λ, r^f, and F_r relative to Section 3, perform separate simulations recording the variation of each index listed in Section 3.2 over time, and observe the resulting robustness of the model.
Change to the Counterparty Replacement Parameter.
We hold all remaining parameters constant and change only the value of λ; the simulation is performed at λ = 0.08. The results show that, after changing the value of λ, the model still converges to a steady state after adjustment.
As Figure 7 shows, total bonds held, total loans issued, and total deposits received by banks in the banking sector stabilize after the adjustment process subsequent to changing the parameter values.
The figure shows small cyclical fluctuations, and the steady-state value of each indicator increases. This result is mainly due to the fact that, with an increase in λ, market frictions are reduced, more firms exhibit levels of supply and demand close to equilibrium, the output of the firm sector increases, demand for firm financing increases, and firms issue more bonds. Thus, the total amount of bonds held by the banking sector increases, and the increase in firm output inevitably leads to an increase in household sector income and hence in deposits received by the banking sector.
As shown in Figure 8, after changing the parameter values, the indicators of the business sector also reach a steady state at t = 60, and each indicator undergoes a substantial adjustment process from t = 0 to 10, after which it gradually tends toward a steady state accompanied by small periodic fluctuations.
The steady-state level of each indicator improves. Due to reduced market frictions, more firms have supply and demand close to equilibrium, production in the firm sector increases, demand for firm financing increases, firms issue more bonds, and the overall size of the firm sector increases.
For the household sector in Figure 9, all indicators again level off at t = 60 after the initial adjustment, with larger values at the time of leveling off. For the same reason as before, the adjustment of household bank loans moves inversely to the adjustment of several other items. Due to the increase in production in the firm sector, the need for labor increases. This leads to an increase in household sector income, which in turn increases consumption, financial investment, deposits, and net worth.
As Figure 10 shows, the total amount of bonds issued by the government likewise exhibits a trend of falling, then rising, then falling again, and finally leveling off, except that the total amount in each period increases. The total amount of bonds issued by the government sector increases mainly because of an increase in investment demand due to higher household sector income.
Change to the Firm Production Target Profit Rate.
Additionally, we hold all remaining parameters constant and perform simulations at r^f = 0.1. The results show that, after changing the value of r^f, the model still converges to a steady state after tuning. As Figure 11 shows, total bonds held, total loans issued, and total deposits received by the banking sector stabilize after the adjustment process subsequent to changing the parameter values, with small cyclical fluctuations. The increase in r^f leads to a decrease in the raw materials purchased by firms and thus in total production in the firm sector. This change reduces the capital requirements of firms and therefore the loans granted by banks; in turn, banks use the excess funds for bond investments. Thus, Figure 11 shows different per-period totals for each indicator compared to Figure 3. As shown in Figure 12, after changing the parameter values, the indicators for the business sector also reach a steady state at t = 60, and each indicator undergoes a substantial adjustment process from t = 0 to 10, after which it gradually tends toward the steady state with small periodic fluctuations. Moreover, the steady-state levels of certain indicators decrease, mainly because the increase in r^f reduces the raw materials purchased by firms and thus the total production of the firm sector; this reduces firms' capital requirements, their borrowing from banks, and the bonds they issue. In turn, an increase in the firm production target profit rate r^f increases the overall size of the business sector and its holdings of stock assets.
For the household sector in Figure 13, all indicators level off again at t = 60 after the initial adjustment, and the adjustment for household bank loans moves opposite to that of several other items. As the output of the firm sector decreases, household labor income decreases and consumption falls. However, because the higher firm production target profit rate raises firm dividends, households' income from holding financial assets increases. Therefore, households are more willing to hold financial assets for investment, and the total amount of financial assets held by the household sector increases. Due to the higher returns on financial investments, certain households invest through bank loans; thus, household borrowing from the banking sector increases. As Figure 14 shows, the total amount of bonds issued by the government likewise exhibits a trend of falling, then rising, then falling again, and finally leveling off, except that the total amount in each period increases. As the demand for financial investments in the household and banking sectors increases, so does the issuance of government bonds.
Change to the Initial Firm Financing Multiplier.
We hold all remaining parameters constant and perform the simulation at F_r = 2. The results show that, after changing the value of F_r, the model still converges to a steady state after tuning.
As Figure 15 shows, total bonds held, total loans issued, and total deposits received by the banking sector stabilize after the adjustment process subsequent to changing the parameter values, with small cyclical fluctuations. Due to the increase in the initial firm financing multiplier, firms are more often financed via the financial investment market. Thus, loans from banks decrease, and the corporate bonds purchased by banks increase. Therefore, the total amount of bonds held by the banking sector in Figure 15(a) increases compared to the amount in each period shown in Figure 3(a), while the loans granted by banks in Figure 15(b) decrease compared to those shown in Figure 3(b). As shown in Figure 16, after changing the parameter values, the indicators for the business sector also reach a steady state at t = 60, and each indicator undergoes a substantial adjustment process from t = 0 to 10, after which it gradually tends toward the steady state. Moreover, due to the increase in the initial firm financing multiplier, the values of the indicators of the firm production segment increase as the capital invested in production increases. The values of product, labor, sales, accounts receivable, and accounts payable in Figure 16 increase in each period compared to those shown in Figure 4.
For the household sector in Figure 17, all indicators level off once again at t � 60 after the initial adjustment. For the same reason, the adjustment for household bank loans shows a movement opposite to that of the adjustment for several other items. As firms increase their financing in the financial investment market, households' holdings of equity and bond assets increase.
Conclusion
Based on ABMs, this paper portrays the real-world behavioral mechanisms of banks, firms, households, and the government and constructs a multiagent, multilayer endogenous financial network model. The simulation results show that the supply chain network, the business credit network, the equity investment network, the bond investment network, and the interbank network all obey power-law distributions; the deposit network and the loan network exhibit a tendency for large banks to have larger degrees and small banks to have smaller degrees; and the network model remains robust after endogenous adjustment.
We also found in the robustness tests that the model still evolves to a stable state within 100 periods after the main parameters are changed. After increasing the counterparty replacement parameter λ, market frictions are reduced, which leads to more balanced matching of supply and demand among suppliers and an increase in the overall output of the firm sector. After increasing the firm production target profit rate r^f, firms purchase fewer raw materials, and the overall output of the firm sector decreases; the higher target profit rate raises the firm sector's dividends, thus attracting more capital to the financial investment market. The increase in the initial firm financing multiplier F_r allows firms to invest more capital in production, so output, sales, and labor demand all increase, and the financing needs of the firm sector also increase. Income, consumption, and investment in the household sector increase as well.
This paper can serve as a basis for the study of risk contagion in endogenous macrofinancial networks, but it can be improved in the following respects. (1) Different debt maturities: for simplicity, all debts here share the same maturity structure. (2) Diversity of agent behavior: for simplicity, the only government behavior is the issuance of government debt; for a closer approximation to reality, more diverse agent behaviors need to be considered. (3) Diverse types of agents: central banks and nonbank financial institutions are not considered here, although these agents also play an important role in financial risk contagion.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare no conflicts of interest.
Interaction of Lanthanide Ions with Bovine Factor X and Their Use in the Affinity Chromatography of the Venom Coagulant Protein of Vipera russellii*
SUMMARY
The substitution of trivalent lanthanide ions for Ca(II) in the Ca(II)-dependent activation of bovine Factor X by the coagulant protein of Russell's viper venom was studied at pH 6.8. Factor X contains two high affinity metal binding sites which bind Gd(III), Sm(III), and Yb(III) with a Kd of about 4 × 10⁻⁷ M and four to six lower affinity metal binding sites which bind Gd(III), Sm(III), and Yb(III) with a Kd of about 1.5 × 10⁻⁵ M. In comparison, 1 mol of Factor X binds 2 mol of Ca(II) with a Kd of 3 × 10⁻⁴ M and weakly binds many additional Ca(II) ions. No binding of Gd(III) to the venom protein was observed.
Dy(III), Yb(III), Tb(III), Gd(III), Eu(III), La(III), and Nd(III) cannot substitute for Ca(II) in the Ca(II)-dependent activation of Factor X by the venom protein at pH 6.8. Kinetic data consistent with models of competitive inhibition of Ca(II) by Nd(III) yielded a Ki of 1 to 4 × 10⁻⁵ M. The substitution of lanthanide ions for Ca(II) to promote formation of the Factor X-metal-venom protein complex without the activation of Factor X facilitated the purification of the coagulant protein from crude venom by affinity chromatography.
Using a column containing Factor X covalently bound to agarose and equilibrated in 10 mM Nd(III), Tb(III), Gd(III), or La(III), the coagulant protein was purified 10-fold in 40% yield from crude venom and migrated as a single band on gel electrophoresis in sodium dodecyl sulfate. These data suggest that lanthanide ions compete with Ca(II) for the metal binding sites of Factor X and facilitate the formation of a nonproductive ternary complex of venom protein-Factor X-metal. Tb(III) fluorescence, with emission maxima at 490 and 545 nm, is enhanced 10,000-fold in the presence of Factor X. The study of the participation of an energy donor intrinsic to Factor X in energy transfer to Tb(III) may be useful in the characterization of the metal binding sites of Factor X.

Bovine Factor X, a plasma glycoprotein with a molecular weight of 56,000, participates as a zymogen in an intermediate step during the initiation of blood coagulation (1-5). Factor X may be activated physiologically by the intrinsic (6) or by the extrinsic pathway (7) of blood coagulation. Alternatively, Factor X may be activated by the coagulant protein of Russell's viper venom (8). Kinetic analyses of this reaction are consistent with the Michaelis-Menten model for enzymes in which the venom protein is an enzyme and Factor X is a substrate (9). The activation of Factor X by the venom protein has an absolute calcium requirement (10). As with many calcium-dependent reactions, the absence of suitable electronic and magnetic properties of calcium has limited the study of the interaction of calcium with Factor X and the venom protein.
For these reasons we have examined the effect of the substitution of lanthanide ions for calcium in the interaction of the venom coagulant protein and Factor X.
In this communication we demonstrate that lanthanide ions bind tightly to the metal binding sites of Factor X, competitively inhibit Ca(II)-dependent Factor X activation by the venom coagulant protein, and facilitate the metal-dependent binding of Factor X and coagulant protein.
We describe an approach for the purification of coagulant protein by affinity chromatography using lanthanide ions to inhibit Ca(II)-dependent catalysis and to facilitate metal-dependent protein complex formation. This method may have general application to the affinity purification of proteins involved in Ca(II)-dependent protein interactions such as those participating in blood coagulation.
METHODS AND MATERIALS
Bovine Factor X, purified from fresh bovine plasma by BaSO₄ adsorption and DEAE-Sephadex chromatography (22), appeared homogeneous by disc and sodium dodecyl sulfate gel electrophoresis. Factor X and the coagulant protein of Russell's viper venom activities were assayed as previously described (8). Protein concentration was estimated from the absorbance at 280 nm using an E¹% at 280 nm of 9.5 for Factor X (22) and 13.4 for the venom protein (5). Crude Russell's viper venom (Sigma), 20 mg, was dissolved in 1 ml of 25 mM imidazole, 0.5 M NaCl, pH 6.8, and dialyzed at 4° for 3 hours against 500 ml of the same buffer.
The solution, after clarification by centrifugation at 4,000 × g for 10 min in a Sorvall RC-2B refrigerated centrifuge, was adjusted to 10 mM NdCl₃ by the addition of 0.1 M NdCl₃ and incubated for 2 hours at 4°. A fine white precipitate that formed was removed by centrifugation.
The supernatant (1 ml) was applied to a column of Sepharose-Factor X (0.7 × 3 cm) equilibrated with 25 mM imidazole, 0.5 M NaCl, 10 mM NdCl₃, pH 6.8, at 4°. The column was washed with the same buffer at a flow rate of 30 ml per hour until no further protein was eluted, as monitored by the absorbance of the eluate at 280 nm. Bound protein was eluted from the column with 25 mM imidazole, 0.5 M NaCl, 10 mM EDTA, pH 6.8, and collected in 1-ml fractions.
After the removal of EDTA by exhaustive dialysis against 25 mM imidazole, 0.15 M NaCl, pH 6.8, the fractions were analyzed for protein concentration and coagulant protein activity. When necessary, the venom protein was concentrated in an Amicon ultrafiltration cell employing a PM 10 membrane and stored at -15°.
Kinetics of Factor X Activation-Substitution of lanthanide ions for Ca(II) in the activation of Factor X by the venom protein was studied qualitatively using 1 μM, 10 μM, 0.1 mM, or 1.0 mM lanthanide ions in place of 8 mM Ca(II) in the Factor X assay (8). In other experiments, the kinetics of the Ca(II)-dependent activation of Factor X by the venom protein in the presence of Nd(III) were studied employing a one-stage assay for activated Factor X (3).
Under the conditions employed, the development of activated Factor X from Factor X was linear for 12.5 min in the presence of 8 mM Ca(II).
The velocitv of the hvdrolvsis of Factor X by the coagulant.protein is expressed in units of activated Factor X activity generated per min. The reaction, containing 57 rg of Factor X, CaCl,, and NdCl, in 0.3 ml of 25 mM imidazole, pH 6.8 at 37", was initiated with the addition of crude venom (2 pg) in 0.1 ml of 25 mM imidazole, pH 6.8. After 5 or 10 min at 37", a O.l-ml aliquot of the reaction mixture was diluted into 0.4 ml of 15 mM Tris-HCl, 0.1 M NaCl, pH 7.5, at 37", and a 0.1.ml aliquot of this solution was added simultaneously with 0.1 ml of 25 mM CaCh, I5 mM Tris-HCI, 0.1 M NaCI, pH 7.5, to a preincubated mixture of 0.1 ml of pooled human plasma and 0.1 ml of phospholipid at 37". The clotting time was determined and the activated Factor X activity calculated using linear curves constructed from plots of the logarithm of the clotting time (s) versus the logarithm of act.ivated Factor X concentration.
The Ca(I1) and the lanthanide ion concentrations were varied as indicated. Binding of Metal Ions, Factor X, aud Venom Protein-The binding of lanthanide ions and calcium to Factor X or the venom coagulant protein was evaluated at 25", at pH 6.8, by the steady state dialysis method of Colowick and Womack (25) using radioactive lK"Gd(III), i"iSm(III), 169Yb(III), or '%a(II Tri-Carb (model 3390) liauid scintillation spectrometer.
Data were interpreted using a Scatchard plot (27). In the graphical analysis, r is the number of moles of Gd(III) bound per mole of Factor X and c is the molar concentration of unbound Gd(III). Linear plots describing the upper and lower limits of the slope were obtained by linear regression analysis using a Wang 500 calculator.
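For a single class of sites, the Scatchard relation is r/c = (n - r)/Kd, so regressing r/c on r recovers Kd from the slope and the number of sites n from the x-intercept. The short sketch below reproduces that arithmetic on synthetic data (all numbers illustrative); with two classes of sites, as observed here, the plot is curved and the limiting slopes are fit separately.

```python
import numpy as np

def scatchard_fit(r, c):
    """Linear Scatchard analysis: regress r/c on r. For one class of sites,
    r/c = (n - r)/Kd, so the slope is -1/Kd and the x-intercept is n."""
    r = np.asarray(r, dtype=float)
    y = r / np.asarray(c, dtype=float)
    slope, intercept = np.polyfit(r, y, 1)
    kd = -1.0 / slope
    n_sites = -intercept / slope     # x-intercept of the regression line
    return kd, n_sites

# Synthetic data for two tight sites with Kd = 4e-7 M, generated by
# inverting r/c = (n - r)/Kd with n = 2:
r_obs = np.array([0.4, 0.8, 1.2, 1.6])
c_obs = 4e-7 * r_obs / (2.0 - r_obs)
kd, n = scatchard_fit(r_obs, c_obs)
print(kd, n)   # ~4e-7 M, ~2 sites
```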
Fluorescence spectra were obtained on a Perkin-Elmer model MPF spectrofluorometer.
RESULTS

Interaction of Lanthanide Ions with Factor X-The binding of lanthanide ions to Factor X was examined by the rate dialysis method (25) using radioactive trivalent lanthanide ions. In experiments with ¹⁵³Gd(III), the rate of dialysis of 1.6 μM ¹⁵³Gd(III) across the dialysis membrane was 1 × 10⁴ cpm per min in the absence of Factor X and 746 cpm per min in the presence of 19 μM Factor X. The subsequent stepwise addition of unlabeled GdCl₃ to concentrations ranging from 3.2 μM to 91 μM was associated with a stepwise increase in the rate of ¹⁵³Gd(III) dialysis. At Gd(III) concentrations greater than 25 μM, turbidity of the protein solution was noted in the dialysis cell. Using the analysis of Colowick and Womack (25) to determine the concentrations of Gd(III) free in solution and bound to Factor X, these data were interpreted using a Scatchard plot (27) (Fig. 1A). The binding of Sm(III) and Yb(III) to Factor X was examined similarly by rate dialysis at pH 6.8 and 25°; these data, analyzed using a Scatchard plot, are presented in Fig. 1B.

Interaction of Calcium(II) with Factor X-The interaction of calcium(II) with Factor X was studied at pH 6.8 and 25° by rate dialysis using ⁴⁵Ca(II) in a solution containing 10 mg per ml of Factor X (1.6 × 10⁻⁴ M).
Although considerable scatter of the data points was noted in multiple experiments, extrapolation of a line representing the best least mean square fit of the data suggested that 2 mol of Ca(II) bind to 1 mol of Factor X with a Kd of 3.1 × 10⁻⁴ M. Additionally, many Ca(II) ions bind to the protein at higher Ca(II) concentrations, but measurement of these weak interactions was beyond the technical limits of the method.
A summary of the interaction of metals with Factor X is shown in Table I.
Kinetics-Substitution of lanthanide ions for the Ca(II) ions required for the activation of Factor X by the venom protein was examined.
The presence of Dy(III), Yb(III), Tb(III), Gd- Upper, A column of Sepharose-Factor X was equilibrated at 4" with 10 mM NdCla, 0.5 M NaCl, 25 mM imidazole, pH 6.8; the crude venom-Nd(II1) solution containing 20 mg of protein in 1 ml of buffer was applied and developed with about 15 ml of buffer; bound protein, containing coagulant activity, was eluted using 10 mM EDTA, 0.5 M NaCl, 25 mM imidazole, pH 6.8 (arrow).
Lower, identical experiment as described above except that NdC13 was deleted from the initial equilibration solution.
When crude Russell's viper venom in 10 mM NdCl₃ was applied to a column of Sepharose-Factor X, most of the crude venom protein did not adhere to the derivatized Sepharose (Fig. 3, upper panel). The bound protein, eluted with 10 mM EDTA, exhibited a 10-fold increase (range, 8- to 15-fold) in the specific activity of the coagulant protein compared to crude venom. About 75% of the original coagulant protein activity applied to the column was recovered in either the bound or the unbound material; one-half of the coagulant protein activity was associated with the bound protein fraction.
Sodium dodecyl sulfate gel electrophoresis of this fraction yielded a major band representing greater than 95% purity and corresponding to a molecular weight of 62,000 (Fig. 4). This is in good agreement with the molecular weight of 60,000 for the venom coagulant protein obtained by sodium dodecyl sulfate gel electrophoresis of protein purified by DEAE-cellulose chromatography and gel filtration on Sephadex G-200 (30). When large quantities of protein were applied to the gels, some low molecular weight material could be identified, which was thought to be due to nonspecific binding of protein to the Sepharose-Factor X column.
In control experiments, no protein adhered to the Sepharose-Factor X conjugate in the absence of metal ions (Fig. 3, lower panel). Affinity chromatography was also performed with other concentrations of lanthanide solutions in columns at 4°. The use of 1 mM NdCl₃ yielded smaller quantities of bound protein, representing 28% of the applied coagulant protein activity. No protein was bound to the Sepharose-Factor X conjugate in the presence of 0.1 mM NdCl₃. Affinity chromatography performed with 10 mM NdCl₃ at 25° consistently resulted in the leaching of bound coagulant protein from the Sepharose-Factor X column prior to elution with EDTA; at 4°, recovery of the coagulant protein in the bound fraction was optimized. Maximal binding of the venom coagulant protein was observed when a large excess of crude venom was placed onto the column. When smaller quantities of venom were applied, the protein content of the bound fraction and the specific coagulant protein activity were decreased. Presumably, the binding constant describing the interaction of Factor X and coagulant protein in the presence of Nd(III) is such that, given the fixed concentration of Factor X bound to the Sepharose, higher concentrations of venom coagulant protein increase the amount of Factor X-Nd(III)-venom coagulant protein complex. The specific removal of the venom protein from the Sepharose-Factor X conjugate was facilitated by the chelation of lanthanide ions by EDTA. EDTA forms very tight complexes with lanthanide ions (11) and competes favorably with the metal binding sites of the protein for the metal ions.

Fig. 5 legend: A solution containing Factor X (2.8 μM), 0.1 M NaCl, and 2.2 μM TbCl₃ at pH 6.8 and 25° was irradiated at 280 nm and the emission spectrum recorded. The slit width of the excitation beam was 16 nm; that of the emission beam was 10 nm. Emission maxima were observed at 490 and 545 nm. The second-order scatter peak is centered at 560 nm. The amplitude of the fluorescence emission is given in arbitrary units.
The specificity of the interaction of the venom coagulant protein with Factor X covalently bound to Sepharose was evaluated using columns of unconjugated Sepharose. In the presence of 10 mM NdCl3, no detectable fraction of crude venom that could be eluted with EDTA bound to the unconjugated Sepharose column. These results suggest that the interaction of the venom coagulant protein with Factor X covalently bound to Sepharose is specific and probably simulates the metal-dependent ternary complex formed in solution.
Binding of Terbium(III) to Factor X-The fluorescence properties of Tb(III) were studied in the presence of Factor X. Excitation at 280 nm of a solution of Factor X (2.8 µM) and TbCl3 (2.2 µM) in 0.1 M NaCl at pH 6.8 and 25° produced emission maxima at 490 and 545 nm (Fig. 5) as well as intrinsic tryptophan emission centered at about 344 nm. A small maximum at 560 nm represents the second-order scatter peak; variation of the excitation wavelength predictably altered the wavelength of this peak. Solutions of Factor X in the absence of Tb(III) showed intrinsic tryptophan fluorescence but no emission at 490 or 545 nm. The emission spectrum of 2.2 µM Tb(III) in 0.1 M NaCl at pH 6.8 in the absence of Factor X was not observable, while excitation at 280 nm of solutions containing 10 mM Tb(III) in 0.1 M NaCl at pH 6.8 produced emission spectra with maxima about one-half the amplitude of those obtained for 2.2 µM Tb(III) in the presence of Factor X. From these data it would appear that Tb(III) exhibits about a 10,000-fold fluorescence enhancement when bound to Factor X. The uncorrected fluorescence excitation spectrum of the Tb(III)-Factor X complex, monitored at 490 nm, had a maximum at 283 nm; this spectrum was similar to the ultraviolet absorption spectrum of Factor X in the aromatic region. The ultraviolet absorption difference spectrum of Factor X-Tb(III) versus Factor X plus Tb(III) was minimal, indicating that the increased Tb(III) fluorescence in the presence of Factor X is not due to increased absorption at 280 nm but to energy transfer from Factor X to Tb(III).

[Fig. 6 legend. Tb(III) was added to a solution of Factor X (2.8 µM) in 0.1 M NaCl, pH 6.8 at 25°, and the fluorescence emission at 490 nm and 545 nm was monitored. Protein precipitation was noted when Tb(III) was added in excess of 25 µM.]
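The ~10,000-fold figure can be rationalized from the two measurements quoted above; the sketch below is a back-of-envelope estimate under the stated concentrations and the factor-of-two amplitude ratio, offered for illustration only.

```python
# Back-of-envelope check of the ~10,000-fold enhancement:
# 2.2 uM Tb(III) bound to Factor X emits about twice the amplitude of
# 10 mM free Tb(III), so on a per-mole basis the enhancement is roughly:
free_conc, bound_conc = 10e-3, 2.2e-6   # mol/L
amplitude_ratio = 2.0                    # bound signal / free signal

enhancement = amplitude_ratio * free_conc / bound_conc
print(round(enhancement))  # ~9091, i.e. roughly 10,000-fold
```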
These results would suggest that a tyrosine or tryptophan residue in Factor X, in or near a terbium binding site(s), is an energy donor and that the protein-bound Tb(III) is an energy acceptor. An increase in fluorescence emission at 490 and 545 nm (Fig. 6) and a 10% quenching of intrinsic Factor X fluorescence was associated with the titration of Factor X with Tb(III) in 0.1 M NaCl at pH 6.8. Turbidity associated with protein precipitation was observed in solutions containing TbCl3 in excess of 25 µM.
Although the absence of a plateau precluded complete analysis of the titration experiments, a dissociation constant, Kd, describing the interaction of Tb(III) and Factor X was roughly estimated to be about 2 × 10⁻⁵ M per n, where n is the number of Tb(III) ions bound to 1 mol of Factor X which participate in significant energy transfer (31). Because there are four to six lower affinity metal binding sites determined by the rate dialysis experiments, n is an integer between 1 and 6. A Kd between 3 and 19 µM may be estimated from the fluorescence titration; these values correspond favorably to the dissociation constant determined by rate dialysis describing the interaction of other lanthanide ions with the lower affinity binding sites of Factor X.
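A minimal sketch of this estimate is given below; it simply evaluates Kd ≈ 2 × 10⁻⁵ M / n for the admissible integer values of n, reproducing the quoted 3-19 µM range within rounding.

```python
# Kd ~ 2e-5 M per n, with n (the number of energy-transfer-competent
# Tb(III) sites per mole of Factor X) an integer between 1 and 6.
KD_PER_N = 2e-5  # mol/L

for n in range(1, 7):
    print(f"n = {n}: Kd ~ {KD_PER_N / n * 1e6:.1f} uM")
# n = 1 gives ~20 uM and n = 6 gives ~3.3 uM, bracketing the
# 3-19 uM range quoted in the text.
```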
DISCUSSION
The mechanism of the activation of Factor X by the venom coagulant protein has been shown to be enzymatic (9), involving proteolytic cleavage of a single bond on Factor X (3-5). The products of this reaction include polypeptide fragments of 44,000 and 11,000 molecular weight (4) whose structures appear to be highly complementary and which bind to each other with high affinity (5). The formation of binary and ternary complexes between lanthanide ions and bovine Factor X or the coagulant protein of Russell's viper venom (or both) was investigated with the objective of defining the metal binding properties of Factor X in the presence and absence of the venom protein, and the effect of the substitution of lanthanide ions for calcium on the Ca(II)-dependent activation of Factor X by the venom protein.
The similarity between the ionic radii and the electrostatic binding to oxygen ligands of trivalent lanthanide ions and calcium(II) originally led to the suggestion that lanthanide ions, with interesting magnetic and electronic properties, might facilitate the physical and biological characterization of Ca(II) binding proteins (11,12). These ions have subsequently proved useful in characterizing the metal binding sites of Ca(II) binding proteins (14,16,18,20), in examining three-dimensional structures of proteins and the active sites of proteins by x-ray crystallography and nuclear magnetic resonance relaxation techniques (15,16,19,21), and in evaluating the role of metal ions in the catalytic mechanism of hydrolases (13,14,17,18).

We have employed kinetic models describing the interaction of the venom protein with Factor X in the presence of Ca(II) and lanthanide ions. A salient feature of these models is that a single Ca(II) ion interacts with Factor X and venom protein to facilitate ternary complex formation and Factor X hydrolysis.
Furthermore, trivalent lanthanide ions compete with Ca(II) for the occupancy of this essential metal binding site, and enhance the formation of a stable, nonproductive complex of Factor X-metal-venom protein.
The linearity of the data at each Ca(II) concentration and the common intercept of all three sets of data are consistent with these models of competitive inhibition. However, because of assumptions in these models, the estimated Ki of lanthanide ions in this reaction at pH 6.8, 1 to 4 µM, must be considered a first approximation. This Ki may be compared to the Kd values of 0.4 µM and 15 µM describing the interaction of lanthanide ions with the high and lower affinity metal binding sites, respectively, as determined by the rate of dialysis method.
We suggest that, within the experimental uncertainty of these kinetic data, the critical metal binding site(s) which must be occupied by Ca(II) for activation of Factor X is one (or both) of the high affinity sites on Factor X.
The similarities of certain structural features of the serine proteases make comparison of lanthanide interaction with bovine Factor X and bovine trypsinogen of interest. Trypsinogen and Factor X are zymogens of the serine proteases trypsin and activated Factor X, respectively, whose active site and NH2-terminal amino acid sequences demonstrate marked homology (33,34). Lanthanide ions bind to two metal binding sites on trypsinogen (13), enhance the rate of trypsin-catalyzed trypsinogen activation (13), and are bound, albeit weakly, to a single metal binding site on porcine trypsin in close proximity to a tryptophan residue (20). It would appear that certain structural features of Factor X and trypsinogen, including metal binding properties, may have been preserved during evolution from a common ancestral protease.
As is the case with transferrin (35) and trypsin (20), the 10,000-fold increase in the intensity of the Tb(III) emission is due to energy transfer through a donor in the protein.
Terbium(III) exhibits a characteristic fluorescence emission spectrum which is due to the f-f electronic transitions associated with irradiation by ultraviolet light (36). The magnitude of this emission is enhanced by energy transfer through contact or dipole-dipole interactions when Tb(III) is liganded in close proximity to a fluorophore which can participate as an energy donor. The excitation maximum of 283 nm for Tb(III) emission from the Factor X-Tb(III) complex, and the quenching of intrinsic tryptophan fluorescence associated with the binding of Tb(III) to Factor X, suggest that a tryptophan residue may be the energy donor within the protein.
Further studies of energy transfer in the Tb(III)-Factor X interaction should facilitate characterization of the metal binding sites of Factor X.

Successful applications of affinity chromatography to the purification of proteins have employed specific ligands covalently bound to an inert matrix and elution systems for the specific removal of proteins which interact with the derivatized matrix. For the purification of enzymes with protein substrates, affinity [...]
Pleiotropic activities of nitric oxide-releasing doxorubicin on P-glycoprotein/ABCB1
Doxorubicin is one of the first-line chemotherapeutic drugs for osteosarcoma, but the rate of success is below 60% of patients. The main cause of this low success is the presence of P-glycoprotein (P-gp/ABCB1) that effluxes the drug, limiting the intracellular accumulation and toxicity of Doxorubicin. P-gp also inhibits immunogenic cell death promoted by Doxorubicin. Nitric oxide-releasing Doxorubicin is a synthetic anthracycline effective against P-gp-positive osteosarcoma cells. It is not known how it impacts on P-gp expression and immunogenic cell death induction. To address this point, we treated human Doxorubicin-sensitive osteosarcoma U-2OS cells and their resistant variants, with increasing amounts of P-gp, with Doxorubicin and Nitric oxide-releasing Doxorubicin. While Doxorubicin was cytotoxic only in U-2OS cells, Nitric oxide-releasing Doxorubicin maintained its cytotoxic properties in all the resistant variants. Nitric oxide-releasing Doxorubicin elicited a strong nitrosative stress in whole cell extracts, endoplasmic reticulum and plasma membrane. P-gp was nitrated in all these compartments. The nitration caused protein ubiquitination and lower catalytic efficacy. The removal of P-gp from the cell surface upon Nitric oxide-releasing Doxorubicin treatment disrupted its interaction with calreticulin, an immunogenic cell death inducer that is inhibited by P-gp. Drug-resistant cells treated with Nitric oxide-releasing Doxorubicin exposed calreticulin, were phagocytized by dendritic cells and expanded anti-tumor CD8+ T-lymphocytes. The efficacy of Nitric oxide-releasing Doxorubicin was validated in Dox-resistant osteosarcoma xenografts and was higher in immune-competent humanized mice than in immune-deficient mice, confirming that part of Nitric oxide-releasing Doxorubicin efficacy relies on the restoration of immunogenic cell death. Nitric oxide-releasing Doxorubicin was a pleiotropic anthracycline reducing the activity and expression of P-gp and restoring immunogenic cell death. It can be an innovative drug against P-gp-expressing/Doxorubicin-resistant osteosarcomas.
Introduction
Osteosarcoma is a common tumor in childhood and adolescence. The gold standard of treatment is surgery coupled with neo-adjuvant and adjuvant chemotherapy, based on doxorubicin (Dox), cisplatin and methotrexate [1]. The average efficacy of chemotherapy is around 60% [2], because osteosarcoma is characterized by multiple mechanisms of drug resistance. The overexpression of the detoxifying enzyme glutathione-S-transferase p1 determines resistance to cisplatin [3]. The reduced expression of the folate carrier SLC19A1 [4,5,6] or of the folate-metabolizing enzyme folylpoly-γ-glutamate synthetase [7] causes resistance to methotrexate. P-glycoprotein (P-gp) is the main determinant of resistance to Dox, which is actively effluxed by this plasma membrane-associated transporter [8]. Indeed, P-gp is recognized as a robust negative clinical prognostic factor in osteosarcoma [9]. In recent years, three generations of P-gp inhibitors have been produced [10]. Although promising in vitro, most inhibitors failed in patients because of poor specificity, undesired toxicity, and unfavorable interactions with other drugs and pharmacokinetic profiles [11].
Boosting the anti-tumor activity of dendritic cells (DCs) [12] or cytotoxic CD8+ T-lymphocytes [13], and using CAR T-cells [14] or immune-checkpoint inhibitors [15], have been tested in preclinical models or clinical trials as alternative strategies in the treatment of osteosarcoma. Currently, these immune-therapy-based approaches do not offer a significant advantage compared to chemotherapy [16].
Specific chemo-immune-therapy protocols may improve the efficacy of antitumor treatments. For instance, trabectedin, a drug employed in sarcomas and ovarian cancers, reduces osteosarcoma growth and metastases by recruiting CD8+ T-lymphocytes. These CD8+ T-lymphocytes, however, express high levels of the immune-suppressive checkpoint PD-1 [17], raising doubts on their effective functionality. Dox is a chemotherapeutic drug inducing immunogenic cell death (ICD): it increases the translocation, from the endoplasmic reticulum (ER) to the plasma membrane, of calreticulin (CRT), a molecule that stimulates the phagocytosis of tumor cells by DCs and expands anti-tumor CD8+ T-lymphocytes [18]. However, the presence of P-gp impairs Dox-mediated ICD for at least two reasons. First, P-gp-expressing cells have a low accumulation of Dox that is insufficient to trigger ICD. Second, P-gp co-localizes with CRT on the plasma membrane, impairing its immune-sensitizing functions [19,20]. Dox fails to induce ICD in resistant/P-gp-expressing osteosarcoma cells [21], and this failure mediates part of the resistance to the drug. In a previous work, we demonstrated that synthetic Dox derivatives with tropism for the ER and the ability to perturb protein folding by releasing nitric oxide (NO) or H2S are cytotoxic against P-gp-expressing osteosarcoma cells. These synthetic Dox derivatives also induce ICD [22].
NO is an inhibitor of the catalytic activity of P-gp: by reacting with superoxide (O2•−), NO generates peroxynitrite (ONOO−), which nitrates both nucleic acids and proteins on tyrosine residues [23]. One of the nitrated proteins is P-gp, whose catalytic activity is reduced by nitration. This event increases the retention of Dox in P-gp-expressing myeloid leukemia cells, malignant pleural mesotheliomas and ovary cancers and facilitates Dox-induced ICD [24]. It has not been fully elucidated whether nitric oxide-releasing Dox (NODox) restores Dox cytotoxicity and ICD by affecting the P-gp amount, e.g. by inducing protein unfolding or destabilization, besides affecting P-gp activity.
The aim of this study is to analyze whether NODox affects the stability and expression of P-gp in human Dox-resistant osteosarcoma cells and improves tumor killing by restoring a complete ICD.
Chemicals
Fetal bovine serum and culture medium were purchased from Invitrogen Life Technologies (Carlsbad, CA), plastic ware from Falcon (Becton Dickinson, Franklin Lakes, NJ), and reagents for electrophoresis from Bio-Rad Laboratories (Hercules, CA). A BCA kit (Sigma-Merck-Millipore, St. Louis, MO) was used to measure protein concentration. Dox was purchased from Sigma-Merck-Millipore. NODox was synthesized as reported in [25]. All other reagents were from Sigma-Merck-Millipore.
Dox accumulation
Dox accumulation was measured by a fluorimetric assay [27], using a Synergy HT Multi-Detection Microplate Reader (Bio-Tek Instruments, Winooski, VT). The results were expressed as nmoles Dox/mg protein, according to a calibration curve.
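As an illustration of this kind of conversion, the sketch below fits a linear calibration curve to Dox standards and converts a raw fluorescence reading into nmoles Dox/mg protein; all readings are hypothetical, not the assay's actual values.

```python
# Hypothetical sketch of a linear calibration-curve conversion for the
# fluorimetric Dox assay; standard and sample readings are invented.
import numpy as np

std_nmol = np.array([0.0, 0.5, 1.0, 2.0, 4.0])           # Dox standards (nmol)
std_fluo = np.array([12.0, 110.0, 205.0, 400.0, 790.0])  # fluorescence (a.u.)

# Inverse calibration: nmol as a linear function of fluorescence.
slope, intercept = np.polyfit(std_fluo, std_nmol, 1)

sample_fluo = 260.0   # hypothetical sample reading
protein_mg = 0.8      # mg protein in the cell lysate

nmol_per_mg = (slope * sample_fluo + intercept) / protein_mg
print(f"{nmol_per_mg:.2f} nmol Dox/mg protein")
```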
Apoptosis and cell viability
Apoptosis was measured fluorimetrically by quantifying the activation of caspase 3, indicated by the cleavage of the fluorogenic substrate Ac-Asp-Glu-Val-Asp-7-amino-4-methylcoumarin (DEVD-AMC) [20]. Fluorescence was read using a Synergy HT Multi-Detection microplate reader (Bio-Tek Instruments) and converted into nmoles AMC/mg cellular protein, using a calibration curve of AMC solutions. Cell viability was evaluated with the ATPlite Luminescence Assay System (PerkinElmer, Waltham, MA). Results were expressed as the percentage of viable cells relative to untreated cells, considered 100% viable.
Nitrite measurement
The amount of nitrite, the stable derivative of NO, was measured spectrophotometrically in the medium by the Griess method [27], using a Packard EL340 microplate reader (Bio-Tek Instruments). Results were expressed as nmoles nitrite/mg cellular proteins, using a calibration curve.
Nitrotyrosine measurement
The amount of nitrotyrosines, an index of nitrosative stress, was measured in whole cell lysates, ER extracts or plasma membrane extracts, using the Nitrotyrosine ELISA kit (Hycult Biotechnology, Uden, The Netherlands). ER and plasma membrane extracts were isolated with the Endoplasmic Reticulum Isolation Kit (Sigma-Merck-Millipore) and the Cell Surface Protein Isolation kit (Thermo Fisher Scientific Inc., Rockford, IL), respectively. The absorbance was read with a Packard EL340 microplate reader and converted into pmoles nitrotyrosine/mg proteins according to the titration curve.
P-gp ATPase activity
The rate of ATP hydrolysis, an index of the catalytic cycle of P-gp, was measured in a spectrophotometric assay after immunoprecipitating P-gp from 100 µg of membrane-associated proteins [28]. Results were expressed as nmoles of hydrolyzed phosphate (Pi)/mg protein.
In vivo tumor growth
100 µl of 1×10⁷ U-2OS/DX580 cells, re-suspended in Matrigel, were injected subcutaneously (s.c.) into 6-week-old female NOD SCID gamma (NSG) mice or NSG mice engrafted with human hematopoietic CD34+ cells (Hu-CD34+; The Jackson Laboratories, Bar Harbor, MA). Mice (5/cage) were housed following 12 h light/dark cycles, with drinking water and food ad libitum. Tumor growth was monitored daily by caliper and calculated as (L × W²)/2, where L = tumor length and W = tumor width. When tumors reached a volume of 50 mm³, mice were randomized and treated as follows (on days 3, 9 and 15 after randomization): 1) vehicle group, treated with 0.1 ml saline solution intravenously (i.v.); 2) Dox group, treated with 5 mg/kg Dox i.v.; 3) NODox group, treated with 5 mg/kg NODox i.v. Tumor volumes and mouse weights were monitored daily. Zolazepam (0.2 ml/kg) and xylazine (16 mg/kg) were used for euthanasia on day 21. Tumors were excised and weighed. The Bio-Ethical Committee of the Italian Ministry of Health (#122/2015-PR) approved the experimental protocol.
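A minimal sketch of the caliper-based volume formula used above is shown below; the measurements are hypothetical.

```python
# Tumor volume from caliper measurements, V = (L x W^2) / 2.
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Approximate tumor volume in mm^3 from length and width."""
    return (length_mm * width_mm ** 2) / 2

# Hypothetical readings around the 50 mm^3 randomization threshold.
print(tumor_volume(6.0, 4.0))  # 48.0 mm^3 -> below threshold
print(tumor_volume(6.5, 4.0))  # 52.0 mm^3 -> eligible for randomization
```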
Statistical analysis
Results are expressed as means ± SD and were evaluated with a one-way analysis of variance (ANOVA), using the Statistical Package for Social Science (SPSS) software (IBM SPSS Statistics v. 19). P < 0.05 was considered statistically significant.
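For readers without SPSS, the same one-way ANOVA can be reproduced with SciPy, as in the sketch below; the three groups of values are hypothetical.

```python
# One-way ANOVA across three treatment groups (hypothetical data),
# mirroring the SPSS analysis described above.
from scipy import stats

vehicle = [510, 560, 540, 580, 525]   # e.g. tumor volumes, mm^3
dox     = [495, 550, 530, 570, 515]
nodox   = [210, 260, 240, 230, 255]

f_stat, p_value = stats.f_oneway(vehicle, dox, nodox)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # p < 0.05 -> significant
```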
Nitric oxide-releasing doxorubicin is cytotoxic in P-gp-expressing human osteosarcoma cells
Dox and NODox (Fig. 1A) were evaluated for their cytotoxic potential in human Dox-sensitive U-2OS cells and in the U-2OS/DX30, U-2OS/DX100 and U-2OS/DX580 sublines, which contain increasing amounts of P-gp (Fig. 1B). As shown in Fig. 1C, the intracellular Dox accumulation was progressively lower in the resistant sublines, while the amount of NODox was the same, independently of the levels of P-gp. In all the resistant variants, intracellular NODox remained significantly higher than Dox. Consistently, Dox triggered apoptosis, i.e. it activated caspase 3 (Fig. 1D), and decreased cell viability (Fig. 1E) in U-2OS cells only. By contrast, NODox induced apoptosis and decreased the number of viable cells without differences between sensitive and resistant cells (Fig. 1D-E). When accumulated within the cell, Dox is able to increase the synthesis of NO by up-regulating the NO synthase 2 (NOS2) gene: this mechanism mediates part of the Dox anticancer effects [24]. In line with the intracellular accumulation, Dox increased nitrite, the stable NO derivative [27], only in U-2OS cells; this property was progressively lost in the resistant variants (Fig. 1F). On the contrary, the nitrite amount after treatment with NODox was always significantly higher than that detected in the supernatants of untreated cells. Moreover, nitrite did not differ between Dox-sensitive and Dox-resistant cells after treatment with NODox (Fig. 1F), which releases NO directly from its NO donor moiety [25].
To investigate if and how the increased levels of NO impact on P-gp expression, we focused on Dox-sensitive U-2OS cells and on U-2OS/DX580 cells, i.e. the most Dox-resistant variant.
Nitric oxide-releasing doxorubicin induces P-gp nitration followed by ubiquitination
The release of NO and the generation of ONOO− induce a nitrosative stress that may alter the folding, stability and activity of target proteins [31]. While Dox increased the amount of nitrotyrosines in whole cell lysates from U-2OS cells but not from U-2OS/DX580 cells, NODox induced a robust and comparable nitration in both cell lines (Fig. 2A). In U-2OS cells, where the levels of P-gp were undetectable, we did not find any nitration of the protein, neither by Dox nor by NODox. By contrast, NODox induced the nitration of P-gp in the U-2OS/DX580 variant, where the protein was abundantly expressed (Fig. 2B). P-gp is synthesized and folded within the ER, then undergoes glycosylation in the Golgi apparatus and migrates to the plasma membrane [19,32]. Notably, the same pattern of nitrotyrosines observed in whole cell lysates was detected in ER (Fig. 2C) and plasma membrane (Fig. 2D) extracts: Dox elicited a small nitration only in U-2OS cells, while NODox induced a stronger nitration in both U-2OS and U-2OS/DX580 cells. Accordingly, in ER and plasma membrane, we did not detect any nitration of P-gp in U-2OS cells, nor in U-2OS/DX580 cells treated with Dox (Fig. 2E-F). Instead, a clear nitration of P-gp was present in both compartments of U-2OS/DX580 cells treated with NODox (Fig. 2E-F).
Notably, while Dox did not alter P-gp levels in U-2OS/DX580 cells, NODox reduced them in whole cell extracts (Fig. 2B), as well as in ER (Fig. 2E) and plasma membrane (Fig. 2F) fractions. PTIO, a NO scavenger that abrogated the increase of nitrotyrosines induced by NODox (Fig. 2G), also prevented the nitration and the decrease in the amount of P-gp in U-2OS/DX580 cells (Fig. 2H).
The down-regulation of P-gp was paralleled by an increase in its ubiquitination: while in U-2OS cells we did not detect any sign of ubiquitination, a band of approximately 170 kDa, corresponding to the mono-ubiquitinated form of P-gp, was visible in U-2OS/DX580 cells. NODox treatment also induced a strong poly-ubiquitination (Fig. 3A). In parallel, NODox, but not Dox, produced a significant decrease in the catalytic activity of P-gp from U-2OS/DX580 extracts (Fig. 3B).
Overall, these data suggest that NODox decreases P-gp activity and amount by inducing nitration and ubiquitination of the protein. NO triggers both processes, as demonstrated by the reduced ubiquitination (Fig. 3A) and restored ATPase activity (Fig. 3B) in resistant cells treated with PTIO.
By lowering the amount of P-gp, nitric oxide-releasing doxorubicin restores the immunogenic cell death in drug resistant osteosarcomas
Since P-gp prevents the ICD induced by CRT [24], we next investigated whether the reduction of P-gp elicited by NODox resulted in the restoration of ICD. In U-2OS cells, both Dox and NODox elicited the translocation of CRT to the plasma membrane; in U-2OS/DX580 cells, only NODox, which was well retained within the cells unlike Dox, elicited the exposure of CRT (Fig. 3C). When constitutively overexpressed in U-2OS/DX580 cells, CRT co-immunoprecipitated with P-gp, as already reported in P-gp-positive breast cancer cells [24]. In U-2OS cells we did not detect any interaction between P-gp and CRT, because these cells had undetectable amounts of P-gp (Fig. 1B). In P-gp-rich U-2OS/DX580 cells, we observed no co-immunoprecipitation between P-gp and CRT after treatment with either Dox or NODox (Fig. 3C): in Dox-treated cells, CRT was not translocated; in NODox-treated cells, P-gp was dramatically reduced on the cell surface. We thus hypothesized that NODox-treated U-2OS/DX580 cells were functionally equivalent to U-2OS cells in terms of ICD. In sensitive cells, both Dox and NODox increased the DC phagocytosis of tumor cells (Fig. 3D) and expanded the activated (i.e. CD107+ IFN-γ+) CD8+ T-lymphocytes (Fig. 3E). In U-2OS/DX580 cells, only NODox, but not Dox, increased phagocytosis and the activity of cytotoxic T-lymphocytes (Fig. 3D-E), suggesting a full restoration of ICD in these resistant cells.
The anti-tumor efficacy of NODox was finally validated in U-2OS/DX580 xenografts, where Dox was completely ineffective (Fig. 4A). In both immune-deficient (left panel) and immune-competent (right panel) mice, NODox reduced osteosarcoma growth. The rate of tumor growth was lower in immune-competent animals, in agreement with previous findings [33], suggesting that the host immune system is important in counteracting osteosarcoma growth. Notwithstanding the lower aggressiveness, Dox was unable to reduce tumor growth in immune-competent mice. By contrast, NODox was more effective in immune-competent mice than in immune-deficient ones (Fig. 4A), suggesting that part of NODox cytotoxicity is due to the engagement of the host immune system against the tumor. The reduction in tumor growth was confirmed by the weights of the excised tumors (Fig. 4B): in both immune-deficient and immune-competent mice, NODox, but not Dox, significantly reduced tumor weight. The weights of tumors were slightly lower in immune-competent animals (Fig. 4B), in agreement with the different tumor growth rates (Fig. 4A). NODox confirmed its maximal efficacy in immune-competent mice (Fig. 4B). The animal weight did not vary significantly among the treatment groups (Fig. 4C), suggesting that tumor-related cachexia or treatment-related toxicities were unlikely in our experimental protocol.
Discussion
NO is a double-edged sword in cancer: at nanomolar concentrations it may have a pro-tumoral effect, while at high micromolar concentrations it has anti-cancer activity [34]. The reaction of NO with O2•− produces large amounts of reactive nitrogen species (RNS), such as ONOO−. This radical can exert cytotoxic effects or act as a post-translational modifier of proteins, by inducing a stable tyrosine nitration or a labile cysteine nitrosylation [35]. Nitration and nitrosylation can increase or decrease protein activity, implying a fine modulation of signal transduction and catalytic functions [35]. Among the nitrated proteins, we previously identified several members of the ATP-binding cassette (ABC) transporter family, implicated in the efflux of chemotherapeutic drugs. Following nitration, the catalytic cycle of the transporter is inhibited and chemotherapeutic drugs are retained to a greater extent within cancer cells, increasing their cytotoxicity [36,24]. Besides NO donors [36], nitration of ABC transporters can be elicited by synthetic NO-releasing Dox derivatives that deliver both the NO donor inhibiting the ABC transporter and the chemotherapeutic drug within the tumor cell. These synthetic Dox derivatives [25,37] and their liposomal formulations [27,38] have successfully overcome the resistance to Dox in P-gp-expressing tumors, including osteosarcoma [22]. The higher the intracellular retention of Dox, the higher the direct killing of cancer cells and the ability to trigger ICD [30]. In the present work, carried out on osteosarcoma cells with increasing levels of P-gp, we confirmed that NODox maintained its cytotoxic potential, in terms of apoptosis induction, reduced cell viability and ICD induction, also in P-gp-expressing cells, where Dox was ineffective. Both NO and its downstream mediator cyclic guanosine 3',5'-monophosphate (cGMP) [34] can mediate such chemosensitization. Indeed, it has been shown that the type 5 cGMP-phosphodiesterase inhibitor Sildenafil, which increases the intracellular levels of cGMP, also increases Dox-mediated apoptosis in drug-resistant prostate cancer cells [39]. Sildenafil also directly impaired the efflux activity of P-gp, reversing the resistance to paclitaxel in P-gp-overexpressing cells [40]. Although we did not investigate whether cGMP was responsible for the reversion of resistance to Dox in osteosarcoma, our findings are in line with these previous works, because we clearly demonstrated that the release of NO, which can increase cGMP, from the synthetic NODox overcomes the resistance to parental Dox.
The first reason for the different effects of Dox and NODox was the intracellular retention of the two drugs. The amount of Dox retained within P-gp-expressing cells was likely insufficient to elicit any direct cytotoxic effect or to induce the ER stress that primes cells for ICD [41]. By contrast, NODox accumulation in U-2OS/DX580 cells was comparable to the accumulation of Dox in U-2OS cells, which underwent a classical ICD.
Second, one of the mechanisms triggering CRT translocation and ICD is the increase of NO, which activates the guanylate cyclase/PKG axis, induces a cytoskeleton rearrangement and promotes the translocation of CRT to the plasma membrane [19]. In addition, NO itself is an inducer of ER stress [42]: the nitrosative stress caused by NO up-regulates the ER stress sensor C/EBPβ LIP [43], which decreases P-gp [44] and increases CRT [29]. These events trigger a virtuous circle that amplifies Dox cytotoxicity, by decreasing the expression of its main efflux transporter and favoring the exposure of CRT. In our experimental conditions, Dox (in U-2OS cells) and NODox (in U-2OS and U-2OS/DX580 cells) elicited a nitrosative stress, as indicated by the increasing amount of nitrotyrosines in whole cells and in the key compartments (ER and plasma membrane) where P-gp is synthesized and active. P-gp was nitrated in all these compartments, although the mechanisms inducing nitration were likely different. Dox does not release NO, but it induces the production of micromolar amounts of NO by inducing NOS2 [24,36,42]. The induction of NOS2 is dose-dependent: it occurs in Dox-sensitive cells, not in Dox-resistant ones, where P-gp effluxes the drug [24,36]. This mechanism explains why Dox increased nitrotyrosines in U-2OS cells but not in U-2OS/DX580 cells. By contrast, NODox releases NO once it has entered the cells: although the intracellular localization of NODox is mainly mitochondrial [37], NO can diffuse intracellularly and react with O2•−. The ONOO− produced has a diffusion range of a few microns [45], which is sufficient to elicit nitration of proteins in different cellular compartments. The rate of nitration depends on the amount of ONOO−, on the amount of the target proteins, and on the number and exposure of tyrosines within the target proteins [46]. While the amount of tyrosines nitrated by NODox was the same in U-2OS and U-2OS/DX580 cells, suggesting a comparable level of NO released and ONOO− generated, P-gp was nitrated in U-2OS/DX580 cells only. The absence of P-gp in U-2OS cells and the abundance of the protein in U-2OS/DX580 cells may explain why we detected nitrated P-gp in the resistant variant only. In line with previous findings [43], the nitration determined a strong reduction in catalytic activity.
Interestingly, in U-2OS/DX580 cells treated with NODox, we noticed a reduction in the amount of P-gp in whole cell lysates, as well as in ER and plasma membrane extracts. Since nitration is accompanied by protein oxidation and unfolding [47], it can destabilize the target protein and prime it for ubiquitination. This mechanism was elicited by NODox that induced a strong NO-dependent poly-ubiquitination of P-gp, explaining the lower amount of protein detected in all cell compartments. A band of 170 kDa, likely corresponding to mono-ubiquitinated P-gp, was detected in U-2OS/DX580 cells. Mono-ubiquitination often promotes the endocytosis of membrane proteins within lysosomes [48]. In the case of P-gp, the lysosomal localization contributes to drug resistance, because the P-gp present on the lysosomal membrane is active and sequesters Dox within lysosomes [49]. Therefore, it is not surprising to find a constitutively mono-ubiquitinated P-gp in the highly resistant osteosarcoma cells. By contrast, poly-ubiquitination typically primes proteins for proteasomal degradation [50].
The removal of P-gp from the cell surface positively impacted the induction of ICD, since P-gp interacts with CRT, masks its immune-sensitizing functions and impairs the recognition of tumor cells by DCs [19,20]. P-gp-expressing cells are chemo- and immuno-resistant not only because of the low accumulation of Dox and low production of NO, but also because of the defective functions of CRT. Thanks to the reduction of the P-gp amount and to the increased production of NO, NODox increased the immune-sensitizing functions of CRT, which is free to translocate to the surface of resistant cells without suffering any inhibition by P-gp. As a consequence, P-gp-expressing U-2OS/DX580 cells treated with NODox have the same sensitivity to ICD as P-gp-negative U-2OS cells, being phagocytized by DCs and inducing a significant increase of activated CD8+ T-lymphocytes.
The efficacy of NODox and the involvement of the immune system in the cell death induced by the drug were also validated in U-2OS/DX580 xenografts, which were completely refractory to Dox. NODox significantly reduced tumor growth in immune-deficient mice, but it had a more pronounced effect in immune-competent humanized mice. A competent immune system is important in controlling osteosarcoma progression, as demonstrated by the lower growth rate of U-2OS/DX580 xenografts in humanized mice. In these models, NODox exerted a stronger anti-tumor activity, likely by activating the host immune system against the tumor. The critical role of a competent immune system against osteosarcoma is in agreement with a previous work showing that the activation of DCs by tumor lysates and the administration of an anti-CTLA4 antibody, which relieves the anergy of cytotoxic T-lymphocytes, slow down the progression of osteosarcoma [51].
Conclusions
We identified NODox as a drug with pleiotropic effects on P-gp and as a potentially effective agent against Dox-resistant osteosarcomas. Although it is well known that NO donors are both chemo- and immune-sensitizers [52], our approach is innovative because it combines the advantage of using Dox, already employed as a first-line treatment in osteosarcoma, with the advantage of using NO, an inhibitor of the expression and activity of P-gp. These two events reinforce the direct killing and the ICD induced by Dox, transforming chemo-immune-resistant osteosarcoma cells into chemo-immune-sensitive ones. Compared to parental Dox, NODox also has a safer cardiovascular toxicity profile [27,53], which constitutes an additional advantage. Together, this evidence suggests the possibility of starting clinical trials based on NODox in patients with Dox-resistant, P-gp-expressing osteosarcomas, characterized by a poor prognosis.
Author contributions
CC and JK performed the in vitro experiments; JK performed the assays in xenograft models; CC and JK analyzed the data; CR supervised the study and wrote the manuscript. All authors contributed to editorial changes in the manuscript. All authors read and approved the final manuscript.
Adding Glass Fibers to 3D Printable Mortar: Effects on Printability and Material Anisotropy
Adding fibers is an effective way to enhance the printability and mechanical performance of 3D printable cementitious materials. Glass fibers are commonly used owing to their sound mechanical properties, high durability and affordable price. However, there is still a lack of systematic and in-depth research on the effects of adding glass fibers to cementitious materials. In this study, a series of 3D printable mortars with varying glass fiber content and water/cement (W/C) ratio were produced to evaluate their printability, flexural strength and compressive strength. The results showed that decreasing the W/C ratio generally has positive effects on printability and mechanical performance, whereas increasing the glass fiber content from 0% to 1% would substantially improve the extrudability, dimensional stability and buildability, increase the flexural strength by up to 82%, but decrease the compressive strength by up to 35%. Such large differences in the effects of glass fibers on the flexural and compressive strengths indicate significant material anisotropy. In fact, comparison of the strength results of printed specimens to those of un-printed specimens reveals that the printing process could increase the flexural strength by 98% but decrease the compressive strength by 47%.
Introduction
Three-dimensional concrete printing technology (3D-CPT) is a novel and promising construction technology which shows great potential to promote the industrialization and intellectualization of building construction [1-3]. In recent years, 3D-CPT has been rapidly gaining ground in some applications [4-6]. Nevertheless, compared to conventional concrete construction methods, it is obvious that the absence of effective reinforcement is a detrimental weakness of 3D printed concrete. Applying reinforcing fibers to improve the tensile strength and toughness of 3D printed concrete is undoubtedly one of the effective ways of providing reinforcement [7-10].
Reinforcing fibers play an important role in the mechanical performance of 3D printed concrete, especially its flexural/tensile strength, but the resulting material exhibits mechanical anisotropy [11-15]. Hambach and Volkmer [16] demonstrated that the extrusion process aligns the fibers along the printing path, leading to an improvement in the flexural/tensile strength of the printed mortar in the direction of printing. Bos et al. [17] revealed that the toughness of 3D printed mortar reinforced with short straight steel fibers is much higher than that of plain mortar without fibers. Panda et al. [18] made 3D printed mortar with added glass fibers and found that the flexural, tensile and compressive strengths in different loading directions differ widely, indicating that the layer-by-layer deposition could cause significant material anisotropy. Ma et al. [19] produced polypropylene (PP)-fiber-reinforced 3D printed mortar and observed that tensile stresses perpendicular to the interlayer zones between adjacent layers tended to induce cracks more readily than those parallel to the interlayer zones. Liu et al. [20] prepared polyvinyl alcohol (PVA)-fiber-reinforced 3D printed mortar and found that, depending on the printing and loading directions, the strength of printed specimens could be higher or lower than that of cast specimens.
Reinforcing fibers have also been found to have significant effects on the fresh properties and printability of 3D printable cementitious materials [21-23]. For instance, Rubio et al. [24] noted that adding PP fibers could boost the cohesiveness of mortar for building up a more stable structure. Van Der Putten et al. [25] reported that increasing the PP fiber dosage would reduce the workability of 3D printable mortar, partly because the PP fibers might reduce the water film thickness. Arunothayan et al. [26] found that adding 2% steel fibers would slightly decrease workability but would enhance the shape-retention ability. Shakor et al. [27] showed that the use of glass fibers would allow the printable mortar to be printed at higher speed without causing negative effects. Weng et al. [28] revealed that increasing the PVA fiber content would increase the flow resistance, torque viscosity and thixotropy of the printable mortar mix; they developed empirical models to predict these rheological properties.
Among the various fibers, glass fibers have been widely used in traditional fiber-reinforced concrete [29-31] and have also been tried in 3D-CPT [18,27,32]. However, there is still a lack of systematic and in-depth research on how glass fibers affect the printability and mechanical properties of 3D printable cementitious materials. For the purposes of evaluating the printability and mechanical properties of glass-fiber-reinforced 3D printable mortar and studying the effects of the glass fiber content, a comprehensive research program has been launched, whereby a series of printable mortar mixes with different glass fiber contents and water/cement ratios were made for testing, as presented herein. It will be seen that, due to the printing process, the printed mortar is highly anisotropic and the effects of glass fibers on the flexural and compressive strengths are widely different.
Raw Materials
The raw materials employed in this study were cement, fine aggregate, glass fibers, water and superplasticizer (SP). The cement used was an ordinary Portland cement, which has a strength grade of 42.5R conforming to Chinese Standard GB 175-2020 [33] and a specific gravity of 3.10. The fine aggregate used was river sand, which had a maximum particle size of 1.18 mm, specific gravity of 2.66, water absorption of 1.10% and moisture content of 0.04%. The particle size distribution of the cement was determined by the laser diffraction method (applying a laser-diffraction particle size analyzer based on the light scattering principle), and that of the fine aggregate was measured by a mechanical sieving method (using an automatic sieving machine by means of mechanical vibrating and sieving). The particle size distributions of the cement and fine aggregate are exhibited in Figure 1 for reference.
The glass fibers used were alkali-resistant glass fibers with a specific gravity of 2.68, a fiber diameter of 16 µm and a fiber length of 10 mm. Other properties of the glass fibers are listed in Table 1, and a photograph of the glass fibers is presented in Figure 2. The water used was local tap water, and the SP used was a third-generation polycarboxylate-based admixture with a solid content of 20% and a specific gravity of 1.03.
Mixture Design
Fifteen 3D printable mortar mixtures were designed by varying the fiber content and water to cement (W/C) ratio. The fiber contents, each defined as the fiber to cement ratio by mass, were set as either 0.00%, 0.25%, 0.50%, 0.75% or 1.00%, whereas the W/C ratios by mass were set as either 0.24, 0.28 or 0.32. For all the mortar mixtures, the cement to fine aggregate ratio by mass was fixed at 1.0. For ease of reference, each mortar mixture was assigned a mix number in the form of X-Y, in which X is the fiber content and Y is the W/C ratio, as listed in the first columns of Tables 2-4. It should be noted that a mortar mixture suitable for 3D printing needs to have a fluidity which is neither too low nor too high [34]. To ensure proper 3D printing, after numerous trials, the target range of flow spread, measured by a flow table test [35], was set within 180 to 220 mm. For the purpose of attaining this target flow spread, the optimum SP dosage (the SP dosage at which the mortar can achieve a flow spread within the above target range of 180 to 220 mm, expressed as a percentage by mass of cement) for each mortar mixture was determined by trial mortar mixing.
Flow Table Test
The flow table test for measuring the fluidity of the mortar mixtures in terms of flow spread followed the method specified in Chinese Standard GB/T 2419-2005 [35]. To perform the test, the slump cone was placed on a flow table and then filled with the fresh mortar sample. Once the slump cone was full, it was lifted vertically, and the flow table was dropped 25 times in about 25 s. After dropping, the fresh mortar sample would form a patty, and the diameters of the patty in two perpendicular directions were measured. Finally, the flow spread was recorded as the average diameter of the patty, i.e., the average of the two measured diameters.
Printing Machine
As exhibited in Figure 3a, the 3D printing machine employed was a laboratory-scale triaxial printer with dimensions of 2.0 m (length) × 1.6 m (width) × 1.8 m (height). The triaxial linear movement of the extruder across the printing platform was controlled by a customized computer program. The printing nozzle mounted on the extruder has a circular cross section with an internal diameter of 15 mm. The nozzle extrusion rate was set at about 9 mL/s, and the moving speed was set at about 50 mm/s. During the printing process, the fresh mortar mix was poured into the extruder and then extruded through the nozzle onto the printing platform, as shown in Figure 3b.
Extrudability Test
Extrudability relates to the ability of mortar to be extruded from the nozzle as a continuous and steady filament [7,36,37]. In this study, an extrudability test was applied to assess the extrudability of the fresh mortar mix for forming a single layer. For the test, two rows of mortar, each about 500 mm long, were first printed side-by-side on the printing platform. After that, the combined width W of the two rows of printed mortar was measured at 50 mm intervals along the length, as exhibited in Figure 4. Based on 10 readings of combined widths, the average width and standard deviation were calculated. At the end, the extrudability variation coefficient (EVC) was determined as the standard deviation divided by the average width. A lower EVC implies better extrudability.
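The EVC is simply the coefficient of variation of the ten width readings; a minimal sketch with hypothetical readings follows. The paper does not state whether the sample or population standard deviation was used, so the sample version is assumed here.

```python
# Sketch of the extrudability variation coefficient (EVC):
# standard deviation of the combined-width readings over their mean.
import statistics

def variation_coefficient(readings):
    """Sample standard deviation divided by the mean."""
    return statistics.stdev(readings) / statistics.mean(readings)

widths_mm = [30.1, 29.8, 30.5, 29.6, 30.2, 30.0, 29.9, 30.4, 29.7, 30.3]
print(f"EVC = {variation_coefficient(widths_mm):.2%}")
```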
Dimensional Stability Test
Dimensional stability relates to the ability of lower layers of printed mortar to resist deformation under the weight of the upper layers during the stacking up process [38,39]. In this study, a dimensional stability test was developed by the authors' team. To perform the test, two successive layers of mortar, each about 200 mm long, were first printed. The total height of the printed stripe was immediately measured at 20 mm intervals. A total of 10 readings were recorded, and the initial average height h_i was calculated from the readings. Then, ten customized steel plates (each with mass equal to that of one layer of printed mortar) were placed on the printed mortar one by one at 30 s intervals to simulate the printing process. After that, the height of the printed mortar was recorded again to determine the final average height h_f. Finally, the dimensional stability coefficient (DSC) was determined as (h_i − h_f)/h_i. The lower the DSC, the better the dimensional stability. The schematic diagram and photograph of this test are displayed in Figure 5.
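A minimal sketch of the DSC calculation with hypothetical height readings is given below.

```python
# Sketch of the dimensional stability coefficient:
# DSC = (h_i - h_f) / h_i, from average initial and final heights.
import statistics

initial_mm = [20.3, 20.1, 20.4, 20.2, 20.0, 20.3, 20.1, 20.2, 20.4, 20.0]
final_mm   = [19.5, 19.4, 19.6, 19.3, 19.2, 19.5, 19.4, 19.3, 19.6, 19.2]

h_i = statistics.mean(initial_mm)   # initial average height
h_f = statistics.mean(final_mm)     # final average height after loading
print(f"DSC = {(h_i - h_f) / h_i:.2%}")
```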
Buildability Test
Buildability relates to the ability to build up multiple layers of printed mortar continuously and steadily [40,41]. In this research, two rows of mortar, each about 500 mm long, were printed side-by-side, and then additional layers were successively printed on top to build up to six layers. Afterwards, the total height H of the six layers of printed mortar was recorded at 50 mm intervals, and the average height and standard deviation were calculated from the 10 total height readings. At the end, the buildability variation coefficient (BVC) was determined as the standard deviation divided by the average height. A lower BVC implies better buildability. The schematic diagram and photograph of this test are displayed in Figure 6.
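The BVC is the same statistic as the EVC applied to the ten total-height readings of the six-layer print; the short sketch below reuses the variation_coefficient helper from the EVC sketch above, again with hypothetical values.

```python
# BVC: coefficient of variation of the six-layer total heights,
# computed with the variation_coefficient helper defined earlier.
heights_mm = [60.2, 59.8, 60.5, 60.0, 59.9, 60.3, 60.1, 59.7, 60.4, 60.0]
print(f"BVC = {variation_coefficient(heights_mm):.2%}")
```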
Flexural Strength Test and Compressive Strength Test
To assess the flexural and compressive strengths of the hardened mortar, mortar strips with two rows and three layers were first made and cured indoors by water spraying and covering with plastic film at a room temperature of 20 ± 2 °C until the age of 28 days. After that, the mortar strips were cut into prismatic specimens with dimensions of 40 mm (width) × 40 mm (height) × 160 mm (length), as shown in Figure 7. All specimens were placed into a hydraulic testing machine to carry out a flexural strength test and then a compressive strength test in accordance with Chinese Standard GB/T 17671-1999 [42].
Previous studies have revealed that the printed mortar structure tends to exhibit mechanical anisotropy due to the presence of interlayer zones between adjacent layers [16-20], as shown in Figure 7.
In this study, the three-point loads for the flexural strength test and the two compression loads for the compressive strength test were applied to each prismatic specimen in the direction perpendicular to the interlayer zones to simulate the actual in-situ condition of vertical loads acting on the printed concrete structure, as shown in Figure 8. Meanwhile, standard prismatic specimens were also cast from the same batch of mortar mixture and cured under the same conditions as the printed specimens until the same age. These standard specimens, produced with no interlayer zones, are called the un-printed specimens. They were also each subjected first to the flexural strength test and then to the compressive strength test in accordance with Chinese Standard GB/T 17671-1999 [42].
SP Dosage and Flow Spread
The optimum SP dosages determined by trial mortar mixing are listed in the second column of Table 2 and plotted against the W/C ratio at different fiber contents in Figure 9. As expected, the optimum SP dosage increased as the W/C ratio decreased and/or the fiber content increased. This was because the decreased water content due to the decrease in W/C ratio and the increased fiber entanglement due to the increase in fiber content had both rendered the fresh mortar mix more difficult to flow, causing the mortar mixture to require more SP to achieve the target flow spread [43,44]. The corresponding flow spread results are tabulated in the third column of Table 2. It is seen that the flow spread varied from 185 to 218 mm, which is well within the target range.
Extrudability Variation Coefficient
The extrudability variation coefficient (EVC) results of the printed mortar mixtures are listed in the fourth column of Table 2. From these results, it can be seen that the EVC varied from 1.07% to 2.73%. To graphically depict how the EVC varied, the EVC is plotted against the W/C ratio at different fiber contents in Figure 10.
Generally, at a given fiber content, the EVC increased as the W/C ratio increased, meaning that the increase in water content has a negative impact on the extrudability. This phenomenon is reasonable because, with a higher water content, the extruded mortar would be more prone to uncontrolled flow, thus adversely affecting the extrudability [27]. Hence, decreasing the W/C ratio has a positive effect on the extrudability.
Meanwhile, at a fixed W/C ratio, a higher fiber content would generally lead to a lower EVC. For example, at a W/C ratio of 0.24, increasing the fiber content from 0% to 1% reduced the EVC from 2.10% to 1.08%. This indicates that adding glass fibers has a great positive effect on the extrudability. The reason may be that the fiber wrapping would render the extruded mortar more continuous and steadier [27,45].
Dimensional Stability Coefficient
The dimensional stability coefficient (DSC) results of the printed mortar mixtures are exhibited in the fifth column of Table 2. From these results, it can be seen that the DSC varied from 2.94% to 14.29%. To illustrate how the DSC varied, the DSC is plotted against the W/C ratio at different fiber contents in Figure 11. From the figure, it is clear that regardless of the fiber content, as the W/C ratio increased, the DSC also increased, showing that increasing the water content has an adverse effect on the dimensional stability. This phenomenon is expected because adding more water would reduce the cohesiveness of the fresh mortar, leading to larger deformation under the weight of the upper layers [25]. Hence, decreasing the W/C ratio has a positive effect on the dimensional stability.
More importantly, at a given W/C ratio, an increase in fiber content would lead to a decrease in DSC. For example, at a W/C ratio of 0.24, when the fiber content increased from 0% to 1%, the DSC dropped from 4.57% to 2.94%. This means that the addition of glass fibers has a great positive effect on the dimensional stability. The reason for this phenomenon may be that the deformation of the mortar in the lower layers was inhibited by the wrapping of the glass fibers [46,47].
Buildability Variation Coefficient
The buildability variation coefficient (BVC) results of the printed mortar mixtures, tabulated in the sixth column of Table 2, varied from 0.45% to 1.68%. To graphically display how the BVC changed, the BVC is plotted against the W/C ratio at different fiber contents in Figure 12. As expected, at a fixed fiber content, the BVC increased with the increase in W/C ratio, meaning that the added water has a negative influence on the buildability. This situation is reasonable because, as the water content increases, it is more difficult for the printed mortar to hold itself [26]. Hence, decreasing the W/C ratio has a positive effect on the buildability.
More significantly, regardless of the W/C ratio, a higher fiber content would generally lead to a lower BVC. For instance, at a W/C ratio of 0.32, when the fiber content increased from 0% to 1%, the BVC decreased from 1.68% to 0.77%. This well confirmed that adding glass fibers has a great positive effect on buildability. This may be due to the fact that the glass fibers played an important part in supporting the printed mortar during the mortar stacking process [25,26].
Flexural Strength
The flexural strength results of the un-printed and printed specimens are tabulated in the second and third columns of Table 3.
For graphical presentation, the flexural strength results of the un-printed and printed specimens are plotted in Figures 13a and 13b, respectively.
For both the un-printed and printed specimens, at the same fiber content, the flexural strength tended to be higher at lower W/C ratios. This observed effect of the W/C ratio is expected, as it is well known that the W/C ratio is a crucial parameter affecting the strength of cementitious materials and decreasing the W/C ratio would increase the flexural strength [48,49]. For the un-printed specimens, the reduction in the W/C ratio from 0.32 to 0.24 increased the flexural strength from 7.90 to 8.70 MPa, i.e. by 10.1%. For the printed specimens with 1% fibers added, the reduction in the W/C ratio from 0.32 to 0.24 increased the flexural strength from 16.10 to 20.90 MPa, i.e. by 29.8%. Apparently, the percentage increase in flexural strength was higher in the presence of fibers; it was also higher when printed. On the other hand, for both the un-printed and printed specimens, at the same W/C ratio, the flexural strength always significantly increased with the fiber content. This observed effect of the fiber content is reasonable, as the glass fibers would take up part of the tensile stresses and act as tensile reinforcement for bridging cracks to improve the tensile strength of cement-based materials [27,50]. For the un-printed specimens, at a W/C ratio of 0.28, the increase in fiber content from 0% to 1% increased the flexural strength from 8.07 to 10.00 MPa, i.e. by 23.9%. For the printed specimens, at a W/C ratio of 0.28, the increase in fiber content from 0% to 1% increased the flexural strength from 10.85 to 19.76 MPa, i.e. by 82.1%. Apparently, the percentage increase in flexural strength was higher when printed.
From the above, it is evident that the printing process, especially the fiber orientation, also has a significant effect. To study this effect, the percentage changes in flexural strength owing to the printing process were evaluated by comparing the flexural strength of each printed specimen to that of the respective un-printed specimen, as listed in the fourth column of Table 3. These changes were all positive and very substantial. Moreover, the increase in flexural strength was higher at a higher fiber content. For instance, at a fiber content of 0%, the increase due to printing was 34.4% to 40.7%, whereas at a fiber content of 1%, the increase due to printing was 92.4% to 97.6%. It is noteworthy that even with no fibers added, the printing process increased the flexural strength. Such great differences in flexural strength between the printed and un-printed specimens are the result of the material anisotropy caused by the printing process [16-20]. During the printing process, the mortar mixture was squeezed through the nozzle, making the mortar more compact along the printing direction and causing the fibers to be aligned along the printing direction, as shown in Figure 14. In the flexural strength test, since the induced tensile stresses were along the printing direction, the printed mortar specimens could withstand higher imposed loads. Hence, the beneficial influence of using glass fibers to improve the flexural strength is better manifested after printing.
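As a worked check of the printing-induced increases quoted above (the W/C = 0.28 strength values are taken from the discussion), a short sketch:

```python
def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100.0

# Un-printed vs printed flexural strength (MPa) at W/C = 0.28:
print(f"0% fiber: +{pct_change(8.07, 10.85):.1f}%")   # -> +34.4%
print(f"1% fiber: +{pct_change(10.00, 19.76):.1f}%")  # -> +97.6%
```

Both values reproduce endpoints of the ranges quoted in the text.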
Compressive Strength
The compressive strength results of the un-printed and printed specimens are tabulated in the second and third columns of Table 4.
To graphically depict how the compressive strength varied, the compressive strength results of the un-printed and printed specimens are plotted in Figures 15a and 15b, respectively. It is noted that for both the un-printed and printed specimens, at the same fiber content, the compressive strength increased as the W/C ratio decreased. This is a common phenomenon also seen in the compressive strength of other cement-based materials [51-54]. For the un-printed specimens, the reduction in the W/C ratio from 0.32 to 0.24 increased the compressive strength from 66.13 to 83.47 MPa, i.e. by 26.2%. For the printed specimens with 1% fibers added, the reduction in the W/C ratio from 0.32 to 0.24 increased the compressive strength from 32.23 to 38.28 MPa, i.e. by 18.8%. Apparently, the percentage increase in compressive strength was lower in the presence of fibers and when printed.
On the other hand, for both the un-printed and printed specimens, at the same W/C ratio, the compressive strength always decreased significantly as the fiber content increased. This observed effect of the fiber content may be explained as follows: adding fibers made the fresh mortar harder to mix and caused entanglement of the glass fibers in the mortar mixture and entrapment of air bubbles, both of which are detrimental to the compressive strength [55,56]. Similar results have been reported by other researchers [27,50]. In this particular case, for the un-printed specimens, at a W/C ratio of 0.28, the increase in fiber content from 0% to 1% decreased the compressive strength from 75.28 to 61.90 MPa, i.e. by 17.8%. For the printed specimens, at a W/C ratio of 0.28, the increase in fiber content from 0% to 1% decreased the compressive strength from 47.07 to 33.50 MPa, i.e. by 28.8%. Apparently, the percentage decrease in compressive strength was higher when printed.
From the above, it is evident that the printing process, especially the fiber orientation, also has a significant effect. To better assess the influence of the printing process, the percentage changes in compressive strength were calculated and listed in the last column of Table 4. These changes were all negative, indicating that the printing process has a negative impact on the compressive strength. Moreover, the decrease in compressive strength was larger at higher fiber contents. For instance, at a fiber content of 0%, the decrease due to printing was 29.7% to 37.5%, whereas at a fiber content of 1%, the decrease due to printing was 44.6% to 47.2%. This detrimental effect may be explained by the material anisotropy [11-15] and by the presence of porous and weak interlayer zones caused by the printing process [7,18], as shown in Figure 7. In the compressive strength test, the compressive loads acted directly on the porous and weak interlayer zones, making the printed mortar specimens more vulnerable to compressive failure, as shown in Figure 16. This view is also supported by others [16,57,58]. Such reductions in compressive strength are unavoidable; therefore, for a mortar mixture to be printed, it may be necessary to lower the W/C ratio to compensate for the decline in compressive strength resulting from printing, especially after fibers have been added to improve the printability and flexural strength.
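The same arithmetic applied to the compressive results at W/C = 0.28 (values from the text) shows the trade-off running in the opposite direction; note that the 1% figure falls within, rather than at an endpoint of, the quoted 44.6-47.2% range, since the endpoints come from other W/C ratios:

```python
def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100.0

# Un-printed vs printed compressive strength (MPa) at W/C = 0.28:
print(f"0% fiber: {pct_change(75.28, 47.07):.1f}%")  # -> -37.5%
print(f"1% fiber: {pct_change(61.90, 33.50):.1f}%")  # -> -45.9%
```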
Conclusions
A series of 3D printable mortar mixes with different glass fiber contents and water/cement (W/C) ratios, with their SP dosages optimized to attain a certain target fluidity, were designed, made and tested to evaluate their printability and mechanical properties. The conclusions are drawn hereunder:

1. Decreasing the W/C ratio would increase the SP demand; enhance the extrudability, dimensional stability and buildability; and improve the flexural strength and compressive strength.

2. Adding glass fibers would increase the SP demand, but could also substantially improve the extrudability, dimensional stability and buildability of 3D printable mortar, owing to the wrapping action of the glass fibers.

3. An increase in glass fiber content would greatly increase the flexural strength, due to the reinforcing and crack-bridging effects of the glass fibers, but would also significantly decrease the compressive strength, which is caused by the entanglement of the fibers and the entrapment of air bubbles.

4. The printing process, especially the fiber orientation, has great effects on both the flexural and compressive strengths; it could substantially increase the flexural strength by up to 98% but would also significantly decrease the compressive strength by up to 47%. Such effects are mainly due to the material anisotropy caused by printing and the formation of porous and weak interlayer zones in the printed mortar.

5. To compensate for the decline in compressive strength caused by the printing process, it may be necessary to lower the W/C ratio, especially after fibers have been added to improve the printability and flexural strength.
Figure 1. Grading curves of cement and sand.
Figure 4. Schematic diagram and photograph of extrudability test.
Figure 5. Schematic diagram and photograph of dimensional stability test.
Figure 6. Schematic diagram and photograph of buildability test.
Figure 9. SP dosage vs. W/C ratio at different fiber contents.
Figure 10. EVC vs. W/C ratio at different fiber contents.
Figure 11. DSC vs. W/C ratio at different fiber contents.
Figure 12. BVC vs. W/C ratio at different fiber contents.
Figure 13. Flexural strength versus W/C ratio at different fiber contents.
Figure 14. Force diagram of specimen in flexural strength test.
Figure 15. Compressive strength versus W/C ratio at different fiber contents.
Figure 16. Force diagram of specimen in compressive strength test.
Table 1. Properties of fiber.
Table 2. Test results of fresh properties.
Table 3. Test results of flexural strength.
Table 4. Test results of compressive strength.
Mott insulating state in a quarter-filled two-orbital Hubbard chain with different bandwidths
We investigate the ground-state properties of the one-dimensional two-band Hubbard model with different bandwidths. The density-matrix renormalization group method is applied to calculate the averaged electron occupancies $n$ as a function of the chemical potential $\mu$. Both at quarter and half fillings, "charge plateaux" appear in the $n$-$\mu$ plot, where $d\mu/dn$ diverges and Mott insulating states are realized. To see how the orbital polarization in the one-quarter charge plateau develops, we apply second-order perturbation theory from the strong-coupling limit at quarter filling. The resultant Kugel-Khomskii spin-orbital model includes a \emph{magnetic} field coupled to orbital pseudo-spins. This field originates from the discrepancy between the two bandwidths and leads to a finite orbital pseudo-spin magnetization.
"Orbital degrees of freedom" has been one of the important keywords to understand the low temperature properties of strongly correlated electron systems. Among them, recently, the orbital-selective Mott transition (OSMT) is proposed to explain the exotic heavy metallic state in Ca 2−x Sr x RuO 4 [1,2]. To illustrate this scenario, the two-orbital model with different bandwidths has been extensively investigated by dynamical mean-field theory [3,4]. They have clarified the realization conditions of the OSMT by changing various parameters, such as Coulomb interaction parameters, band structures, band filling, and others. In this paper, we also study the ground state properties of the two-band Hubbard model with different bandwidths, but in one dimension (1D). Our main concern here is whether the OSMT survives in 1D, where the effect of quantum fluctuations is most severely enhanced.
First, let us begin with the description of the 1D two-orbital Hubbard model [5]. In its Hamiltonian, $c^\dagger_{j\alpha\sigma}$ creates an electron with spin $\sigma$ ($= \uparrow$ or $\downarrow$) and orbital $\alpha$ ($= 1$ or $2$) at the $j$-th site. Electron transfer of strength $t_\alpha$ is possible between the same type of neighboring orbitals, and $\mu$ denotes the chemical potential. As for the intra-site Coulomb interactions, we assume rotational invariance for simplicity, so the Coulomb parameters $U$, $U'$, and $J$ always satisfy the relation $U = U' + 2J$. As long as intra-atomic Coulomb interactions are concerned, physically relevant parameters would satisfy $U > U' > J$. However, we do not restrict our calculations to this parameter region, since in general the above Hamiltonian can also be viewed as a variation of coupled Hubbard chains.
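As a minimal numerical illustration of the two different bandwidths (a sketch added here, not part of the original paper), the tight-binding dispersions $\varepsilon_\alpha(k) = -2t_\alpha \cos k$ give bandwidths $4t_\alpha$ and band bottoms $-2t_\alpha$, consistent with the band edges discussed below:

```python
import numpy as np

# Tight-binding dispersions for the two orbitals (illustrative sketch)
t1, t2 = 1.0, 0.5
k = np.linspace(-np.pi, np.pi, 201)
eps1 = -2 * t1 * np.cos(k)   # orbital 1: bandwidth 4*t1 = 4, bottom at -2*t1 = -2
eps2 = -2 * t2 * np.cos(k)   # orbital 2: bandwidth 4*t2 = 2, bottom at -2*t2 = -1
print(eps1.min(), eps1.max())  # -2.0 2.0
print(eps2.min(), eps2.max())  # -1.0 1.0
```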
In this paper, we have used the infinite-size density-matrix renormalization group (DMRG) algorithm [6,7] modified by adapting the recursion relation to the wave function; namely, the wave function is upgraded so that it gradually approaches the ground-state wave function as the system size is extended. This technique, called the product-wave-function renormalization-group method [8,9], enhances the accuracy of the calculations and overcomes metastability problems during the renormalization process when the target ground state is gapless. The different bandwidths are implemented in Eq. (1) by taking $t_1 = 1$ and $t_2 = 0.5$. Since we are mainly interested in the strongly correlated regime, $U$ is fixed at 10 throughout this paper, and $U'$ as well as $J$ are varied while keeping $U = U' + 2J$. By means of the DMRG technique, we have calculated the electron density of the $\alpha$-orbital, defined by $n_\alpha = \sum_{i\sigma} n_{i\alpha\sigma}/N$, where $N$ is the total number of sites. Figure 1 shows the $\mu$-dependence of $n_1$, $n_2$, and $n_\mathrm{tot} \equiv n_1 + n_2$ for $U' = 5$ and $J = 2.5$. Let us take a closer look at Fig. 1, starting from the low-chemical-potential region. When $\mu$ exceeds $\mu_{c1} = -2t_1$, the bottom of the orbital-1 energy band, $n_1$ increases sharply while $n_2$ stays zero until $\mu$ reaches $\mu_{c2}$. At $\mu_{c2}$, $n_1$ drops down and $n_2$ jumps up suddenly. Then, between $\mu_{c2}$ and $\mu_{qs}$, $n_1$ ($n_2$) shows a gradual (steep) monotonic increase and, finally, the first plateaux for both $n_1$ and $n_2$ appear simultaneously between $\mu_{qs}$ and $\mu_{qe}$. At these plateaux, the values of $n_1$ and $n_2$ are irrational, but $n_\mathrm{tot}$ definitely equals unity, corresponding to one-quarter filling. Therefore, this Mott insulating state itself is not controversial except for the fact that $n_1$ is not equal to $n_2$. We will come back to this point later. With further increase of $\mu$ above $\mu_{qe}$ in Fig. 1, we find the next plateau at $n_2 = 1$ between $\mu_{hs2}$ and $\mu_{hs1}$, which means that the charge gap opens for the orbital-2 band. On the other hand, $n_1$ increases continuously toward unity in this region, so the orbital-1 band should be metallic. This implies that the filling-control OSMT takes place at $\mu_{hs2}$. Thus, the orbital-selective Mott state, with one orbital localized and the other itinerant, is realized just below one-half filling in the present system. For $\mu \ge \mu_{hs1}$, the $n_1 = 1$ plateau emerges in addition to the $n_2 = 1$ plateau, corresponding to the normal half-filled Mott insulating state. Hereafter, we will investigate the quarter-filled Mott state in more detail. For this purpose, the DMRG calculations are performed within the $S^z_\mathrm{tot} \equiv \sum_i S^z_i = 0$ subspace at quarter filling. Figure 2 shows the electron densities ($n_1$ and $n_2$, upper panel) and correlation functions ($\langle T^z_i T^z_{i+1} \rangle$ and $\langle \mathbf{S}_i \cdot \mathbf{S}_{i+1} \rangle$, lower panel) as functions of $U'$ for $U = 10$ and $J = (U - U')/2$. Considering the atomic limit, there are three possible ground states, characterized by: i) spin-triplet and orbital-singlet states like $c^\dagger_{i1\uparrow} c^\dagger_{i2\uparrow} |0\rangle$ (averaged site energy $\varepsilon = (U' - J)/2 = (3U' - U)/4$); ii) singly occupied states like $c^\dagger_{i\alpha\sigma} |0\rangle$ (null site energy); and iii) spin-singlet and orbital-triplet states like $(c^\dagger_{i1\uparrow} c^\dagger_{i1\downarrow} + c^\dagger_{i2\uparrow} c^\dagger_{i2\downarrow})|0\rangle$ ($\varepsilon = (U + J)/2 = (3U - U')/4$). Therefore, two quantum critical points, $U'_{c1} = U/3$ and $U'_{c2} = 3U$, separate the above three phases when $t_1 = t_2 = 0$.
The singularity around $U' = 3.5$ in Fig. 2 seems to correspond to $U'_{c1}$ and, in fact, the vanishing orbital polarization for $U' \le 3.5$ is consistent with the formation of a local orbital singlet. With further increase of $U'$ above $U'_{c1}$, an orbital magnetization $\langle T^z_i \rangle \equiv (n_1 - n_2)/2$ develops, accompanied by antiferromagnetic spin-spin and ferromagnetic orbital-orbital correlations. These results show that, at the one-quarter plateau, the magnitude of the charge disproportionation between $n_1$ and $n_2$ depends on the Coulomb interaction parameters and only the total electron density is preserved. For $U' \ge U = 10$, $\langle T^z_i \rangle$ is saturated and all electrons reside in orbital 1. In such a strong-coupling regime, the system should be identical to the isotropic Heisenberg spin chain, which is confirmed by $\langle \mathbf{S}_i \cdot \mathbf{S}_{i+1} \rangle = 1/4 - \ln 2 \approx -0.443$ as well as $\langle T^z_i T^z_{i+1} \rangle = 1/4$ in Fig. 2. In order to examine how the orbital polarization at quarter filling develops with increasing $U'$, we have derived the effective spin-orbital Hamiltonian of Kugel-Khomskii type [10]. Starting from the atomic limit and within $U'_{c1} \le U' \le U'_{c2}$, the effective Hamiltonian is obtained in terms of the spin-1/2 ($\mathbf{S}$) and orbital pseudospin-1/2 ($\mathbf{T}$) operators, defined respectively by $\mathbf{S}_j \equiv \sum_{\alpha\sigma\sigma'} c^\dagger_{j\alpha\sigma} \boldsymbol{\tau}_{\sigma\sigma'} c_{j\alpha\sigma'}/2$ and $\mathbf{T}_j \equiv \sum_{\sigma\alpha\alpha'} c^\dagger_{j\alpha\sigma} \boldsymbol{\tau}_{\alpha\alpha'} c_{j\alpha'\sigma}/2$, with the use of the Pauli matrices $\boldsymbol{\tau}$. Here $\Delta^{(S)}_{i,j}$ is an effective crystal field coupled to the orbital pseudospins, which originates from the discrepancy between the two bandwidths; as this field grows, the orbital polarization is expected to develop accordingly. Therefore, the different bandwidths in the present model are indispensable for producing the orbital polarization. On the other hand, when $t_1 = t_2$ and $U = U'$, $\Delta^{(S)}_{i,j}$ vanishes and $H_\mathrm{eff}$ is equivalent to the SU(4) spin-orbital model [11,12]. In this model, the ground state is an SU(4) singlet with $\langle T^z_i \rangle = 0$, corresponding to $n_1 = n_2$.
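A quick numerical check of the atomic-limit energetics used above (a sketch; $U$, $U'$, $J$ as defined in the text, with $J = (U - U')/2$):

```python
import math

U = 10.0

def site_energies(Uprime: float):
    """Averaged site energies of the three atomic-limit ground-state candidates."""
    J = (U - Uprime) / 2            # rotational invariance: U = U' + 2J
    e_i   = (Uprime - J) / 2        # i)  spin-triplet / orbital-singlet = (3U' - U)/4
    e_ii  = 0.0                     # ii) singly occupied states
    e_iii = (U + J) / 2             # iii) spin-singlet / orbital-triplet = (3U - U')/4
    return e_i, e_ii, e_iii

print(site_energies(U / 3))   # e_i = 0   -> level crossing at U'_c1 = U/3
print(site_energies(3 * U))   # e_iii = 0 -> level crossing at U'_c2 = 3U
# Heisenberg-chain benchmark quoted above:
print(0.25 - math.log(2))     # -0.4431..., matching <S_i . S_i+1> = 1/4 - ln 2
```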
To conclude, we have investigated the ground-state properties of the 1D two-orbital Hubbard model with different transfer integrals by the DMRG method. Two plateaux appear in the total electron density as a function of the chemical potential, which suggests the existence of Mott insulating states. The first insulating state, ranging over $\mu_{qs} \le \mu \le \mu_{qe}$, corresponds to the quarter-filled Mott insulator, and a charge disproportionation between the two orbitals exists owing to the different bandwidths of the two bands. On the other hand, in the second Mott insulating state, for $\mu_{hs1} \le \mu$, the two bands are both half filled and thus the total electron density is at half filling. It is also found that the OSMT takes place just below half filling, where the charge gap exists in the narrower band while the wider band remains metallic. A detailed analysis around the one-half charge plateau will be published elsewhere.
Genetic Distances of Three White Clam (Meretrix lusoria) Populations Investigated by PCR Analysis
Twenty-one individuals of Meretrix lusoria were obtained from Gunsan, Shinan and Yeonggwang, on the coast of the Yellow Sea and the southern sea of the Korean Peninsula. Amplification of a single COI fragment (720 bp) was observed, and no apparent size differences were found in the amplified fragments between Meretrix lusoria and M. petechialis individuals. The sizes of the DNA fragments also varied widely, from 200 to 1,600 bp. The oligonucleotide primer BION-08 produced the fewest loci (a total of 17), with an average of 2.43 in the Gunsan population, in comparison with the other primers used. Remarkably, the primer BION-13 detected 42 loci shared by the three populations, with major and/or minor fragments of 200 bp and 400 bp, respectively, that were identical in all samples. The dendrogram obtained with the seven oligonucleotide primers highlights three genetic clusters: cluster 1 (GUNSAN 01 ~ GUNSAN 07), cluster 2 (SHINAN 08 ~ SHINAN 14) and cluster 3 (YEONGGWANG 15 ~ YEONGGWANG 21). The longest genetic distance among the twenty-one Meretrix lusoria individuals displaying significant molecular differences was between individuals GUNSAN no. 01 and SHINAN no. 14 (genetic distance = 0.574). By comparison, individuals of the SHINAN population were fairly closely related to those of the YEONGGWANG population. In this study, PCR analysis revealed significant genetic distances between two white clam population pairs (P<0.05).
INTRODUCTION
The Asian white clam (Meretrix lusoria) is a commercially important bivalve belonging to the family Veneridae, widely distributed on the coast of the Yellow Sea, the southern sea and Jeju Island of the Korean Peninsula, and in several sea areas of China under the natural ecosystem (Min et al., 2004). Meretrix is widely distributed in sandy tidal flats, the intertidal zone and seawater areas down to 20 m in depth.
Generally, Meretrix petechialis can be easily distinguished from M. lusoria, which is skewed to the anterior side relative to M. petechialis (Yamakawa & Imai, 2012). However, juveniles of M. lusoria and M. petechialis have very similar morphologies and shell colors, making species identification difficult at the juvenile stage. Currently, DNA-based techniques for identifying interspecific variation have been established and applied to some bivalve species, including closely related species belonging to the same genus. Studies on the molecular phylogeny of Veneridae have reported the genetic relationships of five venerid species (Jung et al., 2004) and have used mitochondrial 16S rRNA or cytochrome oxidase sequencing (Chen et al., 2009). However, no studies have tested the identification of M. lusoria and M. petechialis in Korean localities.
In the present study, to elucidate the characteristics of individuals and populations by identifying the genetic distances and geographical variation among three white clam (Meretrix lusoria) populations collected from Gunsan, Shinan and Yeonggwang, we performed a clustering analysis using the PCR method and the Systat PC package program.
Sample collection and purification of genomic DNA
The twenty-one individuals of Meretrix lusoria were obtained from Gunsan, Shinan and Yeonggwang, on the coast of the Yellow Sea and the southern sea of the Korean Peninsula (Fig. 1). Muscle tissue was collected in sterile tubes and stored at -40°C until needed. DNA extraction was carried out according to previously described separation and extraction methods. The precipitates obtained were centrifuged and resuspended in lysis buffer II (10 mM Tris-HCl, pH 8.0; 10 mM EDTA; 100 mM NaCl; 0.5% SDS), and 15 µL of proteinase K solution (10 mg/mL) was added. After incubation, 300 µL of 3 M NaCl was added, followed by gentle pipetting for a few minutes. Then 600 µL of chloroform was added to the mixture, which was mixed by inversion (no phenol was used). Ice-cold 70% ethanol was added, and the samples were centrifuged at 19,621 g for 5 minutes to extract the DNA from the lysates.
Oligonucleotide primers, molecular markers, amplification conditions and data analysis
Seven oligonucleotide primers, including BION-01 (5'-CAGGCCCTTC-3'), BION-08 (5'-TCCGCTCTGG-3') and BION-13, were used for amplification. The degree of variability was calculated using the Dice coefficient (F), given by the formula F = 2n_ab/(n_a + n_b), where n_ab is the number of bands shared between samples a and b, n_a is the total number of bands for sample a, and n_b is the total number of bands for sample b (Jeffreys & Morton, 1987; Yoke-Kqueen & Radu, 2006).
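A minimal Python sketch of this band-sharing calculation (the band sets below are hypothetical; real inputs would be the scored fragment sizes per individual):

```python
def dice_coefficient(bands_a: set, bands_b: set) -> float:
    """F = 2*n_ab / (n_a + n_b), with n_ab the number of shared bands."""
    n_ab = len(bands_a & bands_b)
    return 2 * n_ab / (len(bands_a) + len(bands_b))

# Hypothetical fragment sizes (bp) scored for two individuals:
a = {200, 400, 600, 1000}
b = {200, 400, 800}
print(round(dice_coefficient(a, b), 3))  # 2*2/(4+3) = 0.571
```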
Identifying Meretrix lusoria based only on morphological shell shape can be difficult, whereas judgments based on a combination of morphology and genetic information are generally more trustworthy.
Data analysis
The amplified products were separated by agarose gel electrophoresis and stained with ethidium bromide. A similarity matrix including band-sharing values (BS) and genetic differences was calculated using Nei and Li's index of similarity for the venerid clam individuals from Gunsan, Shinan and Yeonggwang of the Korean Peninsula, as illustrated in Table 2. Comparable PCR-based analyses have been applied to other species, such as oyster, Korean catfish, Venus clam (Park & Yoon, 2008) and cockle (Kang & Yoon, 2013).
The three Meretrix lusoria populations can be clearly distinguished by the PCR-based approach. The potential of oligonucleotide-amplified polymorphic and/or specific DNAs to identify diagnostic markers and to enable species and population identification in shellfish (Callejas & Ochando, 1998; McCormack et al., 2000; Park et al., 2008; Kang & Yoon, 2013) has also been well recognized. The PCR fragments discovered in this study may be worthwhile as DNA markers for discriminating the three geographical populations.
In general, the population classification of venerid clams is based on morphological variation in shell body weight, shell color, shell height, shell length, shell type and foot length. It is presumed that differences in such characters reflect diverse origins or genetic identity (Chenyambuga et al., 2004). As systematic research on Korean Veneridae progresses, these data could serve as a useful reference.

Table 4. The number of unique loci in each population and the number of loci shared by the three populations, produced by PCR analysis using 7 oligonucleotide primers in the Gunsan, Shinan and Yeonggwang populations of Meretrix lusoria.
Primary intraosseous meningioma: atypical presentation of a common tumor
A 41-year-old woman presented with an approximately one-year history of progressive facial swelling and left-sided visual impairment. A computed tomography (CT) scan of the skull showed a sclerotic, expansile lesion on the lateral/upper wall of the left orbit, narrowing and extending to the optic canal. Magnetic resonance imaging (MRI) showed a lesion with a […]
The rupture of an endometrioma is a rare event, with an estimated incidence of less than 3% among women of childbearing age who are known to have endometriomas (5). This situation occurs more commonly during pregnancy, due to hormonal stimulation of endometrial stromal elements (2), albeit with larger (≥ 6.0 cm) lesions (6).

The imaging aspect of endometrioma is that of an ovarian cyst with heterogeneous content, irregular contours, and parietal discontinuity, together with hemoperitoneum, which can be seen as heterogeneous fluid content on ultrasound and as a collection with a hyperintense signal in T1-weighted MRI sequences. In an emergency setting, its presentation may mimic other acute gynecological conditions, such as corpus luteum, ectopic gestation, and even spontaneous hemoperitoneum (7,8). In addition, the rupture of endometriomas can significantly increase serum CA-125 levels, mimicking ovarian epithelial neoplasms (9). However, a history of endometrioma, previous examinations demonstrating endometriomas, or endometriomas accompanied by peritoneal blood content in emergency imaging studies should raise the suspicion of spontaneous rupture.
The importance of the preoperative diagnosis is to support treatment strategies. Although some milder cases can be managed conservatively, there is a tendency toward greater use of early surgical exploration because of the long-term undesirable effects of cyst fluid in the peritoneal cavity, such as adhesions, pelvic pain, and infertility (6). In addition, the presumptive diagnosis of ruptured endometrioma, rather than ovarian neoplasm, facilitates the decision to perform laparoscopic exploration and allows the surgeon to perform the procedure with greater confidence.

Meningiomas are among the most common intracranial tumors, representing approximately 14-20% of cases. The vast majority are intradural lesions, extradural lesions accounting for only 1-2% (4). Extradural meningiomas affect the cranial vault in 68% of cases, such lesions being referred to as primary intraosseous meningiomas (PIMs), which mainly affect the frontal and parietal bones, as well as the region of the orbit (5-7). Other common locations for extradural involvement are the subcutaneous tissue, paranasal sinuses, and parapharyngeal spaces, as well as, in rare cases, the lungs and adrenal glands (5,6). Unlike typical intradural meningiomas, which primarily affect females between the ages of 50 and 69 years and usually have a benign course, PIMs can affect either gender, have a peak incidence in the second decade of life, and are more likely to evolve to malignant degeneration (6).
On CT, most PIMs (65%) present as expansile, osteoblastic bone lesions, with or without cortical destruction (6). On MRI, they are commonly hypointense in T1- and T2-weighted sequences, typically without significant contrast enhancement, as in the case reported here (5). However, in rarer cases, if a PIM presents as an osteolytic lesion on CT, an MRI scan can show a hypointense signal in T1-weighted sequences and a hyperintense signal in T2-weighted sequences, as well as contrast enhancement (6,7). Although PIMs do not present the dural tail sign that is often found in intradural meningiomas, there can be contrast uptake in the dura mater subjacent to the tumor due to venous stasis or to tumor invasion, as demonstrated in our case (7). There are inherent differences between CT and MRI, the former allowing better delineation of bone involvement, whereas the latter provides a better assessment of the soft-tissue involvement and extradural extent of the lesion (6).
The differential diagnosis of osteoblastic PIM includes typical intradural meningioma with reactive hyperostosis, in which the meningeal component of the lesion is the most obvious. Other diagnoses that should be considered are metastases, plasmacytoma, fibrous dysplasia, osteoma, osteosarcoma, and Paget's disease (6).

In most cases of PIM, the treatment is total surgical resection, with subsequent cranial reconstruction. If the resection is partial, there should be radiological follow-up; if the disease has recurred or if the residual lesion has progressed, the next surgical procedure can be accompanied by adjuvant radiotherapy (6).
In conclusion, although rare, PIMs should be considered in the differential diagnosis of bone lesions, especially when the lesions are osteoblastic and located in the cranial vault.
Identification of a New Interaction Mode between the Src Homology 2 Domain of C-terminal Src Kinase (Csk) and Csk-binding Protein/Phosphoprotein Associated with Glycosphingolipid Microdomains♦
Background: Src homology 2 (SH2) domains are known to specifically bind to phosphotyrosine followed by a few amino acids. Results: A novel interaction region was revealed by the solution structure of the C-terminal Src kinase SH2 domain in complex with the Csk-binding protein. Conclusion: The novel interaction region was required for tumor suppression. Significance: The structure sheds new light on the interaction mode of SH2 domains. Proteins with Src homology 2 (SH2) domains play major roles in tyrosine kinase signaling. Structures of many SH2 domains have been studied, and the regions involved in their interactions with ligands have been elucidated. However, these analyses have been performed using short peptides consisting of phosphotyrosine followed by a few amino acids, which are described as the canonical recognition sites. Here, we report the solution structure of the SH2 domain of C-terminal Src kinase (Csk) in complex with a longer phosphopeptide from the Csk-binding protein (Cbp). This structure, together with biochemical experiments, revealed the existence of a novel binding region in addition to the canonical phosphotyrosine 314-binding site of Cbp. Mutational analysis of this second region in cells showed that both canonical and novel binding sites are required for tumor suppression through the Cbp-Csk interaction. Furthermore, the data indicate an allosteric connection between Cbp binding and Csk activation that arises from residues in the βB/βC loop of the SH2 domain.
Src homology 2 (SH2) domains are noncatalytic regions commonly observed in various types of signal transduction proteins. They function as modules that mediate protein-protein interactions by recognizing a phosphotyrosine (Tyr(P)) in the target proteins. Structural and quantitative binding analyses of many SH2 domains in complexes with related ligand peptides have shown that SH2 domains generally recognize Tyr(P) together with three amino acid residues toward the C terminus (Tyr(P)-Xaa1-Xaa2-Xaa3) of the target ligands, using two recognition pockets on the surface of the SH2 domains (1-4). One pocket recognizes Tyr(P) of the target primarily through electrostatic interactions and hydrogen bonds, whereas the other specifically recognizes the remaining three amino acids (Xaa1-Xaa2-Xaa3) through hydrophobic interactions. This specificity is considered to generate versatility in the interactions of SH2 domains (5,6). However, it is still controversial whether the latter pocket alone is sufficient to determine the specificity of the associated interactions (4,7).
The protein C-terminal Src kinase (Csk) includes SH3, SH2, and kinase domains. This kinase specifically phosphorylates a regulatory Tyr in the C-terminal tail of Src Tyr kinases (SFKs) (8,9). This event leads to an intramolecular interaction between the Tyr(P)-containing tail and the SH2 domain of the phosphorylated SFK, shifting it to an inactive closed conformation. Thus, Csk negatively regulates the kinase activity of SFKs and plays an important role in physiological functions via signaling pathways for cell proliferation, differentiation, adhesion, and migration (10).
Although SFKs are anchored to membranes via their fatty-acylated N termini, Csk, which lacks such a fatty acylation site, exists in the cytoplasm (9). Thus, for Csk to efficiently access SFKs, Csk interacts with the Csk-binding protein (Cbp/PAG), which is localized to membrane microdomains (lipid rafts) enriched in cholesterol and glycosphingolipids, and it is subsequently recruited to the reaction space (11,12). This interaction is known to occur between the SH2 domain of Csk and the SFK-phosphorylated Tyr-314 of Cbp, but it remains unknown whether any other region in Cbp is involved in the interaction (13). In addition, unlike SFKs, which conserve a specific Tyr in their C-terminal tails for regulating activity, Csk lacks such a functional tail (9). Nevertheless, the crystal structure of Csk by itself exhibited the existence of two conformers corresponding to active and inactive forms (14). The mechanism of Csk activation is described in detail in a previous review (15). The SH2 domain appears to be required for stabilizing the active form of Csk through a connection between the βB/βC loop (SH2) and the β3/αC loop (kinase). It was also reported that Csk activity is increased through interactions with phosphorylated Cbp or its peptides (13, 16-19). Therefore, it can be speculated that Cbp binding shifts the dynamic equilibrium of at least these two conformers toward the more active form. However, the mechanism through which Cbp binding increases Csk activity remains unclear.
To elucidate these mechanisms, we first analyzed the interaction of Csk with Tyr-314-containing regions of Cbp of various lengths using gel filtration chromatography. On the basis of this biochemical result, we determined the tertiary solution structure of the complex of Csk-SH2 with a region of Cbp that contained both Tyr-296 and Tyr(P)-314 using liquid-state nuclear magnetic resonance (NMR) spectroscopy. We found that Csk recognizes not only the four canonical amino acids beginning with Tyr(P)-314 but also a region on the N-terminal side of Tyr(P)-314 that contains Tyr-296.
EXPERIMENTAL PROCEDURES
Preparation of Cbp Mutants-The DNA fragment for the rat Cbp peptide (from 289 to 321; Cbp5) was amplified by PCR from the pGEX-6P-1 plasmid (GE Healthcare) containing the DNA fragment encoding the region from 195 to 328 of Cbp (Cbp3), using 5′-CGC GGA TCC AAG AGA TTT AGT TCC TTG TCA-3′ and 5′-GGC GAA TTC CTA TCC AGG CTT ATT CAC TGA AGA-3′ as primers (Fig. 1A). The amplified DNA fragment was cleaved with BamHI and EcoRI, and the resulting gene was inserted into the BamHI-EcoRI site of pGEX-6P-1. The pGEX-6P-1 plasmid containing the DNA fragment encoding the region from 302 to 321 of Cbp (Cbp6) or a mutant of Cbp5 (Y296F) was produced by the same method as described above using the following primers: 5′-CGC GGA TCC CCA ACT CTT ACA GAA GAG GAG-3′ and 5′-GGC GAA TTC CTA TCC AGG CTT ATT CAC TGA AGA-3′ for Cbp6, and 5′-CGC GGA TCC AAG AGA TTT AGT TCC TTG TCA TTC AAG TCT CGA-3′ and 5′-GGC GAA TTC CTA TCC AGG CTT ATT CAC TGA AGA-3′ for Cbp5-Y296F. The DNA fragment encoding the region from 312 to 321 of Cbp (Cbp7) was produced by annealing 5′-GATCC ATG TAT TCT TCA GTG AAT AAG CCT GGA TAG G-3′ and 5′-ATTTC CTA TCC AGG CTT ATT CAC TGA AGA ATA CAT G-3′, and the resulting fragment was inserted into the BamHI-EcoRI site of pGEX-6P-1. For the doubly phosphorylated Tyr(P)-296/Tyr(P)-314 peptide, a synthetic product (TORAY) was used.
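As an aside (not part of the original protocol), the BamHI (GGATCC) and EcoRI (GAATTC) recognition sites engineered into these primers, which make the amplified fragments compatible with the BamHI-EcoRI site of pGEX-6P-1, can be located with a few lines of Python; the sequences are the Cbp5 primers quoted above with spaces removed:

```python
# Locate the restriction sites built into the Cbp5 cloning primers.
forward = "CGCGGATCCAAGAGATTTAGTTCCTTGTCA"     # carries a BamHI site
reverse = "GGCGAATTCCTATCCAGGCTTATTCACTGAAGA"  # carries an EcoRI site

for name, site in (("BamHI", "GGATCC"), ("EcoRI", "GAATTC")):
    # str.find returns -1 when the site is absent
    print(name, "forward:", forward.find(site), "reverse:", reverse.find(site))
```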
Expression and Purification of Various Cbp Peptides-Escherichia coli BL21(DE3) cells were transformed with pGEX-6P-1 containing DNA encoding each of the Cbp peptides and grown in LB media. The cells were incubated at 25°C with shaking at 240 rpm for 12 h. The expression of GST-fused Cbp peptides was induced by the addition of isopropyl β-D-thiogalactopyranoside to a final concentration of 0.1 mM when the absorbance at 600 nm was between 0.3 and 0.6. The cells were further incubated overnight, harvested by centrifugation, and stored at −80°C. For purification, the cells were thawed at 4°C and disrupted by sonication in 100 mM Tris-HCl buffer (pH 7.4) containing 150 mM NaCl, 1 mM EDTA, 5 mM β-mercaptoethanol, and 1% Nonidet P-40 (lysis buffer). After centrifugation, the supernatant was applied to a GSTrap FF affinity column (GE Healthcare), and adsorbed proteins were eluted using lysis buffer (pH 9.0) containing 20 mM reduced glutathione. All purified GST-Cbp fusion proteins were phosphorylated using recombinant Fyn (Millipore) in 50 mM Tris-HCl buffer (pH 7.4) containing 3 mM MgCl2, 1 mM β-mercaptoethanol, and 4 mM ATP.
Expression and Purification of Csk-Full-length (amino acids 1-450) rat Csk was expressed using a baculovirus vector in Sf9 insect cells, as described previously (13). Cells were lysed using the above-mentioned lysis buffer containing EDTA-free protease inhibitor mixture (Nacalai Tesque) and disrupted using a Dounce homogenizer. The supernatant was collected by centrifugation and applied to a HiTrap Q HP anion exchange column (GE Healthcare) equilibrated with 50 mM Tris-HCl buffer (pH 8.0) containing 1 mM EDTA, 5% glycerol, 5 mM β-mercaptoethanol, and 0.02% octyl-β-D-glucoside (buffer A). The protein was eluted with a linear gradient of 75-300 mM NaCl. Protein-containing fractions were applied to a HiTrap SP HP cation exchange column (GE Healthcare) equilibrated with buffer A. The protein, eluted with a linear gradient of 50-300 mM NaCl, was applied to a Superdex 200 gel filtration column (GE Healthcare).
Expression and Purification of 13C- and 15N-Labeled Csk-SH2-To obtain 13C- and 15N-labeled proteins, E. coli Origami B(DE3) cells were transformed with pGEX-6P-1 (GE Healthcare) containing the gene for Csk-SH2 and were grown in M9 minimal medium containing 1.5 g/liter 15NH4Cl and 2.0 g/liter D-[U-13C6]glucose as the nitrogen and carbon sources, respectively. For the expression of uniformly 15N-labeled proteins, D-[U-13C6]glucose was replaced with 4.0 g/liter D-glucose, and 0.1% glycerol was added to the minimal medium. Cells were incubated at 37°C with shaking. The expression of proteins was induced by the addition of isopropyl β-D-thiogalactopyranoside to a final concentration of 0.5 mM when the absorbance of the cells reached an A600 of 0.6. Bacteria were grown for an additional 3 h at 37°C. Cells were collected by centrifugation and disrupted by sonication in 50 mM Tris-HCl buffer (pH 7.0) containing 400 mM NaCl, 0.1% Tween 20, 1 mM EDTA, and 1 ml of protease inhibitor mixture (Sigma). After centrifugation, the supernatant solution was passed through a DEAE-Sepharose anion exchange column (GE Healthcare) to remove contaminating nucleic acids. The eluate was applied to a GSTrap FF column (GE Healthcare), and the proteins were eluted with 50 mM Tris-HCl buffer (pH 8.0) containing 10 mM reduced glutathione. The eluate was dialyzed for a few hours at 4°C against 2 liters of the sonication buffer. After the addition of 20 µl of PreScission protease (2 units/µl; GE Healthcare) to the solution, it was further dialyzed over 17 h for cleavage of the fusion proteins. In some cases, GST was cut off by the same protease before elution of the fusion protein from the GSTrap FF column during an overnight incubation at 4°C. The proteins were concentrated using an Amicon Ultra-4 centrifugal filter unit (Millipore; molecular weight cutoff, 3000) and applied to a Superdex 75 gel filtration column (GE Healthcare) with 20 mM sodium phosphate buffer (pH 6.0) containing 50 mM NaCl. The obtained protein (Csk-SH2) ranged from Met-80 to Met-173, with a tag derived from the expression vector (H2N-Gly-Pro-Leu-Gly-Ser-) attached to the N terminus. The protein concentration was estimated from the absorbance at 280 nm (A280) using the calculated molar absorption coefficient of 16,000.
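The concentration estimate in the last sentence is a direct Beer-Lambert calculation; a sketch (the A280 reading below is a hypothetical example, while the molar absorption coefficient of 16,000 M^-1 cm^-1 is the value stated above):

```python
def molar_conc(a280: float, eps: float = 16000.0, path_cm: float = 1.0) -> float:
    """Beer-Lambert law: c = A / (eps * l), in mol/liter."""
    return a280 / (eps * path_cm)

# A hypothetical reading of A280 = 0.48 in a 1-cm cell corresponds to:
print(f"{molar_conc(0.48) * 1e6:.0f} uM")  # -> 30 uM
```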
Expression and Purification of 13C- and 15N-Labeled Cbp Peptide-The Cbp5 peptide was expressed in E. coli BL21(DE3) cells (Takara) as a fusion protein with GST, labeled with 13C and 15N stable isotopes. The procedures for expression and purification were almost the same as those for Csk-SH2. After elution from the GSTrap FF column, the eluate was diluted with an equal volume of 20 mM Tris-HCl buffer (pH 8.0). The solution was applied to a HiTrap DEAE FF anion exchange column (GE Healthcare), and the Cbp peptide was phosphorylated by the addition of Fyn and ATP prior to removal of GST. A solution containing Csk-SH2 and Cbp5 was applied to the gel filtration column, and the fractions containing the complex were collected.
Protein Binding Assays Using Gel Filtration Chromatography-Each peptide was mixed with equimolar intact Csk or Csk-SH2 in 100 mM Tris-HCl buffer (pH 8.5) containing 150 mM NaCl or 1 M (NH4)2SO4, 5% glycerol, 5 mM β-mercaptoethanol, and 0.02% octyl-β-D-glucoside, with or without ATP. The solution was applied to a Superdex 200 HR 10/30 (GE Healthcare) gel filtration column; A280 was monitored, and each fraction was analyzed by SDS-PAGE. The synthesized peptide and Csk were mixed at a molar ratio of 2:1, and the mixture was assayed as described above.
NMR Spectroscopy-All NMR spectra were acquired at 298 K, except the three-dimensional aromatic 13C-edited nuclear Overhauser effect spectroscopy (NOESY), which was performed at 288 K (20), using Bruker DRX-500 and DRX-600 instruments equipped with shielded triple-axis gradient triple-resonance probes, and DRX-800, AvanceII-800, and AvanceIII-950 instruments equipped with z-axis gradient triple-resonance cryogenic probes. For assignments of 1H, 13C, and 15N resonances, a series of two- and three-dimensional experiments were performed (21). Two-dimensional 1H-15N heteronuclear single-quantum correlation (HSQC), three-dimensional HNCACB, CBCA(CO)NH, HNCA, HN(CO)CA, HNCO, HN(CA)CO, and HBHA(CBCACO)NH spectra were acquired for assignment of backbone signals. For assignment of the aliphatic side-chain signals, two-dimensional 1H-13C constant-time HSQC, three-dimensional 15N-edited total correlation spectroscopy (TOCSY) with a mixing time of 79.4 ms, HCCH-TOCSY with a mixing time of 22.6 ms, and C(CO)NH and H(CCO)NH spectra with a mixing time of 22.6 ms were used. For assignment of the aromatic side-chain signals, two-dimensional 1H-1H double-quantum filtered correlation spectroscopy (22-24) and three-dimensional aromatic 13C-edited NOESY with a mixing time of 100 ms were used. The chemical shifts of the 1Hδ/ε spins in the aromatic residues were assigned by means of two-dimensional (H)C(CγCδ)Hδ and (H)C(CγCδCε)Hε experiments (25). For detection of intermolecular NOEs, a series of filter-related experiments were conducted, namely a 13C-filtered/13C-edited NOESY using 13C,15N-labeled Csk-SH2 complexed with the nonlabeled Cbp5 peptide, and a 13C-filtered/13C-edited NOESY and 13C,15N-filtered/15N-edited NOESY using the 13C,15N-labeled Cbp5 peptide complexed with nonlabeled Csk-SH2 (26).
Structure Calculations-The NOE peaks were manually assigned using Sparky. Distance restraints were generated according to the assignment of the NOE cross-peaks, and pseudo-atom corrections were applied to the upper-bound restraints involving methyl, methylene, and aromatic ring protons, as described previously (27). Torsion angle restraints were derived using TALOS+ (28) with the assigned chemical shifts of 1Hα, 13Cα, 13Cβ, 13CO, and 15N, in reference to the x-ray structure of intact Csk (14). Hydrogen bond restraints, 2.5-3.3 Å for N-O pairs and 1.8-2.3 Å for H-O pairs, were added only to secondary structural regions, as confirmed through the corresponding NOE patterns. One disulfide bond between Cys-122 and Cys-164 was confirmed through characteristic 13Cα and 13Cβ chemical shifts (29) and used as a restraint for structure calculations. Structure calculations with torsion angle dynamics were performed using CYANA-2.1 (30). A total of 100 structures were calculated with 40,000 steps, and after the root-mean-square deviation (r.m.s.d.) for the backbone atoms had reached 1.0 Å, the r^(-6) sum averaging method, originally implemented in the CYANA calculations, was applied instead of the pseudo-atom corrections. Finally, the 20 structures with the lowest target functions were selected. Molecular models were prepared using MOLMOL (31) and UCSF Chimera (32). Hydrogen bonds in the complex structure were defined using UCSF Chimera. The accessible surface area for each residue was calculated using MOLMOL. The electrostatic potential of the complex structure was calculated using Delphi (33). All amino acid sequence alignments were performed using ClustalW, version 2.0 (34).
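The backbone r.m.s.d. used as a convergence measure above can be illustrated with a generic optimal-superposition calculation; the following Kabsch-algorithm sketch is not CYANA's implementation, just the standard procedure:

```python
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """Backbone r.m.s.d. after optimal rigid-body superposition (Kabsch).
    P, Q: (N, 3) arrays of matched atom coordinates."""
    P = P - P.mean(axis=0)                    # center both coordinate sets
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)         # SVD of the covariance matrix
    d = np.sign(np.linalg.det(U @ Vt))        # guard against improper rotation
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # optimal rotation: q ~ R @ p
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

# Sanity check: a rigidly rotated copy of random coordinates gives r.m.s.d. ~ 0.
rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(kabsch_rmsd(P, P @ Rz.T))  # ~ 0.0
```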
zz-Exchange Spectroscopy-Chemical exchange associated with the interaction was monitored using 15N-labeled Csk-SH2 mixed with each of the nonlabeled phosphopeptides Cbp5, Cbp6, and Cbp7 at a molar ratio of 2:1, by two-dimensional 15N zz-exchange experiments. In the fitting equations, p_i represents the relative population of site i; k_ex represents the sum of the forward, k_1, and reverse, k_off, kinetic rate constants for interconversion between the sites; and R_1 represents the longitudinal relaxation rate of the 15N nucleus in the observed spin system. Note that k_1 is an apparent pseudo-first-order rate constant that corresponds to the product of the second-order association rate constant, k_on, and the concentration of free Cbp, i.e., k_1 = k_on[Cbp]_free. The total cross-peak intensity (I_BA(τ_m) + I_AB(τ_m)) for each mixing time point, τ_m, was fitted to the sum of the third and fourth equalities given in Equation 1 to obtain k_ex and R_1. p_A and p_B were estimated from the intensity ratio of the monomer and complex peaks in a two-dimensional 1H-15N HSQC spectrum. All NMR data were processed and analyzed using NMRPipe and Sparky, respectively.
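A sketch of how such a build-up fit can be set up (the model below is one standard two-site zz-exchange form assuming a shared R1; the paper's exact Equation 1 is not reproduced here, and all numbers are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

# Populations estimated from HSQC peak ratios (hypothetical values):
pA, pB = 0.67, 0.33

def cross_sum(t, C, kex, R1):
    """Summed cross-peak build-up, I_AB + I_BA, for two-site exchange."""
    return C * pA * pB * (1.0 - np.exp(-kex * t)) * np.exp(-R1 * t)

# Hypothetical mixing times (s) and noisy synthetic intensities:
t = np.array([0.01, 0.02, 0.05, 0.10, 0.20, 0.40])
I = cross_sum(t, 1.0, 12.0, 2.0) + np.random.default_rng(0).normal(0, 0.002, t.size)

(C, kex, R1), _ = curve_fit(cross_sum, t, I, p0=(1.0, 10.0, 1.0))
print(f"kex = {kex:.1f} s^-1, R1 = {R1:.2f} s^-1")
```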
Cells and Gene Transfer-A549 cells were cultured in Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS). Genes were transfected into the A549 cells using a previously described method (42). In brief, cDNAs encoding wild-type human Cbp and its mutants, in which Tyr-296 was substituted with Phe (Y296F) or both Tyr-296 and Tyr-314 were substituted with Phe (Y296F/Y314F), were subcloned into the retroviral vectors pCX4puro and pCX4bleo, respectively. All constructs were generated using a PCR-based procedure. Retroviral vectors were co-transfected into Plat-E cells with pGP + pE-Ampho (Takara Bio, Japan), an amphotropic retrovirus-packaging construct. Transformed cell populations were selected with puromycin or bleomycin, and mixed cell populations were assayed.
Immunoprecipitation and Immunoblotting-Cells were lysed on ice in 20 mM Tris-HCl (pH 7.4) buffer containing 250 mM NaCl, 0.5 mM EDTA, 2% n-octyl-β-D-glucoside, 1% Nonidet P-40, 10% glycerol, 1 mM Na3VO4, 50 mM NaF, and protease inhibitor mixture, and the lysates were clarified by centrifugation. Immunoprecipitation was performed as described previously (43). In brief, cell lysates were quantified; equal amounts of the total cell proteins were incubated with anti-Cbp antibody, and the immune complexes were collected on protein G-Sepharose beads (GE Healthcare). The beads were washed three times in lysis buffer and boiled in 50 µl of SDS sample buffer. Protein samples were separated by SDS-PAGE, transferred to PVDF membranes, immunoblotted with the indicated antibodies, and subjected to chemiluminescent detection (PerkinElmer Life Sciences). Anti-human Cbp antibody was generated by immunizing rabbits with a GST-Cbp (residues 331-430) fusion protein (44). The anti-Src antibody (Ab-1) was purchased from Calbiochem, the anti-Src pY418 antibody from BIOSOURCE, the anti-Csk antibody from Santa Cruz Biotechnology, and the anti-Tyr(P) antibody (4G10) from Upstate Biotechnology.
Gel Filtration Assay Using Cbp Peptides of Different Lengths-To identify the Cbp region responsible for the interaction with Csk, we assayed the interaction between intact Csk and each of the Cbp peptides of different lengths containing Tyr(P)-314 or Tyr-314, using gel filtration chromatography and SDS-PAGE. The deletion mutants of Cbp used for the assay, Cbp3 (195-328), Cbp5 (289-321), Cbp6 (302-321), and Cbp7 (312-321; Fig. 1A), were fused with GST and expressed in E. coli, and intact Csk was expressed in a baculovirus/insect Sf9 cell system. The GST-fused Cbp peptides were phosphorylated at Tyr-314 by the addition of recombinant Fyn and ATP. We confirmed by a peptide mass fingerprinting method that this reaction did not phosphorylate Tyr-296. Nonphosphorylated Cbp peptides were prepared as controls by mixing each peptide with Fyn in the absence of ATP.
Each of the GST-fused peptides, containing Tyr-314 or Tyr(P)-314, was mixed with Csk, and the solutions were applied to a gel filtration column. The eluted fractions were subjected to SDS-PAGE to examine their contents. To evaluate the affinity qualitatively, we used two types of running buffers in the gel filtration assay, one containing 150 mM NaCl and the other containing 1 M (NH4)2SO4. The results showed that phosphorylated Cbp3 and Cbp5 eluted from the column as one peak and formed a complex with Csk at both salt concentrations, whereas the nonphosphorylated peptides eluted separately from Csk or later than the corresponding phosphorylated peptides, without dominantly forming any complex with Csk (Fig. 1B). Moreover, the peak corresponding to Csk in the experiment using the phosphorylated peptides appeared earlier than that using the nonphosphorylated peptides, indicating that the former eluate contained Csk of higher molecular weight because of complex formation with the phosphorylated peptides (Fig. 1B). When Tyr-314 was not phosphorylated, none of the examined peptides dominantly formed a complex at either salt concentration. This indicates that phosphorylation of Tyr-314 is at least necessary for the interaction with Csk. Interestingly, shorter peptides lacking the N-terminal regions, namely phosphorylated Cbp6 and Cbp7, formed a complex with Csk during gel filtration in the running buffer containing 150 mM NaCl but not in the buffer containing 1 M (NH4)2SO4 (Fig. 1B). Because the interaction involving the phosphate group of Tyr(P)-314 was weakened by a high salt concentration, this interaction is expected to be electrostatic in nature. The differences observed in the elution patterns of Cbp5 and Cbp6 at high salt concentration indicate that a region within Cbp5 but outside Cbp6 (i.e., 289-302) generates an additional interaction beyond Tyr(P)-314 that enables high-affinity binding to Csk even at high salt concentrations.

Gel Filtration Assay Using the Y296F Mutant and Phospho-Tyr-296-Here, a question emerged as to which residues between 289 and 302 mediate the secondary interaction. We focused on Tyr-296 because it is the unique Tyr in this region, and we expected its phosphorylation to strengthen the interaction with Csk in combination with phosphorylation of the canonical Tyr-314. To study the possible secondary interaction region, we prepared a mutant of the GST-fused Cbp5 peptide in which Tyr-296 was replaced with Phe, and we assayed its interaction with Csk or Csk-SH2 using gel filtration chromatography as described above. Although both interactions were maintained in 150 mM NaCl, they were broken in the presence of 1 M (NH4)2SO4 (Fig. 1C). As revealed by NMR spectroscopy (see below), replacement of Tyr-296 with Phe in Cbp5 weakened its interaction with Csk and Csk-SH2, probably because of the absence of a hydrogen bond between the hydroxyl group of Cbp Tyr-296 and the guanidinium group of Csk-SH2 Arg-107.
Overall Structure of the Complex of Csk-SH2 with Cbp5-The biochemical data described above, including the gel filtration assays, indicate that Tyr(P)-314 interacts with Csk and that at least Tyr-296 contributes to the binding affinity. To obtain detailed structural information about this interaction, we determined the tertiary structure of the complex of Csk-SH2 with Cbp5 using multidimensional and multiresonance NMR spectroscopy. We confirmed by MALDI-TOF mass spectrometry that Csk-SH2 and phosphorylated Cbp5 (without GST) maintained a stable complex during gel filtration, whereas Csk-SH2 and nonphosphorylated Cbp5 did not. In the analysis, we used not only isotope-labeled Csk-SH2 but also isotope-labeled Cbp5, which was overexpressed as a GST fusion protein in E. coli.
Most of the phosphopeptides used to date for structural analyses have not been isotope labeled, because they have often been synthesized chemically in phosphorylated form. However, with the isotopically labeled phosphopeptide of Cbp, we successfully obtained a high-quality solution structure of the complex. The Cbp peptide was phosphorylated by Fyn and ATP during purification, and its purity was estimated to be >99% by mass spectrometry. The sequence of Cbp5 includes 33 amino acid residues, which is much longer in the N-terminal direction compared with the sequences of various other peptides that have been used for structural analyses of complexes with SH2 domains (Fig. 2A). This N-terminal sequence is not conserved in other ligands that interact with Csk-SH2 (Fig. 2B). Fig. 3A shows an overlay of the final 20 structures, which exhibited the best target functions among the 100 calculated structures (PDB code 2rsy). In Fig. 3B, a ribbon diagram of the representative structure with the minimum target function is presented. The coordinates of the backbone atoms of Cbp5 were well defined over the wide range between Tyr-296 and Lys-319 with respect to the coordinates of Csk-SH2. These gave an average r.m.s.d. to the mean structure of 0.26 ± 0.04 Å for the backbone of Tyr-296-Lys-319 (Cbp) and Met-80-Lys-171 (Csk-SH2; Table 1). Bound Cbp5, regardless of its length, had a compact conformation comprising an α-helix between Glu-307 and Met-313 and a turn between Arg-299 and Asp-302 (Fig. 3B).
Conventional Interaction Mode-As shown in Fig. 4A, the region of Cbp5 expected to be specifically recognized by Csk-SH2, pYSSV, adopted an extended conformation in the complex. In this conformation, the aromatic and phosphate groups of Tyr(P)-314 were accommodated in a positively charged pocket of Csk-SH2. The electrostatic potential of this pocket was provided by Arg-107 in βB and His-128 in βD (Fig. 4, B and C). Both residues are conserved in most SH2 domains and are known to be involved in interactions with the phosphotyrosines of associated ligands (45). In addition, the phosphate group of Tyr(P)-314 formed hydrogen bonds with the hydroxyl group of Ser-109 in βB, the amino (-NH₂) group of Asn-111 in βC, and the amide group of Tyr-116 through the hydroxyl group of Thr-117 in βC (Fig. 4A). Another intermolecular hydrogen bond was found between the amide group of the following (Tyr(P)+1) residue, Ser-315, and the carbonyl group of His-128. The (Tyr(P)+3) residue, Val-317, was accommodated through hydrophobic interactions in a pocket containing Ile-140 and Leu-163 (Fig. 4, A and B). This binding mode is the same as that commonly observed for other SH2 domains and their associated phosphopeptides (1, 46-49).
Another Novel Interaction Region between Csk-SH2 and Cbp5-Consistent with the results of our gel filtration experiments, structure determination of the complex revealed an additional interaction region extending in the N-terminal direction from Tyr(P)-314 of Cbp5 (Fig. 5). This region contained a unique α-helix between Glu-307 and Met-313 and a hydrophobic core comprising the aliphatic portions of Lys-297 and Arg-299 and the side chains of Leu-305, Ile-310, and Met-313. The hydrophilic side chains of Lys-297, Arg-299, Glu-301, Asp-302, Glu-307, Glu-308, and Glu-309 were exposed to the solvent (Fig. 5B). In addition, the residues from Arg-299 to Asp-302 adopted a turn conformation that exposed the hydrophilic side chains of Glu-300, Glu-301, and Asp-302 to the solvent. In the structure of Cbp5, at least in its complex with Csk-SH2, Tyr-296 and Tyr(P)-314 were positioned such that both side chains were inserted together into the pocket of Csk-SH2 with their aromatic rings arranged side by side (Fig. 5C). Tyr-296 was almost entirely (99 ± 1%) buried in the interior of the complex structure, and the accessible surface area of its main chain and side chain was 1 ± 1%. This contrasts with the accessible surface area of Tyr-296 in the free state, which is 30 ± 3%. Fig. 5D shows the residues of Csk-SH2 involved in the interaction with Tyr-296. The position of the Tyr-296 hydroxyl group allowed electrostatic interactions with the phosphate group of Tyr(P)-314, the main-chain amide group of Thr-110, and the main-chain amide and side-chain amino groups of Asn-111; it also formed a hydrogen bond with the guanidinium group of Arg-107. These data explain the decreased affinity observed in the gel filtration experiments when Tyr-296 was replaced with Phe, which lacks this hydroxyl group. Phosphorylation of Tyr-296 decreased the affinity between Cbp and Csk-SH2, probably because of steric hindrance and electrostatic repulsion between the bulky, negatively charged phosphate groups on Tyr-296 and Tyr-314. In addition, the complex structure revealed van der Waals contacts between Arg-299/Glu-300 and Thr-110/Asn-111, and the filtered NOESY experiments exhibited strong intermolecular NOE peaks between these residues. These data indicate that Arg-299 and Glu-300 are also involved in the interaction (Figs. 5E and 6).
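The burial figures quoted here come from accessible-surface-area calculations. A minimal sketch of one way to reproduce such numbers with Biopython's Shrake-Rupley implementation (Biopython >= 1.79) follows; the file name, chain ID, and the Tyr reference value used for relative accessibility are assumptions.

```python
# Minimal sketch: fractional solvent accessibility of Cbp Tyr-296 in the
# complex, in the spirit of the buried-surface numbers quoted in the text.
from Bio.PDB import PDBParser
from Bio.PDB.SASA import ShrakeRupley

MAX_ASA_TYR = 229.0  # theoretical max ASA for Tyr (Tien et al., 2013), Å²

structure = PDBParser(QUIET=True).get_structure("complex", "2rsy.pdb")
model = structure[0]                       # first model of the ensemble
ShrakeRupley().compute(model, level="R")   # per-residue SASA in Å²

tyr296 = model["B"][296]                   # Cbp5 assumed to be chain B
rel_acc = tyr296.sasa / MAX_ASA_TYR
print(f"Tyr-296 SASA = {tyr296.sasa:.1f} Å², relative = {rel_acc:.0%}")
# Repeating the calculation on the isolated peptide (after deleting the
# Csk-SH2 chain) gives the free-state accessibility for comparison.
```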
We compared the two-dimensional ¹H-¹⁵N HSQC spectra of Csk-SH2 before and after complex formation (Fig. 7A) and plotted the chemical shift differences against the amino acid sequence (Fig. 7B). No significant difference was observed in the overall peak distribution of the spectra. However, Thr-110 and Asn-111 exhibited weighted average chemical shift deviations of over 1.0 ppm (Fig. 7B). These data correspond to the interactions of Thr-110 with Tyr-296, Arg-299, and Glu-300 of Cbp and of Asn-111 with Tyr-296, Arg-299, and Tyr(P)-314 of Cbp (Figs. 4 and 5). Because we had labeled Cbp5 with isotopes, we also compared the two-dimensional ¹H-¹⁵N HSQC spectra of Cbp5 before and after complex formation (Fig. 7C) and plotted the chemical shift differences against the amino acid sequence (Fig. 7D). Complex formation led to a wider distribution of the Cbp5 amide ¹H-¹⁵N cross-peaks (Fig. 7C). In particular, Lys-297 exhibited a chemical shift change as large as 1.0 ppm (Fig. 7D). The structure of the complex indicated a hydrogen bond between the main-chain carbonyl group of Lys-297 and the side-chain amino group of Asn-111, indicating that Lys-297 also contributes to the interaction of Cbp5 with Csk-SH2 (Fig. 5E). Among the known ligands of SH2 domains, no residues corresponding to Tyr-296, Lys-297, Arg-299, and Glu-300 of Cbp or to Thr-110 and Asn-111 of Csk-SH2 are conserved (Fig. 2C). Likewise, in other SH2 domains, the residues corresponding to Thr-110 and Asn-111 are variable and, as expected, not conserved (45).

FIGURE 5. Another interaction region found in the structure of the complex of Csk-SH2 with Cbp5. A, the interaction region between Csk-SH2 and Cbp5 is circled in black in the ribbon diagram of the complex. Amino acid residues involved in the interaction are represented using a stick model. B, Cbp5 exhibits an amphipathic character, with hydrophilic residues exposed to the solvent and hydrophobic residues and aliphatic chains oriented inward. Positively and negatively charged residues are labeled in cyan and red, respectively, and hydrophobic residues in black. C, Tyr-296 and Tyr(P)-314 of Cbp5 (in khaki) are accommodated in the binding pocket of Csk-SH2. The two residues are represented using a stick model. An α-helix and a turn of Cbp formed on Csk-SH2 bring the two Tyr residues close to each other. D, residues of Csk-SH2 interacting electrostatically with the hydroxyl group of Tyr-296. Hydrogen bonds are shown as cyan lines. Amino acid residues of Cbp5 and Csk-SH2 are labeled in orange and black, respectively. E, residues additionally found to contribute to the second interaction site. Amino acid residues of Cbp5 and Csk-SH2 are labeled in orange and black, respectively. Hydrogen bonds are shown as cyan lines. Csk-SH2 is shown in green and Cbp5 in khaki.
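For the chemical shift perturbation analysis above, a common way to combine the ¹H and ¹⁵N shift changes into a single weighted deviation is sketched below. The 0.154 nitrogen weighting is one widely used convention and is an assumption here, since the paper's exact weighting scheme is not stated.

```python
# Minimal sketch of a weighted-average chemical shift deviation, as used to
# rank perturbed residues in Fig. 7. The weighting factor is an assumption.
import math

def csp(delta_h: float, delta_n: float, n_weight: float = 0.154) -> float:
    """Weighted 1H/15N chemical shift deviation (ppm) for one amide peak."""
    return math.sqrt(delta_h ** 2 + (n_weight * delta_n) ** 2)

# Illustrative only: a peak moving 0.9 ppm in 1H and 3.0 ppm in 15N scores
# ~1.01 ppm, i.e. above the 1.0 ppm level reported for Thr-110 and Asn-111.
print(f"{csp(0.9, 3.0):.2f} ppm")
```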
Involvement of Tyr-296 in the Cbp-Csk Interaction in Cells-To evaluate the physiological role of Tyr-296 in stabilizing the Cbp-Csk interaction, we examined the function of a Cbp mutant (Y296F), in which Tyr-296 was substituted by Phe, in human lung cancer A549 cells with marked up-regulation of c-Src. We have recently shown that the expression of Cbp is extensively down-regulated in these cells and that re-expression of Cbp inactivates c-Src and suppresses tumor growth by facilitating the recruitment of Csk to the membrane (44). In Cbp-expressing cells, the Cbp-Csk interaction was readily detected by immunoprecipitation (Fig. 8A). In contrast, the Y296F mutant bound only very weakly to Csk, and a double mutant in which both Tyr-296 and Tyr-314 were replaced with Phe (Y296/314F) completely failed to interact with Csk. Conversely, the Tyr phosphorylation levels of Cbp, which reflect the activation status of c-Src, were significantly augmented when the Cbp mutants were expressed. These observations demonstrate that Tyr-296 contributes to stabilizing the Cbp-Csk interaction in cells as well. We next assessed the role of Cbp in cellular function by examining the effects of Cbp expression on the colony-forming ability of tumor cells. As described previously, the expression of Cbp significantly suppressed colony formation in A549 cells (Fig. 8B). However, this growth suppression was significantly attenuated when Y296F was expressed, and the expression of Y296/314F did not induce significant suppression of colony-forming ability, revealing that Tyr-296 supports the tumor-suppressive function of Cbp by stabilizing the Cbp-Csk interaction.
Interestingly, another mutant, in which Tyr-296, Lys-297, Arg-299, and Glu-300 were replaced with Ala (AASAA), attenuated growth suppression even more than Y296F did. This effect was comparable with that of the mock control and of Y314F, despite retention of Tyr-314, which is critical for the Cbp-Csk interaction (Fig. 8C). As described above, these residues in the wild type specifically interact with Thr-110 or Asn-111 of Csk-SH2 and form the core of the second interaction region. Hence, it is clear that simultaneous replacement of all of these residues in AASAA decreased the Csk-binding affinity and that the second recognition region is required for tumor suppression via the Cbp-Csk interaction.
Cbp Binding to the SH2 Domain Changes Its Structure to the More Active Form-Previous x-ray analyses of intact Csk yielded crystal structures of both active and inactive forms, which probably exist in equilibrium in solution (14). Thus, for Csk to be fully active, its conformation must shift to the active form and be maintained there at least for the duration of the kinase reaction. Interestingly, past experiments showed that Csk activity increased upon interaction with a Cbp phosphopeptide consisting of only six residues (AMpYSSV) or ten residues (ISAMpYSSVNK) (13, 16-19). However, overlaying the solution structure of the complex on the crystal structures of intact Csk in the free form showed that the Cbp5 peptide is distant from the catalytic domain (Fig. 9A). Therefore, it is conceivable that the catalytic domain is activated through a conformational change in Csk-SH2 triggered by Cbp binding.
To investigate how the effect of Cbp binding is structurally transmitted to the SH2 domain, we examined the chemical shift perturbation observed for Csk-SH2 upon binding Cbp5. Mapping the perturbed residues on the structure of Csk-SH2 suggested that its B/C loop and βD strand undergo conformational or dynamic changes upon Cbp5 binding (Fig. 10A). We next compared the structure of Csk-SH2 in the complex with the crystal structure of the free form, previously determined by x-ray analysis (PDB code 3EAC). Consistent with the above-mentioned chemical shift changes, the B/C loops of these structures were distinctly different from each other (Fig. 10A). Furthermore, in our complex structure, hydrogen bonds were found in the B/C loop around Tyr(P)-314, including those between the amide group of Ser-109 and the carbonyl group of Asp-115, the hydroxyl group of Ser-109 and the amide group of Tyr-112, and the carbonyl group of Tyr-112 and the amide group of Asp-115 (Fig. 10B). The presence of these hydrogen bonds was supported by intramolecular NOEs between these residues (Fig. 9B). Interestingly, they were also observed in the crystal structure of the active form (Fig. 10C). The complex structure also showed that the phosphate group of Tyr(P)-314 forms hydrogen bonds with Ser-109 and Asn-111 as well as with Tyr-116 through the hydroxyl group of Thr-117 (Fig. 10B). It is noteworthy that the novel binding sites of Cbp, Tyr-296, Lys-297, Arg-299, and Glu-300, bound to Thr-110 and Asn-111 in the same B/C loop of Csk-SH2 (Fig. 5, D and E). These novel binding sites apparently contribute to maintaining a network of hydrogen bonds around the B/C loop, with the phosphate group of Tyr(P)-314 at its center.

FIGURE 8. Tyr-296 is required for a stable Cbp-Csk interaction and the tumor-suppressive function of Cbp. A, whole cell lysates (WCL) from A549 cells and from cells expressing wild-type Cbp, the Y296F mutant, or the Y296F/Y314F double mutant were immunoblotted with the indicated antibodies. Cbp was then immunoprecipitated (IP) from whole cell lysates with anti-Cbp and immunoblotted with the indicated antibody. Asterisks indicate the location of the immunoglobulin. B, the effects of the expression of Cbp and its YF mutants on the tumor growth of A549 cells were examined using colony formation assays in soft agar. Colonies were stained with 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide 10 days after plating. Colony numbers per cm² are presented as mean ± S.E. from three independent experiments. C, the effects of the expression of Cbp and another mutant, AASAA, on the tumor growth of A549 cells were examined as described in B. The mutated amino acids are shown in bold letters in the sequence above the graph.
In contrast, these hydrogen bonds were not observed in the crystal structure of the inactive form, in which the residue pairs in the B/C loop are too distant to form them. Thus, the structure of Csk-SH2 in complex with Cbp5 is more similar to the crystal structure of Csk-SH2 in the active form than to that in the inactive form (Fig. 10, C and D). Taken together, these observations show that Cbp binding to the SH2 domain generates a hydrogen bond network around the B/C loop, shifting its conformation toward the more active form.
NMR Relaxation Analysis-To evaluate changes in the structural dynamics of Csk-SH2 induced by Cbp binding, we measured ¹⁵N R1 and R2 relaxation rates and {¹H}-¹⁵N steady-state NOE values of uniformly ¹⁵N-labeled Csk-SH2 with and without uniformly ¹⁵N-labeled Cbp5 peptide. Resonances showing severe overlap or low intensity were excluded from the analysis. The average rotational correlation time τm was estimated as 5.8 and 8.4 ns for Csk-SH2 in the free and complex forms, respectively. Using these relaxation data, we analyzed the residue-specific internal motions of Csk-SH2 in the two forms in terms of the squared generalized order parameters (S²) obtained by the model-free approach (Fig. 11, A and B). Residues in the αB/βG loop had greater S² values (by about 0.1) in the bound form. These differences are mapped on the structure of Csk-SH2 (Fig. 11D). The S² values of Cbp5 in the complex exceeded 0.65 (mean 0.80 ± 0.01) over the range of 22 amino acids from Lys-297 to Asn-318 (Fig. 11B). These data indicate that a wide region, including the secondary interaction site, binds stably to Csk-SH2.
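The overall correlation times quoted above are commonly estimated from the mean R2/R1 ratio of rigid residues; a minimal sketch of that standard approximation follows. Only the 50.68 MHz ¹⁵N frequency comes from the text; the R2/R1 ratios in the example are illustrative values chosen to land near the reported 5.8 and 8.4 ns.

```python
# Minimal sketch: quick rotational-correlation-time estimate from the
# 15N R2/R1 ratio (Kay et al.-style approximation, valid for rigid
# residues away from chemical exchange).
import math

def tau_m_ns(r2_over_r1: float, nu_n_hz: float = 50.68e6) -> float:
    """Overall rotational correlation time (ns) from the 15N R2/R1 ratio."""
    return math.sqrt(6.0 * r2_over_r1 - 7.0) / (4.0 * math.pi * nu_n_hz) * 1e9

# Illustrative ratios chosen to land near the reported correlation times.
print(f"R2/R1 = 3.45 -> tau_m = {tau_m_ns(3.45):.1f} ns")  # ~5.8 ns (free)
print(f"R2/R1 = 5.90 -> tau_m = {tau_m_ns(5.90):.1f} ns")  # ~8.4 ns (complex)
```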
zz-Exchange Spectroscopy-To evaluate the affinity of Csk-SH2 for the secondary binding site of Cbp, we performed zz-exchange spectroscopy of ¹⁵N-labeled Csk-SH2 mixed with each of Cbp5, Cbp6, and Cbp7. Kinetic data were analyzed from the well resolved peaks of two residues: Gly-162, which binds directly to Cbp, and Val-172. Although Val-172 is located on the opposite side of the binding region, it exhibited an NMR peak pattern similar to that of Gly-162, presumably because of a small conformational change synchronized with Cbp binding. Cbp5, the longest peptide and the only one that includes the secondary binding site, exhibited a distinctly lower exchange rate (k_off = 0.80 s⁻¹ for Gly-162), whereas Cbp6 and Cbp7 exhibited ~8-29-fold higher k_off values for Gly-162 (k_off = 6.69 s⁻¹ for Cbp6 and 23.1 s⁻¹ for Cbp7) (Table 2). These results clearly indicate that the longer peptides interacted more strongly and that Cbp5 had the highest affinity because of the additional interaction, which Cbp6 and Cbp7 lack.
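The k_off values translate directly into mean bound-state lifetimes (τ = 1/k_off); the small sketch below does that arithmetic for the three peptides using the Gly-162 numbers from the text.

```python
# Minimal sketch: bound-state lifetimes and relative off-rates from the
# measured zz-exchange rate constants (Gly-162 values from the text).
K_OFF = {"Cbp5": 0.80, "Cbp6": 6.69, "Cbp7": 23.1}  # s^-1

for peptide, koff in K_OFF.items():
    lifetime_ms = 1000.0 / koff        # mean bound lifetime tau = 1/k_off
    fold = koff / K_OFF["Cbp5"]        # fold-faster dissociation vs. Cbp5
    print(f"{peptide}: tau = {lifetime_ms:6.0f} ms, {fold:4.1f}x faster off-rate")
```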
DISCUSSION
It is generally accepted that SH2 domains bind to ligands by specifically recognizing a Tyr(P) along with the +1 to +3 amino acid residues that follow in the C-terminal direction. Our complex structure of Csk-SH2 with Cbp5 showed that Tyr(P)-314 of Cbp and the three residues that follow toward the C terminus interact with Csk-SH2 in the same manner as observed for the many other phosphopeptides that interact with normal SH2 domains. This observation was supported by our gel filtration experiments and indicates that Csk-SH2 is a normal SH2 domain mediating a typical interaction pattern. However, we have also identified a novel binding region of Cbp for Csk-SH2. Analyses of intermolecular NOEs and ¹⁵N spin relaxation rates indicated that the region of Cbp that interacts with Csk-SH2 spans the 23 amino acid residues between Tyr-296 and Asn-318.
Even when nonlabeled Csk-SH2 was added to a solution of nonphosphorylated [¹⁵N]Cbp5, the two-dimensional ¹H-¹⁵N HSQC spectrum of Cbp5 exhibited neither chemical shift changes nor signal broadening of its amide resonance peaks. None of the amide peaks assigned to the region close to Tyr-296 in the sequence underwent any chemical shift perturbation either. This indicates that the novel N-terminal interaction region of Cbp alone is not sufficient to enable Cbp5 to bind Csk-SH2. In fact, addition of a phosphate group to Tyr-314 induced amide chemical shift changes not only of Tyr-314 but also of Asp-302 and Lys-289, which are distant from Tyr-314 in the amino acid sequence. In addition, our preliminary molecular dynamics simulation suggested a bent conformation of Cbp5, with Tyr(P)-314 close to Tyr-296 even in its free state, rather than the randomly fluctuating extended conformation often seen in short peptides (data not shown). Thus, there may be interplay between Tyr-314 and Tyr-296, and when Tyr-314 is not phosphorylated, Tyr-296 may also fail to interact with Csk-SH2. In either case, the result shows that phosphorylation of Tyr-314 dominantly controls association with and dissociation from Csk-SH2.

FIGURE 11. Relaxation analyses of ¹⁵N-labeled Csk-SH2 in the free and complex forms and of ¹⁵N-labeled Cbp5 in the complex form. The data were obtained at a ¹⁵N resonance frequency of 50.68 MHz, corresponding to a ¹H resonance frequency of 500.13 MHz. These relaxation data indicate that Cbp5 in the complex adopts a stable conformation across a wide range that includes the N-terminal region from Tyr(P)-314 (residues between Tyr-296 and Asn-318). A-C, plots of the generalized order parameter squared (S²) obtained from the model-free analysis of the ¹⁵N spin relaxation rates and {¹H}-¹⁵N steady-state NOE data at 11.7 tesla for Csk-SH2 (A) and Cbp5 (B), and of the changes in S² induced by Cbp binding (ΔS² = S² complex − S² free) (C), as a function of residue number. Open diamonds correspond to the free form of Csk-SH2 and filled diamonds to the complex form of Csk-SH2 and Cbp5. D, mapping of the changes in the generalized order parameter squared induced by Cbp binding (ΔS²) on the structure of Csk-SH2. Positive to negative changes are colored gradually from blue to red. Residues showing ΔS² above +0.09 or below −0.09 are colored blue or red, respectively.
Although the second binding region containing Tyr-296 certainly contributes to the interaction, its more important role appears to be enhancing the selectivity of Csk-SH2 for this ligand. Intact Cbp possesses six Tyr residues that can undergo phosphorylation (13). Among them, the three residues following Tyr-224 (pYASV) are almost the same as those following Tyr-314 (pYSSV), except for Ala versus Ser at the +1 position. In fact, another ligand of Csk-SH2, SIT, possesses this pYASV sequence as its binding site (50). In addition, the short peptide GDGpYXXXSPLLL, in which pYXXX corresponds to any of the sequences pY(T/A/S)(K/R/Q/N)(M/I/V/R), is reportedly recognized by Csk-SH2 (51). Furthermore, sequence alignment of Cbp5 with other ligands containing Tyr(P) residues showed that the N-terminal interaction region around Tyr-296 of Cbp5 is unique, whereas the corresponding regions in the sequences of other ligands are highly variable (Fig. 2B). These results indicate that a sequence comprising only a few residues following Tyr(P)-314 is too short to ensure strict selectivity by Csk-SH2. From extensive discussion of the specificity of many SH2 domains and their corresponding ligands, it has been concluded that SH2 domains generally have no distinctive selectivity for short ligands (52). However, this type of discussion has been based on peptide ligands of at most about 10 amino acids (2, 51, 53), probably because only such short regions of the peptides, analyzed exclusively by x-ray crystallography or NMR, take particularly rigid conformations in complex with the corresponding SH2 domains. In reality, however, a region other than that containing Tyr(P) may contribute to the specificity of binding and may play an essential role in the biological and physiological function of each SH2 domain, as demonstrated by the complex structure of Csk-SH2 and Cbp5. Indeed, the SH2 domain of phospholipase Cγ has been reported to recognize a region of its ligand wider than the conventional recognition sequence (54). Such wide-range recognition may prove to be a common characteristic of SH2 domains once longer peptide ligands or intact proteins are examined in structural analyses of these complexes.
According to a sequence alignment of various SH2 domains, including Csk-SH2, part of the additional interaction region found in Csk-SH2 (Thr-110 and Asn-111 in the B/C loop) is unique to Csk-SH2 and is not conserved in other SH2 domains (45). Interestingly, the N-terminal SH2 domain of phospholipase Cγ has been reported to interact with the activated Tyr kinase domain of FGFR1 through residues Thr-590-Val-592 in the B/C loop of the SH2 domain, which serve as a secondary binding site for Gln-606, Asp-755, Val-758, and Ala-759 of FGFR1 (54). This region in the B/C loop matches the one that we identified as the secondary interaction site in our experiments; in particular, Thr-590 of phospholipase Cγ-SH2 structurally corresponds to Asn-111 of Csk-SH2. Furthermore, in various ligands of SH2 domains, the residues corresponding to Tyr-296, Lys-297, Arg-299, and Glu-300 of Cbp, which interact with Thr-110 or Asn-111 of Csk-SH2, are not conserved (Fig. 2C). Taken together, residues in the B/C loops of many other SH2 domains may also be involved in interactions with their ligands, and the versatility of the residues in these loops may enhance the ligand specificity of each SH2 domain.
The two-dimensional ¹H-¹⁵N HSQC spectrum of Cbp5 in the free form exhibited most of the main-chain amide signals crowded within a ¹H resonance range between 8.0 and 8.5 ppm, suggesting that no distinct conformation dominates in the free state of Cbp (Fig. 7C). As described above, particular conformations may be sampled, but they are unstable. In the complex state, however, Cbp5 had a hydrophobic core (Leu-305, Ile-310, and Met-313 and the aliphatic parts of Lys-297 and Arg-299) and hydrophilic residues on the surface (Lys-297, Arg-299, Glu-301, Asp-302, Glu-307, Glu-308, and Glu-309) and was stabilized by adopting a characteristic conformation (Fig. 5B). These characteristics could not have been captured in studies of shorter peptide ligands in complex with SH2 domains. Importantly, this structural compaction allowed the Cbp5 peptide to interact with a limited region in the B/C loop of Csk-SH2, in which a characteristic hydrogen bond network was observed.
Our NMR study revealed that Cbp binding to Csk-SH2 forms a hydrogen bond network around the B/C loop, with the phosphate group of Tyr(P)-314 at its center. According to the crystal structure of intact Csk in the active form, Tyr-116 in this hydrogen bond network forms, together with Tyr-133, Leu-138, and Leu-149, a hydrophobic pocket that interacts with the catalytic domain through the methyl group of Ala-228 (Fig. 9C). In the inactive form, however, these residues are geometrically distant from each other. Interestingly, a recent study reported that mutation of Tyr-116, Leu-138, or Leu-149 in the hydrophobic pocket, or of Ala-228 in the kinase domain, to Ala or Gly decreases Csk activity to 10-20% of that of the wild type in both the absence and presence of a Cbp peptide (18). These experiments demonstrate that binding of Ala-228 to this hydrophobic pocket is essential for Csk activation. Cumulatively, these observations suggest that the hydrogen bond network between Tyr(P)-314 of Cbp (probably also involving the novel binding sites) and the above-mentioned residues of Csk-SH2 maintains the active form, stabilizes the packing between the hydrophobic pocket, including Tyr-116 of the SH2 domain, and Ala-228 in the catalytic domain, and consequently leads to allosteric activation of Csk (Figs. 10B and 9C).
In conclusion, we determined the solution structure of the complex of Csk-SH2 with Cbp. This structure revealed the presence of a novel binding region in Cbp that is separate from the canonical binding region. Furthermore, mutational analysis in cells showed that both the canonical and the novel binding sites are required for tumor suppression via the Cbp-Csk interaction. These findings indicate that the conventionally known interaction sites in target ligands do not, by themselves, explain the biological functions of the associated SH2 domains.
First record of Leptopilina japonica Novković & Kimura, 2011 (Hymenoptera: Figitidae) in Germany, a parasitoid of the Spotted Wing Drosophila Drosophila suzukii (Matsumura, 1931) (Diptera: Drosophilidae)
Two years after the first European record in Italy, we report the first occurrence of the parasitoid wasp Leptopilina japonica Novković & Kimura, 2011 (Hymenoptera: Figitidae) in Germany. The species is a larval‐pupal parasitoid of Drosophila suzukii (Matsumura, 1931) (Diptera: Drosophilidae), which is a widespread invasive and economically important pest of soft‐skinned fruit. In total, we found 29 specimens of L. japonica in five different locations in southern and western Germany in the years 2021, 2022 and 2023. We examined the specimens morphologically and generated their DNA barcodes for identification. In three of the locations, L. japonica was sampled from raspberries. In two locations, L. japonica was caught in two and three consecutive years, respectively, which indicates adventive establishment. As D. suzukii and L. japonica originate from the same region in Asia, the possible establishment of L. japonica could be a case of unintentional biological control in Germany. In addition to this first record in Germany, we present a diagnosis of L. japonica to distinguish the species from the rest of the European Leptopilina fauna.
Known pupal parasitoids of SWD in Europe include Trichopria drosophilae (Perkins, 1910) (Hymenoptera: Diapriidae) and Pachycrepoideus vindemmiae (Rondani, 1875) (Hymenoptera: Pteromalidae) (Englert & Herz, 2016; Knoll et al., 2017; Kremmer et al., 2017), and these are currently being explored for use in augmentative biological control of SWD in Germany (Eben et al., 2022). European larval-pupal parasitoids of SWD have very low reproduction rates due to host resistance and cause only low levels of host mortality (Kruitwagen et al., 2021; Poyet et al., 2013); these species may even fail to attack SWD when it is offered as a host (Chabert et al., 2012).
Both species, L. japonica and Ganaspis brasiliensis, have followed their host to western parts of North America and have been present there since 2016 (L. japonica) and 2019 (G. brasiliensis) (Abram et al., 2020). They have since established adventive populations (Beers et al., 2022). G. brasiliensis is also found in several Central and South American countries (Buffington & Forshage, 2016; Gallardo et al., 2022; Gonzalez-Cabrera et al., 2020) and has been released since 2021 as part of a classical biological control programme in Italy (Fellin et al., 2023).
L. japonica is less widely distributed, and the only report outside its area of origin in Southeast Asia (Novković et al., 2011) or North America (Abram et al., 2020) is from Italy in 2019 (Puppato et al., 2020).
In this study, we report the presence of L. japonica in Germany for the first time; this is also the northernmost record of the species worldwide. The arrival of L. japonica may open the possibility of enhanced natural regulation of SWD in the future.
MATERIALS AND METHODS
We analysed specimens from five different locations in western and southern Germany; details on locations and collection methods are given in Table 1. From three of the locations, wasps were reared from drosophilid-infested berries; in one of them, the berries were exclusively infested by SWD. The collections in Veitshöchheim were conducted to monitor the parasitoid complex of SWD in Germany.
Sequences of the CO1 barcode region were obtained from five specimens (Table 1) using standard procedures at Advanced Identification Methods (AIM, Leipzig, Germany; see Morinière et al., 2015) and the Leibniz Institute for the Analysis of Biodiversity Change (LIB, Museum Koenig Bonn, Germany; see Jafari et al., 2023 for the lab protocol and Astrin & Stüben, 2008 for the LCO1490-JJ forward and HCO2198-JJ reverse primers). We combined our CO1 barcode sequences with those of Leptopilina specimens deposited in the DROP database (Lue et al., 2021), including the sequences published together with the description of L. japonica (Novković et al., 2011). Using Geneious (vers. 7.1.9, Biomatters Ltd.), we aligned the sequences with MUSCLE and generated a neighbour-joining tree (Tamura-Nei). Based on this tree, we evaluated the conspecificity of the sampled specimens and of those with data deposited at DROP from their distance-based clustering.
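As a rough stand-in for the Geneious workflow, the sketch below builds a distance matrix and a neighbour-joining tree from pre-aligned CO1 barcodes with Biopython. Biopython's DistanceCalculator does not offer the Tamura-Nei model used by the authors, so the simple 'identity' model is substituted here, and the input file name is an assumption.

```python
# Minimal sketch: pairwise identity and NJ clustering of CO1 barcodes.
# "co1_aligned.fasta" is an assumed file of pre-aligned sequences
# (own specimens plus DROP reference sequences).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("co1_aligned.fasta", "fasta")
dm = DistanceCalculator("identity").get_distance(alignment)

# Pairwise similarity to a chosen reference, as in the 97.8-100% figures.
ref = alignment[0].id
for name in dm.names[1:]:
    print(f"{ref} vs {name}: {100 * (1 - dm[ref, name]):.1f}% identity")

tree = DistanceTreeConstructor().nj(dm)
Phylo.draw_ascii(tree)  # specimens falling in one cluster are treated as conspecific
```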
In addition to the CO1 barcode analysis, we morphologically examined the barcoded specimens and 24 additional specimens from the same localities using a Leica M205C stereomicroscope. We used the latest treatments of the genus for the Western Palearctic region (Forshage & Nordlander, 2008; Nordlander, 1980; Novković et al., 2011; van Alphen et al., 1991), including the relevant terminology. Our diagnosis of L. japonica is based on information from these sources and verified with specimens of all included taxa except L. australis (Belizin, 1966), since no specimen of that species was available to us; that species can, however, be distinguished from L. japonica through the literature alone.
RESULTS

The results of the nucleotide sequence analysis are congruent with the morphological identification. Our sequences match the reference sequences of L. japonica from DROP with 97.8% to 100% similarity (excluding the cluster around the TP strain, since none of our sequences fell within that cluster and its specimens were described as a separate subspecies of L. japonica). The minimum similarity among the sequences in the DROP dataset (i.e., the alignment without our sequences) is 98.2%.
DISCUSSION
In the same way that pests spread without intentional human interference, their natural enemies may spread as well (Weber et al., 2021).
In the case of non-native parasitoid Hymenoptera in Europe, 45% originate from unintentional introductions (Weber et al., 2021).
These unintentional introductions may substantially reduce pest populations, as in the case of Aphelinus certus Yasnosh, 1963 (Hymenoptera: Aphelinidae) in the United States (Kaser & Heimpel, 2018), or serve as an unintended test arena for a later classical biological control programme, as in the case of Trissolcus japonicus (Ashmead, 1904) (Hymenoptera: Scelionidae) in Italy (Falagiarda et al., 2023).
Since L. japonica was discovered in 2021 at three different locations up to 220 km apart, it is likely that the species is even more widespread or has been (accidentally) introduced multiple times. In two of the locations, L. japonica was found in consecutive years, which may indicate that adventive populations are established. The specimens in Germany, especially those in Bonn, represent, to the authors' knowledge, the northernmost findings of L. japonica (latitude 50.7). This is possibly due to the mild mesoclimate of the Rhine valley and the favourable conditions of the urban environment. The small number of specimens recorded in this study reflects the unsystematic nature of the collecting efforts and probably does not relate to actual population sizes. Although not restricted to the Drosophila melanogaster species group, the known host range of L. japonica is relatively narrow (Daane et al., 2021; Kimura & Novković, 2021).

FIGURE 1 Records of Leptopilina japonica in Germany.

FIGURE 2 (a, b) Leptopilina japonica habitus (lateral view, a) and mesoscutellum (dorsal view, b). The arrows indicate diagnostic characters mentioned in the diagnosis. (c, d) L. heterotoma habitus (lateral view, c) and mesoscutellum (dorsal view, d).
Isolation, Enrichment and Metagenomic Characterization of Simultaneous DDT and Lindane Degrading Microbial Consortium
Organochlorine pesticides (OCPs) such as Lindane and DDT (dichlorodiphenyltrichloroethane) have been used extensively for agricultural purposes, primarily for pest management, and DDT is still the one "sought after" for public health care programs to control vector-borne diseases like malaria in developing nations. Because of their recalcitrant nature, OCPs degrade slowly and pose adverse health effects to the environment and community. Residues of OCPs have been detected in soil, water and air, leading to potential bioaccumulation in food chains, and they are considered persistent organic pollutants. Microorganisms have been found to be potential biodegraders of organochlorine pesticides. In this study, microbial populations from aquatic systems, the rivers Yamuna (North India) and Godavari (South India), were isolated and enriched until a Lindane- and DDT-tolerant population was established. Screening of the population for bioremediation thresholds was done using 5 ppm each of DDT and Lindane. The enriched microbial cells formed the consortium that was subjected to metagenomic analysis to identify the organisms to species level. The 16S amplicon sequencing identified 871 species in the consortium and established its biodiversity. The defined consortium was able to degrade DDT and Lindane simultaneously at up to 30 ppm, with varying orders of pesticide dissipation.
Organochlorine pesticides (OCPs) have been used indiscriminately for agricultural purposes, primarily for pest management, and OCPs such as DDT (dichlorodiphenyltrichloroethane) are used in public health programs to control vector-borne diseases like malaria. Even though OCPs have been proscribed or banned in many countries, being persistent organic pollutants and recalcitrant in nature, they pose a plethora of environmental and health concerns [1, 2]. It is crucial and imperative to develop methods and strategies for removing them from the environment [3].
DDT and Lindane (γ-hexachlorocyclohexane) are the major OCPs that have been used ubiquitously in developing nations [4]. A major sink for persistent organic pollutants discharged into the environment is the water ecosystem, i.e., rivers and lake beds [5]. Endrin aldehyde, endosulfan sulfate and DDT were detected at the highest percentages in the River Yamuna, which demonstrates the pollution of the river with pesticide residues [6]. DDT, trans-chlordane and endosulfan sulfate were the dominant OCPs in soil sediments from the River Godavari [7]. Microorganisms have been found to be potential degraders of organochlorine compounds; notably, soil inhabitants belonging to the genera Bacillus, Pseudomonas, Arthrobacter and Micrococcus were found to be effective biodegraders [8]. Several persistent organochlorine pesticides have been detected in other rivers, where higher concentrations of endosulfan sulfate and DDT were found, and their presence has even been detected in drinking and bottled water [9]. Hence, it becomes imperative to remove these pollutants from the environment, primarily from water-ecosystem sinks, to eliminate their residues.
In the current study, a novel microbial consortium sampled from the River Yamuna (North India) and the River Godavari (South India) was enriched until a Lindane- and DDT-tolerant population was established. This consortium was characterized by metagenomics, with 16S amplicon sequencing on the Illumina Next Generation Sequencing (NGS) platform [10, 11]. The biodiversity of the riverine metagenome was established using QIIME (Quantitative Insights Into Microbial Ecology) [12]. The defined microbial consortium was able to degrade DDT and Lindane simultaneously.
Chemicals
Lindane (γ-HCH) was of 97% purity and was obtained from Sigma-Aldrich, USA. DDT, 99.4% pure, was donated by Hindustan Insecticides Ltd, India. All other chemicals and reagents used in the study were of analytical grade and purchased from standard manufacturers.
Isolation and Enrichment of Microbial Consortium
Water samples from the rivers Yamuna (North India) and Godavari (South India) were collected in clean bottles and brought to the lab sealed. These water samples were mixed and incubated with 1% (w/v) peptone in a rotary shaker maintained at 150 rpm under ambient conditions. Once the microbial growth was sufficient to make the broth highly turbid, the culture was starved for a week, followed by addition of 0.5% (w/v) peptone, 2 ppm Lindane and 2 ppm DDT. The growing culture was left shaking for a month, followed by addition of 0.1% peptone, 5 ppm Lindane and 5 ppm DDT. After shaking for another month, the culture was shaken continuously only in the presence of gradually increasing concentrations of Lindane and DDT for many months, until a stable Lindane- and DDT-tolerant population was established in the flask [13]. These enriched microbial cells formed the consortium used in this study.
Screening of DDT and Lindane Tolerant Microbial Consortium
The established consortium was inoculated into a mixture of 5 ppm Lindane and 5 ppm DDT in 25 mL sterile RO water in 250 mL Erlenmeyer flasks. The flasks were kept in a rotary shaker set at 150 rpm and maintained under ambient conditions. Whole-flask samples were drawn at 0 h, 24 h, 48 h and 72 h, acidified by adding 3-5 drops of fuming nitric acid, and extracted twice with an equal volume of dichloromethane (DCM). The organic layers were pooled after passing through anhydrous sodium sulfate and activated Florisil. The organic solvent was evaporated under ambient conditions, and the residual pesticides were re-suspended in HPLC-grade acetone before transfer into a microfuge tube, followed by complete evaporation of the acetone at room temperature (RT) [14]. The residual pesticides were then dissolved in a known volume of HPLC-grade acetone for further analysis by thin layer chromatography (TLC) and GC-MS/MS.
Estimation of Residual Pesticide Concentration

Thin Layer Chromatography
TLC was performed on 0.25 mm thick silica gel G plates with a cyclohexane mobile phase. The thin layers were air-dried before the residual pesticide spots were detected with an o-tolidine spray (2% in acetone) in bright sunlight; the spots appeared peacock green. The area under each spot was used to quantify the residual Lindane and DDT, using the relationship that the square root of the area is directly proportional to the log of the concentration [15]. The results were further confirmed by GC-MS/MS.
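A minimal sketch of the quantification rule just stated, i.e. fitting sqrt(spot area) against log10(concentration) for standards and inverting the fitted line for unknowns, is given below; all calibration numbers are invented for illustration.

```python
# Minimal sketch: TLC calibration assuming sqrt(area) is linear in
# log10(concentration). All numbers are illustrative, not measured data.
import numpy as np

std_conc = np.array([1.0, 2.0, 5.0, 10.0])    # ppm standards (assumed)
std_area = np.array([9.0, 16.0, 27.0, 38.0])  # spot areas, mm^2 (assumed)

# Least-squares fit: sqrt(area) = slope * log10(conc) + intercept
slope, intercept = np.polyfit(np.log10(std_conc), np.sqrt(std_area), 1)

def conc_from_area(area_mm2: float) -> float:
    """Invert the calibration line: spot area -> concentration in ppm."""
    return 10 ** ((np.sqrt(area_mm2) - intercept) / slope)

print(f"spot of 22 mm^2 -> {conc_from_area(22.0):.1f} ppm")
```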
Gas Chromatography-MS/MS
The residual pesticides were quantified by gas chromatography using a GC-MS/MS triple-quadrupole instrument, Model 7000D (Agilent Technologies Ltd) [16]. An HP-5ms column (Agilent 19091S, EPC) was used for the analysis of residual pesticides. These columns have low-bleed characteristics, excellent inertness for active compounds, and an improved signal-to-noise ratio for better sensitivity. The sample was dissolved in 1 mL MS-grade ethyl acetate, and appropriate dilutions were used for analysis.
The injector was maintained from an initial set point of 70°C to a post-run temperature of 280°C, while the column was programmed with a pressure of 30.797 psi, a flow of 3.1793 mL/min, an average velocity of 54.506 cm/sec and an initial temperature of 70°C. The ion source was electron ionization (EI) with a source temperature of 300°C for the triple-quadrupole acquisition method.
Metagenomic Analysis of the Consortium
The microbial consortium enriched using organochlorine pesticides was subjected to a 16S metagenomic study for species-level identification on the Illumina platform. DNA was isolated using the Xcelgen bacterial gDNA kit, and the quality of the gDNA was checked on a 0.8% agarose gel (5 µL loaded) for a single intact band. The gel was run at 110 V for 30 min. 1 µL of each sample was loaded in a NanoDrop 8000 to determine the A260/280 ratio. The DNA was quantified using the Qubit dsDNA HS assay kit (Life Technologies); 1 µL of each sample was used for determining the concentration with a Qubit 2.0 fluorometer. The amplicon library was prepared using the Nextera XT Index Kit (Illumina Inc.) following the 16S metagenomic sequencing library protocol, and amplicon sequencing was performed on a HiSeq 2500 (Illumina).
RESULTS AND DISCUSSION
A turbid microbial culture was shaken continuously in the presence of gradually increasing concentrations of Lindane and DDT for many months until a stable Lindane- and DDT-tolerant population was established in the flask (Figure 1). These enriched microbial cells formed the consortium used in this study. As the objective of the study was to genotypically characterize the consortium to species level, no growth-based techniques were used for screening individual organisms, since such techniques would not be able to identify unculturable microorganisms [17].
The established consortium was incubated with a mixture of 5 ppm Lindane and 5 ppm DDT in 25 mL sterile RO water, and whole-flask samples were drawn at 0 h, 24 h, 48 h and 72 h. The residual pesticides were dissolved in a known volume of HPLC-grade acetone for analysis by thin layer chromatography (TLC) and GC-MS/MS. The consortium was able to degrade DDT and Lindane effectively and simultaneously, with varying orders of pesticide dissipation (Figure 2). The 72 h sample showed around 69% of the DDT and 75% of the Lindane effectively degraded, as determined by thin layer chromatography. The defined microbial consortium was then subjected to metagenomic analysis to identify the organisms to species level. The 16S amplicon sequencing identified 871 species in the consortium and established its biodiversity [19]. The taxonomic distribution at the phylum level was determined using QIIME (Quantitative Insights Into Microbial Ecology) after analyzing the 16S metagenome data from the Next Generation Sequencing (NGS) platform in Python (Figure 3).
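The percent-degradation figures follow from simple arithmetic on initial and residual concentrations; the sketch below reproduces the ~69% and ~75% values using illustrative residual amounts (the actual measured residuals are not given in the text).

```python
# Minimal sketch: percent degradation from initial and residual pesticide
# concentrations. Residuals are illustrative values chosen to reproduce
# the ~69% (DDT) and ~75% (Lindane) figures at 72 h.
initial_ppm = {"DDT": 5.0, "Lindane": 5.0}
residual_ppm_72h = {"DDT": 1.55, "Lindane": 1.25}  # assumed, not measured

for pesticide, c0 in initial_ppm.items():
    ct = residual_ppm_72h[pesticide]
    degraded = 100.0 * (c0 - ct) / c0
    print(f"{pesticide}: {degraded:.0f}% degraded after 72 h")
```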
NCBI Sequence Accession Number
The complete DNA sequences obtained have been deposited in the National Center for Biotechnology Information (NCBI) Sequence Read Archive under BioProject ID PRJNA420925.
The metagenomic analysis identified 871 species in the microbial consortium. The species with the highest abundance ratios in the defined consortium were found to be Brevundimonas diminuta, Alcaligenes faecalis, Stenotrophomonas acidaminiphila, Bacillus cereus and Desulfosporosinus meridiei (Table 1).
Extensive research has been carried out on bioremediation using a single organism and a single OCP compound [20, 21]. The present study, however, identified a mixed microbial population from the rivers Yamuna and Godavari, characterized to species level on the Illumina metagenomics platform, that could degrade DDT and Lindane simultaneously at concentrations up to 30 ppm (data not shown).
The biodiversity of the microbial population and its pesticide-removal capability support the use of this defined consortium as a viable strategy for the remediation of mixtures of organochlorine pesticides. Further, the consortium was found capable of degrading a mixture of organochlorine pesticides simultaneously, a crucial property that can be a promising solution for the removal of DDT and Lindane pesticide mixtures in aquatic ecosystems, eliminating pesticide residues and thereby improving environmental conditions.
Fig. 3. Pie chart showing the relative abundance of each phylum within the microbial consortium.
Table 1. Species with Higher Abundance Ratio in the Microbial Consortium.
Clinical Evaluation and Exploration of Mechanisms for Modified Xiebai Powder or Modified Xiebai Powder Combined with Western Medicine in the Treatment of Pneumonia
Objective. To systematically evaluate the clinical efficacy of modified Xiebai Powder, alone or combined with Western medicine, in the treatment of pneumonia and to explore its potential mechanism of action. Methods. Meta-analysis was used to screen eligible literature on randomized controlled trials (RCTs) of Xiebai Powder in the treatment of pneumonia, and Review Manager 5.3 software was used for statistical analysis of the data. Based on the results of the meta-analysis, the active ingredients in Xiebai Powder and their therapeutic targets, disease-related targets, and intersection targets were screened using network pharmacology methods, and their biological processes and key signaling pathways were analyzed using bioinformatics tools. Molecular docking was carried out to verify and predict the mechanisms of Xiebai Powder combined with Western medicine in the treatment of pneumonia. Results. A total of 16 papers were screened out, with a total of 1,465 patients. The results of the meta-analysis showed that modified Xiebai Powder, alone or combined with Western medicine, was superior to conventional Western medicine in terms of clinical efficacy, shortening the disappearance time of symptoms (body temperature, cough, and pulmonary rales) and reducing the level of C-reactive protein, and the incidence of adverse reactions was significantly reduced. A total of 40 active ingredients in Xiebai Powder and 285 therapeutic targets of Xiebai Powder combined with azithromycin (after deduplication) were screened out from the databases. KEGG enrichment analysis showed that Xiebai Powder combined with azithromycin might act in the treatment of pneumonia through the IL-17 signaling pathway, tumor necrosis factor signaling pathway, C-type lectin receptor signaling pathway, Toll-like receptor signaling pathway, and HIF-1 signaling pathway. Conclusions. Modified Xiebai Powder, alone or combined with azithromycin, has better effects in treating pneumonia, and modified Xiebai Powder combined with azithromycin may act in treating pneumonia through several pathways, such as the IL-17 signaling pathway.
Introduction
Pneumonia is an acute inflammation of the lower respiratory tract and lung parenchyma, and its main clinical symptoms are fever, cough, and shortness of breath [1]. It is usually caused by bacterial and viral infections [2] and is common in children, the elderly, and people with poor immune function. The high incidence and high mortality of pneumonia worldwide pose a great threat to human life. Antibiotics, which are widely used in clinics at present, are associated with many adverse reactions and with drug resistance in the treatment of pneumonia. For example, azithromycin preparations are widely used in the clinic and are among the mainstream antibacterial drugs for community-acquired pneumonia (CAP) in children. Regarding the minimum applicable age for azithromycin treatment, according to the guidelines for the management of community-acquired pneumonia in children (revised in 2013), the efficacy and safety of azithromycin have not been established for children younger than six months with CAP, so it should be used with caution [3]. Increasing numbers of reports have shown that TCM treatment of pneumonia has the advantages of a lower incidence of adverse reactions and outstanding clinical efficacy.
Xiebai Powder is a classic prescription from Key to Therapeutics of Children's Diseases, written by Qian Yi in the Northern Song Dynasty. Key to Therapeutics of Children's Diseases is an early monograph on the syndrome differentiation and treatment of pediatric diseases in China. It is also the earliest extant pediatric book in the world preserved in its original form. It is available in Japan and some other countries and has a wide range of influence [4]. Xiebai Powder contains four herbs, Cortex Mori, Cortex Lycii, licorice root, and japonica rice, which have the functions of clearing lung heat, relieving cough, and relieving asthma [5]. In Xiebai Powder, the amounts of Cortex Mori and Cortex Lycii are one liang each, that of licorice is one qian, and that of japonica rice is a zuo [6]. Cortex Mori, Cortex Lycii, and licorice root are the three plant-derived traditional Chinese medicines in Xiebai Powder. They are specified and recorded in the Chinese Pharmacopoeia and the Japanese Pharmacopoeia, where their sources, uses, and active ingredients are given [7]. Modified prescriptions of Xiebai Powder are widely used in the modern clinic for the treatment of pneumonia and other pulmonary diseases [8], but the current evidence base is insufficient, and the mechanism of action is not clear. The clinical efficacy of modified Xiebai Powder, alone or combined with Western medicine, in treating pneumonia was evaluated in this study by collecting published literature on RCTs. Based on the meta-analysis, the potential mechanisms of Xiebai Powder in treating pneumonia were analyzed using network pharmacology to provide a basis for subsequent studies.
Materials and Methods
2.1. Meta-Analysis

2.1.1. Inclusion Criteria. RCTs published in either Chinese or English were included; adult patients in the studies had to meet the diagnostic criteria published in the Chinese Journal of Tuberculosis and Respiratory Diseases in 2016 [9], and children had to meet the diagnostic criteria published in the Chinese Journal of Pediatrics in 2013 [3]. The English databases used included PubMed, the Cochrane Library, and ScienceDirect. Relevant Chinese and English literature from the establishment of the databases to January 2022 was obtained. The keywords for English and Chinese retrieval were "Xiebai Powder (泻白散)," "pneumonia (肺炎)," etc.
2.1.2. Exclusion Criteria.
2.1.6. Data Extraction and Quality Assessment. After the eligible literature was obtained, relevant information such as title, first author, publication time, sex of patients, sample size, disease name, disease classification, treatment measures, and outcomes was extracted. The RCTs were reviewed using the Cochrane Handbook criteria for random sequence generation, allocation concealment, blinding (of subjects, experimenters, and evaluators), completeness of outcome data, selective reporting, and other biases [10]. Assessments of "low risk," "high risk," or "unclear risk" were given according to the specific content of the literature.
2.1.7. Data Statistics and Analysis. Review Manager 5.3 software was used for statistical analysis. Enumeration data were analyzed with the odds ratio (OR) and measurement data with the mean difference (MD), with 95% confidence intervals (CI) used for efficacy analysis and statistics. P ≤ 0.05 indicated a statistically significant difference. The Chi-square test was used to evaluate statistical heterogeneity between studies. P ≥ 0.1 and I² ≤ 50% indicated no important heterogeneity, and the analysis was performed using a fixed-effects model; otherwise, a random-effects model was used, combined with subgroup analysis or sensitivity analysis to find the source of heterogeneity.
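For readers unfamiliar with the statistics named here, the sketch below implements fixed-effect inverse-variance pooling of mean differences together with Cochran's Q and I²; the per-study inputs are illustrative, not data from the included RCTs.

```python
# Minimal sketch: fixed-effect inverse-variance pooling of mean differences
# and the I^2 heterogeneity statistic. Study inputs are illustrative.
import math

studies = [(-2.1, 0.15), (-1.9, 0.20), (-2.3, 0.25)]  # (MD, SE) per study

w = [1.0 / se ** 2 for _, se in studies]              # inverse-variance weights
md_pooled = sum(wi * md for (md, _), wi in zip(studies, w)) / sum(w)
se_pooled = math.sqrt(1.0 / sum(w))
ci = (md_pooled - 1.96 * se_pooled, md_pooled + 1.96 * se_pooled)

q = sum(wi * (md - md_pooled) ** 2 for (md, _), wi in zip(studies, w))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # Higgins I^2

print(f"pooled MD = {md_pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
# I^2 <= 50% with P >= 0.1 on Q would justify the fixed-effects model,
# mirroring the decision rule in the text.
```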
2.2. Network Pharmacological Study of Xiebai Powder Combined with Western Medicine in the Treatment of Pneumonia

2.2.1. Screening of Active Ingredients of Xiebai Powder. According to the published literature [11, 12], and taking "Cortex Mori," "Cortex Lycii," and "licorice root" as the keywords, the chemical ingredients of the three herbs were retrieved from the TCMSP database (https://old.tcmsp-e.com/tcmsp.php), and the active ingredients were screened according to oral bioavailability (OB) ≥ 30% and drug-likeness (DL) ≥ 0.18.
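The ADME screen described above is a simple threshold filter; a minimal sketch over a hypothetical TCMSP export follows. The OB and DL values shown are illustrative stand-ins, not verified database entries.

```python
# Minimal sketch: keep TCMSP ingredients passing OB >= 30% and DL >= 0.18.
# The ingredient records stand in for an exported TCMSP table.
ingredients = [
    {"name": "quercetin",     "herb": "Cortex Mori",  "OB": 46.43, "DL": 0.28},
    {"name": "kaempferol",    "herb": "Cortex Mori",  "OB": 41.88, "DL": 0.24},
    {"name": "some_compound", "herb": "Cortex Lycii", "OB": 12.10, "DL": 0.05},
]

active = [c for c in ingredients if c["OB"] >= 30.0 and c["DL"] >= 0.18]
for c in active:
    print(f'{c["name"]} ({c["herb"]}): OB={c["OB"]}%, DL={c["DL"]}')
```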
2.2.2. Acquisition of Therapeutic Targets of the Active Ingredients of Xiebai Powder and Azithromycin and of Disease-Related Targets. The active ingredients of Xiebai Powder were obtained from the TCMSP database, and the names of the protein targets were converted to gene names using the UniProt database. Azithromycin was retrieved from the TTD database, and its canonical SMILES structure was imported into the Swiss Target Prediction and SEA databases to obtain its therapeutic targets. The GeneCards database was searched with the keyword "pneumonia" to obtain pneumonia-related targets, and the DisGeNET and TTD databases were used to complement these. Regarding risk of bias, two RCTs [13, 14] were grouped by a random number table and rated as "low risk"; one [15] was grouped according to the order of admission and rated as "high risk"; the remaining 13 RCTs were rated as "high risk." All RCTs without allocation concealment and blinding were rated as "unclear risk." All RCTs with precise outcomes were rated as "low risk." None of the RCTs mentioned selective reporting, and all were rated as "unclear risk." All RCTs with unclear other biases were rated as "unclear risk." The specific information is shown in Figure 2.
(2) Body Temperature Recovery Time. A total of five papers [13, 17, 18, 24, 25] reported the body temperature recovery time in patients with pneumonia, and the heterogeneity test showed great heterogeneity (P < 0.00001, I² = 97%). Subgroup analysis was performed according to whether the disease was clearly classified. The heterogeneity test for the three studies with unknown disease classification gave P < 0.00001 and I² = 96%, and that for the two studies with well-defined disease classification gave P < 0.00001 and I² = 99%, indicating that disease classification was not the source of heterogeneity for this outcome. Sensitivity analysis found that the heterogeneity came from three studies [18, 24, 25], which were removed; the remaining two studies [13, 17] were retested and analyzed, and the results favored the experimental group (95% CI (-0.87, -0.71), P < 0.00001). By analyzing and comparing the results, it was found that modified Xiebai Powder combined with conventional Western medicine had a better therapeutic effect than conventional Western medicine alone.
(4) Pulmonary Rales Disappearance Time. A total of five papers [13, 17, 18, 24, 25] reported the pulmonary rales disappearance time in patients with pneumonia, and the heterogeneity test showed heterogeneity (P = 0.001, I² = 77%). Subgroup analysis was performed according to whether the disease was clearly classified. The heterogeneity test for the three studies with unknown disease classification gave P < 0.00001 and I² = 82%, and that for the two studies with well-defined disease classification gave P < 0.00001 and I² = 72%, indicating that disease classification was not the source of heterogeneity for this outcome. Sensitivity analysis showed that the heterogeneity came from two studies [17, 24], which were removed; the remaining three studies [13, 18, 25] were retested with small heterogeneity (P = 0.15, I² = 47%) and analyzed with a fixed-effects model. The results showed that the elimination of pulmonary rales in the experimental group was more effective than in the control group (MD = -2.09, 95% CI (-2.30, -1.88), P < 0.00001; Figure 6). A descriptive analysis was conducted for the two excluded studies, in which the effects of modified Xiebai Powder combined with conventional Western medicine on shortening the pulmonary rales disappearance time were compared. The meta-analysis results were as follows: MD = -3.57, 95% CI (-4.72, -2.42), P < 0.00001; and MD = -1.70, 95% CI (-1.93, -1.47), P < 0.00001. By analyzing and comparing the results, it was found that modified Xiebai Powder combined with conventional Western medicine had a better therapeutic effect than conventional Western medicine alone.
(5) C-Reactive Protein (CRP). A total of three papers [16, 19, 25] analyzed the level of C-reactive protein in patients with pneumonia, and the heterogeneity test showed great heterogeneity (P < 0.00001, I² = 100%). Sensitivity analysis found that the heterogeneity came from one study [25], which was removed; the remaining two studies [16, 19] were retested with small heterogeneity (P = 0.18, I² = 43%) and analyzed with a fixed-effects model. The results showed that the reduction of the CRP level in the experimental group was greater than that in the control group (MD = -3.05, 95% CI (-4.24, -1.85), P < 0.00001; Figure 7). A descriptive analysis was conducted for the excluded study, in which the effects of modified Xiebai Powder combined with conventional Western medicine on reducing CRP levels were compared. The meta-analysis showed that the effect of reducing CRP levels in the experimental group was better than that in the control group (MD = -49.19, 95% CI (-52.16, -46.22), P < 0.00001).
3.1.4. Adverse Reactions.
A total of three papers [13,18,24] mentioned adverse reactions in patients: the incidence of adverse reactions was counted in two studies [13,24], and one study [18] reported no adverse reactions. The heterogeneity test showed no heterogeneity (P = 0.84, I² = 0%), and a fixed-effects model was used for the analysis. The results showed that the incidence of adverse reactions in the experimental group (modified Xiebai Powder combined with conventional Western medicine) was lower than that in the control group (conventional Western medicine alone) (OR = 0.33, 95% CI (0.15, 0.69), P = 0.003; Figure 8).
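For binary outcomes such as adverse-event incidence, a fixed-effects odds ratio is commonly pooled with the Mantel-Haenszel method. A minimal sketch follows, using made-up 2×2 event counts rather than the actual trial data:

```python
import numpy as np

# Hypothetical 2x2 counts per study; illustrative placeholders only.
# Columns: a = exp events, b = exp non-events, c = ctl events, d = ctl non-events
tables = np.array([
    [4, 46, 11, 39],
    [3, 57, 10, 50],
], dtype=float)

a, b, c, d = tables.T
n = a + b + c + d

# Mantel-Haenszel pooled odds ratio: sum(a*d/n) / sum(b*c/n)
or_mh = np.sum(a * d / n) / np.sum(b * c / n)
print(f"pooled OR (Mantel-Haenszel) = {or_mh:.2f}")
```

An OR below 1, as reported above, indicates fewer adverse events in the experimental arm than in the control arm.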
3.1.5. Publication Bias Analysis. Publication bias was assessed for the clinical effective rate, and a funnel plot was drawn to check for symmetry. In Figure 9, 11 points lie on the left side of the plot and 5 on the right; this asymmetric distribution of points indicates a certain degree of publication bias.
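A funnel plot of this kind plots each study's effect size against its precision (typically the standard error, on an inverted axis); visible asymmetry around the pooled estimate suggests publication bias. A minimal matplotlib sketch with invented effect sizes:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
effect = rng.normal(0.8, 0.3, size=16)  # hypothetical per-study log-ORs
se = rng.uniform(0.05, 0.4, size=16)    # hypothetical standard errors
pooled = np.average(effect, weights=1 / se**2)

plt.scatter(effect, se)
plt.axvline(pooled, linestyle="--")     # pooled estimate for reference
plt.gca().invert_yaxis()                # smaller SE (larger studies) at the top
plt.xlabel("Effect size (log OR)")
plt.ylabel("Standard error")
plt.title("Funnel plot (illustrative data)")
plt.show()
```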
3.1.6. Sensitivity Analysis. A sensitivity analysis was performed on the included literature, and the results were analyzed descriptively. Each of the 16 studies was excluded in turn, and the pooled effect size for the clinical effective rate was recalculated. The pooled effect size showed no qualitative change, indicating that the results of this study were relatively stable.

3.2. Network Pharmacology

3.2.1. Screening of Active Ingredients in Xiebai Powder and Pneumonia-Related Targets. There were 40 active ingredients with 235 therapeutic targets in Xiebai Powder, including ten in Cortex Mori, ten in Cortex Lycii, 16 in licorice root, and four repetitive ingredients. Azithromycin (drug ID: D03HJK, molecular formula: C38H72N2O12) had 55 target genes. Xiebai Powder combined with azithromycin had 285 therapeutic targets after deduplication. A total of 1359 pneumonia-related targets were retrieved through the GeneCards database, 1032 through the DisGeNET database, and 17 through the TTD database. After deduplication, 1926 pneumonia-related targets were collected (Table 2).
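The target-collection arithmetic here is set union with deduplication, and the 129 common targets reported below come from intersecting the combined drug targets with the disease targets. A minimal sketch, with hypothetical gene symbols standing in for the real retrieved lists:

```python
# Hypothetical gene-symbol sets; placeholders for the real retrieved lists.
xiebai_targets = {"TNF", "IL6", "AKT1", "PTGS2", "JUN"}
azithromycin_targets = {"IL6", "TP53", "CASP3"}
genecards = {"TNF", "IL6", "TP53", "ALB", "STAT3"}
disgenet = {"IL6", "CASP3", "IL1B"}
ttd = {"TNF", "AKT1"}

# Combined drug targets after deduplication (sets deduplicate automatically)
drug_targets = xiebai_targets | azithromycin_targets

# Pneumonia-related targets pooled across the three databases
disease_targets = genecards | disgenet | ttd

# Common targets used to build the PPI network
common = drug_targets & disease_targets
print(len(drug_targets), len(disease_targets), sorted(common))
```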
3.2.2. Construction of the Drug-Active Ingredient-Target Network. The "drug-active ingredient-target" visualized network is shown in Figure 10. According to the degree value, the top active ingredients were β-sitosterol, quercetin, kaempferol, naringenin, acacetin, isorhamnetin, etc., as shown in Table 3.
3.2.3. Construction of the PPI Network and Screening of Key Targets. The Venny 2.1 online tool was used to intersect the targets of the active ingredients in Xiebai Powder with the pneumonia-related targets, yielding 129 common targets, as shown in Figure 11. The common targets were input into the STRING database to obtain the PPI network, which was then processed with Cytoscape 3.9.1 software. The network contained 129 nodes and 2783 edges (Figure 12). After topological analysis, degree ≥ 85, closeness centrality (CC) ≥ 0.744, and betweenness centrality (BC) ≥ 200.368 were chosen as the screening thresholds, and the targets meeting all three criteria were selected as key targets, mainly TNF, IL-6, ALB, AKT1, IL-1β, TP53, CASP3, PTGS2, JUN, and STAT3 (Table 4).
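The degree, closeness-centrality, and betweenness-centrality screening described above can be reproduced on any exported edge list. The following is a minimal networkx sketch, using a toy edge list and arbitrary threshold values rather than the actual STRING export:

```python
import networkx as nx

# Toy PPI edge list; in practice this would be the STRING export
# for the 129 common targets.
edges = [("TNF", "IL6"), ("TNF", "AKT1"), ("IL6", "STAT3"),
         ("AKT1", "TP53"), ("TP53", "CASP3"), ("IL6", "IL1B")]
g = nx.Graph(edges)

deg = dict(g.degree())
cc = nx.closeness_centrality(g)
bc = nx.betweenness_centrality(g)  # note: normalized to [0, 1] by default

# Arbitrary illustrative thresholds; the paper used degree >= 85,
# CC >= 0.744, and (unnormalized, Cytoscape-style) BC >= 200.368.
key_targets = [n for n in g.nodes
               if deg[n] >= 3 and cc[n] >= 0.5 and bc[n] >= 0.1]
print(key_targets)
```

Requiring all three centrality criteria simultaneously favors hub nodes that are both highly connected and topologically central, which is why the survivors are the inflammation-related hubs listed in Table 4.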
3.2.4. Enrichment Analysis. A total of 169 signaling pathways were obtained through KEGG pathway enrichment analysis (P < 0.05), and the top 20 by P value were plotted (Figure 13). Xiebai Powder combined with azithromycin may act against pneumonia through the IL-17 signaling pathway, tumor necrosis factor signaling pathway, C-type lectin receptor signaling pathway, Toll-like receptor signaling pathway, and HIF-1 signaling pathway.
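KEGG pathway enrichment of this kind is typically an over-representation test: for each pathway, a hypergeometric test (equivalently, a one-sided Fisher exact test) asks whether the target list hits the pathway more often than chance would predict. A minimal sketch with invented counts:

```python
from scipy.stats import hypergeom

# Hypothetical counts, for illustration only:
N = 20000   # background (annotated) genes
K = 150     # genes annotated to the pathway of interest
n = 129     # genes in our target list
k = 12      # target genes that fall in the pathway

# P(X >= k): probability of at least k overlaps occurring by chance
p = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment P = {p:.2e}")  # keep pathways with P < 0.05, rank by P
```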
3.2.5. Molecular Docking. The active ingredients and key targets with high degree values were selected, and AutoDock software was used for molecular docking. The results are shown in Table 5. All active ingredients could bind spontaneously to the key targets (binding free energy below 0 kJ·mol⁻¹). The docking results were visualized with PyMol software; a selection is shown in Figure 14.
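After docking, results are typically screened by binding free energy, with more negative values indicating more favorable (spontaneous) binding. A minimal sketch of that filtering step over a hypothetical list of docking scores (the pairs and energies below are made up, not the values in Table 5):

```python
# Hypothetical (ligand, target, binding free energy in kJ/mol) triples;
# illustrative placeholders, not the values reported in Table 5.
results = [
    ("quercetin", "TNF", -22.4),
    ("kaempferol", "IL6", -19.8),
    ("beta-sitosterol", "AKT1", -27.1),
    ("naringenin", "STAT3", 1.3),  # positive energy: unfavorable binding
]

# Keep only pairs predicted to bind spontaneously (energy < 0),
# sorted from strongest to weakest predicted binding.
spontaneous = sorted((r for r in results if r[2] < 0), key=lambda r: r[2])
for ligand, target, energy in spontaneous:
    print(f"{ligand} -> {target}: {energy} kJ/mol")
```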
4. Discussion
During this study, a total of 138 papers were read, of which only 16 papers, all in Chinese, were eligible for this systematic review and meta-analysis, covering a total of 1,465 patients. Clinical efficacy, body temperature recovery time, cough disappearance time, pulmonary rales disappearance time, and C-reactive protein level were selected as the primary outcomes. The meta-analysis results showed that all outcomes were statistically significant, indicating that modified Xiebai Powder, alone or combined with conventional Western medicine, had better effects than conventional Western medicine alone.
Given the limited overall quality and quantity of the literature included in this study, more high-quality studies should be included in follow-up work to further verify these results. The outcomes in the included literature were mainly clinical efficacy and clinical manifestations, with less attention paid to changes in vital signs (respiratory rate, heart rate, systolic blood pressure, etc.) [29], procalcitonin [30], and T cell populations [30,31]. Future RCTs should add these outcomes to improve the accuracy of clinical efficacy evaluation and the quality of evidence.
The network pharmacological analysis showed that the main active ingredients of Xiebai Powder in the treatment of pneumonia included β-sitosterol, quercetin, kaempferol, naringenin, isorhamnetin, and other compounds. Among them, β-sitosterol is a phytosterol, and the others are flavonoids. β-Sitosterol can inhibit proinflammatory cytokines such as TNF-α and IL-6. A number of in vitro and in vivo experiments have shown that quercetin has anti-inflammatory activity and can also inhibit apoptosis and repair damaged lung tissue by inhibiting the growth and metastasis of lung cancer cells [32]. Kaempferol has anti-inflammatory and antioxidant effects and inhibits apoptosis. Both naringenin and isorhamnetin have anti-inflammatory activity and can significantly reduce the levels of proinflammatory cytokines in serum and lung tissue [33]. These active ingredients therefore play important roles in the treatment of pneumonia.
The network pharmacology and molecular docking results show that β-sitosterol can inhibit the proinflammatory cytokines TNF-α and IL-6. β-Sitosterol is one of the effective components of TCM. Maxing Shigan Decoction comes from the treatise Shang Han Lun and is composed of Ephedrae Herba, Armeniacae Semen Amarum, Gypsum Fibrosum, and Glycyrrhizae Radix et Rhizoma. Of these four components, Ephedrae Herba, Armeniacae Semen Amarum, and Glycyrrhizae Radix et Rhizoma all contain β-sitosterol [34-36]. Maxing Shigan Decoction combined with azithromycin is more effective in the treatment of mycoplasma pneumonia in children than azithromycin alone [37].
Qianjin Weijing Decoction is derived from Jin Kui Yao Lue and is composed of Phragmitis Rhizoma, Coicis Semen, and other herbs. The key targets of Xiebai Powder combined with azithromycin in the treatment of pneumonia were mainly TNF, IL-6, ALB, AKT1, IL-1β, TP53, CASP3, PTGS2, JUN, and STAT3. Among them, TNF and IL-6 are inflammatory factors. A study showed that the levels of TNF-α and IL-6 in patients with pneumonia were significantly increased, indicating that the TNF-α and IL-6 genes are involved in the onset of pneumonia [42]. ALB is highly correlated with acute lung injury and is a predictor of disease severity [43]. TP53 is a tumor suppressor gene involved in the regulation of the cell cycle and apoptosis, and its dysregulation can promote cancer [44]. IL-1β has been identified as a target for the treatment of COVID-19 [45]. STAT3 protein participates in the transcription of inflammatory factors after entering the nucleus, and inhibiting the activation of the transcription factor AP-1 can reduce the proinflammatory response induced by IL-1β; it is therefore speculated that the expression of these transcription factor proteins is closely related to suppressing the inflammatory response [46]. The KEGG analysis showed that Xiebai Powder alone might act against pneumonia through the hepatitis B signaling pathway, pathways in cancer, the toxoplasmosis pathway, the Toll-like receptor signaling pathway, the tumor necrosis factor signaling pathway, and the pertussis pathway, while azithromycin alone might act through the IL-17 signaling pathway, NOD-like receptor signaling pathway, Toll-like receptor signaling pathway, TNF signaling pathway, and PI3K-Akt signaling pathway.
The KEGG enrichment pathways of Xiebai Powder combined with azithromycin mainly involved the IL-17 signaling pathway, tumor necrosis factor signaling pathway, C-type lectin receptor signaling pathway, Toll-like receptor signaling pathway, and HIF-1 signaling pathway. The IL-17 signaling pathway is associated with the occurrence and development of pneumonia-induced sepsis; upregulation of its upstream and downstream molecules indicates activation of the pathway, which can promote apoptosis in pneumonia-induced sepsis [47]. Excessive TNF-α in the tumor necrosis factor signaling pathway can not only disturb normal pathways but also promote inflammatory factors such as IL-1 and IL-8, stimulating the body's inflammatory responses or even triggering inflammatory chain reactions [48]. The Toll-like receptor signaling pathway transmits signals into cells by recognizing lipopolysaccharide and ultimately activates inflammatory factors such as TNF to mediate the inflammatory response in diseased tissues [49].
The main KEGG pathways of azithromycin in pediatric pneumonia are the IL-17 signaling pathway, NOD-like receptor signaling pathway, Toll-like receptor signaling pathway, TNF signaling pathway, and PI3K-Akt signaling pathway. NOD-like receptors can mediate inflammatory responses, activating the NF-κB p65 and p38 MAPK signaling pathways and leading to the production of inflammatory factors [50]. The NOD-like receptor signaling pathway is the main difference between the action pathways of Xiebai Powder combined with azithromycin and those of azithromycin alone.
5. Conclusions
In this study, meta-analysis combined with network pharmacology was used to systematically evaluate Xiebai Powder in the treatment of pneumonia and to predict its potential mechanism of action. In terms of clinical efficacy, body temperature recovery time, cough disappearance time, pulmonary rales disappearance time, and C-reactive protein reduction, the experimental group was superior to the control group, with a lower incidence of adverse reactions.
In addition, the mechanism of Xiebai Powder combined with azithromycin in the treatment of pneumonia was analyzed using network pharmacology. Given the complexity of the chemical ingredients and targets of TCM and their effects on the human body, further studies are needed to provide more evidence. Based on the potential pathways obtained through the meta-analysis and network pharmacological studies, the next step will be to identify key pathways in cell-level or animal experiments to study the mechanisms of Xiebai Powder in the treatment of pediatric pneumonia.
Overall, the analysis results are credible. More high-quality papers will be included in follow-up work to verify the results further. Considering the dynamic nature of pneumonia research, future studies will add other relevant outcomes, such as changes in vital signs (respiratory rate, heart rate, systolic blood pressure, etc.), procalcitonin (PCT), and T cell populations, to further improve the accuracy of clinical efficacy evaluation and enhance the quality of evidence.
Figure 1: The flow chart of literature screening for meta-analysis.
Figure 2: (a) Overall risk of bias graph. (b) Detailed risk of bias summary.
Figure 3: Meta-analysis forest plot of clinical efficacy.
Figure 4: Meta-analysis forest plot of body temperature recovery time.
Figure 5: Meta-analysis forest plot of cough disappearance time.
Figure 7: Meta-analysis forest plot of C-reactive protein.
Figure 8: Meta-analysis forest plot of adverse reactions.
Figure 10: Drug-ingredients-targets diagram. Note: the orange diamonds represent the target genes of the ingredients; the yellow circles represent the three herbs; the hexagons represent the ingredients contained; the octagons represent the common ingredients. SBP: Cortex Mori; DGP: Cortex Lycii; GC: licorice root; AQMS: azithromycin.
Figure 13: Bubble diagram of KEGG pathway enrichment analysis.
2.1.4. Interventions. Patients in the control group were treated with conventional Western medicine. Patients in the experimental group were treated with modified Xiebai Powder alone or modified Xiebai Powder combined with conventional Western medicine.
2.1.5. Strategies for Literature Retrieval. The Chinese retrieval platforms used included CNKI, VIP, and Wanfang.
Table 1: Essential features of the included literature. T: test group; C: control group; M: male; F: female. ① Clinical efficacy. ② Body temperature recovery time. ③ Cough disappearance time. ④ Pulmonary rales disappearance time.
Table 2: Basic biological information of Xiebai Powder combined with azithromycin.
Table 3: Top 10 active ingredients of Xiebai Powder for treating pneumonia.
Table 5: The docking results of core components and key target molecules.