56816392 | pes2o/s2orc | v3-fos-license
Morbidity pattern and its association with personal hygiene practices among school going children (11 to 15 years of age group) in Surendranagar, Gujarat, India
INTRODUCTION
Poor hygiene practices and inadequate sanitary conditions play major roles in the increased burden of communicable diseases in developing countries like India. A large proportion of global morbidity and mortality is still attributable to infectious diseases, which cause 62% and 31% of all deaths in Africa and Southeast Asia, respectively. 1,2 It is well documented that children with proper hand-washing practices are less likely to report gastrointestinal and respiratory symptoms. 3,4 Handwashing with soap can reduce diarrheal morbidity by 44% and respiratory infections by 23%. 5,6 The Millennium Development Goals have firmly established the issues of 'water, sanitation and hygiene' on the global agenda. The public health importance of hand washing, and its role in reducing communicable diseases such as diarrhoea and acute respiratory infections, has been highlighted in many studies. This study was conducted with the objectives of finding out the awareness of school children regarding personal hygiene practices, assessing the personal hygiene practices they follow, and examining the association of these practices with their morbidity profile. An attempt was also made to assess awareness regarding menarche and menstrual hygiene.
METHODS
It was a cross-sectional study. All schools in the area were listed first, and from this list one private school and one government school were selected by simple random sampling. Students of the 5th to 9th standards of the selected schools formed the study group. Prior permission was sought from the principals. School records were used to assess age with reasonable accuracy. A pre-designed and pre-tested proforma was used for data collection. Clinical examination was done to assess morbidities among the school children.
RESULTS
Daily bathing (84%), brushing teeth (63%) and washing hands with soap and water (53.4%) were the most common hygienic practices among the school children (Table 1). The personal hygiene practices most commonly not followed by the students were cutting nails (40.8%) and washing hands (37.8%), followed by others (Figure 1).
Out of the 500 students, 134 were girls; of these, 51 were aware of menstrual hygiene practices (Figure 2). About 49.8% of the students had knowledge regarding common health problems in school children, the common cold being the problem they knew best (Table 2). Abdominal pain and worm infestations (6.0%) were the most common health-related problems suffered by the children (Table 3).
A significant association was found between the personal hygiene practices followed by the school children and their health-related problems (χ²=65.2, d.f.=1, P<0.001) (Table 4).
DISCUSSION
Hygiene refers to practices associated with ensuring good health and cleanliness. School-age children form a substantial proportion of the world's population, accounting for about 24% of the population of the developing world. The school setting provides a strategic point of entry for improving child health, self-esteem, life skills and behaviour. Hygiene is very important for living a healthy life free from disease. Poor hygiene practices and inadequate sanitary conditions play major roles in the increased burden of communicable diseases in developing countries.
Knowledge and awareness are among the factors thought to be on the causal pathway to behaviour. The study revealed that the proportion of positive hygiene behaviour among school children was fairly high in those who had adequate knowledge. In our study, the greatest awareness regarding personal hygiene practices was for daily bathing (84%), followed by brushing teeth (63%) and washing hands with soap (53.4%). A significant association was found between the personal hygiene practices followed by school children and their health-related problems (χ²=65.2, d.f.=1, P<0.001). In a study carried out by Mulubirhan Assefa and Abera Kumie, more than half of the children were aware of hand washing (58.9%) and safe water handling (52.7%). 1 Among students who were aware of personal hygiene practices, 71.6% followed positive hygiene behaviour, compared with only 50.8% of those who were not aware, and this difference was statistically significant (P<0.001).

Personal hygiene practices not followed by the students were untrimmed nails (40.8%), unclean clothes (37.8%), untidy hair (34.8%), unclean teeth (29.6%) etc. In a study carried out by Kaviraj Motakpalli et al., 34% had poor oral hygiene, 25% had an unclean external or internal ear, 21% had an unclean tongue, 14% had an unclean nose, 11% had unclean skin, 8% had unclean clothes, 7% had uncombed and dirty hair and 4% had unclean hands. 7 In a study carried out by Paliwal, dirty hair (17.9%), dirty clothes (45.2%) and dirty nails (57.4%) were found, which is broadly similar to our study. 10 It was concluded that the morbidities found among students are largely due to low awareness and negligent behaviour regarding personal hygiene, which are the key areas of concern; through the active involvement of school teachers and improvement in the personal hygiene of school children, a reduction in related morbidities may be achieved. A holistic approach addressing the social, economic and geographical characteristics of the children should be introduced, aimed at improving hygiene practices among school children.

In our study, 49.8% of the students had knowledge regarding general health problems among both private and government school children (N=500), and the common cold was the health problem most often named by them. Overall, 122 (24.4%) students suffered from one or another kind of health-related problem (N=500). The most common morbidities were abdominal pain with worm infestation (24.59% of all health-related problems), followed by ocular problems (21.31%), dental problems (20.49%), throat pain (19.67%), ear problems (7.37%) and skin problems (6.55%). The ocular morbidities most commonly found were refractive errors. The most common dental problems were staining of teeth, toothache and dental caries. The most common ear problems were itching, earache and wax impaction. The most common skin problems were itching and dryness of the skin. In a study carried out in Karnataka, the major health-related problems were dental problems (32.4%), vitamin deficiency (16.8%), skin diseases (11%), respiratory tract infections (9.2%), ENT problems (9%), eye diseases (8.2%), gastrointestinal problems (7%) and others (7.6%). In a study carried out by Mayavati S. Mhaske et al., the major morbidities observed were dental caries (66.1%), upper respiratory tract infections (38.20%), ear wax (29.9%) and myopia (10.0%).
Out of the 295 students who followed personal hygiene practices, only 36 had health-related problems, whereas out of the 205 students who did not follow personal hygiene practices, 86 had health-related problems. A significant association was found between the personal hygiene practices followed by school children and their health-related problems (χ²=65.2, d.f.=1, P<0.001).
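For readers who wish to re-derive the association from the counts given above, a minimal sketch in Python is shown below. The 2×2 counts are taken from the text; note that the statistic obtained depends on whether a continuity correction is applied, so the value computed here may not match the reported χ² of 65.2 exactly.

```python
# Chi-square test of association between hygiene practice and morbidity,
# using the 2x2 counts quoted in the text (illustrative only).
from scipy.stats import chi2_contingency

# Rows: followed hygiene practices / did not follow them
# Columns: had health-related problems / had none
table = [
    [36, 295 - 36],   # 295 students following practices, 36 with problems
    [86, 205 - 86],   # 205 students not following, 86 with problems
]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.1f}, d.f. = {dof}, p = {p:.2e}")
```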
CONCLUSION
The majority of health problems affecting school children are preventable through the promotion of hygiene practices by means of proper health education delivered by teachers, who are the children's first points of contact.
added: 2018-12-25T14:14:08.554Z | created: 2016-01-01T00:00:00.000
{
"year": 2016,
"sha1": "66280876944327c2de96d38a5dd1e10aa17e2620",
"oa_license": null,
"oa_url": "https://ijcmph.com/index.php/ijcmph/article/download/371/364",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "3ef6273ce693179313e7029e0df8e6dbf112d5bf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
11720945 | pes2o/s2orc | v3-fos-license
Profiling of Substrate Specificities of 3C-Like Proteases from Group 1, 2a, 2b, and 3 Coronaviruses
Background Coronaviruses (CoVs) can be classified into alphacoronavirus (group 1), betacoronavirus (group 2), and gammacoronavirus (group 3) based on the diversity of their protein sequences. Their 3C-like protease (3CLpro), which catalyzes the proteolytic processing of the polyproteins for viral replication, is a potential target for anti-coronaviral therapy. Methodology/Principal Findings Here, we profiled the substrate specificities of 3CLpro from human CoV NL63 (group 1), human CoV OC43 (group 2a), severe acute respiratory syndrome coronavirus (SARS-CoV) (group 2b) and infectious bronchitis virus (IBV) (group 3), by measuring their activity against a substrate library of 19×8 variants with single substitutions at the P5 to P3' positions. The results were correlated with structural properties such as side chain volume, hydrophobicity, and secondary structure propensities of the substituting residues. All 3CLpro prefer Gln at the P1 position, Leu at P2, basic residues at P3, small hydrophobic residues at P4, and small residues at P1' and P2'. Although 3CLpro from different groups of CoVs share many similarities in substrate specificity, differences were observed at the P4 position, with IBV 3CLpro preferring P4-Pro and SARS-CoV 3CLpro preferring P4-Val. By combining the most favorable residues at the P3 to P5 positions, we identified a super-active substrate sequence 'VARLQ↓SGF' that can be cleaved efficiently by all 3CLpro with relative activities of 1.7 to 3.2, and a sequence 'VPRLQ↓SGF' that can be cleaved specifically by IBV 3CLpro with a relative activity of 4.3. Conclusions/Significance The comprehensive substrate specificities of 3CLpro from each of the group 1, 2a, 2b, and 3 CoVs have been profiled in this study, which may provide insights into the rational design of broad-spectrum peptidomimetic inhibitors targeting these proteases.
Introduction
A number of coronaviruses (CoVs) have been identified as causative agents of respiratory tract and gastroenteritis diseases in mammals and birds [1,2,3,4,5,6,7,8,9,10,11]. Sequence analysis suggests that these coronaviral strains can be classified into three main groups: alphacoronavirus (group 1), betacoronavirus (group 2), and gammacoronavirus (group 3) [12]. The sequence of severe acute respiratory syndrome coronavirus (SARS-CoV), discovered in 2003, was found to be distinct from all existing groups of CoVs. The group 2 CoVs were then further divided into 2a and 2b subgroups, with the original group 2 CoVs assigned to group 2a and SARS-CoV to group 2b [13,14]. Most coronaviral strains are group 1 and 2a members. They include the four human coronavirus (HCoV) strains, NL63, 229E, OC43 and HKU1, which are associated with up to 5% of total respiratory tract disease cases [15,16]. The most infamous strain in group 3 is infectious bronchitis virus (IBV), which can cause lethal infections in birds [17,18].
3C-like protease (3CLpro), also named the main protease, is responsible for the processing of the viral polyproteins into at least 15 non-structural proteins, most of which are constituents of the viral replication and transcription complex. Cleavage can occur both in cis and in trans [19]. This enzyme is a good drug target against coronaviral infection, as inhibiting the autocleavage process can inhibit viral replication and reduce virus-induced cytopathic effects on host cells [20,21,22,23]. Detailed knowledge of the substrate specificity of 3CLpro is helpful in the rational design of inhibitors. The substrate specificity of SARS-CoV 3CLpro was extensively investigated after the outbreak of SARS in 2003. Fan et al. measured the protease activity against 34 single-substituted variants at the P5 to P1' positions, while Goetz et al. profiled the specificity at the P4 to P1 positions using a fully degenerate library of tetrapeptide mixtures [24,25]. Chuck et al. profiled the substrate preference of SARS-CoV 3CLpro by measuring its activity against substrate variants with single substitutions at the P5 to P3' positions [26].
On the other hand, reports describing the substrate specificities of 3CLpro in groups 1, 2a, and 3 are scarce. Only the activity of 3CLpro from HCoV-229E (group 1), transmissible gastroenteritis coronavirus (group 1) and mouse hepatitis virus (group 2a) against three to four of their own autocleavage sequences has been measured, by Hegyi et al. [27]. A comprehensive study of the substrate specificities of group 1, 2a and 3 3CLpro is lacking. Here, we profiled the substrate specificities of selected 3CLpro from group 1, 2a, 2b and 3 CoVs. Activities of 3CLpro from HCoV-NL63 (group 1), HCoV-OC43 (group 2a), SARS-CoV (group 2b) and IBV (group 3) against a substrate library of 19×8 variants were measured by fluorescence resonance energy transfer (FRET) assay [26]. Similarities and differences in substrate specificities among the different 3CLpro are discussed.
Results
Profiling substrate specificities of 3CLpro from group 1, 2a, 2b, and 3 CoVs

We have previously created a 19×8 substrate library by performing saturation mutagenesis at the P5 to P3' positions of the wild type (WT) sequence (SAVLQ↓SGF), which corresponds to the autocleavage sequence at the N-terminus of SARS-CoV 3CLpro [26]. The values of kobs/[3CLpro] of the proteases against this WT sequence were 443±11, 124±13, 180±5 and 174±19 mM⁻¹ min⁻¹ for HCoV-NL63 (group 1), HCoV-OC43 (group 2a), SARS-CoV (group 2b), and IBV (group 3), respectively. That all proteases can cleave the WT sequence efficiently indicates that our substrate library can be used to profile the substrate specificities of 3CLpro from other groups of CoVs. Using the FRET assay we developed, we measured the activities of 3CLpro from HCoV-NL63, HCoV-OC43, SARS-CoV and IBV against the 19×8 substrate variants (Figure 1, Table S1) [26]. To identify the structural basis of substrate preferences for different CoVs, the protease activities were correlated with side chain volume [28], hydrophobicity [29], and α-helix and β-sheet propensities [30] of the substituting residues, as described [26]. The correlations were quantified in terms of correlation coefficients and p-values (Figure 2, Table S2).
Differences in substrate specificities among 3CLpro
We then tested whether the relative activities of 3CLpro from any CoV strain were significantly different from the others by analysis of variance. Substitutions that resulted in significantly higher relative activities (p<0.001) are indicated as filled symbols in Figure 1. IBV 3CLpro (Figure 1, triangles) was the most efficient in cleaving A4P and A4F, with relative activities of 1.09±0.24 and 0.58±0.14, respectively, while SARS-CoV 3CLpro (Figure 1, diamonds) preferred A4V, with a relative activity of 1.39±0.19. HCoV-OC43 3CLpro (Figure 1, squares) appeared to be the most versatile in accepting substitutions at the P1 and P2 positions, and could cleave Q1H, Q1M, L2M and L2C significantly better than 3CLpro from the other strains. No significant differences were observed for other substitutions, suggesting that 3CLpro from different CoVs share many similarities in substrate preferences.
Substrate preferences that are common to all 3CLpro
The most preferred P1 residue is Gln (Figure 1), which forms hydrogen bonds with the side chain of an invariant His residue and the backbone carbonyl group of an invariant Phe residue (His-163 and Phe-140 in SARS-CoV 3CLpro) in the P1 binding pocket. Interestingly, our results showed that 3CLpro from all groups of CoVs can cleave His at the P1 position reasonably well. The relative activities for 3CLpro from HCoV-NL63, HCoV-OC43, SARS-CoV, and IBV were 0.26±0.08, 0.47±0.08, 0.19±0.03 and 0.25±0.12, respectively (Table S1). Consistent with this observation, His is found natively at P1 positions in the polyproteins of group 1 and 2a CoVs (Table S3). Taken together, the ability to cleave His at the P1 position is a conserved property of all 3CLpro. Moreover, we showed that all 3CLpro can cleave Q1M, albeit at an even lower rate, while all other P1 substitutions resulted in undetectable activity.
The protease activities correlate positively with the hydrophobicity of substituting residues at the P2 position (Figure 2). In fact, among the P2 variants, only L2M, L2C, L2F, L2I and L2V were cleavable, suggesting that the P2 position favors hydrophobic residues. However, substitution with the β-branched residues Val or Ile led to >10-fold decreases in activity (Figure 1, Table S1). Considering that Leu, Val and Ile share similar hydrophobicity and side chain volume, the large differences in activities suggest that β-branched residues are not preferred by any 3CLpro, probably due to steric clashes with the P2 binding pocket. Taken together, the P2 position prefers hydrophobic residues without a β-branch, and the most preferred residue is Leu.
At the P3 position, the protease activities on Arg/Lys-substituted variants were 5- to 14-fold higher than those on Asp/Glu-substituted variants (Figure 1, Table S1). This observation suggests that the P3 position prefers positively charged residues over negatively charged ones. In the active site of 3CLpro, there is no substrate-binding pocket for the P3 residue. Molecular modeling showed that an invariant Glu residue in the active site of 3CLpro (Glu-166 in SARS-CoV 3CLpro) may form favorable charge-charge interactions with a positively charged residue at the P3 position, which may explain why Arg/Lys are favored over Asp/Glu at this position (Figure S1). Moreover, no cleavage was observed for the substrate containing a Pro substitution at the P3 position.
The protease activities correlate negatively with the side chain volume, and positively with the hydrophobicity, of substituting residues at the P4 position (Figure 2). The correlations with hydrophobicity were more evident (correlation coefficients >0.89) when only small residues (Ala, Asn, Asp, Cys, Gly, Ser, and Thr) with side chain volumes <70 Å³ (Figure 3) were included in the analysis. This result suggests that as long as the side chain can fit into the P4 binding pocket, the protease activity is directly proportional to the hydrophobicity of the substituting residue. On the other hand, charged residues such as Lys, Arg, His, Asp and Glu were not cleavable, presumably due to the unfavorable burial of charges in the hydrophobic P4 pocket.
In general, the activities of 3CLpro correlate positively with the hydrophobicity and β-sheet propensity of substituting residues at the P5 position (Figure 2). The correlations are significant (p<0.05) for the group 2a, 2b, and 3 CoVs, but are weaker for the group 1 CoV. As at the P3 position, there is no substrate-binding pocket for the P5 residue. In the crystal structure of SARS-CoV 3CLpro in complex with a peptide substrate, the P5 residue adopts an extended β-strand conformation to avoid clashes of the P5-P6 residues with the protease [31]. Residues with high β-sheet propensity may stabilize the extended conformation at P5 and improve the enzyme-substrate interaction. As shown in Figure 1, a number of substitutions at the P5 position resulted in a substrate better than the WT sequence (i.e. with relative activity >1). Consistent with the suggestion that the P5 position favors residues with high hydrophobicity and β-sheet propensity, Val substitution consistently yielded substrates with higher-than-WT activities for all 3CLpro. On the other hand, negatively charged residues (Asp/Glu) were not favored at the P5 position, giving significantly lower activities (0.16 to 0.50).

Figure 2 legend: Correlation between 3CLpro activities and structural properties of substituting residues. The relative protease activities of 3CLpro from HCoV-NL63 (shaded, group 1), HCoV-OC43 (white, group 2a), SARS-CoV (black, group 2b) and IBV (grey, group 3) were correlated with structural properties of the substituting residues, including side chain volume [28], hydrophobicity [29] and α-helix and β-sheet propensities [30].
At the P1' position, the protease activities correlate negatively with the side chain volume of substituting residues (Figure 2). In fact, the relative activities for substrates with the smallest residues (Gly, Ala, Ser, and Cys) at the P1' position were in the range of 0.64 to 1.40, consistently higher than those for larger residues (Figure 1). At the P2' position, all variants except G2'P could be cleaved, with relative activities of 0.17 to 1.04 (Figure 1). The protease activities also correlate negatively with side chain volume (Figure 2), but the differences in protease activities were relatively small (Figure 1). At the P3' position, no obvious substrate preference was observed.
The effect of combining multiple favorable substitutions
Our profiling analysis showed that all CoV 3CLpro prefer P5-Val and P3-Arg (Figure 1). To test whether two favorable substitutions can be combined to create a more active substrate, we created the doubly-substituted substrate variant 'VARLQ↓SGF'. The relative activities of HCoV-NL63, HCoV-OC43, SARS-CoV and IBV 3CLpro against the doubly-substituted sequence were 1.70±0.07, 1.87±0.17, 1.70±0.12 and 3.24±0.37, respectively (Table 1). These results suggest that the increases in activity are additive, and that the sequence 'VARLQ↓SGF' can serve as a good broad-spectrum substrate for all 3CLpro.
On the other hand, our profiling analysis suggests that 3CLpro from SARS-CoV and IBV have different substrate preferences at the P4 position: SARS-CoV prefers P4-Val (relative activity = 1.09±0.24) while IBV prefers P4-Pro (relative activity = 1.39±0.10) (Figure 1, Table S1). To see whether this distinct substrate preference at the P4 position could be exploited to create a substrate more specific for IBV 3CLpro, we created the triply-substituted variant 'VPRLQ↓SGF'. The relative activity of IBV 3CLpro against this sequence was boosted to 4.33±0.98, while those of the other strains were significantly reduced, demonstrating that this sequence can serve as a specific substrate sequence for IBV 3CLpro (Table 1). Similarly, the relative activity of SARS-CoV 3CLpro against the triply-substituted sequence 'VVRLQ↓SGF' was boosted to 2.50±0.51, while those of the other strains were reduced (Table 1). Taken together, these results suggest that the substrate preferences profiled in this study can be combined to create better substrate sequences.
Discussion
This study provides the first comprehensive profiling of the substrate specificities of 3CLpro from group 1, 2a, and 3 CoVs. We showed that the substrate specificities of these 3CLpro share many similarities with those of 3CLpro from SARS-CoV (group 2b) reported previously by us [26]. Table 2 summarizes the substrate specificities that are common to all 3CLpro. Although the substrate specificities of 3CLpro from different groups of CoVs share a number of similarities, unique substrate preferences were identified in this study. In particular, we showed that only IBV 3CLpro, but not the other proteases, prefers P4-Pro (Figure 3). To understand the structural basis of this unique substrate preference, we compared the structure of IBV 3CLpro with those of other coronaviral 3CLpro. We noticed that strand-11 of IBV 3CLpro is positioned further away from the P4 and P5 substrate-binding site than in other 3CLpro (Figure 4) [31,32,33]. This results in a wider substrate-binding pocket in IBV 3CLpro. We further docked the substrate variant A4P into the substrate-binding pocket of IBV 3CLpro. Due to the cyclic structure of the Pro residue, the backbone φ dihedral angle of the P4 residue is restrained to ca. −60°, which causes the substrate peptide to bend towards strand-11 of the 3CLpro. Such a substrate conformation is much better accommodated by IBV 3CLpro, which has a wider substrate-binding pocket near the P4 and P5 positions. This observation explains why only IBV 3CLpro cleaves P4-Pro efficiently.
The similarities in substrate specificity suggest that it is feasible to create a broad-spectrum inhibitor that targets all 3CLpro. A broad-spectrum inhibitor is desirable as a first line of defense against coronaviral infection because CoVs are capable of generating novel strains with high virulence through high frequencies of mutation and recombination [34,35,36,37]. Based on the autocleavage sequence of SARS-CoV 3CLpro (i.e. AVLQ↓), Rao and co-workers designed broad-spectrum peptidomimetic inhibitors that can inhibit 3CLpro from different groups of CoVs [20]. Their results are consistent with our observation that the autocleavage sequence of SARS-CoV 3CLpro can be cleaved well by all 3CLpro. The substrate preferences profiled in this study provide a rational basis for improving broad-spectrum 3CLpro inhibitors. For example, by combining favorable substitutions at the P3 to P5 positions, we identified a substrate sequence 'VARLQ↓SGF' that can be cleaved with high relative activities by 3CLpro from all groups of CoVs (Table 1). This substrate sequence may serve as a good starting point for the design of broad-spectrum peptidomimetic inhibitors of 3CLpro.
Although it is generally accepted that substrate specificity provides insights into the design of peptidomimetic protease inhibitors, there are exceptions to the dogma that good peptidomimetic inhibitors should be derived from good substrate sequences. For example, Hilgenfeld and co-workers showed that the P2 position of peptide aldehyde inhibitors can accommodate aspartate or serine, which are poorly tolerated at the P2 position of substrates for SARS-CoV 3CLpro [38].
In the FRET assay developed by us, all 3CLpro can efficiently cleave the WT sequence 'SAVLQ↓SGF' with activities of 120-440 mM⁻¹ min⁻¹, and the activity can be further improved by 1.7- to 3.2-fold using the substrate sequence 'VARLQ↓SGF'. Because these substrate sequences can be cleaved by all 3CLpro with high efficiency, the FRET assay could be used to screen for broad-spectrum inhibitors targeting 3CLpro from all groups of CoVs.
Materials and Methods

Cloning, Expression and Purification of 3CLpro and the Substrate Library
Cloning, expression and purification of SARS-CoV 3CLpro were described previously [26]. Codon-optimized DNA sequences encoding HCoV-NL63 (GenBank AY567487), HCoV-OC43 (GenBank AAX85666) and IBV (GenBank M95169) 3CLpro were purchased from Mr. Gene (http://mrgene.com). The coding sequences of 3CLpro from HCoV-NL63, HCoV-OC43 and IBV were sub-cloned and expressed in E. coli strain BL21 (DE3) pLysS as fusion proteins with N-terminal tags of poly-histidine-small ubiquitin-related modifier (His6-SUMO) or poly-histidine-maltose binding protein (His6-MBP). Protein expression was induced by the addition of 0.1 mM isopropyl β-D-1-thiogalactopyranoside. After overnight incubation at 25°C, cells were harvested by centrifugation, resuspended in buffer A (20 mM Tris, pH 7.8, 150 mM NaCl and 1 mM tris(2-carboxyethyl)phosphine) with 30 mM imidazole, and disrupted by sonication. The soluble fraction was subjected to immobilized metal ion affinity chromatography for purification as described for SARS-CoV 3CLpro [26]. The His6-SUMO or His6-MBP tags were removed by protease digestion using sentrin-specific protease 1 or factor Xa, respectively, followed by immobilized metal ion affinity chromatography. Native 3CLpro were finally purified on a G75 size exclusion column and stored in buffer A. Elution profiles of size exclusion chromatography indicated that all purified 3CLpro were dimeric.
The construction, expression and purification of the substrate library were described previously [26]. In brief, the WT substrate sequence 'TSAVLQ↓SGFRKM' was inserted between the cyan fluorescent protein and the yellow fluorescent protein to create the substrate protein. Saturation mutagenesis was performed at each of the P5 to P3' positions to generate a substrate library of 19×8 variants.
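To illustrate the structure of such a library, the sketch below enumerates single-substitution variants at the eight P5 to P3' positions of the WT sequence (cleavage occurring between P1-Gln and P1'-Ser). Only the WT sequence and the positions come from the text; the enumeration itself is an illustrative assumption.

```python
# Enumerate single-substitution variants of the WT cleavage sequence
# S-A-V-L-Q | S-G-F (P5-P4-P3-P2-P1 | P1'-P2'-P3'), 19 substitutions per position.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
WT = "SAVLQSGF"                                  # P5..P3', cleavage arrow omitted
POSITIONS = ["5", "4", "3", "2", "1", "1'", "2'", "3'"]

variants = {}
for i, pos in enumerate(POSITIONS):
    wt_res = WT[i]
    for aa in AMINO_ACIDS:
        if aa == wt_res:
            continue                             # keep the 19 substitutions only
        name = f"{wt_res}{pos}{aa}"              # e.g. "A4P", "Q1H", "G2'P"
        variants[name] = WT[:i] + aa + WT[i + 1:]

print(len(variants), "variants")                 # 19 x 8 = 152
```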
FRET assay for 3CLpro activity measurement
The protease activity of 3CLpro was measured by the FRET assay we developed previously [26]. Purified 3CLpro at 0.2 to 2 µM was mixed with 35 µM of the substrate protein in buffer A. Cleavage of the substrate protein leads to a decrease in fluorescence at 530 nm when the reaction mixture is excited at 430 nm. The fluorescence intensity, monitored by an EnVision 2101 Multilabel Plate Reader, was fitted to a single exponential decay to obtain the observed rate constant (kobs). The protease activity against variant substrates was normalized against the WT activity to yield the relative activity. The assay was repeated in triplicate.

Figure 4 legend (partial) [31,32,33]: The structure of the WT substrate (magenta) is derived from the crystal structure of SARS-CoV 3CLpro in complex with the autocleavage sequence (TSAVLQ↓SGFRKM) (PDB: 2Q6G) [31]. The structure of the A4P substrate variant (cyan) was modeled based on the crystal structure of IBV 3CLpro in complex with its own autocleavage sequence (PDB: 2Q6D) [31]. Note that strand-11 of IBV 3CLpro is positioned further away from the P4 to P5 positions, resulting in a wider substrate-binding pocket. doi:10.1371/journal.pone.0027228.g004
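A minimal sketch of how kobs and relative activity could be extracted from such measurements, assuming the raw data are fluorescence readings at 530 nm over time; the function names and synthetic traces below are illustrative and not taken from the paper.

```python
# Fit FRET fluorescence traces to a single exponential decay to obtain k_obs,
# then normalize a variant's k_obs against the WT value (illustrative sketch).
import numpy as np
from scipy.optimize import curve_fit

def single_exponential(t, f_inf, amplitude, k_obs):
    """Fluorescence decaying from (f_inf + amplitude) to f_inf at rate k_obs."""
    return f_inf + amplitude * np.exp(-k_obs * t)

def fit_k_obs(time_min, fluorescence):
    """Return k_obs (per minute) from a single-exponential fit."""
    p0 = (fluorescence.min(), fluorescence.max() - fluorescence.min(), 0.1)
    popt, _ = curve_fit(single_exponential, time_min, fluorescence, p0=p0)
    return popt[2]

# Synthetic traces (time in minutes, arbitrary fluorescence units).
t = np.linspace(0, 60, 61)
rng = np.random.default_rng(0)
wt_trace = single_exponential(t, 100, 400, 0.15) + rng.normal(0, 2, t.size)
variant_trace = single_exponential(t, 100, 400, 0.05) + rng.normal(0, 2, t.size)

relative_activity = fit_k_obs(t, variant_trace) / fit_k_obs(t, wt_trace)
print(f"relative activity = {relative_activity:.2f}")   # ~0.33 for these traces
```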
Correlation analysis
Structural properties of the substituting residues, including side chain volume [28], hydrophobicity [29], and α-helix and β-sheet propensities [30], were correlated with the relative activities to determine correlation coefficients (r) and p-values.
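As a sketch of how such correlations could be computed, the snippet below correlates a set of relative activities with a residue property scale using Pearson's r; both the activities and the property values shown are placeholders, not the published scales [28-30].

```python
# Correlate relative protease activities at one substrate position with a
# structural property of the substituting residues (placeholder values).
from scipy.stats import pearsonr

# Hypothetical relative activities, keyed by the substituting residue.
relative_activity = {"G": 1.00, "A": 0.90, "S": 0.80, "C": 0.70, "V": 0.40, "L": 0.20}

# Placeholder side-chain volumes (cubic angstroms); the study used published scales [28].
side_chain_volume = {"G": 48, "A": 67, "S": 73, "C": 86, "V": 105, "L": 124}

residues = sorted(relative_activity)
x = [side_chain_volume[r] for r in residues]
y = [relative_activity[r] for r in residues]

r, p_value = pearsonr(x, y)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```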
Supporting Information
Figure S1. Molecular modeling showing that P3-Arg may interact with Glu-166 of 3CLpro. The model was based on the crystal structure of 3CLpro (grey) in complex with a peptide substrate 'TSAVLQ↓SGFRK' (yellow). P3-Val was replaced by P3-Arg using the program PyMOL. As shown, the invariant Glu-166 is in close proximity to P3-Arg and may form a favorable charge-charge interaction with it. (TIF)
added: 2014-10-01T00:00:00.000Z | created: 2011-11-02T00:00:00.000
{
"year": 2011,
"sha1": "cc33658b1e2bfebb81adf4b6befe9cb37f6f6f16",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0027228&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc33658b1e2bfebb81adf4b6befe9cb37f6f6f16",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
248730191 | pes2o/s2orc | v3-fos-license
Telling China's story through reports on scientists: A comparative analysis of the reports on Tu Youyou in Chinese and Western media
As the first native Chinese scientist to win a Nobel Prize in science, Tu Youyou, a researcher at the China Academy of Chinese Medical Sciences, is an iconic figure in the Chinese scientific community. Tu's story in Chinese and Western media reports has aroused heated discussion, creating a great opportunity for China to increase its voice in international communications. Through content analysis of relevant reports in Chinese and Western media, we found that, when reporting on the personal character of scientists, Western media tend to politicize scientific and technological events, whereas Chinese media tend to focus on the scientific research itself and stress a communal spirit among researchers, although their reports suffer from similarity in content and style. Based on this analysis, we propose several suggestions on how best to tell China's story through reporting on scientists.
Introduction
In 2011, Tu Youyou, a tenured researcher of the China Academy of Chinese Medical Sciences, made her name on the international scientific stage when she won the Lasker Award in Clinical Medicine for the discovery of artemisinin. On 5 October 2015, Tu was also awarded the Nobel Prize in Physiology or Medicine for that discovery, becoming the first native Chinese scientist to win a Nobel Prize in science. Soon after, Tu Youyou, artemisinin and traditional Chinese medicine (TCM) became popular topics in both domestic and international media. This landmark event for China's scientific community has generated a chain effect in China's efforts to increase its voice in international communications and so deserves in-depth study.
First, Tu's achievements have demonstrated the prosperity and progress of Chinese science and technology. The Nobel Prize in Physiology or Medicine is the highest award received in the Chinese medical profession to date. Tu's winning of the award is a monumental achievement that reflects China's scientific and technological power, comprehensive national strength and international competitiveness. It is closely related to the peaceful development of the country and humanity as a whole. Furthermore, Tu's artemisinin research was inspired by TCM literature. The drug is widely used in the global fight against malaria, saving the lives of millions, a major contribution of Chinese medicine to the human race. It also plays a positive role in encouraging the people of the world to learn about and understand China.
Science and technology are the results of humans' perception and grasp of natural laws. Scientific discoveries and technological inventions can cross the barriers of ideology, political systems, religions and cultures and be shared by all humanity. Therefore, publicity about scientific and technological achievements and figures has the inherent advantage of breaking through cultural barriers and shaping China's image in a more acceptable way. How to make people in other countries recognize and trust China's scientific and technological achievements through media reports is an important task for China's external communications. Because Tu Youyou is a native Chinese scientist with notable influence in the international arena, an analysis of media reports about her could provide insights into China's efforts to increase the country's influence and tell the Chinese story through the promotion of scientific and technological achievements and figures.
Literature review
The mass media serves as an important channel to promote the scientific spirit and introduce scientists to the public. The earliest research on the image of scientists can be traced back to a 1999 study that pointed out that the main sources of Korean students' cognition about scientists were movies, news, autobiographies and cartoons (Song and Kim, 1999). In recent years, scholars outside China have focused more on the depiction of female scientists. Contemporary representations of scientists in the media are often examined with a particular emphasis on stereotypes related to gender and science as a profession (Mitchell and McKinnon, 2019).
Research on scientists' media image in China is at an early stage, having made a late start compared to other countries. Chinese scholars mostly take mainstream media in China as the research focus and explore the characteristics, evolution and factors influencing media-constructed images of scientists from a variety of theoretical perspectives. Wang (2018) analysed the discourse of reports in the People's Daily on Yuan Longping, a famous Chinese scientist, and found that the reports included publicity of values. To examine scientists' portrayals in the Chinese press and their changes in different social discourses, Xu and Wang (2020) analysed reports on scientists in the People's Daily from 1949 to 2019 based on semantic network analysis. Zhang (2016) researched reports in the People's Daily based on framing theory and concluded that various factors affect the media portrayal of scientists, including political, economic, media organization and cultural factors.
With the development of technology, scholars have begun to turn their attention to the role of new media in depicting the image of scientists. Social media have produced a lot of high-quality scientific communication products that even influence the agenda of traditional media (Wu and Zhang, 2016). However, the influence of mainstream media cannot be ignored. After examining image-building about scientists in new media, some scholars have found that new media's depiction of Chinese scientists is not enthusiastic and that its communicational effect is limited (Zhang, 2012). Others have combined scientists' image-building with international communication in their studies. Gong and Huang (2016) analysed reports about Tu Youyou in China Daily and suggested that they portrayed China as a socialist country with an advanced culture.
Most current research focuses on analysing comprehensive media but ignores the important role of specialized media. The results are mostly descriptive and lack quantification. Moreover, no research has explored how the Western media has reported on Chinese scientists. Therefore, in this study, we examine the reports of four representative Chinese and Western media sources and compare the ways they portray Chinese scientists. Through content analysis, we conducted an in-depth analysis of the topics, news sources and wording of those reports. We make suggestions about the international communication pattern of Chinese media based on our analysis.
Selection of research subjects
This study analyses the content of mainstream Chinese and Western media reports on Tu Youyou. For Chinese media, we chose the People's Daily and Science and Technology Daily. The People's Daily is one of the most influential mainstream media in China, and Science and Technology Daily is an authoritative and specialized medium focusing on science and technology. Both are circulated internationally and have a worldwide impact. For this study, we went through all the reports about Tu Youyou carried in those two newspapers to learn about how the case was covered by Chinese media. We analysed the weaknesses and strengths of China's external communication process and how the narrative of the Chinese story is conveyed.
For Western media, we chose The New York Times, which is one of the most authoritative international newspapers published in the US, and The Guardian, which is a comprehensive daily newspaper in the UK. We studied the number of reports on Tu Youyou carried in those two newspapers, as well as the types and themes of the reports. These two newspapers have a strong influence in the English-speaking world and have a strong focus on international affairs.
Composition of the research sample
For sample selection, we used '屠呦呦' ('Tu Youyou') as the keyword for searching the CNKI newspaper database and the People's Daily graphic database, and 'Tu Youyou', 'Chinese + malaria', 'Nobel Prize in Medicine' and 'Nobel + parasite' as the keywords for searching the LexisNexis global news database. In this way, we located all reports on Tu Youyou carried in the People's Daily, Science and Technology Daily, The Guardian and The New York Times from 24 June 2003 to 31 December 2021 and obtained 55 valid samples after excluding duplicate and irrelevant reports.
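A minimal sketch of the kind of bookkeeping this step involves, merging search results and dropping duplicates and irrelevant hits; the record fields, example entries and relevance flag below are assumptions for illustration, not part of the study's coding scheme.

```python
# Merge keyword-search results from several news databases, drop duplicates,
# and keep only reports coded as relevant (illustrative sketch).
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    outlet: str      # e.g. "People's Daily", "The New York Times"
    date: str        # publication date
    title: str
    relevant: bool   # result of manual relevance coding

def build_sample(search_results):
    """Deduplicate by (outlet, date, title) and keep relevant reports only."""
    seen, sample = set(), []
    for report in search_results:
        key = (report.outlet, report.date, report.title)
        if report.relevant and key not in seen:
            seen.add(key)
            sample.append(report)
    return sample

hits = [
    Report("People's Daily", "2015-10-06", "Tu Youyou wins Nobel Prize", True),
    Report("People's Daily", "2015-10-06", "Tu Youyou wins Nobel Prize", True),  # duplicate
    Report("The Guardian", "2015-10-05", "Chinese scientist shares Nobel Prize", True),
    Report("The New York Times", "2015-10-06", "Unrelated science story", False),
]
print(len(build_sample(hits)), "valid samples")   # 2
```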
Reports in Chinese media
Through an examination of relevant reports in the People's Daily and Science and Technology Daily, we found that there were only two reports about Tu Youyou in Chinese media before 2011. In 2011, the year when Tu won the Lasker Prize, the number of reports increased. In 2015, after Tu won the Nobel Prize, the number reached a high point. After that, the number declined year by year, until reaching another peak in 2021, with the broadcast of a TV series on the stories of the winners of the Medal of the Republic (Figure 1).
A search for '屠呦呦' in the CNKI journal database returned 30 reports on Tu Youyou from the People's Daily, from 13 October 2011 to 1 June 2021. Most of the reports were published in 2011 and 2015, when Tu won the two international awards. Of the 30 reports, 54% were on the front page or in the Headline News section, 13% were in the Science and Technology Insights section, and 17% were in the Health Times section ( Figure 2).
In the Science and Technology Daily, 12 reports were published from 24 June 2003 to 2 July 2021. Most of the reports were published in 2015 and 2016, after Tu Youyou received the Nobel Prize. Except for two front-page reports, on Tu's winning of the Nobel Prize and on her donation of her book Research on Malaria-treating Artemisinin (1971-1978) to the Nobel Museum, the reports were all included in the Medical and Health Technology section (Table 1).
Reports in Western media
We located eight reports on Tu Youyou in The New York Times, from 13 September 2011 to 13 October 2015. Among them, two were published after Tu received the Lasker Prize in 2011, and six were published after Tu received the Nobel Prize in 2015. Four reports were in the Asia Pacific section, two in the Science section, and two in the Foreign section ( Table 2).
The Guardian produced five reports about Tu Youyou, from 5 October 2015 to 25 July 2018. Most of the reports were published after Tu received the award in 2015 and no reports have been published lately. Three reports were carried in the Science section, one in the Opinion section, and one in the Global Development section (Table 3).
Compared with Chinese media, Western media has paid little attention to Tu Youyou. Most of the reports in Chinese and Western media were published in 2015 and 2016 after Tu won the Nobel Prize, and there was little attention to her winning the Lasker Prize in 2011. Tu's winning of the Nobel Prize is a science and technology event, but Western media reports put disproportionate emphasis on its political characterization. Most of the reports in The New York Times were published in the Asia Pacific and Foreign sections, which diluted the scientific and medical significance of Tu's award.
Comparison of reporting formats in Chinese and Western media
The reporting format shows the media's attitude towards a particular event. As Table 4 shows, the reporting formats of Chinese media were more diverse, and commentary and in-depth coverage were frequently used to explore the significance of Tu Youyou's achievement. By contrast, news is the most common format of Western media reporting. Concentrated on Tu's Nobel Prize, those reports had a common format and reporting angle. News commentaries are the heart and soul of the media. They can influence and shape public opinion and inspire deep reflection on the issues behind events. In terms of the topics of commentaries, Chinese media have extensive interests and tend to look at issues from wider perspectives, such as the debate between Chinese and Western medicine, the weaknesses of China's scientific research system and the prospect of TCM. By contrast, commentaries in Western media are focused and brief. For example, one of the commentaries published by The Guardian described the contributions of the three Nobel Prize winners in 2015 to the eradication of global diseases, while another briefly introduced the achievements of five female scientists, including Tu Youyou. This is evidence of the limited attention Tu Youyou receives from Western media.
Comparison of information sources of Chinese and Western media
In news reporting, the media report based on the information provided by their sources. The choice of and emphasis on sources reflect the media's judgement of and attitudes towards a given event. Table 5 shows the differences in information sources cited by Chinese and Western media.
The sources of information cited by the People's Daily were extensive and comprehensive. The articles quoted a number of comments by scholars, officials and other stakeholders from China and other countries, reaffirming the inspirational role of TCM in Tu Youyou's artemisinin research and the significance of Tu's award to the development of TCM. Most of the quotes came from Chinese government officials and institutions, including the National Family Planning Commission, Peking University Health Science Center, the China Academy of Chinese Medical Sciences and the China Association for Science and Technology. The People's Daily also quoted extensive comments by African officials and members of Chinese medical teams in Africa, which showed the significant impact of artemisinin on people in other parts of the world. The Science and Technology Daily, with a clear scientific and technological focus, sourced most of its quotes from experts and scholars, who interpreted the significance of artemisinin from a professional medical perspective. Through interviews with members of Tu Youyou's research team, the coverage in the newspaper also noted the hard work of the research team and demonstrated the spirit of Chinese researchers.
Western media also quoted comments from scholars and experts. Unlike the words of recognition or congratulation that often appear in Chinese media, most of the comments quoted by Western media carried a negative or revealing tone. By questioning Tu Youyou's award or China's scientific research system, the reports shifted from an objective and neutral stance at the beginning and became critical (McNeil Jr, 2012). The author also raised questions about Tu's award of the Nobel Prize. A report from The New York Times took China's research system as its target. The author referred to the concern expressed by a professor at Beijing Jiaotong University that 'There are many problems in the institutions and mechanisms of scientific work in China' (Perlez, 2015). The report also cited vague sources such as 'some analysts' and 'a journalist', attempting to create the impression that China's scientific research conditions are backward. It also dwelt on Tu's status as a scientist without a PhD degree or an overseas education who had not been named an academician, to suggest faults in China's research reward mechanism.

In Chinese media, Tu's award was reported as an event for national celebration. Chinese government officials were frequently quoted in the reports, which showed the significance of Tu's award from a broader perspective but inevitably led to similar narratives. Compared with the 'straightforward presentation' of Chinese media, Western media tended to use Tu's original words from her earlier interviews, including her feelings about the award and her perception of her personal identity. Such an approach is more direct in establishing the identity of the protagonist of the news. For example, the words of Tu Youyou quoted by The Guardian spoke of her great sense of social responsibility and humanistic care: 'I saw a lot of children who were in the latest stages of malaria. Those kids died very quickly. It is scientists' responsibility to continue fighting for the healthcare of all humans. What I have done was what I should have done as a return for the education provided by my country.' (Sample and Walker, 2015)

Comparison of the content of Chinese and Western media reports

Chinese media reports had more diverse topics and far-reaching implications. The topics chosen by Chinese media were more extensive and diverse, and the reports looked into the deeper issues behind Tu Youyou's award, such as the inheritance of TCM, the collision between Chinese medicine and Western medicine, the current situation of China's academic environment and problems in China's research environment (Table 6). By contrast, Western media coverage was less diverse: all the reports focused on Tu's personal life and research experience.
Among the reports in Chinese media, TCM was a major area of interest for the following reasons.
First, Tu's award highlighted the collision between TCM and Western medicine and triggered a debate about Chinese and Western medicine in China. The People's Daily and Science and Technology Daily both published several commentaries to show that Chinese medicine and Western medicine are not in conflict or mutually exclusive, and that the research and development of TCM and modern technology are not contradictory. In a society with highly advanced science and technology, no scientific research can be done without the application of modern technology, and no one contends that Chinese medicine should reject modern technology. Tu's research has created a new option for activating traditions and starting a new journey; however, for medicine, what matters most is still efficacy (Luo, 2015). Second, after Tu's award, the media started to pay attention to the inheritance and development of TCM. In October and November 2015, the People's Daily published a series of reports titled 'Reflections on Tu Youyou's Nobel Prize', which focused on the lives of folk TCM practitioners, TCM research and TCM governance (Wang and Li, 2015).
Tu Youyou's awards of the Lasker Prize and Nobel Prize also inspired the media to reflect on problems in China's scientific research system.
First, China's science and technology system has problems at the management level. According to Li (2015), a chief researcher at the China Academy of Chinese Medical Sciences, the reason that Tu's research results from 30 years ago have been recognized only now is excessive government intervention, privilege and hierarchy in the scientific research system. A commentary in the People's Daily pointed out that, although the conditions for scientific research have greatly improved, few high-quality results have been produced because the scientific and technological community as a whole tends to seek quick success and instant benefit. In addition, the application process for scientific research projects is too complex and cumbersome because the projects are overseen by multiple government departments, and the assessment mechanism puts too much emphasis on the number of published papers, forcing researchers to put quantity before quality. Therefore, there is an urgent need to reform the research system and improve the academic environment (Bai, 2011a).
Second, Tu has made great achievements but has not been elected as an academician simply because she does not have a PhD degree or overseas educational background, which shows the unreasonable features of China's scientific research reward mechanism. Whether the election of academicians is objective and fair is not only about the dignity and credibility of academicians themselves, but also affects the professional ambitions and enthusiasm of all science and technology workers (Bai, 2011b).
Western media tended to politicize science and technology events in their reports, while Chinese media focused more on the scientific research. In the reports related to Tu Youyou, the results of China's scientific and technological development were distorted and incompletely presented by Western media. The New York Times and The Guardian both mentioned the background of Tu's artemisinin research in several reports. By underscoring the relationship between the artemisinin discovery and the Vietnam War, they defined the development of artemisinin as an aid to the Vietnamese communists, who were struggling in their battle against the US. Such a characterization reinforced the political attribute of artemisinin research. The New York Times reported that 'few people realize that in one of the paradoxes of history, the drug was discovered thanks to Mao Zedong, who was acting to help the North Vietnamese in their jungle war against the Americans' (McNeil Jr, 2012).
The Chinese media, on the contrary, presented the background to the research from the perspective of human life and health and pointed out that artemisinin was originally developed to bring down the number of malaria cases in the world: 'In the 1960s, when chloroquine lost its effect on malaria and humans were suffering from malaria, Tu Youyou accepted the challenging task of malaria drug research from the "523" office of the national research project on malaria control'. In the Chinese media's review of Tu's research history, the focus was on how Tu discovered artemisinin in the TCM literature and how she overcame difficulties in her research by drawing on ancient wisdom. The connection between Tu and TCM was visible but was rarely underscored in Western media reports.

Chinese reports focused on collective spirit, whereas Western reports emphasized personal character. In their reports on Tu Youyou, Western media paid more attention to her personality and described her profile from multiple perspectives. For example, in describing Tu's research experience, they not only portrayed her as a mother but repeatedly mentioned the sacrifice of her personal life to complete the task assigned to her by the government. The Guardian reported that, to observe the disease first hand, Tu was sent to Hainan Province, an island off the southern coast of China, and had to leave her 4-year-old daughter in the care of a Beijing nursery. When she came back, her daughter barely recognized her (Sample and Walker, 2015).
Western media also emphasized the identity of 'women scientists' in their reports. For example, a piece of news on the winners of the Nobel Prize in Physiology or Medicine specifically noted that Tu Youyou was the 12th woman to get the prize (Sample and Walker, 2015). In a 2018 commentary, The Guardian stated that there were many amazing and inspiring women in science, including five women scientists whose work and achievements made them great role models for us all, and Tu Youyou was one of them (Charman-Anderson, 2018).
The reports in Western media quoted Tu Youyou frequently, which presented a true picture. However, they did not mention the experience and feelings of other researchers in Tu's team and ignored the collective identity of Chinese scientists represented by Tu. The emphasis placed by Western media on Tu's individuality also overlooked the overall progress of science in China. This has to do with Western society's emphasis on individualism and the pursuit of personal values.
The reports in Chinese media focused more on the collective identity of Chinese researchers represented by Tu Youyou and underscored the important role played by the national research system and collaboration between different research teams. At different stages of the development of artemisinin, various research institutions in the country played unique roles, which together contributed to the success of the research. In their reports on Tu's research, Chinese media quoted the members of her research team to present the image of a united team. For example, whenever Tu talked about the research results, she always said, 'The success of the research is attributed to the hard work of everyone in the team' (Xia, 2019). A commentary in the Science and Technology Daily observed that the development of artemisinin was the result of the nationwide system and quoted Tu as saying, 'In the battlefield of global malaria control, the power of individuals is small. Only a well-organized army guided by a clear purpose can defeat malaria' (Fu, 2021).
Chinese reports also highlighted the spirit of perseverance, dedication and sacrifice of Tu and her team. In the 1970s, 'The research equipment was rudimentary, with no exhaust system or protective equipment. In addition to dizziness and eye swelling, the researchers also had symptoms like nose bleeding and skin allergies. Tu herself also suffered from toxic hepatitis. Even so, they did not stop at the discovery of artemisinin'. Although Tu's team has already achieved great things, its members continue to expand their research into new areas: in addition to the anti-malaria research, the team is exploring the efficacy of artemisinin for fighting cancer and treating lupus erythematosus (Fu, 2021). The reports in Chinese media have demonstrated the great spirit of a generation of Chinese researchers who are dedicated and devoted to serving the country. However, although the reports have sent a positive message to society, they remain a somewhat mechanical way of shaping a positive image.
Promoting empathic communication by presenting the personalities of scientists from multiple perspectives
From the perspective of science communication, the public's perception and trust in science can be divided into 'instrumental trust in the effective application of scientific and technological achievements in society', 'symbolic trust in the objectivity and accuracy of scientific knowledge and methods' and 'ethical trust in the personality, competence and other virtues of scientists' (Liu, 2018). Chinese media are strong on the first two points. In their reports, they quoted domestic and international experts extensively, as well as those from the African region, providing a full picture of the contribution of Tu's research to global health. Compared with the reports in Western media, reports in Chinese media provided a more detailed account of how Tu combined TCM with Western technology and a more concrete illustration of the principles of artemisinin, thus reinforcing the scientific validity and effectiveness of the artemisinin therapy.
In shaping the personal image of scientists, mainstream Chinese media often like to take a macroscopic perspective and build their reports on key phrases such as 'benefiting the world', 'serving the country through scientific research' and 'carrying forward the spirit of Party members' to create an image of scientists who are dedicated to their work and the people and love their country. Although those reports are effective in conveying China's mainstream values and showcasing the progress of scientific research in China, the image created is too monotonous and abstract, lacking detail and vividness. Therefore, in their reports on scientists, the media should give equal importance to promoting collective values and creating individuals' profiles.
When reporting, one should avoid exalting scientists and should report at a 'human' level. Every scientist has his or her personality, be it playful, humorous, serious or reticent. One could use the words of the people around the scientist and his or her own words to form a full picture of the scientist's rich character. For example, in a series of articles commemorating Yuan Longping in May 2021, Shangguan News and other media chose details of the scientist's life as the focus, which struck a deep chord with the public. From Yuan Longping's wife, his hairdresser and his cat, 'Huahua', those episodes showed aspects of his everyday life, shaping the image of a frugal and easy-going scientist, which resonated powerfully with the audience. In another example, after the 2020 National Science and Technology Award was announced, the official Weibo account of the People's Daily posted a video of an episode in the award ceremony. After Gu Songfen received the award on stage, his wife Jiang Zefei moved through the crowd and came to his side. The touching scene of the couple holding their hands showed the emotional side of the scientists outside their research and brought them closer to the audience.
In the case of Tu Youyou, the media should focus on her female character. The status of women is one of the criteria for measuring the progress of a country and society; women's independence and freedom are a mark of a country's level of development. Therefore, a positive women's image will have a positive impact on the communication of the country's image. Tu's painstaking research and her courageous efforts in research projects make her an outstanding representative of resilient and progressive Chinese women. When reporting, the media can tell the story of hardworking Chinese women from the angle of women's progress, convey the message that women are no weaker than men, and, on that basis, shape China's image as a country of gender equality, freedom and fraternity, removing the stereotypes held by people of other countries and triggering the empathy of the international community.
Making less propaganda and breaking through cultural barriers
When reporting Tu's work, most of the Chinese media adopted positive communication. Their reports often focused on the significance and impact of Tu's award and the pride of the nation. In addition, Chinese media highlighted the scientists' Communist Party of China membership and their connection with the country's development. They frequently quoted words of praise by government officials in their articles, which makes a foreign audience resistant to the information in the articles. Western media, on the other hand, paid more attention to Tu's personal growth and research experience and quoted her own words extensively to present her image as a scientist. Although the narrative style of Chinese media can have an inspiring effect when communicating to the domestic audience, it is difficult to break the cultural barrier when communicating across different cultures. In the current cross-cultural and new-media communication environment, and given that some Chinese government agencies and official media still lack a strong voice and influence in international communication, less propaganda should be used in reporting. To make the communication content of Chinese media more acceptable to foreign audiences, one may start from the micro-level and plan relevant reports based on shared values. Chinese media could replace the repeated emphasis on scientists' achievements with a review of their research experience to reduce the resistance to communication brought about by cultural differences.
Producing high-quality journalism and meeting external challenges head-on
The negative reports of Western media on Tu's award mainly focused on three aspects. First, they doubted whether Tu as an individual could receive the award on behalf of the whole team. Second, they distorted the current situation of science and technology development in China. Third, they questioned the Chinese science and technology system.
In the face of the distortion of Tu's award and China's academic environment by Western media, Chinese media should take the initiative to publish reports and give timely responses to the questions by using facts. For instance, the People's Daily published a commentary on the question of whether it was fair to award the Lasker Prize solely to Tu. The author cited the examples of previous Nobel Prize winners and used statistics to show that giving the award to the chief scientist is a common principle followed by all international prizes in science. The author stressed that awarding the chief scientist is not meant to promote individualism but to recognize the unique contribution made by the chief scientist in the research. In the case of artemisinin research, if Tu had not discovered the method of extracting artemisinin, the subsequent determination of its structure and the modification of the drug would not have been possible (Bai, 2011c).
The above-mentioned commentary was published after Tu was awarded the Lasker Prize. It was objective because it was based on facts. However, probably because of the low level of international attention to Tu at that time, the commentary did not have a wide impact internationally. As late as 2015, Western media were still quoting the comments of scholars from Oxford University and Hong Kong University of Science and Technology in their Nobel Prize reports, suggesting that Tu was not individually eligible for the prize. Unfortunately, there has been no further response from Chinese media on the matter. Therefore, we believe that Chinese media should keep track of global public opinion, continue to produce high-quality commentaries and quote the words of scholars and experts with global influence in commentaries in order to respond to external questions. As for the problems that do exist in China's scientific research system, the media can play a supervisory role and push the government to take timely measures to resolve the problems and strengthen China's science and technology system.
Presenting the achievements of China's science and technology development through multi-channel communication
The awards to Tu Youyou turned the world's attention to the power of TCM. However, Chinese media focused their reporting on the confrontation and integration between TCM and Western medicine, as well as the problems in TCM's development, and failed to use the opportunity to promote the values of TCM. Rather than discussing the survival or abolition of TCM, the media should focus on the content of TCM culture that reflects the common experience and wisdom of humanity, popularize TCM concepts such as 'yin and yang' and 'meridians and collaterals' in a way that is easily understood by the Western audience, and introduce the Chinese wisdom represented by TCM.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship and/or publication of this article.
Funding
This study was supported by the 'Enhancing the international communication capability of the scientific community to tell a better China story' research project organized by the National Academy of Innovation Strategy (grant no. 2021-hjs-010).
Spin-accumulation induced magnetic texture in a metal-insulator bilayer
We consider the influence of a spin accumulation in a normal metal on the magnetic statics and dynamics in an adjacent magnetic insulator. In particular, we focus on arbitrary angles between the spin accumulation and the easy-axis of the magnetic insulator. Based on Landau-Lifshitz-Gilbert phenomenology supplemented with magnetoelectronic circuit theory, we find that the magnetic texture twists into a stable configuration that turns out to be described by a virtual, or image, domain wall configuration, i.e., a domain wall outside the ferromagnet. We show that even when the spin accumulation is perpendicular to the anisotropy axis, the magnetic texture develops a component parallel to the spin accumulation for sufficiently large spin bias. The emergence of this parallel component gives rise to threshold behavior in the spin Hall magnetoresistance and nonlocal magnon transport. This threshold can be used to design novel spintronic and magnonic devices that can be operated without external magnetic fields.
Introduction. -The use of propagating spin waves, or magnons, to transmit and process information has the potential advantage of lower energy consumption over electronic currents. Especially insulating ferromagnets (IFM), such as yttrium-iron garnet (YIG), are able to accommodate a spin current efficiently as the damping of the magnetic dynamics is relatively low [1]. This has raised an increased interest in the possibilities of magnonic devices and how these could replace current electronic devices [2,3]. Specifically, the behavior of magnons in magnetic domain wall textures can have promising applications [4,5].
A typical experiment achieves transfer of angular momentum into an IFM through a spin current from a normal metal (NM) lead, usually platinum, by generating a spin accumulation at the interface by the spin-Hall effect [1,6-8]. The angle of this spin accumulation with respect to the magnetization at the interface determines the efficiency of spin current injection. In this paper we consider the effect of a sufficiently large spin bias which locally affects the magnetic texture and thereby the transfer of angular momentum. We propose an analytical solution for the magnetization texture of the IFM for a general orientation of the spin accumulation. Results for nonlocal magnon transport [9] and the spin Hall magnetoresistance [10,11] are derived. We find threshold behavior in both local and nonlocal setups for a critical magnitude of the spin accumulation. This threshold behavior may be employed as a useful functionality in novel spintronic and magnonic devices that, as a result, do not require a cumbersome external magnetic field to access their different states. While threshold behavior is commonly associated with spin superfluidity [12,13], our results show a threshold that is related to a change in the stable magnetic texture, and not to a spin superfluid state.
Figure 1: (a) The magnetic texture n(x) (blue arrows) of the semi-infinite IFM nanowire (green region) with an easy-axis anisotropy in the z direction. In the NM (orange region) an electric current generates a spin accumulation µ with polar angle θ_µ at the interface (red arrow) that deforms the magnetic texture. (b) The opaque arrows in the NM region are virtual and illustrate that the magnetic texture is that of two oppositely oriented domains with the center of the virtual domain wall, x_DW, outside the IFM. Such a virtual domain wall solution is found analytically for any magnitude and orientation of the spin accumulation.
Equations of motion. -A one-dimensional semi-infinite IFM nanowire with an interface with a nonmagnetic metal at x = 0 is studied. At the interface a spin accumulation µ is generated, e.g. by means of the spin-Hall effect, which results in a boundary condition on the spin current in the ferromagnet. A possible configuration of the system is illustrated in Fig. 1 (a). Our aim is to determine the magnetic texture of the ferromagnet and its stability as a function of µ. We define n = M/M_s as the unit vector in the direction of the magnetization, where M_s is the saturation magnetization. The energy of our system is expressed in terms of V, the volume of the IFM, A, the spin stiffness, K > 0, the easy-axis anisotropy, and n_z = ẑ·n. We consider an easy z-axis anisotropy, but the results apply similarly to other easy-axis directions. The Landau-Lifshitz-Gilbert (LLG) equation, supplemented with the spin-transfer torque and spin-pumping terms that follow from magnetoelectronic circuit theory [14,15], governs the dynamics. Its left-hand side describes the damped time evolution of n, where α_G is the dimensionless phenomenological Gilbert damping constant. The first term on the right-hand side is the torque due to the effective magnetic field H_eff, where γ > 0 is the gyromagnetic ratio. The second describes the interfacial spin-transfer torque and spin pumping, where g↑↓ is the interfacial spin-flip scattering per unit surface area, i.e., the spin-mixing conductance, and s is the spin density. The characteristic length scale of the ferromagnet is the exchange length λ = √(A/K), and the ferromagnetic resonance frequency ω_F = γK/M_s sets the timescale. Finally, we define α(x) = α_G + λδ(x)α, with α = g↑↓/(4πλs), so that the LLG equation takes a compact form (Eq. (4)). We integrate the LLG equation around an infinitesimal interval around the interface to obtain the boundary condition on the spin current density (Eq. (5)). Furthermore, we have the boundary condition n → ẑ as x → ∞. Now we set out to obtain a solution to the bulk part of Eq. (4) and use that to satisfy the boundary condition (5).
Virtual domain wall solution. -It turns out that the stationary magnetization profile that obeys Eq. (4) and the boundary conditions is similar to a domain wall (DW) texture, but with the DW position outside of the ferromagnet: the DW is a stationary solution to the bulk part of the LLG equation, and the freedom of the DW position allows us to satisfy the boundary conditions. We refer to this situation as a virtual DW. Such a DW solution is written in spherical coordinates as n_0 = x̂ sin θ cos ϕ + ŷ sin θ sin ϕ + ẑ cos θ, with ϕ a constant azimuthal angle throughout the nanowire and θ the polar angle given by θ(x) = 2 arctan e^((x_DW − x)/λ). Here x_DW is the position of the DW. Next, we study the boundary condition Eq. (5) of the spin current. For convenience we switch to a local spherical basis whose radial unit vector is given by n_0. It follows that λ∂_x n_0 = −sin θ θ̂. Hence the boundary condition can be expressed in this basis, where θ_0 = θ(0), which gives us two equations. To solve these equations, we express µ in rescaled cylindrical coordinates: µ_z = µ·ẑ/ω_F; µ_R = √[(µ·x̂)² + (µ·ŷ)²]/ω_F; ϕ_µ = arctan[(µ·ŷ)/(µ·x̂)]. Then we write µ·θ̂ = µ_R cos(ϕ − ϕ_µ) cos θ − µ_z sin θ (10). From Eq. (11), we obtain an expression (12) for the azimuthal angle ϕ of the virtual DW in terms of ϕ_µ and the polar angle θ_0 of the virtual DW at the interface. Note that ϕ is only properly defined when µ_R ≠ 0. Indeed, if µ_R = 0 the boundary conditions fix sin θ = 0, i.e., the magnetization is homogeneous along the z direction and an azimuthal angle is ill-defined. By inserting Eq. (12) into Eq. (10), we rewrite Eq. (9) and take the square to obtain (α²µ_R² − u)(1 − u) = α²µ_z²u, with u = sin²θ_0. This is solved for 0 ≤ u ≤ 1 to obtain the expression for x_DW, where µ = |µ|/ω_F. Note that although the semi-infinite ferromagnet lies on the x ≥ 0 axis, a virtual DW texture, i.e., x_DW ≤ 0, is the only physical solution, as this will minimize the energy of the system. This is seen directly from Eq. (1), as the gradient in the first term is maximal around the virtual DW position. The role of x_DW is merely to configure the virtual DW profile in such a way that the boundary conditions are met. The behavior of the magnetic texture as a function of µ is plotted in Fig. 2. The figure demonstrates the effect of the spin bias on the magnetic texture in terms of the virtual DW position and the component of the spin accumulation that is parallel to the magnetization at the interface.
Figure 2: The plots indicate the effect of the spin accumulation on the magnetic texture: as the spin accumulation increases, the virtual DW position approaches the interface, which will only be reached when θ_µ = π/2, i.e., when µ is perpendicular to the anisotropy axis. For |µ| ≤ ω_F/α, µ is also perpendicular to the magnetization at the interface, but in the regime |µ| > ω_F/α there will be a finite parallel component of the spin accumulation.
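To make the virtual-DW picture concrete, the following minimal Python sketch evaluates the profile θ(x) = 2 arctan e^((x_DW − x)/λ) given in Appendix B, together with the special-case DW position x_DW = −λ arcsech(α|µ|/ω_F) that Appendix C derives for a spin accumulation perpendicular to the easy axis. The parameter values are illustrative only, and the sign convention in the exponent is inferred from the boundary condition n → ẑ as x → ∞.

```python
import numpy as np

def virtual_dw_profile(x, x_dw, lam=1.0):
    """Polar angle theta(x) of the virtual domain wall (Appendix B form)."""
    return 2.0 * np.arctan(np.exp((x_dw - x) / lam))

def dw_position_perp(mu, alpha_int, omega_f=1.0, lam=1.0):
    """DW position for a spin accumulation perpendicular to the easy axis (Appendix C)."""
    s = alpha_int * mu / omega_f            # dimensionless spin bias
    if s >= 1.0:
        return 0.0                          # DW pinned at the interface above threshold
    return -lam * np.arccosh(1.0 / s)       # -lam * arcsech(s), i.e. x_DW <= 0

# Example: texture for a sub-threshold bias (illustrative numbers)
x = np.linspace(0.0, 10.0, 200)             # position in units of lambda
x_dw = dw_position_perp(mu=0.5, alpha_int=1.0)
theta = virtual_dw_profile(x, x_dw)
print(f"x_DW = {x_dw:.3f} lambda, theta(0) = {theta[0]:.3f} rad")
```

In this sketch sin θ(0) equals the rescaled bias α|µ|/ω_F below threshold, consistent with the statement that the magnetization at the interface stays perpendicular to µ for |µ| ≤ ω_F/α.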
A remarkable feature is that for increasing |µ| the virtual DW position approaches the interface. Precisely when θ µ = π/2, the virtual DW position will reach the interface when |µ| = ω F /α . When |µ| increases further, the virtual DW position remains at the interface, but the azimuthal angle of the virtual DW now starts changing to pull the magnetization more parallel to the spin accumulation, resulting in the threshold behavior in the parallel component µ || = µ · n 0 | x=0 of the spin accumulation.
Spin Hall magnetoresistance. -When applying an electric current through a NM|IFM system, the electrical resistance depends on the orientation of the magnetization of the IFM with respect to the current direction. The electric current j e will generate a spin current j s x through the interface by the spin Hall effect. The magnitude of this current depends on the relative orientation of the magnetization of the IFM to the spin accumulation µ at the interface [10,11]: The spin current is maximized (minimized) when the spin accumulation and magnetization at the interface are perpendicular (parallel) as then the most (no) angular momentum is transferred. As a result the resistivity in the NM is maximal (minimal) due to the inverse spin Hall effect.
Considering Fig. 2 (a), we expect a threshold effect in this spin Hall magnetoresistance of the normal metal when the angle θ_j between the electrical current through the NM and the anisotropy axis vanishes. The applied electric field thus has a threshold value E_c, such that |µ| > ω_F/α, above which the spin accumulation deforms the magnetic texture such that the transfer of angular momentum is reduced.
Following [11] we solve the coupled charge and spin current drift-diffusion equations as a function of the angle θ_j between the electrical current and the anisotropy axis by inserting the boundary conditions for the spin current from Eq. (8), assuming that µ obeys a diffusion equation (see Appendix C). In the large thickness (along the x direction) limit for the NM and parallel current θ_j = 0, the critical electric field for which the magnetic texture develops a component parallel to the spin accumulation, i.e., |µ| = ω_F/α, is given by Eq. (14), with θ_SH the spin-Hall angle of the NM, l_s the spin diffusion length, e > 0 the elementary charge and σ the electrical conductivity. To estimate this effect we consider a Pt|YIG interface, where the critical electric field has a value of approximately 21 V/µm [1,16-18]. In Fig. 3 we plot the normalized difference in resistance in the NM as a function of the applied electric field for a Pt|YIG interface. One clearly sees the threshold behavior of the resistance due to the change in magnetic texture as a function of the spin accumulation.
Figure 3: E_c is the electric field that generates a spin accumulation |µ| = ω_F/α at the interface. For E ≤ E_c and θ_j = 0 the conductivity is not affected by the change in magnetization of the IFM, as the magnetization remains perpendicular to µ. For E > E_c the magnetization at the interface aligns more with µ and the spin Hall magnetoresistance decreases.
Magnon transport. -As we have seen, there is no transfer of spin when the spin accumulation and magnetization are parallel. Despite this, the IFM can accommodate the transfer of angular momentum by means of fluctuations (either thermal or quantum) in the form of spin waves, i.e., magnons. The magnons are injected and detected through spin-flip scattering at the interface with NM leads. The efficiency of the transfer of angular momentum is optimal when the spin accumulation is parallel to the magnetization at the interface. As a consequence, threshold behavior is expected in the nonlocal magnon transport signal.
A typical experiment that quantifies the magnon transport attaches a lead at some position x = d ≫ λ and measures the electric current generated by the inverse spin-Hall effect [2]. To consider magnons, we add a perturbation to our stationary solution, where we make the ansatz that |δn_θ(x, t)| ≪ 1 and |δn_ϕ(x, t)| ≪ 1 and that they are homogeneous along the y and z directions, as we assume translation symmetry along the interface. The magnon field is defined as ψ = δn_θ + iδn_ϕ, and thermal fluctuations are modeled by adding a stochastic field h to the LLG Eq. (4) [9,19]. Fourier transforming ψ and h, we obtain a Schrödinger-like equation from the linearized LLG equation, where h = h_θ + ih_ϕ. The second term on the right-hand side plays the role of a local potential with a minimum at the virtual DW position. The stochastic fields at the interfaces and in the bulk are combined into a single field, where each stochastic field obeys the fluctuation-dissipation theorem [9], and the temperature T is assumed constant and equal in the bulk and at the leads, as we are only interested in the nonlocal transport due to the spin bias. In this way, magnon dissipation at the boundaries and in the bulk is considered.
The observable we are interested in is the average spin current injected into the right lead at x = d, where we define j_s = j_s·n_0 evaluated at x = d. We use Green's functions to express ψ in terms of the stochastic field and find an analytical solution using two types of solutions for the bulk part of Eq. (16) [4,9]: ψ_(ω,±) = (∓iλk(ω) + cos θ)e^(±ikx), with k(ω) = λ⁻¹√((1 + iα_G)ω/ω_F − 1). Remarkably, these magnon modes are stable regardless of the orientation and magnitude of the spin accumulation. The result for the spin current at the right interface is written in the familiar Landauer-Büttiker form.
Figure 4: For |µ| > ω_F/α there is a finite spin current when the spin accumulation is perpendicular to the anisotropy axis. This threshold behavior is caused by the deformation of the magnetic texture, generating a parallel component of the magnetization, as is seen in Fig. 2 (a).
In Fig. 4 the spin current injected into the right lead x = d is plotted as a function of the spin accumulation at the left lead x = 0, where the polar angle θ µ between the spin accumulation and the anisotropy axis is π/2. Our results show that for large bias, the spin accumulation affects the magnetic texture significantly. In particular, for |µ| > ω F /α there is a non-zero current even though the spin accumulation is perpendicular to the anisotropy axis. Such threshold behavior is also seen in experiments [12].
Conclusion. -We have shown that a spin accumulation at the interface of a NM with an IFM affects the magnetic texture and thereby moderates the transfer of angular momentum across the interface. The magnetic texture is found analytically and is interpreted as a virtual DW, where the DW position always lies outside or at the boundary of the ferromagnet.
Note that we do not fix the magnetization of the IFM at the interface as was done by Sitte et al. [20] where a conducting ferromagnet is considered. There the authors demonstrate that by fixing the magnetization at the interface there is a critical current above which DWs are injected into the nanowire. Similarly, DWs are injected into our IFM system for sufficiently large spin accumulation and when the magnetization is fixed (see Appendix A), which would physically correspond to a very large interface anisotropy.
Furthermore, we have shown that this interaction between the spin accumulation and the magnetization at the interface results in threshold behavior in spin Hall magnetoresistance and nonlocal magnon transport: When the spin accumulation exceeds the critical value ω F /α the spin Hall magnetoresistance drops suddenly when the electric current is parallel to the anisotropy axis. Moreover, above the critical value a finite nonlocal magnon current can be measured even when the spin accumulation is oriented perpendicular to the anisotropy axis. These results provide a novel route to control both local and nonlocal spin transport signals via the electric current, without the need for an external magnetic field. We provide a possible geometry for an experimental setup in Appendix D.
We have assumed that the system size is large relative to the exchange length λ. For a smaller system, the exchange energy of the magnet cannot compensate the spin transfer torque, which leads to spin-torque oscillator instabilities [21] that prevent the formation of the virtual DW texture. Furthermore, we assume that the contact size of the biased lead is small compared to the distance between the leads to ensure that the magnons do not form a Bose-Einstein condensate when µ || > ω F /α [22].
The electric field required to arrive at the threshold for a Pt|YIG bilayer is still two orders of magnitude higher than electric fields that have recently been applied in this kind of system [23], but the expression for the critical electric field (14) holds for any material, hence the threshold is more accessible for materials with a lower spin density, for example.
Remarkably, it is often argued that threshold behavior in nonlocal magnon transport indicates a metastable spin superfluid state [12,13,24]. However, we have demonstrated that even a stable magnetic texture may also lead to threshold behavior in the nonlocal magnon transport. We expect that an external magnetic field or a non-zero Dzyaloshinskii-Moriya interaction (DMI) might smoothen the threshold behavior as this will affect the azimuthal angle of the virtual DW.
In future research our theory can be applied to interpret experimental results on such threshold behavior. Moreover, the model can be extended to antiferromagnets. Furthermore, the model can be enriched by considering the effects of a weak magnetic field or DMI.
We acknowledge useful discussions with Julius Krebbekx and Geert Hoogeboom. R.D. is member of the D-ITP consortium, a program of the Dutch Organization for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). This work is funded by the European Research Council (ERC). This work is part of the research programme of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organization for Scientific Research (NWO).
Appendix A: Domain wall injection for insulating ferromagnets
Recently it was shown by Sitte et al. [20] that passing an electrical current through a conducting ferromagnetic nanowire injects domain walls (DWs) into the magnet when the magnetization at the interface is fixed. Physically, the situation corresponds to an interface anisotropy that is so large that it dominates all other terms and fixes the magnetization direction at the interface. Here we show that the same result can be achieved for an insulating ferromagnet (IFM) by means of a spin accumulation at the interface with a normal metal. We will attempt to follow the derivation by Sitte et al. as closely as possible.
The system of interest is a semi-infinite ferromagnetic nanowire with an easy axis along the wire. At the left end of the wire (x = 0) we fix the magnetization orientation n(0) =ẑ. The free energy of this system is given by where A is the exchange interaction, and K the anisotropy for hte one dimenasional system (note that in the main text we consider a three dimensional system with translation invariance). Π is a general form for the anisotropy, from which we only require Π(0) = 1, Π(1) = 0 and is monotonic and differentiable with Π (0) = 0. At x = 0 there is a spin accumulation µ = µ xx + µ yŷ (an accumulation in the z-direction does not contribute). The as in the main text, the LLG equation for this system reads Note that we set the gyromagnetic ratio γ > 0 by convention, and use the same notation as in the main text. We aim to determine the critical spin accumulation energy below which there is a stable solution n. Above this energy the dynamics will be slow and considered adiabatic, hence we will ignore dissipation. We thus reduce the LLG to where we defined with the Berry phase like term Ω µ satisfying Now we consider F eff as an action with corresponding Lagrangian We may also define a Hamiltonian density which should be conserved (w.r.t. x, i.e. translationally invariant). So evaluating at x → ∞ we have that Next, we consider the x component of the LLG and integrate to obtain For a static solutionṅ x = 0, and with the boundary conditions n y (0) = n z (∞) = ∂ x n z (∞) = 0 and n z (0) = 1 we use partial integration If we evaluate Eq. (A8) at x = 0, where we have ∂ x n ⊥ z, and insert the above result, we obtain the condition for a stationary solution. For µ x > ω F /α the texture thus becomes unstable, and we have checked numerically that domain walls are then injected into the insulating ferromagnet.
Appendix B: Properties of the domain wall profile As stated in the main article a stationary solution to the bulk part of the LLG equation is the domain wall, written in spherical coordinates as n DW =x sin θ cos ϕ +ŷ sin θ sin ϕ +ẑ cos θ, with ϕ a constant azimuthal angle throughout the nanowire and θ the polar angle given by Indeed sech 2 (−X) + tanh 2 (−X) = 1, so there must be some angle θ such that sin(θ) = sech(−X) and cos(θ) = tanh(−X). Now we will show that this is satisfied by Eq. (B2). This angle θ must satisfy (B5) Now we can find a and b such that tan(a) = e −X , and That is a = π 2 − arctan e X , and b = arctan e X . (B7) Now continuing Eq. (B5) where we used the difference formula of the tangens for the last identity of the first line. Thus we obtain that indeed θ = 2 arctan e X .
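As a quick numerical sanity check of the identity shown above — that θ = 2 arctan e^X satisfies sin θ = sech(−X) and cos θ = tanh(−X) — one may run the following short sketch; the sampled grid is arbitrary and purely illustrative.

```python
import numpy as np

# Verify numerically that theta = 2*arctan(exp(X)) obeys
# sin(theta) = sech(-X) and cos(theta) = tanh(-X).
X = np.linspace(-5.0, 5.0, 11)
theta = 2.0 * np.arctan(np.exp(X))
assert np.allclose(np.sin(theta), 1.0 / np.cosh(-X))
assert np.allclose(np.cos(theta), np.tanh(-X))
print("identity holds on the sampled grid")
```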
• The following identities hold We only need to show the first identity and the rest will follow trivially. Note that from Eqs. (B3) and (B4) we have e X = tan θ/2, so Hence, rewriting the above gives λ∂ x θ = −2 sin θ/2 cos θ/2 = − sin θ.
To show that n DW indeed satisfies the LLG equation we derive which is obtained readily with the help of Eqs. (B11) and (B13). As this is clearly parallel to n DW , all terms in the LLG equation vanish.
Appendix C: Spin Hall magnetoresistance
In this section we compute the resistance of a normal metal wire connected to a ferromagnetic insulator whose magnetization is parallel to the applied electric current. As the electric current will generate a spin accumulation at the interface, perpendicular to the magnetization, we expect an increase in the resistance once the applied voltage is above a threshold value, such that |µ| > ω F /α where the spin accumulation deforms the magnetic texture to allow for a nonzero spin current out of the normal metal.
Considering the spin current j s x flowing perpendicular to the interface (the vector part describes the orientation of the spin), the relevant spin diffusion equations are [11]: Here, σ is the electric conductivity of the normal metal in units Ω −1 m −1 , e > 0 the elementary charge and θ SH the spin Hall angle. µ e is the electric potential, so ∂ z µ e = F is the applied electric force. We rescale to make these equations dimensionless, definingx = x/λ,t = tω F , µ e = µ e / ω F andF = F λ/ ω F . The currents will be normalized asj e = j e /j e c , with j e c = ω F σ/λe, and j s = j s /j s c , with j s c = ω F sλ/2. Furthermore, we introduce the dimensionless constant c j = 2ej s c / j e c such that, omitting the tildes for clarity, Eqs. (C1) and (C2) reduce to The spin accumulation obeys the diffusion equation µ = ∂ 2 x µ/l 2 s , with l s the rescaled (w.r.t. λ) spin diffusion length. The solution is of the form with t the rescaled thickness of the normal metal (along the x axis). We have the following boundary conditions for Eq. (C4): where θ 0 = θ(x = 0). The latter condition is obtained from the LLG equation in the ferromagnet, as discussed in the main text. For this particular orientation of the spin accumulation (that is µ z = 0), the solution for the DW position reduces to x DW = 0, for |α µ y | ≥ 1; −arcsech |α µ y |, otherwise. (C8) Using Eq. (B3) we obtain sin θ 0 and thereby ϕ = ϕ µ + arcsin(sin θ 0 /α µ y ). Thus the boundary conditions at for |α µ y | ≤ 1. Otherwise where we approximated around µ x = 0. We can solve the system exactly using numerics, or, with this approximation, obtain an analytical expression for a − and a + , which allows us to determine the electrical current as a function of the applied electric force and study their nonlinear dependence. Furthermore, we generalize the result to the setup where the polar angle θ j between the electric current and the anisotropy axis is varied. Effectively, the boundary condition in Eq. (C7) gets rotated over the angle −θ j . The results are obtained numerically and discussed and illustrated in the main text.
Appendix D: Fluctuation assisted Transport
The observable we are interested in is the average spin current into the right lead at x = d λ. We use δn θ = (ψ + ψ )/2 and δn ϕ = (ψ − ψ )/2i. Similar for the stochastic field h r θ = (h r +h r )/2 and h r ϕ = (h r −h r )/2i. When taking the average, terms linear in any of the independent components of the stochastic field h vanish. Note furthermore that ψ does not depend on the radial component n 0 · h. Hence we find δn ×δ n = n 0 Im ψ ψ ; (D1) The superscript indicates that we are considering the stochastic field at the right lead. Thus, averaging over the spin current at the right lead, we will determine by working out to two terms on the right hand side separately. At the right interface we also have spin flip scattering, so now α( Starting from the equation of motion given in the main text we will use the magnon's Greens function, defined by the equation to express ψ in terms of the stochastic field: Recall from the main text that the stochastic fields obey the fluctuation dissipation theorem [9], yielding For our purposes, we only need to consider the system of equations The NM leads (orange) are attached perpendicular to each other in such a way that the injecting lead is parallel to the anisotropy axis (blue arrow) of the FMI (green). The theoretical model described in this paper is a good approximation if the distance between the leads d is much larger than the length of the leads l.
To find a solution for G ω (x, d) we attempt with k(ω) = λ −1 (1 + iα G )ω/ω F − 1 so that Eq. (D11) is satisfied. The remaining two equation are used to determine c + and c − analytically. We define the self energies We can Fourier transform the time coordinate and express the wave function in terms of the Greens function and the total stochastic field to write λ d α δn ×δ n · n 0 = − 2 dω 2π dxdx dx dx Σ r ω (x, x )G ω (x , x ) Σ ω (x , x )F ω + Σ l ω (x , x )F l ω + Σ r ω (x , x )F r ω G ω (x, x ); (D18) Where we introduced a matrix (of infinite dimension) notation, where we interpret a function f (x, x ) as a matrixf with components labeled by x and x . The matrix product is defined by integration (f ·ĝ) xx = dx f (x, x )g(x , x). The trace is defined by integrating over the diagonal components. Similarly, we express Where we inserted an identity in the first line and read off the inverse matrix Eq. (D5), to be inserted in the second line, with H(x, x ) = λ −1 δ(x − x )(λ 2 ∂ 2 x − cos 2θ). Note that the term proportional toω +Ĥ drops out, as it is purely imaginary because H is hermitian. Combining the two results we have Setting all temperatures equal yields the spin current as given in the main article. We propose a geometry for an experimental setup to measure the threshold behavior in the non-local magnon transport. A top view is illustrated in Fig. 5. The NM leads are attached perpendicular to each other and the magnetization anisotropy is parallel to the length of the injecting lead, so that the spin accumulation is perpendicular to the anisotropy axis for the injecting lead. At the detecting lead the magnetization will be perpendicular to the length of the lead.
COMPARATIVE ANALYSIS ON THE EFFICACY OF AEROBIC CAPACITY IN CARDIAC REHABILITATION OBESE AND NON-OBESE PHASE II PATIENT
Background: Aerobic exercise offers a host of health benefits that reduce health risks and help maintain body weight. The purpose of the present investigation is to determine the influence of aerobic exercises on body weight and Metabolic Equivalent of Task (MET) activity among cardiac rehabilitation phase II patients. The objective of the study is to investigate the impact of obesity on the efficacy of aerobic capacity. Methods: Fifteen obese (ten males, five females) and fifteen non-obese (eleven males, four females) participants of phase II cardiac rehabilitation were selected from a tertiary care hospital by their Body Mass Index (BMI). They were divided into two groups by a simple random technique. Aerobic exercises were given for 12 weeks to post-CABG cardiac rehabilitation phase II obese and non-obese (healthy and overweight) patients. BMI and the Metabolic Equivalent of Task activity of cardiac rehabilitation phase II patients were measured after enrolling patients in the 12-week aerobic exercise program. Results: Aerobic exercise showed a positive result in both obese and non-obese patients. Aerobic exercises significantly improved the metabolic equivalent of task in both obese (4.6667 ± 0.65134; p < 0.05) and non-obese patients (4.6923 ± 0.48038; p < 0.05). However, aerobic exercises were more effective in enhancing the efficacy of aerobic capacity in obese patients. Conclusion: It is evident that aerobic exercises are more effective for obese patients in maintaining or reducing weight. Higher MET activity was observed in obese patients.
INTRODUCTION
Obesity and overweight have become a global problem with high prevalence in the current era [1,2]. The degree of obesity has been associated with reduced health-related quality of life [3,4] and is concomitant with significant medical comorbidities [5] and a shortened lifespan [6]. Until now, BMI calculation has been the only method followed for the classification of obesity. According to the World Health Organization (WHO) classification of BMI, participants with a BMI below 18.5 kg/m2 are labeled as underweight, adults with a BMI between 18.5 kg/m2 and 24.9 kg/m2 are categorized as the healthy-weight population, a BMI between 25 kg/m2 and 29.9 kg/m2 reflects the overweight population, and an adult with a BMI of 30 kg/m2 or higher is considered obese [7]. The prevalence of obesity is rising rapidly in developed [8,9] as well as developing countries [1,10]. BMI is a predictive indicator for individuals entering a cardiac rehabilitation program after Coronary Artery Bypass Grafting (CABG) surgery [11]. Cardiac rehabilitation is a structured program of education and activities guided towards lifestyle modification, increasing functional capabilities and peer support [12]. A study by Blair and colleagues revealed that sedentary adults can incorporate short bouts of moderate-intensity activity into their daily routines as an approach to lifestyle modification [13]. Regular aerobic exercise can bring remarkable changes not just to the body, metabolism and heart, but also to the spirits. Aerobic exercises, also known as cardio exercises, are one of the integral parts of cardiac rehabilitation. Aerobic exercise is physical exercise of low to high intensity that depends primarily on the aerobic energy-generating process. Aerobic means "relating to, involving, or requiring free oxygen" and refers to the use of oxygen to adequately meet energy demands during exercise via aerobic metabolism [14]. Peak VO2 is either directly measured during the exercise test or estimated from the maximal exercise capacity in METs. Endorsing the use of aerobic exercises in post-CABG patients is expected to increase the health status of cardiac patients. A study shows that changes in body weight should be considered an essential clinical outcome in cardiac rehabilitation programs [15-17]. E Pamela in 2000 and Reybrouck T et al. in 1987 showed that, of two groups enrolled in a cardiac rehabilitation program, one with a sedentary lifestyle categorized as obese participants and another with an active lifestyle, the obese patients showed a 10% reduction in mean maximal aerobic power per decade, whereas the reduction with an active lifestyle was less than 5% [13,18]. It has recently been evidenced that aerobic fitness is the primary factor influencing future health outcomes [19-21], although the physiological basis of this concept remains unclear. In the above studies, both obesity and aerobic fitness are risk factors for future health outcomes, but it is unclear whether these effects are related to one another or are independent risk factors [19]. Boyce et al. in 1997 found in a cross-over design trial that four months of exercise training using stationary cycling in sixteen non-dialysis Chronic Kidney Disease subjects decreased blood pressure and increased peak oxygen consumption [20]. In another non-randomized trial, Clyne et al. found that exercise coaching for three months via bicycle ergometry in ten pre-dialytic Chronic Kidney Disease patients increased exercise capacity and thigh musculature [21]. A study by Ross E revealed that a program of diet plus lifestyle activity may offer similar health benefits and is an appropriate substitute for diet plus structured aerobic exercise for obese women [22]. Another study by Gorden M et al. concluded that fat mass does not have any effect on VO2 max; obesity does not necessarily denote a reduced ability to consume oxygen for activities [23]. Another study shows similar results regarding the functional level of cardiac rehabilitation patients, i.e., functional fitness declines in obese patients because of the inert load created by excess body fat [24]. P Ekkekakis in 2006 reported that, as the intensity increases, overweight adults tend to experience more skeletal and muscular aches and pains than normal-weight adults [25]. Recent studies have also indicated significant effects of cardiac rehabilitation on reducing subsequent hospitalization costs following major Chronic Heart Disease events [26], and pooled data from several randomized studies indicate significant reductions in subsequent primary Chronic Heart Disease [27-29]. Therefore, it is required to evaluate the independent role of body fat in exercise conditioning when evaluating the improvements observed in peak aerobic capacity following cardiac rehabilitation. The purpose of the study is to identify the effects of aerobic capacity on BMI changes in obese and non-obese phase II cardiac rehab patients and, at the same time, to evaluate the peak METs activity of obese and non-obese cardiac rehabilitation phase II patients. This study aims to identify factors for effective weight-reduction strategies to enhance the benefits of cardiac rehabilitation for the large number of obese post-CABG patients.
METHODOLOGY
Thirty post-CABG patients were enrolled in the Out Patient Department (OPD) phase II cardiac rehabilitation of a tertiary care hospital. The initial assessment comprised height, weight, blood pressure, MET activity, heart rate, oxygen saturation, a baseline exercise test and ECG (through telemetry). The patients were divided into two groups of 15 patients each, one group consisting of non-obese (normal and overweight) and the other of obese patients. Both groups received 18 exercise sessions per month, a diet plan and 12 education classes. Each exercise session consisted of 10 minutes of warm-up, 20 minutes of treadmill combined with 20 minutes of bicycling, and 10 minutes of cool-down exercise followed by stretching. The intensity of exercise varied among individuals and ranged from 70 to 75% of the maximal heart rate obtained by exercise testing. Patients were also encouraged to exercise four to five times per week (equivalent to 18 exercise sessions per month) in the outpatient department under the supervision of a cardiac rehabilitation specialist. Each exercise prescription was adjusted periodically to ensure a gradual increase in exercise performance. The dietary instruction was individualized, and monthly return visits with the dietitian were routine, particularly for obese patients and those who were less compliant with the diet. All patients were frequently encouraged by physicians, dietitians, exercise physiologists and nurses to comply with both the dietary and exercise portions of the program. The recommended treatment was delivered in 54 sessions. At the end of the 54 sessions, weight, height, and peak MET activity with heart rate and ECG were measured again to determine the changes in the obese and non-obese patients. The complete study was conducted over a duration of 3 months. Informed consent was taken from every participant before the administration of the questionnaire. The objectives of the study and the rationale for conducting the survey were explained before proceeding. Data were analyzed in SPSS version 20. Frequencies and percentages were calculated for categorical variables. A paired-sample t-test was conducted to determine whether BMI differed significantly before and after the intervention in both groups. Independent-sample tests were conducted to compare the differences between the groups. A value of p < 0.05 was used as the indicator of statistical significance. Inclusion criteria: post-CABG, phase II cardiac rehab patients, BMI 18.5-39.9 kg/m2, males and females aged between 40 and 62 years, selected for the study with their consent to be part of the research. Exclusion criteria: patients with a BMI greater than or equal to 40 kg/m2, myocardial infarction, ventricular septal defect repair, or on lipid-lowering drugs were excluded from the study. Patients who did not meet the criteria of the phase II cardiac rehab program were also excluded.
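As an illustration of the grouping and analysis described above, the minimal Python sketch below allocates participants by BMI and runs a paired-sample t-test with SciPy, mirroring the SPSS analysis at p < 0.05. The MET values shown are hypothetical placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

def bmi_group(bmi):
    """Allocate a participant by BMI (kg/m^2): group A = non-obese (18.5-29.9), group B = obese (30-39.9)."""
    if 18.5 <= bmi < 30.0:
        return "A (non-obese)"
    if 30.0 <= bmi < 40.0:
        return "B (obese)"
    return "excluded"

# Hypothetical pre/post MET values for one group, compared with a paired t-test.
met_pre = np.array([3.2, 3.5, 3.0, 3.8, 3.4])
met_post = np.array([4.5, 4.9, 4.2, 5.0, 4.7])
t_stat, p_value = stats.ttest_rel(met_pre, met_post)

print(bmi_group(27.4), bmi_group(33.1))
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```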
Aerobic Exercise Training Protocol
The participants of the study who were assigned to the exercise group received instruction on walking properly and on appropriate shoe selection. An introductory session was completed to educate the patient on developing a walking program, familiarizing him or her with the lab and with operating and utilizing the treadmill. Each exercise training session included 10 minutes of warm-up, 40 minutes of aerobic activity, and 10 minutes of cool-down and stretching. Each training program was individualized and based on the results of the baseline exercise tests. Patients were not allowed to exercise at a heart rate beyond that achieved on the maximal exercise test.
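A minimal sketch of how the individualized heart-rate prescription described above could be computed, using the 70-75% window of the test-derived maximal heart rate mentioned in the methodology; the 150 bpm example value is hypothetical.

```python
def target_hr_range(max_hr_from_test, lower=0.70, upper=0.75):
    """Training heart-rate window as 70-75% of the maximal heart rate measured on the baseline exercise test."""
    return round(lower * max_hr_from_test), round(upper * max_hr_from_test)

# Example: a patient reaching 150 bpm on the baseline exercise test
low, high = target_hr_range(150)
print(f"prescribed training range: {low}-{high} bpm (never exceeding 150 bpm)")
```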
RESULTS
The 30 patients who participated in this study were divided into two equal groups of N = 15 each; they were assigned to a group based on BMI. Only those participants whose BMI lay between 18.5 kg/m2 and 39.9 kg/m2 were enrolled in the study. Participants with a BMI between 18.5 kg/m2 and 29.9 kg/m2 were allocated to group A, and those with a BMI of 30-39.9 kg/m2 were placed in group B. All participants of group B completed the 12 weeks of the cardiac rehabilitation program phase II without any dropouts, while two patients of group A dropped out due to health hazards.
Table 3: Data regarding perception of exhaustion in both groups.
On the question regarding the type of aerobic activity performed in the past two weeks by obese and non-obese cardiac rehab phase II patients, all obese patients performed walking as an aerobic activity, while 92.3% of non-obese patients performed walking as an aerobic activity. On the question regarding using a stationary bike as an aerobic exercise, 83.3% of non-obese patients used stationary bikes for 1-3 hrs/week, while 40.0% of obese patients used a stationary bike as aerobic exercise. On the question regarding aquatic exercises, 9.1% of non-obese patients performed aquatic exercise for 30-60 min/week while no obese patient did so, and 30.8% of obese patients performed aquatic exercises for 1-3 hrs/week while no non-obese patient did so.
DISCUSSION
Regular aerobic physical activity increases exercise capacity and plays a role in both primary and secondary prevention of cardiovascular disease. Inactivity is a risk factor for coronary artery disease. Exercise training increases cardiovascular functional capacity and decreases myocardial oxygen demand at any level of physical activity in a healthy population as well as in most patients with cardiovascular disease. The potential risk of cardiovascular disease can be reduced by physical activity. Joint flexibility, strength and endurance exercise are an important part of a comprehensive exercise program. The use of light weights seems beneficial in cardiac rehabilitation patients. This investigation demonstrates that body fat contributes to the "observed" improvement in METs activity following exercise conditioning programs and cardiac rehabilitation. To our knowledge, there are few studies on the effects of exercise training, but this study has examined the comparative analysis of obese and non-obese post-CABG phase II patients.
The key finding of this study is that a 3-month aerobic exercise training program in post-CABG patients shows a marked increase in peak METs activity in obese (4.6667±4.8038) as compared to non-obese (4.6923±0.65134) patients. A study by Goran M et al. in 2000 revealed similar results: absolute VO2 max was significantly higher in the obese (1.24±0.27 vs 1.56±0.40), and VO2 max relative to body weight was significantly lower (44.2±3.2 vs 32.0±4.1) [30]. It seems that excess fat increases the challenge faced by the obese individual. Based on recent work concerning fitness vs. fatness, it is time to re-examine the nature of the relationship between aerobic fitness and total body fatness in humans [31-37]. A previous analysis of this relationship by Toth et al. demonstrates that fat-free mass acts as the covariate, rather than dividing VO2 max by body weight or fat-free mass, but it is still unclear whether fat mass (FM) has any additional independent effect on VO2 max [38]. In comparison with the obese patients, the non-obese patients had a significantly higher reduction in body weight, that is (28.9167±2.70169) versus (21.9685±1.41287). The results of this study suggest that change in body weight should be considered an essential clinical outcome in cardiac rehabilitation programs and that substantial efforts should be directed towards maximizing weight reduction for overweight patients with chronic heart disease. These should include maximization of exercise-related energy expenditure in cardiac rehabilitation and adoption of a hypocaloric diet using behavioral education. To date, several studies have provided evidence that high body weight, BMI, and adiposity are associated with a lower level of physical activity participation and lower adherence to activity programs [39-42]. Why overweight individuals seem less willing than normal-weight ones to participate in and adhere to physical activity remains largely unknown despite the obvious practical importance of this question. Patients who experience weight loss during CR have an improved prognosis as compared to those who do not lose weight. There is emerging evidence to indicate that meaningful weight loss is possible through an individualized cardiac rehabilitation program directed toward higher caloric expenditure and a negative caloric balance. This is good news for people who understand the role of physical activity in weight control but dislike vigorous physical activity or believe that they lack time to exercise. For an overweight patient with a sedentary lifestyle, a diet combined with a lifestyle program of gradual and moderate-intensity physical activity helps to reduce weight and enhances weight management while improving the cardiovascular disease risk profile. On the question regarding swimming as an aerobic exercise, only 9.1% of non-obese patients performed swimming for 30-60 min/week as an aerobic exercise. A study in 1981 by Sheldon Magder concluded that, in patients with reduced exercise capacity, swimming requires near-maximal effort compared with healthy individuals, and individuals who are not good at swimming achieve the same peak VO2 during swimming as in cycling [43]. According to one estimate, the cardiac death rate per 100,000 hours of exercise ranges from 0 to 2.0/100,000 in the general population and from 0.13/100,000 to 0.61/100,000 in cardiac rehabilitation programs.
Falls and musculoskeletal injuries are additional risks associated with physical activity, but most of them do not require medical treatment. The incidence of such complications is comparatively low in patients performing low-intensity activities.
CONCLUSION
The results of the present study, in conjunction with previous findings, highlight the importance of aerobic activity for both obese and non-obese (normal and overweight) patients. There is considerable evidence that aerobic exercises, including stationary biking and walking, in addition to a diet plan and education classes in a cardiac rehabilitation program, improve weight maintenance. This study may help reduce the expense of obesity-related problems, as the program can serve as a treatment regimen for cardiac as well as non-cardiac patients.
Proteolysis of the Membrane Type-1 Matrix Metalloproteinase Prodomain
Membrane type-1 matrix metalloproteinase (MT1-MMP) exerts its enhanced activity in multiple cancer types. Understanding the activation process of MT1-MMP is essential for designing novel and effective cancer therapies. Like all of the other MMPs, MT1-MMP is synthesized as a zymogen, the latency of which is maintained by its inhibitory prodomain. Proteolytic processing of the prodomain transforms the zymogen into a catalytically active enzyme. A sequential, two-step activation process is normally required for MMPs. Our in silico modeling suggests that the prodomain of MT1-MMP exhibits a conserved three helix-bundled structure and a “bait” loop region linking helixes 1 and 2. We hypothesized and then confirmed that in addition to furin cleavage there is also a cleavage at the bait region in the activation process of MT1-MMP. A two-step sequential activation of MT1-MMP is likely to include the MMP-dependent cleavage at either P47GD↓L50 or P58QS↓L61 or at both sites of the bait region. This event results in the activation intermediate. The activation process is then completed by a proprotein convertase cleaving the inhibitory prodomain at the R108RKR111↓Y112 site, where Tyr112 is the N-terminal residue of the mature MT1-MMP enzyme. Our findings suggest that the most efficient activation results from a two-step mechanism that eventually is required for the degradation of the inhibitory prodomain and the release of the activated, mature MT1-MMP enzyme. These findings shed more light on the functional role of the inhibitory prodomain and on the proteolytic control of MT1-MMP activation, a crucial process that may be differentially regulated in normal and cancer cells.
Matrix metalloproteinases (MMPs), a family comprised of 25 individual zinc-dependent proteolytic enzymes, are classified as either soluble or membrane-tethered proteinases. The latter exhibit the presence of either a transmembrane domain or a glycosylphosphatidylinositol anchor (1) and, because of their association with cell surfaces, play key roles in pericellular proteolysis (2). MMPs are especially important to the aberrant proteolysis associated with pathologies including cancer, arthritis, and cardiovascular diseases (3). All individual members of the MMP family are synthesized as latent zymogens. The active site zinc of the MMP catalytic domain is coordinated with the three histidines of the active site and with the cysteine of the "cysteine switch" motif of the N-terminal prodomain (4).
To date, the crystal structure of only a few individual MMP zymogens, including MMP-1, MMP-2, MMP-3, and MMP-9, has been solved. The common characteristic of the prodomain structure of these four MMPs is the presence of the three helixes that are perpendicular to each other (5)(6)(7)(8). Hydrophobic interactions between the helixes maintain the stability of the three-helix bundle of the prodomain.
In the course of the two-step activation, the external active protease initially cleaves the susceptible bond at the "bait" region of the prodomain of MMP-1, MMP-2, and MMP-9. This event destroys the helix bundle and leads to the generation of the activation intermediate. The activation intermediate is processed further by either an external proteinase or autocatalytically, resulting in the complete removal of the residual prodomain sequence (9). The activation process involves proprotein convertases, autocatalysis, and other activated MMPs, and it may occur either intracellularly or extracellularly (10,11). The two-step proteolytic removal of the prodomain results in the activation of the latent MMP zymogen and the generation of a mature enzyme with full proteolytic activity.
Like all of the other members of the MMP family, MT1-MMP is synthesized as a proteolytically inert zymogen. Evidence suggests that the processing of the zymogen into the active enzyme is a regulated event. To determine whether this hypothesis is correct, we performed an extensive biochemical analysis of the cleavage events that target the prodomain of MT1-MMP. As a result of these studies, we suggest that the activation of MT1-MMP involves a two-step mechanism in which the processing of the prodomain sequence at the P47GD↓L50 and P58QS↓L61 cleavage sites generates the intermediate form. The activation intermediate is then processed by a proprotein convertase cleaving at the R108RKR111↓Y112 site and generating the fully activated MT1-MMP enzyme.
MATERIALS AND METHODS
Antibodies, Reagents, and Cells-Rabbit polyclonal antibody (AB815) against the hinge region of MT1-MMP, the murine monoclonal antibody (clone 3G4) against the catalytic domain of MT1-MMP, and the hydroxamate inhibitor GM6001 were from Chemicon (Temecula, CA). The mouse monoclonal antibody against a V5 epitope was from Invitrogen. Sulfosuccinimidyl-6-(biotinamido)hexanoate (EZ-Link sulfo-NHS-Long Chain (LC)-biotin) was from Pierce. Human glioma U251 cells were originally from ATCC (Manassas, VA). Cells were grown in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum. As a control, we used U251 cells stably transfected with the two original pcDNA3.1-zeo and -neo plasmids (mock cells). For MT1-MMP overexpression, U251 cells were transfected with the wild-type MT1-MMP (MT1-WT cells) and the MT1-MMP mutant constructs, including the R89A mutant with the inactivated R89RPR92 putative furin cleavage motif (R89A cells), the ARAA mutant with the inactivated furin R108RKR111 cleavage site (ARAA cells), and the R89A/ARAA double mutant (R89A/ARAA cells). In addition, we used WT cells that stably co-expressed MT1-MMP with the α1-antitrypsin Portland furin inhibitor (PDX) (MT1/PDX cells). The MT1/PDX cells were initially transfected with MT1-MMP and then with PDX. All these cell lines were constructed and have been extensively characterized in our earlier works (23, 28-30).
Cloning and Expression of the Recombinant MT1-MMP Catalytic Domain, the Prodomain, and the Soluble, Catalytically Inert MT1-MMP E240A Constructs-The MT1-MMP catalytic domain (MT1-CAT) was expressed and purified as described (31,32). The catalytically inert E240A full-length MT1-MMP mutant cDNA (29) was used as a template to clone the MT1-MMP prodomain (MT1-PRO) and the soluble catalytically inert MT1-MMP E240A (MT1-PRO-CAT-PEX) constructs. The MT1-PRO-CAT-PEX construct included the propeptide sequence (PRO), the inert (E240A) catalytic domain (CAT), and the hemopexin (PEX) domain. To facilitate isolation and detection in the samples, MT1-PRO-CAT-PEX was tagged with a His6 tag at both the C and N termini. In addition, a V5 epitope sequence was linked to the C-terminal His6 tag sequence.
The MT1-PRO was purified from the supernatant fraction using a 1.6 × 10-cm Co2+-chelating Sepharose Fast Flow column (Amersham Biosciences) equilibrated with PBS supplemented with 1 M NaCl. MT1-PRO was eluted with an imidazole gradient (10-100 mM; 100 ml) in PBS, 1 M NaCl. The MT1-PRO fractions were concentrated using a 5-kDa cutoff concentrator (Millipore, Billerica, MA) and dialyzed against PBS containing 0.005% Brij35. A polyclonal antibody to the purified individual MT1-PRO was then raised in rabbits.
The MT1-PRO-CAT-PEX inert construct was purified from the inclusion bodies and then refolded to restore its native conformation. The inclusion bodies (10 mg of total protein) were washed in 10 mM Tris-HCl, pH 8.0, containing 1 M NaCl and 1% Triton X-100 and then dissolved in 10 mM Tris-HCl, pH 8.0, containing 6 M guanidine hydrochloride and 10 mM 2-mercaptoethanol. The soluble material was then refolded by a 50-fold dilution in 100 mM Tris-HCl, pH 8.0, supplemented with 1 mM CaCl2, 1 mM ZnCl2, 500 mM L-arginine monohydrochloride, and 20% glycerol. The refolded MT1-PRO-CAT-PEX was next concentrated using a 30-kDa cutoff concentrator (Millipore) and purified on a 1.6 × 10-cm Co2+-chelating Sepharose Fast Flow column (Amersham Biosciences) equilibrated with PBS, 1 M NaCl. The construct was eluted with an imidazole gradient (10-500 mM; 100 ml) in PBS, 1 M NaCl, concentrated using a 30-kDa cutoff concentrator, and dialyzed against PBS containing 0.005% Brij35.
To identify the N-terminal sequence of the cleavage fragments, the catalytically inert MT1-PRO-CAT-PEX E240A construct (5 µg) was co-incubated with MT1-CAT and furin (50 ng each) for 1 h at 37°C in 50 mM HEPES buffer, pH 6.8. Recombinant human furin was prepared in the S2 Drosophila expression system (Invitrogen) and purified to homogeneity (33). The reactions were separated by SDS gel electrophoresis followed by transfer of the protein bands to a polyvinylidene difluoride membrane and N-terminal microsequencing of the resulting bands. Microsequencing was performed at ProSeq (Boxford, MA).
For the subsequent cleavage experiments, cellular MT1-MMP was immunoprecipitated, using the AB815 antibody (1 µg) and Protein G-agarose beads (20 µl of a 50% slurry), for 12 h at 4°C from cell lysate aliquots (1 mg of total protein each) of the confluent MT1/PDX cells. The lysis buffer (20 mM Tris-HCl, 150 mM NaCl, 0.1% SDS, 1% Triton X-100, 1% deoxycholate, 1% IGEPAL, pH 7.4) was supplemented with a protease inhibitor mixture set III (Sigma), 1 mM phenylmethylsulfonyl fluoride, and 10 mM EDTA. The beads were collected by centrifugation and then washed in 50 mM HEPES, pH 6.8. The samples were incubated for 30 min at 37°C with MT1-CAT (20 ng) in 50 mM HEPES, pH 6.8, containing 10 mM CaCl2, 0.5 mM MgCl2, and 50 µM ZnCl2. The digest samples were analyzed by Western blotting with the 3G4 antibody and a TMB/E substrate (Chemicon) to identify the cleavage products.
Cell Surface Biotinylation-Cell surface-associated MT1-MMP was biotinylated by incubating cells (80-90% confluence) for 30 min on ice in PBS containing 0.1 mg/ml EZ-Link NHS-LC-biotin. Excess biotin was removed by washing cells in ice-cold PBS and then quenched by incubating cells for 10 min in PBS containing 100 mM glycine. After washing with PBS, cells were lysed in 20 mM Tris-HCl, 150 mM NaCl, 0.1% SDS, 1% Triton X-100, 1% deoxycholate, 1% IGEPAL, pH 7.4, supplemented with a protease inhibitor mixture set III. MT1-MMP was precipitated from cell lysates using streptavidin-agarose beads and analyzed by Western blotting with the MT1-MMP antibody (3G4) followed by the goat secondary horseradish peroxidase-conjugated IgG and a TMB/M substrate (Chemicon).
Gelatin Zymography-Gelatin zymography was used to determine the efficiency of MMP-2 activation by cellular MT1-MMP. Cells were plated in the wells of a 48-well plate (Costar/Corning) in serum-containing Dulbecco's modified Eagle's medium and grown to reach 90% confluence. The medium was then replaced with serum-free Dulbecco's modified Eagle's medium supplemented with the purified MMP-2 proenzyme (100 ng/ml). After incubation for 12 h, the medium aliquots were analyzed by gelatin zymography on 10% acrylamide gels containing 0.1% gelatin (Novex) to detect the proenzyme and the activated species of MMP-2.
RESULTS

Structural Modeling of the Three Helix-bundled MT1-MMP Prodomain-The multiple sequence alignment of the prodomain peptide sequences of several MMPs is shown in Fig. 1. There is significant sequence homology of the helical regions and loops 2 and 3 in the peptide sequences of the MMPs. In contrast, loop 1, which is the bait region in MMP-1, -2, and -9 (9), displays the least homology, thus providing structural evidence of a unique means of the first proteolytic step of a two-step activation mechanism for each MMP. The second and final activation step of MMPs, including MT1-MMP, involves cleavage at the C-terminal part of loop 3 (Fig. 1).
Because the crystal structure of the MT1-MMP proenzyme is not currently available, we used in silico modeling to predict the spatial structure of the MT1-MMP prodomain. The available structures of the sequence-homologous MMP-1, -2, -3, and -9 were used as templates. We also built a model of MT1-MMP using the structure of the MT1-MMP catalytic domain and the structures of the prodomain and the PEX domain of MMP-1, MMP-2, and MMP-3 (Fig. 1). (In Fig. 1, the secondary structure of the prodomain region is shown above the multiple sequence alignment, a similarity plot is shown below it, and the residue numbering starts from the signal peptide sequence.)
Computer modeling suggests that the triple helix bundle is highly conserved in MMPs including the MT1-MMP prodomain. In our model, in agreement with the solved structures for proMMP-1, proMMP-2, proMMP-3, and proMMP-9, the proMT1-MMP prodomain has a conserved 3-helix structure. Consistent with the exposure of the bait region in loop 1 in MMP-1, -2, -3, -9, and other MMPs, the loop 1 peptide sequence also appears highly accessible to proteolytic attack in MT1-MMP. It has also been established that MMP cleavage motifs predominantly exhibit the presence of a P3 Pro and a hydrophobic residue (especially Leu) at the P1′ position. There are two potential cleavage motifs, P47GD↓L50 and P58QS↓L61, in the loop 1 sequence of the prodomain of MT1-MMP. Because furin cleavage of the R108RKR111 loop 3 sequence represents the final step of MT1-MMP proenzyme processing, we hypothesized that the loop 1 bait sequence is the target of either autolytic cleavage or processing by an external protease with MMP cleavage specificity.
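As a rough illustration of how such candidate sites can be located computationally, the sketch below scans the loop 1 region for positions with Leu at P1′ and Pro at P3. The sequence string and the residue offset are assembled from the peptides described in the next section and are assumptions of this example only.

```python
# Minimal sketch: scan the MT1-MMP loop 1 sequence for putative MMP cleavage
# motifs with Pro at the P3 position and Leu at the P1' position.
# The sequence (residues 44-68, numbered from the signal peptide) is assembled
# from the peptides discussed in the text and is an assumption of this example.

LOOP1 = "YLPPGDLRTHTQRSPQSLSAAIAAM"  # residues 44..68
OFFSET = 44                          # residue number of the first character

def find_mmp_motifs(seq, offset):
    """Return residue numbers of P1' Leu residues preceded by Pro at P3."""
    hits = []
    for i, res in enumerate(seq):
        # P1' is position i; P1 = i-1, P2 = i-2, P3 = i-3
        if res == "L" and i >= 3 and seq[i - 3] == "P":
            hits.append(offset + i)
    return hits

if __name__ == "__main__":
    for p1_prime in find_mmp_motifs(LOOP1, OFFSET):
        print(f"candidate cleavage before Leu{p1_prime}")
    # Prints Leu50 and Leu61, matching the P47GD|L50 and P58QS|L61 sites.
```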
MT1-MMP Proteolysis of the Prodomain in Vitro-To determine whether the loop 1 sequence is susceptible to MT1-MMP autoproteolysis, we synthesized the peptides (Y44LPPGD↓L50RTHTQRSPQ59 and H53TQRSPQS↓L61SAAIAAM68) that overlap the putative cleavage sites. We also synthesized the mutant peptides in which the cleavage sites were inactivated by the L50D and L61D mutations, respectively. The peptides were subjected to proteolysis by MT1-CAT and MMP-2. MS analysis of the digest reactions was then performed to determine the molecular mass and, consequently, the sequence of the cleavage products (Fig. 2). We determined that both MT1-CAT and MMP-2 efficiently cleaved the original peptides and generated cleavage fragments of the expected size and peptide sequence. Thus, the cleavage of Y44LPPGD↓L50RTHTQRSPQ59 resulted in peptides 44-49 (YLPPGD) and 50-59 (LRTHTQRSPQ). Similarly, peptides 53-60 (HTQRSPQS) and 61-68 (LSAAIAAM) were detected following the cleavage of H53TQRSPQS↓L61SAAIAAM68 by MT1-CAT and MMP-2. In sharp contrast, the mutant L50D and L61D peptides exhibiting the inactivated MMP cleavage sites were completely resistant to proteolysis. GM6001, a hydroxamate inhibitor of MMPs, totally blocked the cleavage reactions (not shown).
To additionally confirm these results, we isolated the recombinant MT1-PRO and then subjected the construct to MT1-CAT proteolysis and MMP-2 proteolysis (Fig. 3). The digest reactions were analyzed by SDS electrophoresis in 10-20% polyacrylamide gels in the Tris-Tricine system (Invitrogen) and also by MS. One major 6-kDa and several minor cleavage products were detected in the reactions. MS identified the presence of cleavage fragments whose molecular masses correlated well with the N-terminal 1-49 fragment, the 50-60 peptide, and the 61-111 C-terminal fragment of the MT1-PRO construct. GM6001 (2.5 µM) completely blocked the cleavage reactions, and only the intact MT1-PRO construct was identified in the reactions. Taken together, these results suggest that both the P47GD↓L50 and P58QS↓L61 sites are susceptible in vitro to MT1-CAT and MMP-2 and, potentially, to additional MMPs.
MT1-MMP Proteolysis of the Inert, Soluble MT1-MMP Construct-To corroborate our cleavage data, we isolated and then refolded the inert (E240A) soluble MT1-MMP construct (MT1-PRO-CAT-PEX) to restore its native conformation. The construct was efficiently cleaved by furin at the R108RKR111↓Y112 cleavage site, generating the mature MT1-MMP enzyme with its N-terminal sequence commencing from Tyr112. In the MT1-MMP proenzyme, the N-terminal putative furin cleavage site (R89RPR92) is part of the cysteine-switch motif region (R89RPRCGVPD97). It is well established that the cysteine-switch sequence region interacts with the active site histidines, maintains the latent proenzyme state of MT1-MMP, and in the native proenzyme is inaccessible to external furin (23). Because only the conventional R108RKR111↓Y112 cleavage site of the MT1-PRO-CAT-PEX construct was accessible to furin (Fig. 4), we concluded that following refolding the construct restored its native conformation.
We next subjected MT1-PRO-CAT-PEX to MT1-MMP proteolysis. The cleavage reactions were analyzed by SDS-gel electrophoresis, and the N-terminal sequence of the resulting cleavage fragments was then determined by N-terminal amino acid microsequencing. As expected, the sequence of the intact MT1-PRO-CAT-PEX construct (Fig. 4A, upper band) commenced from the N-terminal MHHHHHHG26 sequence (containing the His6 tag; amino acid numbering starts from the signal peptide sequence). The identified N-terminal sequence of the cleavage product (Fig. 4B, lower band) was LSAAIAAMXXF, suggesting that proteolysis at the P58QS↓L61 site took place and that this proteolysis generated a fragment commencing from the N-terminal Leu61.
To analyze the cleavage products in more detail, the cleavage reaction aliquots were subjected to MS. Peptides whose molecular masses corresponded well to peptides 26-49 (MHHHHHHGSAQSSSFSPEAWLQQYGYLPPGD) and 50-60 (LRTHTQRSPQS) were identified in the reactions. Taken together, these results suggest that autolytic cleavage of the prodomain sequence of MT1-MMP may involve proteolysis at either the P47GD↓L50 site or the P58QS↓L61 site or at both sites, generating, as a result, an intermediate activation species of MT1-MMP.
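As an aside on how such MS assignments can be sanity-checked, the sketch below computes approximate average masses of candidate peptide fragments from standard residue masses; the mass values and the chosen fragments are assumptions of this illustration and not data from this study.

```python
# Minimal sketch: approximate average mass of a peptide fragment, used to
# sanity-check MS-based assignment of prodomain cleavage fragments.
# Average residue masses (Da) are standard textbook values; treat the exact
# numbers as approximate.

RESIDUE_MASS = {
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02  # mass of H2O added for the free N- and C-termini

def peptide_mass(seq):
    """Return the approximate average mass (Da) of a peptide sequence."""
    return sum(RESIDUE_MASS[res] for res in seq) + WATER

if __name__ == "__main__":
    for name, seq in [("50-60", "LRTHTQRSPQS"), ("61-68", "LSAAIAAM")]:
        print(f"fragment {name}: {peptide_mass(seq):.1f} Da")
```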
Activation Intermediate of Cellular MT1-MMP-To additionally confirm the presence of the activation intermediate of MT1-MMP, we immunoprecipitated cellular MT1-MMP from MT1/PDX cells using the rabbit AB815 antibody. These cells co-express MT1-MMP with the inhibitor of furin, the α1-antitrypsin variant Portland (PDX). We specifically selected MT1/PDX cells for our analysis because of the high levels of the MT1-MMP zymogen in these cells. Earlier, we demonstrated that the expression of PDX significantly repressed the activity of the cellular furin-like proprotein convertases and, as a result, enhanced the levels of the MT1-MMP zymogen in MT1/PDX cells when compared with the PDX-negative glioma cells (30). The precipitated material (IP) was subjected to MT1-CAT hydrolysis. The reaction products were analyzed by Western blotting using the 3G4 monoclonal antibody that was raised against MT1-CAT (Fig. 5). The intact samples of cellular MT1-MMP demonstrated the presence of major bands that represent the conventional MT1-MMP proenzyme and the enzyme, and a minor band of intermediate molecular weight. Following MT1-CAT proteolysis, the concentration of the proenzyme was reduced with a concomitant increase in the level of the intermediate.
To additionally confirm these results, we determined the status of MT1-MMP in the cells that stably express the wild-type proteinase and the proteinase mutants in which the furin cleavage motifs were inactivated by mutations (ARAA and R89A) (29). As controls, we used the cells stably transfected with the original plasmid (mock) as well as the MT1/PDX cells. To evaluate the status of the cell surface-associated MT1-MMP, cells were surface-biotinylated with membrane-impermeable biotin, and the biotin-labeled proteins were then precipitated using streptavidin-agarose beads. MT1-MMP was identified in the precipitates using the 3G4 antibody. To determine the status of the total cell MT1-MMP pool, the cells were lysed and the lysate aliquots were analyzed by Western blotting with the 3G4 antibody (Fig. 5). The enzyme of MT1-MMP was predominantly observed in mock cells. In the cells expressing the wild-type MT1-MMP, there were three bands that represented the proenzyme, the intermediate, and the mature enzyme.

The Analysis of the Double L50D/L61D MT1-MMP Mutant-To address the question of whether the cleavage at the bait region of MT1-MMP is necessary before furin cleavage to accomplish the efficient activation of MT1-MMP, we inactivated, by mutagenesis, the P47GD↓L50 and P58QS↓L61 cleavage sites of the bait region and, as a result, generated the double MT1-L50D/L61D mutant. We then expressed the mutant and the wild-type MT1-MMP in MCF-7 cells. We specifically selected MCF-7 cells for our studies because they do not naturally express MT1-MMP and MMP-2. In addition, we generated a rabbit polyclonal antibody to the recombinant prodomain of the MT1-MMP sequence. Because the prodomain is absent in the mature MT1-MMP enzyme, the availability of this antibody greatly facilitates the identification of the MT1-MMP proenzyme alone in cell samples. Fig. 5C shows that, according to Western blotting with the antibody against the individual MT1-MMP prodomain, the L50D/L61D mutant construct efficiently accumulates in the proenzyme form in MCF-7 cells, suggesting that the mutant resists furin processing. In contrast, only insignificant levels of the MT1-MMP proenzyme were detected in cells expressing either the inert mutant (Glu240-V5) or the wild-type MT1-MMP (MT1-V5).
These results correlate well with the reduced efficiency of the L50D/L61D mutant in activating MMP-2. Thus, according to the gelatin zymography of medium aliquots, the independent clones of MT1-L50D/L61D cells were significantly less efficient in generating the activated species of MMP-2 when compared with the wild-type construct. Consistent with our observations and many others, the inert E240A construct was incapable of MMP-2 activation (Fig. 5C). These results confirm that the processing of MT1-MMP at the P47GD↓L50 and P58QS↓L61 cleavage sites of the bait region facilitates either the subsequent cleavage of the R108RKR111↓Y112 site of the prodomain by furin or the release of the prodomain by MT1-MMP, or both, to generate the mature MT1-MMP enzyme with full proteolytic activity. Overall, our results suggest that a "furin alone, one-step mechanism" is less efficient in activating MT1-MMP than a two-step activation that involves furin and an additional proteinase.
To address the question of whether proteinases other than MT1-MMP can serve as the activating enzymes for the Ser-Leu61 or the Asp-Leu50 cleavage, we compared, by Western blotting with the 3G4 antibody against the catalytic domain, the activation pattern of the wild-type MT1-MMP with that of the inert Glu240 mutant. Consistent with our other results, the proenzyme, intermediate, and mature enzyme species were each detected in the MT1-WT cell lysates. In turn, the activated mature enzyme was the predominant MT1-MMP form in E240A cells. Because MCF-7 cells do not naturally synthesize MT1-MMP and MMP-2, and because the activity of the Glu240 mutant is nearly completely repressed by the mutation of the catalytically essential Glu240, these results suggest that the bait region of the prodomain of MT1-MMP may be accessible to, and then sensitive to, multiple cellular proteinases rather than to MT1-MMP itself or MMP-2 alone.
The Activity of MT1-MMP at Acidic pH-Overall, our results suggest that non-furin proteolysis generates the activation intermediate of MT1-MMP and that the relative level of the activation intermediate is highest inside the cellular compartment when compared with that present on the cell surface. Based on these data, we hypothesized that the intermediate is generated in the secretory vesicle in the course of secretion of the MT1-MMP proenzyme from the Golgi compartment to the plasma membrane. Because of the acidic interior of secretory vesicles, we next determined whether MT1-MMP is capable of exhibiting its proteolytic activity at acidic pH. To this end, we performed the refolding procedure of MT1-CAT at pH 5.0, 5.4, 6.0, and 7.0 and then tested the proteolytic activity of the refolded MT1-CAT samples against α1-antitrypsin (AAT), a convenient and sensitive protein substrate of MT1-MMP (39). Fig. 6 shows that when either the refolding of MT1-CAT or the cleavage reactions, or both procedures, were performed at pH 5.0, AAT remained in its intact form. Refolding at either pH 6.0 or 7.0 followed by cleavage reactions performed at pH 5.4, 6.0, or 7.0 resulted in significant proteolytic activity of MT1-CAT. Refolding of MT1-CAT at pH 5.4 followed by a cleavage reaction at pH 5.4 was sufficient to generate high MT1-CAT activity that led to a significant level of AAT cleavage. We infer from these results that the acidic interior of the secretory vesicle is compatible with autoproteolysis of MT1-MMP and that self-proteolysis of MT1-MMP may occur in the secretory vesicle budding from the trans-Golgi compartment.
DISCUSSION
Since the discovery of MT1-MMP in 1994-1995 and the findings showing its role in the activation of MMP-2, there has been a question: what is the mechanism of activation of the activator (19,20)? Volumes of evidence that have been generated through the years suggest that MT1-MMP is a key player in tumor cell migration and that MT1-MMP is a likely drug target in multiple pathologies. A precise and complete understanding of the activation and regulation mechanisms of MT1-MMP is required for the design of effective therapies.
It is known that the R108RKR111↓Y112 motif of the prodomain sequence of the latent MT1-MMP proenzyme is processed by furin and several additional furin-like proprotein convertases in the course of the secretion pathway and that this single-step processing results in the mature enzyme sequence commencing from the N-terminal Tyr112 (25). It is also well established that the cysteine residue of the prodomain cysteine-switch motif maintains the latent status of the proenzyme by chelating the active site zinc (4). The cysteine-switch peptide sequence itself and the prodomain are inhibitory for MMPs including MT1-MMP (9,40,41). A single-step mechanism suggests that the excised prodomain likely remains associated with the mature proteinase. Thus, following cleavage of the ADAM12 prodomain in the trans-Golgi by a furin proteinase, the prodomain remains non-covalently associated with the mature molecule (42).
To inactivate the excised inhibitory prodomain and to liberate the processed active enzyme species from the inhibition of the prodomain, several members of the MMP family have adopted a two-step mechanism. This mechanism involves the cleavage of the prodomain peptide sequence both by an external proteinase and by autocatalysis. Based on these considerations and on the conserved three-helix bundle structure of the MMP prodomains (9), we have hypothesized that, in addition to a furin-dependent step, there is an additional and previously uncharacterized step in the MT1-MMP activation process. Accordingly, we suggest that detection of the activation intermediate of MT1-MMP that is generated as a result of the cleavage in the bait region of the prodomain will bring us a step closer to a full understanding of the activation mechanism of this proteinase.
The results of our biochemical studies supported our hypothesis. To prove our hypothesis, we used synthetic peptides, the recombinant prodomain, and the soluble MT1-MMP constructs, the furin-cleavage resistant MT1-MMP mutants, and the MT1-MMP mutants with the inactivated bait region cleavage sites. The proteolytic processing of these constructs was analyzed by N-terminal sequencing and MS of the resulting cleavage fragments. Our results provide substantial evidence that supports a two-step mechanism of MT1-MMP activation and prodomain sequence processing.
Our results suggest that there is a proteolytic processing of the bait region of the prodomain sequence of MT1-MMP (either at P47GD↓L50 or at P58QS↓L61 or at both sites). This event results in the activation intermediate of MT1-MMP, the presence of which we have demonstrated using in vitro tests and cell-based assays. In agreement, the processing of the prodomain by furin was impaired in the L50D/L61D mutant in which both cleavage sites of the prodomain bait region were inactivated by mutations. The stepwise activation of MT1-MMP also involves the action of a furin proteinase cleaving the inhibitory prodomain at the R108RKR111↓Y112 site, where Tyr112 is the N-terminal residue of the mature MT1-MMP enzyme. This two-step mechanism eventually facilitates the degradation of the inhibitory prodomain and the release of the activated, mature MT1-MMP enzyme. We believe that our findings shed more light on the potentially important functional role of the inhibitory prodomain and on the proteolytic control of MT1-MMP activation, a crucial process that may be differentially regulated in normal and cancer cells.
A GEOMETRIC PROGRAMMING APPROACH FOR OPTIMAL RESOURCE ALLOCATION TO CONTROL EPIDEMIC OUTBREAKS IN ARBITRARY NETWORKS
This paper proposes control strategies for nodes to contain the spread of an epidemic outbreak in arbitrary directed graphs by optimally allocating resources throughout the network. Epidemic propagation is modeled as a networked version of the Susceptible-Exposed-Infected-Susceptible (SEIS) epidemic process. Using the Kolmogorov forward equations and a mean-field approximation, we present a mean-field model describing the spreading dynamics and prove a necessary and sufficient condition for global exponential stability. From this stability condition we derive a condition for controlling the spread of an epidemic outbreak in terms of the eigenvalues of a matrix that depends on the network structure and the parameters of the model. According to different control purposes and conditions, two types of control-theoretic decisions can be considered: 1) given a fixed budget, find the optimal resource allocation that achieves the highest level of containment; 2) given a desired decay rate of the epidemic, find the minimum cost needed to control the spreading process at that decay rate. A geometric program is formulated to solve these optimization problems, and the existence of solutions is proved. Numerical simulations illustrate our results.
Introduction
Development of strategies to control spreading processes in networks has attracted much attention due to its applications in many relevant fields, including computer viruses [1], public health [2][3], and information propagation over social networks [4]. The dynamics of spreading processes in networks depend not only on the epidemic model but also on the structure of the contact network. In this context, the spreading process is modeled by a variant of the SIS epidemic model that includes an "exposed" state. The infection rate and recovery rate of each individual can be modified within a feasible range by allocating resources to each node. Based on this model, an efficient convex framework can be formulated to solve the optimization problems. The dynamic behavior of spreading processes in networks has been widely studied. Mean-field models over arbitrary contact graphs have been brought to the forefront in both continuous time [5] and discrete time. Reference [6] studies the dynamic behavior of spreading processes in arbitrary contact networks in the discrete-time case. Reference [7] considers a continuous-time SIS model over arbitrary graphs using mean-field theory and provides a condition under which the disease-free state is globally asymptotically stable. It should be noted that most models have been developed for undirected graphs; in practice, however, directed graphs may be more appropriate for describing the spread of diseases in human populations. Therefore, we study spreading processes in an arbitrary directed network of heterogeneous nodes.
Designing control strategies for spreading processes in networks is a significant problem, and several papers have addressed different aspects of it. In [8], the authors proposed a semidefinite programming (SDP) approach to find optimal resource allocation strategies in an undirected network. Reference [9] uses a linear-fractional optimization program to find the optimal investment in disease awareness over a social network. In [10], Borgs et al. provide a probabilistic analysis for the case of a given contact network to characterize the optimal allocation of a fixed amount of antidote. Reference [11] provides an eigenvalue sensitivity analysis to design optimal resource allocation strategies to control the spread of a virus. The relationship between the recovery parameters and a distributed approach is explored in [12]. Our work is based on [13] and [14], in which a continuous-time Markov process, called the N-intertwined model, is used to analyze and control the spread of an SEIV epidemic model. This paper's layout is as follows. In section 2, some important notation and background are introduced. In section 3, we formulate two resource allocation problems for epidemic propagation in a network. In section 4, a convex optimization framework is used to efficiently solve the allocation problems both on strongly connected digraphs and on general directed graphs (not necessarily strongly connected). In section 5, simulations illustrate our results.
Notation and preliminaries
This section introduces some graph-theoretic nomenclature and the dynamic spreading model.
Graph Theory
Let G = (V, E) denote a directed graph, where V = {v_1, …, v_n} denotes the set of n nodes and E ⊆ V × V denotes the set of ordered pairs of nodes called directed edges. An edge from v_j pointing towards v_i is denoted (v_j, v_i). Let N_i^in = {j : (v_j, v_i) ∈ E} denote the in-neighborhood of node i. A directed graph is strongly connected if and only if there is a directed path from v_j to v_i for every pair of nodes v_i, v_j ∈ V.
The adjacency matrix of a digraph G, denoted U_G = [u_ij], is an n × n matrix defined entry-wise as u_ij = 1 if edge (v_i, v_j) ∈ E, and u_ij = 0 otherwise. Hence, the adjacency matrix U_G = [u_ij] is always nonnegative. A nonnegative adjacency matrix is irreducible if and only if its associated graph is strongly connected. Given an n × n matrix Z, the eigenvalues and corresponding eigenvectors of Z are denoted λ_1(Z), …, λ_n(Z) and η_1(Z), …, η_n(Z), where the eigenvalues are ordered in decreasing order of their real parts, i.e., Re(λ_1(Z)) ≥ … ≥ Re(λ_n(Z)). λ_1(Z) and η_1(Z) are called the dominant eigenvalue and eigenvector of Z. ρ(Z) denotes the spectral radius of Z, equal to the maximum modulus across all eigenvalues of Z.
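For readers implementing these spectral quantities, a short numerical sketch is given below; it computes the dominant eigenvalue, the corresponding eigenvector, and the spectral radius of a small, arbitrarily chosen adjacency matrix, and is only an illustration of the definitions above.

```python
# Minimal sketch: dominant eigenvalue/eigenvector and spectral radius of a
# nonnegative adjacency matrix, with eigenvalues ordered by decreasing real part.
import numpy as np

# Arbitrary 4-node strongly connected, unweighted directed graph.
U = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

eigvals, eigvecs = np.linalg.eig(U)
order = np.argsort(-eigvals.real)     # decreasing real part
lam1 = eigvals[order[0]]              # dominant eigenvalue
eta1 = eigvecs[:, order[0]]           # dominant eigenvector
rho = np.max(np.abs(eigvals))         # spectral radius

print("dominant eigenvalue:", lam1)
print("spectral radius:", rho)
print("dominant eigenvector:", eta1.real)
```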
It should be noted that we only consider unweighted digraphs in this paper; hence, the adjacency matrix is always nonnegative.
Stochastic heterogeneous SEIS model
The Susceptible-Exposed-Infected-Susceptible (SEIS) epidemic model is a variant of the SIS model that includes an "exposed" state, in which a node has been exposed to the disease and is contagious but is not aware of the contagion. The model is a continuous-time Markov process, and each node in the network can be in one of three possible states. Consider a network of n individuals described by the adjacency matrix U_G = [u_ij], with parameters defined as follows: β_i: infection rate at which the susceptible node i transitions to the exposed state by contact with its exposed neighbors; α_i: infection rate at which the susceptible node i becomes exposed by contact with its infected neighbors.
γ_i: the rate at which the exposed node i becomes infected.
δ i : the recovery rate of node i.
Let v_k(τ) ∈ {0, 1} and v_j(τ) ∈ {0, 1}, where v_k(τ) = 1 indicates that node k is in the infected state at time τ and v_k(τ) = 0 otherwise; similarly, v_j(τ) = 1 indicates that node j is in the exposed state, and v_j(τ) = 0 otherwise. Three types of stochastic transitions can occur during the time interval [τ, τ + ∆τ):

a) Assume node i is in the susceptible state at time τ. This node can switch to the exposed state during the small time interval [τ, τ + ∆τ) with a probability that depends on (i) its infection rates β_i and α_i; (ii) the strength of its incoming connections {u_ij, for j ∈ N_i^in}; and (iii) the states of its in-neighbors {v_j(τ), for j ∈ N_i^in} and {v_k(τ), for k ∈ N_i^in}. Formally, the probability of this transition is given by

P(node i: S → E in [τ, τ + ∆τ)) = ( β_i Σ_{j ∈ N_i^in} u_ij v_j(τ) + α_i Σ_{k ∈ N_i^in} u_ik v_k(τ) ) ∆τ + o(∆τ),

where ∆τ > 0 is an arbitrarily small time interval.

b) Assume node i is in the exposed state. The probability that i transitions to the infected state in the time interval [τ, τ + ∆τ) is given by γ_i ∆τ + o(∆τ).

c) Assume node i is infected. The probability that i recovers back to the susceptible state in the time interval [τ, τ + ∆τ) is given by δ_i ∆τ + o(∆τ).

The Markov process with 3^n states described above is very hard to analyze because of the exponential size of its state space. Therefore, we use a mean-field approximation of its dynamics. This approximation is widely used in the field of epidemic analysis and control, since it performs numerically well for many realistic network topologies.
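The three transition rules above can be simulated directly. The sketch below is a minimal discrete-time simulation that follows the contact terms written above; the graph, the rate values, and the time step are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch: discrete-time simulation of the networked SEIS transitions
# (S -> E via exposed/infected in-neighbors, E -> I, I -> S).
# Graph, rates, and time step are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 50, 0.01, 5000
U = (rng.random((n, n)) < 0.08).astype(float)   # random directed adjacency
np.fill_diagonal(U, 0.0)

beta, alpha = 0.6, 0.4        # S->E rates via exposed / infected neighbors
gamma, delta = 0.8, 1.0       # E->I and I->S rates

state = np.zeros(n, dtype=int)                  # 0 = S, 1 = E, 2 = I
state[rng.choice(n, 3, replace=False)] = 2      # seed a few infected nodes

for _ in range(steps):
    exposed = (state == 1).astype(float)
    infected = (state == 2).astype(float)
    # probability of S -> E in one step, following the contact terms above
    p_se = (beta * U @ exposed + alpha * U @ infected) * dt
    u = rng.random(n)
    new_state = state.copy()
    new_state[(state == 0) & (u < p_se)] = 1
    new_state[(state == 1) & (u < gamma * dt)] = 2
    new_state[(state == 2) & (u < delta * dt)] = 0
    state = new_state

print("final counts (S, E, I):",
      np.sum(state == 0), np.sum(state == 1), np.sum(state == 2))
```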
Let h_i and g_i denote the probabilities of node i being exposed and infected, respectively.
Then the probability of node i being susceptible is 1 − h_i − g_i. Using the Kolmogorov forward equations and a mean-field approach, one can approximate the dynamics of the epidemic spread by a system of 2n ordinary differential equations:

dh_i/dt = (1 − h_i − g_i) ( β_i Σ_j u_ij h_j + α_i Σ_j u_ij g_j ) − γ_i h_i,
dg_i/dt = γ_i h_i − δ_i g_i,   for i = 1, …, n.   (2.3)

These mean-field approximation equations can also be written in compact matrix form.

Proposition 1. Consider the heterogeneous SEIS epidemic model in (2.3) and assume that U_G ≥ 0. If the eigenvalues of the matrix governing the linearized dynamics around the disease-free state satisfy Re(λ_1) ≤ −ς for some ς > 0, then the disease-free equilibrium is globally exponentially stable, and ς is called the exponential decay rate.
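A minimal sketch of integrating these 2n mean-field equations is shown below; the network and rate values are again illustrative assumptions.

```python
# Minimal sketch: integrate the 2n mean-field SEIS equations
#   dh_i/dt = (1 - h_i - g_i)*(beta_i * sum_j u_ij h_j + alpha_i * sum_j u_ij g_j) - gamma_i h_i
#   dg_i/dt = gamma_i h_i - delta_i g_i
# Network and parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
n = 20
U = (rng.random((n, n)) < 0.15).astype(float)
np.fill_diagonal(U, 0.0)
beta = np.full(n, 0.3)
alpha = np.full(n, 0.2)
gamma = np.full(n, 0.7)
delta = np.full(n, 1.0)

def rhs(t, y):
    h, g = y[:n], y[n:]
    s = 1.0 - h - g
    dh = s * (beta * (U @ h) + alpha * (U @ g)) - gamma * h
    dg = gamma * h - delta * g
    return np.concatenate([dh, dg])

y0 = np.concatenate([np.full(n, 0.05), np.full(n, 0.05)])  # small initial outbreak
sol = solve_ivp(rhs, (0.0, 30.0), y0)
print("mean infection probability at t = 30:", sol.y[n:, -1].mean())
```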
Problem formulation
To control the spread of an epidemic in a given network, proposing an efficient optimization framework to find the optimal resource distribution is very significant. In this paper, we consider two types of resources: 1) preventive resources (e.g., vaccines to reduce the infection rates β_i, α_i), and 2) corrective resources (e.g., antidotes to increase the recovery rates δ_i).
We can distribute resources to modify the parameters β, α, δ, and γ within feasible ranges as follows: β_i can vary between a minimum possible infection rate β_i, which can be achieved by allocating a large amount of vaccine to node i, and a maximal infection rate β̄_i attained without any preventive resource; α_i varies analogously between α_i and ᾱ_i. Similarly, δ_i varies between the natural recovery rate δ_i and the maximal recovery rate δ̄_i, which is achieved with enough corrective resources. For convenience, let ∇ = {β, α, γ, δ} denote the global set of parameters.
The cost of preventive and corrective resources
We define three cost functions: the vaccination cost functions f_i(β_i) and g_i(α_i), and the treatment cost function h_i(δ_i). The cost functions are node-dependent and have the following properties: (1) the vaccination costs f_i(β_i) and g_i(α_i) are monotonically decreasing on the intervals [β_i, β̄_i] and [α_i, ᾱ_i], respectively, while the antidote cost h_i(δ_i) is monotonically increasing in δ_i.
(2) In the absence of investment the cost is zero, i.e., f_i(β̄_i) = 0, g_i(ᾱ_i) = 0, and h_i evaluated at the natural recovery rate is zero. Apart from the above properties, we assume that the cost functions take specific functional forms in order to obtain a tractable convex framework.
Note that we have normalized these cost functions to have values in the interval [0, 1].
Therefore, the preventive resource cost functions f_i(β_i) and g_i(α_i) and the corrective resource cost h_i(δ_i) are assumed to be twice differentiable. Notice that, since f_i and g_i are monotonically decreasing and h_i is monotonically increasing, we have f_i′ < 0, g_i′ < 0, and h_i′ > 0; the assumed forms further give f_i″ > 0, g_i″ > 0, and h_i″ < 0. Therefore, the assumption is stronger than convexity.
Problem statements
In this section we present two types of resource allocation problems for the SEIS model: (1) the budget-constrained allocation problem, in which the aim is to find the optimal allocation of vaccines and antidotes that maximizes the exponential decay rate ς for a given total budget T > 0.
(2) the rate-constrained allocation problem, in which, given a desired exponential decay rate ς, the aim is to find the cost-optimal distribution of vaccines and antidotes that eradicates the disease with a decay rate greater than or equal to ς.
Problem 1 (Budget-constrained allocation). Given the total budget T, this problem can be stated as an optimization problem over the feasible parameter ranges for all i = 1, …, n, where (3.3) is the budget constraint.
Problem 2 (Rate-constrained allocation). Given a desired decay rate ς, the rate-constrained allocation is formulated as an optimization problem over the feasible parameter ranges for all i = 1, …, n, where (3.8) constrains the decay rate to be at least ς.
In the following section we propose an approach to solve these problems in polynomial time.
4. A convex framework for optimal resource allocation
A convex formulation can be used to solve both the rate-constrained and the budget-constrained allocation problems in unweighted, directed networks using geometric programming (GP). We first solve the problems on strongly connected digraphs and then extend the results to general digraphs.
Some important concepts need to be briefly reviewed. Denote the n decision variables ξ = (ξ_1, …, ξ_n), ξ_i > 0. In the context of geometric programs, a monomial function h(ξ) is defined as a real-valued function of the form h(ξ) = a ξ_1^{k_1} ξ_2^{k_2} ⋯ ξ_n^{k_n}, where a > 0 and k_i ∈ R for all i = 1, …, n. A sum of monomials is called a posynomial function, i.e., q(ξ) = Σ_{i=1}^{I} a_i ξ_1^{k_{1,i}} ⋯ ξ_n^{k_{n,i}}, with a_i > 0 and k_{j,i} ∈ R for all j ∈ {1, …, n} and i ∈ {1, …, I}.
A geometric program (GP) is an optimization problem of the form

minimize f(ξ) subject to q_i(ξ) ≤ 1 for i = 1, …, m, and h_j(ξ) = 1 for j = 1, …, p,   (4.1)

where f(ξ) and the q_i(ξ) are posynomial functions and the h_j(ξ) are monomials.
Note that posynomials and monomials are convex in log-scale; therefore f is a convex function in log-scale. The quasiconvex optimization problem (4.1) can be transformed into a convex problem by the logarithmic change of variables ϕ_i = log ξ_i together with a logarithmic transformation of the objective and constraint functions. After this transformation, the GP in (4.1) takes the form

minimize F(ϕ) subject to Q_i(ϕ) ≤ 0 and H_j(ϕ) = 0,   (4.2)

where F(ϕ) = log f(exp ϕ), Q_i(ϕ) = log q_i(exp ϕ), and H_j(ϕ) = log h_j(exp ϕ). Since the h_j are monomials, the equality constraints become affine after the logarithmic change of variables. In summary, (4.2) is a convex optimization problem in standard form and can be solved efficiently in polynomial time.
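As a small, self-contained illustration of this transformation in practice, the following sketch solves a toy GP with CVXPY's geometric-programming mode (solve(gp=True)), which performs the logarithmic change of variables internally; the particular objective and constraints are arbitrary posynomials and monomials chosen only to show the structure.

```python
# Minimal sketch: a toy geometric program solved with CVXPY's GP mode.
# The posynomial objective and constraints are arbitrary and only illustrate
# the monomial/posynomial structure discussed in the text.
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

objective = cp.Minimize(x * y + 2.0 * x * z**-1)   # posynomial objective
constraints = [
    x * y * z >= 1.0,        # monomial lower bound
    x + 2.0 * y <= 6.0,      # posynomial upper bound
    y == 2.0 * z,            # monomial equality
]
prob = cp.Problem(objective, constraints)
prob.solve(gp=True)          # log-transform handled internally

print("optimal value:", prob.value)
print("x, y, z =", x.value, y.value, z.value)
```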
In the following section, we show how to transform our problems into GPs. In our transformation, the theory of nonnegative matrices and the Perron-Frobenius lemma are very useful.
GP for strongly connected digraphs
For a strongly connected digraph J, its adjacency matrix Z is irreducible. The following lemma then holds for the spectral radius of the adjacency matrix of any unweighted, strongly connected digraph.
Lemma 1 (Perron-Frobenius). Let Z be a nonnegative, irreducible matrix. Then the following statements about its spectral radius ρ(Z) hold: (1) ρ(Z) > 0 is a simple eigenvalue of Z; (2) Zω = ρ(Z)ω for some ω > 0.

Based on the above results, we have the following result.

Proposition 2. Consider an n × n nonnegative, irreducible matrix Z(ξ) whose entries are either posynomials in ξ or 0, with domain ξ ∈ Ω, where Ω is described by posynomial constraints. Then λ_1(Z(ξ)) can be minimized over Ω by solving a GP in the variables ξ, λ, and ω > 0 with the constraints Σ_j Z_ij(ξ) ω_j ≤ λ ω_i for all i.

Based on the above results, and assuming that the contact graph J is strongly connected, we can solve both the budget-constrained and the rate-constrained problems.
Theorem 1 (Solution to the budget-constrained problem). For strongly connected digraphs, Problem 1 can be solved by a GP. First, note that the matrix L is not nonnegative; we can define a nonnegative matrix L̃ from L by simply adding the constant k = max_i{γ_i, δ̄_i} to its diagonal. The matrix L̃ is then nonnegative and, according to Proposition 2, maximizing ς in (3.1) is equivalent to minimizing λ_1(L̃) under the budget constraints. We can minimize λ_1(L̃) by solving a GP over the feasible parameter ranges for all i ∈ {1, …, n}. The matrix L̃ is nonnegative and irreducible whenever the adjacency matrix U_J corresponds to a strongly connected digraph. Therefore, applying Proposition 2, the spectral constraint can be rewritten as posynomial constraints for all i ∈ {1, …, n}, where δ̃_i = k − δ_i. After this change of variables, the problem takes the standard GP form (4.1), and we can find the optimal resource allocation in a heterogeneous network under the budget constraint.
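A rough sketch of this construction is given below for a simplified special case: only corrective (recovery) resources are allocated, the treatment cost is modeled by an assumed monomial term c_i/δ̃_i, and the dominant eigenvalue is bounded through the elementwise constraints of Proposition 2. The matrix, the cost model, and all numerical values are assumptions of this illustration, not the paper's exact formulation.

```python
# Minimal sketch (assumed simplified model): minimize the Perron eigenvalue of
#   M(dtil) = beta * A + diag(dtil),   dtil_i = k - delta_i,
# over recovery rates delta_i in [d_min, d_max], subject to a budget on an
# assumed monomial treatment cost c_i / dtil_i, using Proposition-2-style
# constraints  sum_j M_ij * w_j <= lam * w_i  with w > 0.
import cvxpy as cp
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)      # strongly connected, unweighted
n = A.shape[0]
beta, d_min, d_max, budget = 0.4, 0.2, 1.5, 2.0
k = d_max                                       # shift so the diagonal stays positive
c = np.full(n, 0.5)                             # assumed per-node cost scale

dtil = cp.Variable(n, pos=True)                 # dtil_i = k - delta_i
w = cp.Variable(n, pos=True)                    # Perron eigenvector surrogate
lam = cp.Variable(pos=True)                     # bound on the dominant eigenvalue

constraints = [dtil <= k - d_min,               # delta_i >= d_min
               dtil >= 1e-3,                    # delta_i <= d_max (small margin)
               cp.sum(cp.multiply(c, dtil**-1)) <= budget]   # assumed treatment cost

for i in range(n):
    terms = [beta * A[i, j] * w[j] for j in range(n) if A[i, j] > 0]
    terms.append(dtil[i] * w[i])
    constraints.append(cp.sum(cp.hstack(terms)) <= lam * w[i])

prob = cp.Problem(cp.Minimize(lam), constraints)
prob.solve(gp=True)
print("bound on lambda_1:", lam.value)
print("optimal recovery rates delta:", k - dtil.value)
```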
Theorem 2 (Solution to the rate-constrained problem). Problem 2 can be solved by a GP of the same form over the feasible parameter ranges for all i ∈ {1, …, n}, with δ̃_i = k − δ_i. Proof: the proof is similar to that of Theorem 1, so we omit it.
In this section, we have provided the solutions to the problems for strongly connected digraphs. We now show how to extend them to general connected digraphs.
GP for general connected digraphs
The adjacency matrix U is irreducible if and only if its graph is strongly connected, so the Perron-Frobenius lemma above is not applicable to digraphs that are not strongly connected. For general digraphs, the statements of the Perron-Frobenius lemma are weakened, as follows.

Lemma 2 (Perron-Frobenius). Let Z be an n × n nonnegative matrix. Then the following statements about its spectral radius ρ(Z) hold: (1) ρ(Z) ≥ 0 is an eigenvalue of Z; (2) Zω = ρ(Z)ω for some ω ≥ 0.

Note that the components of ω in Proposition 2 are strictly positive, whereas the components of ω in Lemma 2 are only nonnegative. If we want to use GP, this issue must be resolved. Define the set Θ = {i : η_i ≠ 0 and ω_i ≠ 0}. The variables for i ∉ Θ can then be excluded from the GPs in Theorems 1 and 2, and the allocation problems split into two different sets of decision variables.
Rate-Constrained Allocation Problem for General Digraphs: If i ∉ Θ, the decision variables of node i do not enter the spectral constraint. Thus, since f_i and g_i are decreasing and h_i is increasing, the minimum investment for all i ∉ Θ corresponds to leaving node i at the infection rates β̄_i, ᾱ_i and the natural recovery rate δ_i.
Theorem 3: On the other hand, for i ∈ Θ, the optimal solution can be obtained from a GP of the same form as in Theorem 2, restricted to the variables indexed by Θ.

Theorem 4 (Budget-Constrained Allocation Problem for General Digraphs): For i ∉ Θ, it is easy to see that the optimal spreading and recovery rates are β̄_i, ᾱ_i, and δ_i, since these nodes correspond to zero components of the dominant eigenvectors. Thus, we can rewrite the budget-constrained allocation problem as a GP for general digraphs over the nodes i ∈ Θ, where δ̃_i* = k − δ_i* and λ_1*(L) ≤ λ* − k. Theorems 3 and 4 provide the solutions to both problems for general digraphs.
Simulation
We have developed an optimization program for determining cost-optimal parameter distributions such that the desired equilibrium is stabilized. In this section, we illustrate how geometric programming solves the resource allocation problem by simulating a strongly connected digraph of ten nodes. In Fig. 1 we plot the cost functions given in subsection 3.1. Here the abscissa is the amount invested in node n_i and the ordinates are the infection (red and blue lines) and recovery (black line) rates achieved by the investment. As we increase the amount invested in protective resources from 0 to 1, the infection rates of that node are reduced from (β̄_i, ᾱ_i) to (β_i, α_i) (red and blue lines). Similarly, as we increase the amount invested in corrective resources at a node n_i, the recovery rate grows from δ_i to δ̄_i (black line). Fig. 2 demonstrates the performance achieved by the rate-constrained allocation algorithm from Theorem 2. We see that in the optimal allocation some nodes receive no resources at all, some nodes receive only preventive or only corrective resources, and some nodes receive a mixture of preventive and corrective resources.
Conclusion
We have presented a convex optimization framework to find the optimal allocation of resources for the SEIS epidemic model on arbitrary directed graphs. A necessary and sufficient condition for global exponential stability can be derived from the eigenvalues of a matrix that depends on the parameters of the model and the network structure. Furthermore, we have formulated optimization programs for determining optimal resource allocations and reformulated them as geometric programs that can be solved efficiently. For future work we plan to study the endemic equilibrium that arises when the disease-free equilibrium is not globally asymptotically stable.
FIGURE 1. A plot of the infection rates β_i (in red) and α_i (in blue) and the recovery rate δ_i (in black) achieved at node n_i after the investments in preventive and corrective resources are made at that node.
FIGURE 2. The optimal investment in preventive and corrective resources for ten nodes in a strongly connected digraph, where the abscissas are the nodes and the ordinates are the amounts invested in preventive and corrective resources.
Inter-Landau-level Andreev Reflection at the Dirac Point in a Graphene Quantum Hall State Coupled to a NbSe2 Superconductor
Superconductivity and quantum Hall effect are distinct states of matter occurring in apparently incompatible physical conditions. Recent theoretical developments suggest that the coupling of quantum Hall effect with a superconductor can provide a fertile ground for realizing exotic topological excitations such as non-abelian Majorana fermions or Fibonacci particles. As a step toward that goal, we report observation of Andreev reflection at the junction of a quantum Hall edge state in a single layer graphene and a quasi-two dimensional niobium diselenide (NbSe2) superconductor. Our principal finding is the observation of an anomalous finite-temperature conductance peak located precisely at the Dirac point, providing a definitive evidence for inter-Landau level Andreev reflection in a quantum Hall system. Our observations are well supported by detailed numerical simulations, which offer additional insight into the role of the edge states in Andreev physics. This study paves the way for investigating analogous Andreev reflection in a fractional quantum Hall system coupled to a superconductor to realize exotic quasiparticles.
Proximity effect through Andreev reflection (AR) is the primary ingredient for engineering a topological superconductor, which is expected to be a breeding ground for new types of topological excitations [1][2][3][4][5][6][7][8]. The discovery of graphene in the last decade [9], aided by developments in improving device quality by encapsulation with hexagonal boron nitride (hBN) [10,11], provides one of the best opportunities to extend the study of AR to Dirac electrons in proximity to a superconductor [12][13][14][15][16][17][18][19]. In these systems, an incident electron from the single layer graphene (SLG) with a finite excitation energy combines with another electron below the Fermi energy (E_F) to form a Cooper pair at the junction (Fig. 1a, top). AR and its transition from retro to non-retro reflection have been observed [17]. More interestingly, when E_F is aligned with the Dirac point, AR requires an inter-band process and is predicted to be specular (Fig. 1a, top), as observed recently in bilayer graphene [16].
Exotic physics is predicted to arise from the coupling between a superconductor and a topological quantum Hall (QH) state. In particular, this system has been proposed as a novel route for creating a variety of non-abelian anyons, which have been hailed as possible building blocks for future topological quantum computation [6,20,21]. The physics of AR is predicted to alter dramatically in the QH regime [22][23][24], where electron transport occurs primarily through the chiral edge states, which themselves are topologically robust manifestations of the Landau levels (LLs) in the interior of the sample. On a QH plateau, an incident chiral electron is expected to bounce back as an Andreev-reflected chiral hole propagating in the same direction as the incoming electron (Fig. 1a, bottom) [25], due to the sign reversals of both the charge and the mass. A difficulty in experimentally investigating this physics is the fact that the high magnetic fields required for the QH effect are inimical to superconductivity. Important progress has recently been made in this direction. Supercurrent and Josephson coupling in the QH regime at an SLG-superconductor interface have been demonstrated at relatively low magnetic fields (∼2 T) [26][27][28]. At high magnetic fields (∼10 T), superconducting correlations in a QH edge have been realized recently [29].
In this work, we show that a coexistence of, and a coupling between, a QH system and a superconductor can be realized and studied in a system of SLG coupled to a NbSe2 superconductor. Our results reveal that at high magnetic fields, when the breaking of the spin and valley symmetries generally fully splits the zeroth Landau level [30][31][32], AR manifests most strikingly through an anomalous conductance peak located precisely at the Dirac point (DP). We attribute this peak to inter-Landau level AR, and confirm its physical origin by detailed theoretical simulations.
Our devices consist of an SLG partially covered with a thin film of NbSe2 (Fig. 1b). Details of the fabrication and measurement schemes are given in the Supplemental Material (SM) [33], Sec. SI1. We show results from three devices as a function of the back-gate voltage (V_BG), the source-drain bias voltage (V_SD), the temperature (T), and the magnetic field (B). The highest mobility of 60,000 cm²/V·s was obtained in device 3, where the carrier inhomogeneity (δn) due to charge puddles was ∼(3-5) × 10⁹ cm⁻², corresponding to a Fermi energy broadening (δE_F) of ∼6-8 meV [34]. The characterization of several devices is shown in SM Sec. SI1 [33]. Fig. 1c presents the Hall resistance, R_xy, of device 2 at B = 10 T, where the plateaus at 2e²/h and 1e²/h are clearly visible. From the B dependence of the Shubnikov-de Haas oscillations [35,36], an LL broadening of Γ ∼ 4.5 meV was obtained (SM Sec. SI3 [33]). The two-probe conductance (G) measured between the SLG and superconductor contacts at 9.8 T is shown in Fig. 1d (device 1). The value of the conductance on the plateaus is lower than the ideal value due to the contact resistance of ∼1.5 kΩ at the SLG-NbSe2 junction. In addition to different broken symmetries, an insulating state, i.e. a ν = 0 plateau, is observed at the DP, as previously reported in the literature [37][38][39][40][41]. Using a thermally activated carrier transport model, we determined the insulating gap of the ν = 0 plateau (SM Sec. SI5 [33]). Previous studies [40,41] have reported that the value of the insulating gap of the ν = 0 plateau depends on Γ, and the measured activation gap is nothing but the mobility gap, ΔE_I = ΔE_LL − Γ [36,42]. At 10 T, ΔE_I ∼ 5 meV was measured for device 3 (SM Sec. SI5 [33]), and activation plots at several B are shown in Fig. 1e. The details of the activation plots of device 1 and device 2 are shown in SM Sec. SI5 [33].
We begin by demonstrating that superconductivity in NbSe2 survives up to high perpendicular magnetic fields at which the uncovered graphene is comfortably in the QH regime. Fig. 1f shows the differential conductance (dI/dV) as a function of V_SD, called the Andreev curve, for the values of V_BG marked A and B in Fig. 1d on the ν = 2 plateau. The existence of superconductivity is evident from the BCS-like conductance peaks at about ±0.5 meV for device 1 at B = 9.8 T. Similar features are observed for device 2 (SM Fig. SI4-5f and Sec. SI6 [33]). Bias spectroscopy (SM Sec. SI6 [33]) allows us to extract the low-T superconducting gap (2Δ) as a function of magnetic field, which we show in Fig. 4a; the large error bars arise primarily from the asymmetric nature of the Andreev curve (the possible origin of which is discussed below). The superconducting gap of the NbSe2 flake, 2Δ ∼ 2 meV with T_C ∼ 7 K at 0 T, was directly characterized in our previous work (Fig. 3a of Ref. [17]), which is consistent with the 0 T data in Fig. 4a. Fig. 1g shows the temperature dependence of the Andreev curves at B = 9.8 T, which yields a T_c ∼ 2 K at which the BCS peaks disappear. We can relate T_c to the superconducting gap through 2Δ = 4.07 k_B T_c ∼ 0.7 meV (the factor 4.07 was determined in Ref. [43] for NbSe2), which is close to that extracted from the Andreev curve at B = 10 T as shown in Fig. 4a. These observations, namely the appearance of BCS peaks in the Andreev curve (Fig. 1f) on a QH plateau and the excellent agreement with the T dependence predicted by BCS theory (Fig. 1g), demonstrate the coexistence of the QH effect and superconductivity. It is noted that for bulk NbSe2 the critical magnetic field is H_c2 ∼ 4-5 T [44], but surface superconductivity (H_c3) has been reported up to B = 7-8 T [45]; the existence of superconductivity at the SLG-NbSe2 interface at high magnetic field is thus not unexpected.
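For orientation, the small sketch below simply evaluates the 2Δ = 4.07 k_B T_c relation quoted above; the physical constant is standard, and the computation reproduces the ∼0.7 meV figure.

```python
# Minimal sketch: superconducting gap from the critical temperature via
# 2*Delta = 4.07 * k_B * T_c (the NbSe2-specific factor quoted in the text).
K_B_MEV_PER_K = 8.617e-2  # Boltzmann constant in meV/K

def gap_from_tc(tc_kelvin, factor=4.07):
    """Return 2*Delta in meV."""
    return factor * K_B_MEV_PER_K * tc_kelvin

if __name__ == "__main__":
    print(f"2*Delta at Tc = 2 K: {gap_from_tc(2.0):.2f} meV")  # ~0.70 meV
    print(f"2*Delta at Tc = 7 K: {gap_from_tc(7.0):.2f} meV")  # compare with ~2 meV at 0 T
```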
We next come to AR. Some evidence for it can be seen from the fact that the conductance at the 2e²/h plateau is enhanced by ∼15% (Fig. 1h) when V_SD is changed from −3 mV, where no AR is expected (because |eV_SD| > Δ), to zero, where AR is expected. For an ideal, fully transparent contact, one expects a 100% enhancement due to AR; we attribute the smaller enhancement in our system to a non-fully transparent contact. The temperature dependence of the conductance enhancement at ν = 2 is shown in SM Fig. SI4-5g [33]. Conductance enhancement due to AR can also be seen by comparing the data below and above T_C shown in SM Fig. SI4-5e [33]. We note that the change in conductance for the Andreev curve in Fig. 1f is around 10%. However, the change in conductance was higher, ∼25-30%, for device 2 in the QH regime (at the ν = 2 plateau), as shown in SM Fig. SI6-8 [33]. At 0 T the change in the Andreev curve was around 20% in device 1 (SM Fig. SI6-7 [33]) and 45-50% in device 2 (SM Fig. SI6-8 [33]).
Our most important finding is shown in Fig. 2, where a closer inspection of the conductance minimum reveals, completely unexpectedly, an anomalous peak. Further investigation brings out the following properties. First, the peak is seen precisely at the DP. Second, the peak is not seen above T_C (compare Figs. 2d and 2c). Third, its amplitude decreases with decreasing temperature as well as with increasing ∆E_I, indicating that the peak is a finite-temperature effect. Fig. 3a shows the 2D colormap of log(G) plotted as a function of V_BG and B, which displays the appearance of the peak precisely at the DP and its continuous decrement with increasing B. Finally, the parameters for which the anomalous peak is observed in device 2 and device 3 are shown by the dashed enclosed areas in the phase diagram in Fig. 4a; for both devices the highlighted regime where the peak is observed satisfies the condition ∆E_I < 2∆.
All of these facts are naturally explained in terms of a conductance peak originating from a new mechanism, namely finite-temperature inter-Landau level AR, in which a thermally excited electron in the N = 0 LL band above E_F reflects as a hole in the N = 0 LL band below E_F, as shown schematically in Fig. 4b. Such a peak is expected to occur (i) precisely at the DP, (ii) at finite temperature but for T < T_C, and (iii) for 2∆ ≥ ∆E_I. We mention that V_BG at the DP depends slightly on whether the sweep is up or down, causing two different values in Fig. 2b; in Fig. 3a, all data are for sweeps in the up direction, and show that the peak position remains invariant. We also note the presence of certain secondary, sample-specific peaks away from the DP, but their amplitudes are smaller by two to three orders of magnitude.
To see the activated nature of the anomalous peak we plot the area under the peak in Fig. 3b for device 2, and fit it to a thermally activated behavior. Fitting the peak height gives a similar gap, as shown for device 3 in the inset of Fig. 3b. Further details regarding the activated nature of the peak for all the devices are shown in SM Secs. SI8 and SI9 [33]. Fitting the area in Fig. 3b using exp(−∆E_eff/2k_BT) gives ∆E_eff ∼ 0.25 meV. One may expect ∆E_eff to be equal to ∆E_I (the mobility gap), but the former is lower by a factor of ∼3. This finds a natural explanation in the fact that the temperature dependence of the resistance of the SLG shows two distinct values of ∆E_I differing by a factor of ∼3 (SM Sec. SI5 [33]): for example, at B = 6 T in device 2, for T > 2 K we have ∆E_I ∼ 0.8 meV, but for T < 2 K we have ∆E_I ∼ 0.25 meV, the latter being essentially in perfect agreement with the gap deduced from the anomalous peak at the DP. Similar results are obtained for device 3, as shown in SM Sec. SI5 [33]. Although the existence of the smaller, or 'soft', gap around E_F in between the LLs at low temperature has been reported in the literature [42, 46-48], its origin is not well understood. We ascribe the 'soft gap' below 2 K to disorder.
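Extracting ∆E_eff from data such as Fig. 3b amounts to a linear fit of ln(area) versus 1/T; a minimal sketch of that step (illustrative only, operating on whatever (T, area) pairs are supplied, e.g. digitized from the figure) is:

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K

def activation_gap_meV(temperatures_K, peak_areas):
    """Extract dE_eff from area(T) ~ exp(-dE_eff / (2 k_B T)) via a linear fit of ln(area) vs 1/T."""
    inv_T = 1.0 / np.asarray(temperatures_K, dtype=float)
    log_area = np.log(np.asarray(peak_areas, dtype=float))
    slope, _intercept = np.polyfit(inv_T, log_area, 1)   # slope = -dE_eff / (2 k_B)
    return -2.0 * K_B * slope * 1e3                      # meV

# Usage: pass the (T, area) pairs read off Fig. 3b, e.g.
# print(activation_gap_meV(T_values, areas))   # ~0.25 meV is expected for device 2 at 6 T
```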
To further confirm the physics of the inter-Landau level AR we have performed extensive numerical calculations, where we consider a system of graphene in the QH regime connected to superconducting graphene. The physics of the ν = 0 insulator at high B has been the subject of many studies [37, 39-41, 49, 50], and the two most likely models are in terms of a canted antiferromagnet (CAF) or an isospin ferromagnet (IFM) [30, 32], the band diagrams of which are schematically shown in the insets of Figs. 5a and 5b. The insulating gap of the former originates from a splitting of the ν = 0 LL into Landau bands with chiral edge states, whereas for the latter it results from a coupling between the helical edge states. To keep the discussion general, we consider AR in both models. The calculated conductance as a function of the chemical potential (E_F) is plotted in Figs. 5a and 5b (see SM-theory [33] for the details) for the CAF and IFM, respectively. It shows a small conductance peak at the DP arising from inter-Landau level AR (insets of Figs. 5a and 5b). At finite temperatures, the conductance at the DP can be expressed analytically in a closed form [Eq. (1)], in which a is the probability of AR and C = dE_F/dV_BG is the gate lever arm. The experimental peak in Fig. 2c is fitted using this equation with fitting parameters a = 0.35, ∆E_I = 0.5 meV, and C = 0.62 meV/V for T = 1 K, and a similar fitting is also shown for T = 0.75 K. The fitting parameters are in general agreement with the experimental values (SM-theory [33]).
Before ending, a comment on the physical origin of the observed asymmetry of the Andreev curves (Fig. 1f and SM Sec. SI6 [33]) is in order. dI/dV depends on the joint density of states (DOS) of the two materials. Typically, a normal metal has a large and essentially constant DOS, whereas the quasiparticle DOS of the superconductor is symmetric around zero bias, producing a symmetric Andreev curve. The density of states in a QH edge, in contrast, is complicated in real materials and can be energy dependent, thus producing asymmetric Andreev curves [16, 51-53]. We also note that, due to the presence of the superconductor, the skipping orbits at the interface alternate between electron- and hole-type orbits, whose centers are in general slightly offset (Fig. 1a, bottom) [22, 24], which results in an interference pattern. The fingerprints of the interference pattern can be seen as quasiperiodic conductance oscillations on the QH plateau as a function of the chemical potential (Fig. 1h and SM Sec. SI10 [33]). We refer the reader to the previous literature [16, 22, 24, 51-55] and the SM [33] for details.
In conclusion, our primary accomplishment is an unambiguous demonstration of AR in the graphene quantum Hall regime, which manifests most dramatically through an anomalous finite-temperature conductance peak at the Dirac point. By a combination of experimental and theoretical studies, we have confirmed its origin as thermally induced inter-Landau level AR.
We thank Subhro Bhattacharjee, Tanmoy Das, H. R. Krishnamurthy, Subroto Mukerjee, Sumathi Rao, Sambuddha Sanyal, Ruchi Sexena, Vijay Shenoy, and Abhiram Soori for useful discussions. The authors acknowledge the device fabrication and characterization facilities at CeNSE, IISc, Bangalore. A. D. thanks the Department of Science and Technology (DST), Government of India, for support under Grants No. DSTO1470 and No. DSTO1597. We also acknowledge the support of the U.S. Department of Energy, Office of Basic Energy Sciences, under Grant No. DE-SC0005042 (J.K.J.), the National Key R&D Program of China (Grant No. 2016YFA0401003), and the NSFC [Grant No. 11674114 (X.L.)].
technique [1]. The contacts are made of Cr/Au (5 nm/70 nm) using the standard electron-beam lithography technique followed by thermal deposition. Because NbSe2 oxidizes when exposed to the atmosphere, predefined contacts are made for NbSe2, and at the final stage the exfoliated NbSe2 is transferred within a few minutes. Device 2 and device 3 were protected with top hBN in an additional transfer stage to achieve higher mobility. The highest mobility of 60,000 cm²/V·s is achieved in device 3, where the carrier inhomogeneity (δn) is ∼ 3-5 × 10⁹ cm⁻² on the electron side, which gives a Fermi-energy broadening δE_F ∼ 6-8 meV.
Measurement technique: Measurements were carried out in a He-3 cryostat as well as in a dilution refrigerator, with base temperatures of 240 mK and 100 mK, respectively. A standard lock-in technique was employed. All the measurements were performed using a low bias voltage of 20 µV when measured in the He-3 cryostat, and 4 µV when measured in the dilution refrigerator.
Measurement scheme:
The different measurement schemes used in our experiment are shown in Fig. SI1-2. For the R_XY measurement, current was injected between contacts A-D, whereas voltage was measured between contacts B-C. For the activation plots, the resistance was measured in a two-probe configuration, where voltage was applied at contact A and the current was measured at contact C. The R_XX (used to extract the LL broadening) was measured by injecting current between contacts B-D and measuring voltage between contacts A-C. For the Andreev reflection related measurements, the voltage was applied at contact A and current was measured at contact D.
SI2 -QH response of device 2 and device 3
In Fig. SI2 we show the Hall resistance R_XY of device 2. Well established quantum Hall plateaus are visible at B = 2 T, indicating high device quality. A clear 1e²/h plateau is visible at 10 T; this plateau is identifiable in the Hall measurement for B greater than 6 T. For device 3 the 1e²/h plateau is visible even at 3.8 T, as seen in Fig. SI2-f.
SI3 -extraction of LL broadening
We have also evaluated the LL broadening (Γ) in device 3 from the magnetic-field dependence of the amplitude of the Shubnikov-de Haas oscillations, as described in Refs. 2 and 3. The average value of Γ was found to be ∼ 4.5 meV, comparable to the devices reported there.
SI5 -Insulating gap at ν = 0
We have performed two-probe measurements in device 2, in the Au-SLG-Au configuration, to further understand the effects of broken valley and spin symmetries in graphene. In Fig. SI5(a) the two-probe gate response at T = 100 mK shows the ν = 0 and ν = 1e²/h plateaus at several B. SI5-b summarizes the insulating gap at the ν = 0 plateau for device 2. SI5-c shows the activation plot for device 1 at B = 9.8 T, with an insulating gap of ∼ 1 meV. The activation plot of device 3 at several magnetic fields is shown in SI5-d and e, where an insulating gap of ∼ 5 meV is observed at B = 10 T. As mentioned in the manuscript, the insulating gap (mobility gap) depends on the quality of the device, particularly on the LL broadening. SI5-f and g show two distinct insulating gaps for T > 2 K and T < 2 K in device 2 and device 3, respectively. Although the smaller soft gap around E_F between the LLs at low temperature has been known in the literature [4-7], its exact origin in graphene is not clearly known. We ascribe the soft gap below 2 K to disorder. To evaluate the superconducting gap we have performed G versus V_SD (Andreev curve) measurements, i.e., bias spectroscopy. G(V_SD) shows monotonic behavior for T > T_C, but begins to show non-monotonic features inside the superconducting gap below T_C. For ideal contacts with high transparency, the theory of Andreev reflection predicts that the conductance should double within the superconducting gap, but in practice the enhancement can be smaller depending on the contact transparency. Fig. SI6-7 shows the differential conductance for device 1 at zero magnetic field; the conductance dip located precisely near zero bias, which is a usual signature of Andreev reflection, is not present at 10 K. Though the Andreev curve in zero magnetic field (Figs. SI6-7 and SI6-8) is highly symmetric, it becomes asymmetric at finite magnetic fields. We discuss the possible sources of the asymmetry below.
Andreev curve at higher B:
At high magnetic fields, the ν = 2 QH plateau is ideal for detecting the superconducting gap. The reason is that on a QH plateau the QH edge states are ideally dissipationless and all of the voltage drop occurs at the interface of the SLG and the SC. In both devices, the G versus V_SD plot at high magnetic fields produces either a zero-bias peak with dips on either side or a zero-bias dip with peaks on either side. The distance between the peaks or the dips yields the superconducting gap. It can be seen from Fig. SI5 that the superconducting gap decreases with increasing magnetic field, as expected.
Origin of asymmetry in the high-B Andreev curve: The differential conductance across a junction depends on the joint density of states (DOS) of the two materials. In the case of a normal metal-superconductor junction, the normal metal has a large and essentially constant DOS, whereas the quasiparticle density of states in a superconductor is symmetric around zero bias. A convolution of these two results in a symmetric Andreev curve. In the presence of a magnetic field, when the chemical potential is in a QH plateau (between the Landau levels), the density of states corresponding to the edge channels is quite complicated in a realistic sample and can be energy dependent, leading to an asymmetric Andreev curve. Below T_C, another physics becomes relevant, namely the physics of conductance oscillations, predicted theoretically [8, 9] and observed experimentally [10, 11], which can further contribute to the asymmetry of the Andreev curves. The underlying physics is that these oscillations depend on the wave vector (and thus the energy) of the incident electron, and therefore are not symmetric in the source-drain bias. The physics of the proximity-induced oscillations is discussed in more detail in section SI10. As discussed in the previous section, the center of a plateau (ν = 2) is the best place to observe the effect of superconductivity in the bias response. To evaluate T_C we have carried out bias measurements at the centre of the ν = 2 plateau at various temperatures (Fig. SI7, panels a-d). From the Arrhenius plot in Fig. SI8(c), the log of the area is seen to depend linearly on 1/T, which is a signature of an activated nature of the underlying process. The activation gap is estimated to be ∼ 180 µeV. Fig. 3a of the manuscript shows the phase diagram for device 2 and device 3, indicating the parameter range for the observation of the anomalous peak at the Dirac point. In device 1 we could observe the peak at the high magnetic field of 9.8 T due to the fact that the mobility gap (∼ 0.9 meV) and the superconducting gap (∼ 1 meV) are comparable at B = 9.8 T. SI8-d shows the activated nature for device 2, plotted using the peak height rather than the area, and gives an activation gap of ∼ 185 µeV, close to the value fitted with the area as mentioned in the main text. SI8(e) shows the anomalous Dirac-point peaks at several temperatures for device 3. In SI8(f) the area under the peak (device 2) is plotted as a function of ∆E_I, which also shows an activated nature. The zero-bias conductance on the 2e²/h plateau exhibits (Fig. SI10) reproducible quasi-periodic oscillations of amplitude ∼ 0.2 e²/h as a function of V_BG. Such oscillations are absent for T > T_C (see SI4) as well as above the superconducting gap, which strongly suggests that these are a manifestation of the Andreev physics.
SI7 - T_C determination
Along the junction interface (Fig. SI10(b)) the centers of the electron and the hole trajectories (classically, the radii of the skipping cyclotron orbits) are offset by a distance d of the order of the magnetic length l_B = √(ℏ/eB), thereby defining an area ∼ d × W (W being the sample width), which can give rise to periodic oscillations as a function of the chemical potential (µ) and the magnetic field (B) [8, 9]. We have also observed similar conductance oscillations as a function of magnetic field (Fig. SI10(c)). Because of this Aharonov-Bohm-like effect, the Andreev reflection at the interface of the graphene QH state and the superconductor is more intriguing, and its effect is observed in our experiment in the form of both conductance oscillations and a peak or dip in the differential conductance, which is consistent with the literature [10, 11]. We note, however, that the effect of disorder cannot be ruled out and might be responsible for the absence of clean periodic oscillations, as expected from the interference physics discussed above. In a metal-superconductor junction, the electrons and holes on the metallic side are coherently coupled to those in the superconductor by Andreev reflection at the metal-superconductor interface, which is the origin of the superconducting proximity effect. Here, we theoretically demonstrate that such electron-hole coherence is maintained even when the metal is in the quantum Hall regime, where only the chiral edge states are available at the Fermi energy.
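For scale, the magnetic length entering this estimate is easily evaluated (an illustrative sketch; only the textbook definition l_B = √(ħ/eB) is assumed):

```python
import numpy as np

HBAR = 1.0545718e-34       # J*s
E_CHARGE = 1.6021766e-19   # C

def magnetic_length_nm(B_tesla):
    """l_B = sqrt(hbar / (e*B)) in nanometres."""
    return np.sqrt(HBAR / (E_CHARGE * B_tesla)) * 1e9

for B in (6.0, 9.8):
    print(f"B = {B} T -> l_B ~ {magnetic_length_nm(B):.1f} nm")   # ~10.5 nm and ~8.2 nm
```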
Our model is as follows. A planar interface is considered, located at x = 0 between a semi-infinite region x > 0 occupied by a NbSe2 superconductor and a semi-infinite graphene region (x < 0). A uniform magnetic field is applied in the z direction, which we assume is screened for x > 0 by the superconductor due to the Meissner effect. We thus assume an abrupt change of the magnetic-field strength at the interface: B(x) = B(1 − Θ(x)), where Θ(x) is the step function. In the basis ψ_i = (c_i↑, c_i↓)^T, the Hamiltonian of the graphene in the presence of a magnetic field can be written as the sum of three terms: the orbital part H_0, a Zeeman term, and an anisotropy term H_I. Here H_0 is the Hamiltonian for perfect graphene in the presence of a transverse magnetic field. M_z denotes the Zeeman splitting due to the applied external magnetic field. At the maximal magnetic field in our experiment (10 T) we have M_z ≈ 1 meV, which is of the same order as the superconducting gap. The H_0 Hamiltonian does not capture the physics at ν = 0, because it produces a gapless spin ferromagnet rather than a gapped insulator as observed experimentally. We incorporate this physics by adding an anisotropic term H_I, which is due to either the sublattice anisotropy (ε_{i=a} = −ε_{i=b}) [3] or the in-plane anisotropy (u_{i=a} = −u_{i=b}) [12]. This term makes the system fully gapped at the Dirac point. The anisotropies ε_i and u_i lead to the so-called isospin ferromagnet (IFM) and canted antiferromagnet (CAF), respectively.
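For concreteness, one plausible explicit form consistent with the description above is sketched here in LaTeX; this is an editorial illustration rather than the expression used in the paper, and in particular the spin axis chosen for the in-plane (CAF) term and the continuum form of H_0 are assumptions:

```latex
H = H_0 + H_Z + H_I, \qquad
H_0 = v_F\,\boldsymbol{\sigma}\cdot\left(\mathbf{p} + e\mathbf{A}\right), \qquad
H_Z = -M_z \sum_i \psi_i^{\dagger}\, s_z\, \psi_i,
\qquad
H_I^{\mathrm{IFM}} = \sum_i \epsilon_i\, \psi_i^{\dagger}\psi_i
\;\; (\epsilon_{i\in a} = -\epsilon_{i\in b}), \qquad
H_I^{\mathrm{CAF}} = \sum_i u_i\, \psi_i^{\dagger}\, s_x\, \psi_i
\;\; (u_{i\in a} = -u_{i\in b}).
```

Either anisotropy term opens a gap in the N = 0 Landau level, which is the ingredient needed for the inter-band AR argument below.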
We first consider Andreev reflection near the ν = 0 Landau level. Although the origin of the observed insulating gap in this regime is still being debated, we find that both the IFM and CAF models produce a sharp conductance peak at the Dirac point. This peak serves as a smoking-gun signature of Andreev reflection at the QH-superconductor interface, because the system near the ν = 0 Landau levels can only have inter-band Andreev reflection, which is independent of the specific form of the insulating gap. Let us now explain the basic physics.
Figure 1.(Color Online) (a) (top) AR in graphene at B = 0.The red (blue) dashed line shows retro (specular) AR. (bottom) Classical picture of AR at the interface of QH edge state and superconductor based on skipping orbit.The electron and hole orbits have the same chirality for intra-band process.(b) Schematic of the experimental measurement setup of hBN protected graphene devices.For Rxy measurement current is injected between A and D, voltage is measured between B and C. For the two probe conductance measurement of the SLG-NbSe2 junction voltage is applied at A, and current is measured at D. (c) Rxy of device 2 at B = 10T showing symmetry broken QH plateaus.(d) Two-terminal gate response of device 1 between Au-SLG-NbSe2 at B = 9.8T and VSD = 0mV.(e) Activation plot for device 3 at the Dirac point for different magnetic fields; the corresponding insulating gaps are shown on the figure.We note that the resistance changes by up to three orders of magnitude over the range of the fits.(f ) dI/dV as a function of VSD measured in device 1 at B = 9.8T on the ν = 2 LL at the positions A and B marked in fig(d); BCS peaks are present at 240 mK (red) but not at 10K (black).(g) 2D colormap of normalized dI/dV versus VSD as a function of temperature at B = 9.8T for device 3. Superconductivity vanishes at around 2K.The black dashed line is the theoretical temperature dependence of BCS gap.The cut lines are shown at 240mK and 2.5K.(h) The gate responses of device 1 for 6T at VSD = 0 (black) and for |eVSD| > ∆ (red).The former has enhanced conductance.
Figure 2. (Color Online) (a) The anomalous conductance peak at the DP shown in several devices on a log scale.(b) Conductance peak in device 2 at different magnetic fields shows the decrement of the amplitude with increasing B. (c) The conductance peak amplitude increases with increasing temperature.The red dashed lines in the last two panels display fitting of the peak line shape with Eq. 1.(d) No conductance peak at the DP is seen for T > TC .
Figure 3. (Color Online) (a) 2D colormap of log(G) in device 3 plotted as a function of VBG and B showing the presence of the anomalous peak precisely at the DP, which vanishes above 5.6T.(b) Area of the peak plotted as a function of 1/T showing activated behavior with an effective gap of ∆E eff ∼ 248µeV.In the inset, amplitude of conductance peak in device-3 is used to show the activated behaviour, which gives ∆E eff ∼ 150µeV.
Figure 4. (Color Online) (a) An experimental phase diagram in energy and magnetic field.Filled black squares are the superconducting gaps measured using bias spectroscopy as a function of B. The filled red squares and filled purple hexagons show the insulating gaps of device-2 and device-3 as a function of B, where the thick lines are the guide to the eye.The anomalous conductance peak at the DP is observed in the region enclosed by the dashed black ovals.(b) Schematic of inter-Landau level AR process at the DP.
Figure 5. (Color Online) Panel a shows numerical results based on canted antiferromagnetic (CAF) model, and the panel b for the isospin ferromagnet (IFM) model.The chemical potential is quoted in units of the hopping parameter t.The band diagram and the peak at the Dirac point are shown as insets.
Figure SI3-4: (a) Longitudinal resistance of device-3 plotted as a function of B showing conventional Shubnikov de Haas (SdH) oscillations.(b) Oscillation amplitude as a function of 1/B, slope is extracted from the linear fit.(c) Evaluated LL broadening (Γ) at different carrier concentrations.
Figure SI4-5: ((a-e) Two probe conductance of SLG-NbSe 2 junction in device-1 (Au-SLG-NbSe 2 ) as a function of the backgate voltage at several values of magnetic field, showing "quantized" conductance plateaus.Panel (e) has a comparison of the conductances at different bias voltages as well as different temperatures.(f) Two probe conductance of SLG-NbSe 2 junction in device-2 at B=4T showing "quantized" conductance plateaus.The inset shows the G-vs.-VSD plot at the V BG value marked by vertical dashed black line.The red line shows the conductance at T = 100mK and black line at T = 5K.The zero bias enhancement of conductance is correlated with the onset of superconductivity.(g) The conductance enhancement is quantified as a function of temperature at ν = 2 plateau (black curve).In contrast, the suppression of conductance at ν=0 plateau is shown as a function of temperature (red curve).
Figure SI5-6: (a) Two probe conductance of SLG in device-2 (Au-SLG-Au) as a function of the backgate voltage at different magnetic fields, showing quantized conductance plateaus.A clear ν=0 plateau is observed at B=8 and 10T.(b) Arrhenius plot in device 2 showing ν=0 insulating gap at different magnetic fields.(c) Arrhenius plot in device 1 at B=9.8T.(d-e) Arhenius plot in device 3 at several magnetic fields.(f) Arrhenius plot in device 2 at B=6T showing two slopes corresponding to ∆E I ∼ 800 and 250 µeV for above and below 2K, respectively.(g) Arrhenius plot in device 3 at B=5T showing two slopes corresponding to ∆E I ∼ 1.19 meV and 290 µeV for above and below 2K, respectively.
Figure SI6-8: G vs V SD measurement in device-2 at T=100mK at different magnetic fields.BCS like features are evident.The separation between the conductance peaks/dips marked by vertical dotted lines yields the BCS gap (2∆) of the superconductor.
Figure SI7-9: (a-d) 2D colormap of the normalised conductance (G(V SD )/G(V SD = -2 mV)) as a function of V SD and T at different magnetic fields.(e) T C as a function of magnetic field.In (b) the black dashed line shows the theoretical temperature dependence of superconducting gap calculated from BCS equation using parameters 2∆(T = 0) = 0.8meV and T C = 2K.
Fig. SI7(a-d) shows the 2D colormap of the normalized conductance as a function of V_SD and T at different values of B. The colormaps show vanishing superconductivity above a critical temperature. Similar measurements at other magnetic fields produce T_C as a function of magnetic field, shown in Fig. SI7(e). SI8 - Activated nature of the peak at the Dirac point: In Fig. SI8(a) the anomalous peak at the Dirac point, observed at B = 10 T in device 1, is shown at several temperatures. We have fitted the experimental data to a Lorentzian to extract the area under the peak, shown in Fig. SI8(b) as a function of T. The error bars indicate the quality of the fit. From the Arrhenius plot shown in Fig. SI8(c), the activation gap of ∼ 180 µeV quoted above is obtained.
Figure SI9-11: 2D colormap of log(G) plotted as a function of V BG and B in device 2 (a) and device 3 (b), respectively.The evolution of the anomalous Dirac peaks with the increasing insulating gap is clearly visible.
Figure SI10-12: (a) conductance plotted as a function of V BG (left) showing quasi periodic oscillations on ν = 2 QH plateau at B=9.8T and the differential conductance plotted as a function of V SD (right) at peaks or dips marked in the gate response curve.(b) Classical skipping orbits of electron and hole at the interface; vertical lines show the center of the orbits.(c) conductance plotted as a function of B (left) showing similar quasiperiodic oscillations on ν = 2 QH plateau at V BG = −10.5Vand the differential conductance plotted as a function of V SD (right) at peaks or dips marked in the G vs B curve.
Figure SI11-13: Left panel: Band dispersion for an isospin ferromagnet (IFM).The breaking of the sublattice symmetry through ia = − ib = (a and b indicating the two sublattice of the graphene nanoribbon) provides a band gap at 0th Landau level.Right panel: The differential conductance of the QHEsuperconductor junction with different chemical potentials.Blue, red and Green curves corresponds to the three chemical potentials marked by horizontal blue, red and green dashed lines in the left panel.
The Emergence of Anisotropic Superconductivity in the Nodal-line Semi-metal TlTaSe2
TlTaSe2 is a non-centrosymmetric quasi-2D crystalline semi-metal hosting nodal-line topological features protected by mirror-reflection symmetry. Here, we investigated the superconducting properties of TlTaSe2 using first-principles anisotropic Migdal-Eliashberg theory. The Fermi surface hosts well-gapped multiband features contributed by the Ta 5d and Tl 6p orbitals. Moreover, anisotropic superconducting gaps were found to exist at 2.15 and 4.5 meV around the in-plane orbitals, coupling effectively with the in-plane phonons of the Ta and Tl atoms. Using the Allen-Dynes-modified McMillan formula, we found a superconducting transition temperature of 6.67 K, accompanied by a robust electron-phonon coupling constant λ of 0.970. This investigation provides valuable insights into the mechanisms underlying anisotropic superconductivity in TlTaSe2.
Introduction
The interplay of anisotropic superconductivity and nontrivial topological states can manifest emergent quantum electronic behaviors in materials [1,2]. Anisotropy in superconductors manifests as discernible variations in superconducting characteristics at the Fermi surface, primarily attributable to the intrinsic anisotropy of their electronic states and lattice vibrational modes. This anisotropy exerts a significant influence on a spectrum of superconducting properties, encompassing the critical magnetic field, the penetration depth, and the magnitude of the energy gap linked to Cooper pairs, ultimately governing the material's behavior in diverse magnetic and physical environments [1-3]. However, the underlying mechanism driving superconductivity in these materials is the electron-phonon interaction, which is generally considered isotropic in metals [4,5]. Recent studies suggest that electron-phonon (el-ph) coupling can be anisotropic because of phonon anharmonicity [6], islands of isolated electronic states, and van Hove singularities around the Fermi surface [5]. Intercalated layered transition metal dichalcogenide compounds have potential anisotropic superconducting properties [7-10]. Previous studies demonstrated that TlTaSe2 is a topological nodal-line semi-metal with robust nontrivial topological surface states [11]. Exploring materials with anisotropic superconductivity and robust topological features is crucial for advancing the fundamental understanding of topological superconductivity and potentially discovering novel applications in superconducting technologies [2,3]. Herein, we investigate the superconducting properties and electronic structures of TlTaSe2.
Electronic band structure analysis reveals the material's topological nodal-line semi-metal nature, with a distinct drum-like feature in the bands indicating its topological character. Surface-state calculations employing the Wannier method revealed robust topological surface states on the (001) crystallographic surface. The study of the el-ph interactions shows that strong, anisotropic el-ph coupling exists at multiple states around the Fermi surface. The strength of this interaction, which governs the superconducting transition temperature and the gap structure, was analyzed through the phonon dispersion and the electron-phonon coupling strength. This investigation provides valuable insights into the mechanisms underlying anisotropic superconductivity in TlTaSe2.
Computational methods
As implemented in Quantum Espresso, the first-principles calculations were performed within density functional theory using norm-conserving pseudopotentials [12,13]. A kinetic energy cutoff of 60 Ry was used in the self-consistent calculations. We sampled the Brillouin zone with a gamma-centered k-mesh of 15 × 15 × 7. We also used the wannier90 code [14] to obtain the materials' maximally localized Wannier functions; the surface Green's function method [15] was then used, as implemented in the WannierTools package [14], to calculate the surface-state dispersion of the materials. The dynamical matrices and the linear variation of the self-consistent potential were calculated within density-functional perturbation theory [16] on an irreducible set of regular 3 × 3 × 3 q-point meshes. The electronic wave functions for the Wannier interpolation within EPW were calculated on uniform, gamma-centered k-point meshes of size 6 × 6 × 6. To solve the Eliashberg equations, we evaluate the electron energies, phonon frequencies, and electron-phonon matrix elements on a fine grid using the method of Giustino et al. [17]. The fine grids contain 12 × 12 × 12 q-point and 24 × 24 × 24 k-point uniform gamma-centered grids. The Eliashberg function α²F(ω) and the cumulative contribution to the electron-phonon coupling strength λ(ω) were obtained from the standard expressions of Ref. [18], where λ_qν is the electron-phonon coupling constant resolved for phonon wave vector q and mode ν, and N_EF is the density of states at the Fermi level.
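The explicit expressions are not reproduced in this copy; the standard isotropic forms implemented in EPW, which the symbols above appear to refer to, read as follows (an assumption: the paper's Eqs. (1) and (2) may be written in a slightly different but equivalent way):

```latex
\alpha^{2}F(\omega) = \frac{1}{2}\sum_{\mathbf{q}\nu}\lambda_{\mathbf{q}\nu}\,\omega_{\mathbf{q}\nu}\,
\delta(\omega-\omega_{\mathbf{q}\nu}), \qquad
\lambda(\omega) = 2\int_{0}^{\omega}\frac{\alpha^{2}F(\omega')}{\omega'}\,\mathrm{d}\omega', \qquad
\lambda_{\mathbf{q}\nu} = \frac{\gamma_{\mathbf{q}\nu}}{\pi\,N_{E_F}\,\omega_{\mathbf{q}\nu}^{2}},
```

where γ_qν is the phonon linewidth and the total coupling constant is λ = λ(ω → ∞).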
We used the Allen-Dynes-modified McMillan formula, T_c = (ω_log/1.2) exp{−1.04(1 + λ)/[λ − µ*(1 + 0.62λ)]} (equation (3)), to estimate T_c, where ω_log is the logarithmically averaged phonon frequency and µ* is the effective Coulomb pseudopotential.
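A minimal numerical sketch (added here, not from the paper) shows how the reported T_c = 6.67 K and λ = 0.970 constrain the remaining inputs of this formula; the Coulomb pseudopotential µ* = 0.10 is an assumed, typical value:

```python
import numpy as np

K_B_MEV_PER_K = 0.08617   # Boltzmann constant in meV/K

def tc_allen_dynes(omega_log_K, lam, mu_star):
    """Allen-Dynes-modified McMillan Tc in kelvin; omega_log given in kelvin."""
    return (omega_log_K / 1.2) * np.exp(-1.04 * (1.0 + lam)
                                        / (lam - mu_star * (1.0 + 0.62 * lam)))

lam, tc_reported = 0.970, 6.67   # values reported for TlTaSe2
mu_star = 0.10                   # assumed Coulomb pseudopotential (not stated in this excerpt)

# Tc is linear in omega_log, so the implied logarithmic phonon frequency follows directly:
omega_log = tc_reported * 1.2 / np.exp(-1.04 * (1.0 + lam)
                                       / (lam - mu_star * (1.0 + 0.62 * lam)))
print(f"implied omega_log ~ {omega_log:.0f} K (~{omega_log * K_B_MEV_PER_K:.1f} meV)")
print(f"check: Tc = {tc_allen_dynes(omega_log, lam, mu_star):.2f} K")
```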
Crystal structure TlTaSe2
TlTaSe2 has a non-centrosymmetric crystal structure that belongs to the P-6m2 (No. 187) space group [11]. The unit cell hosts one atom each of Tl and Ta and two atoms of Se. The Tl layer is intercalated between two tantalum dichalcogenide layers, with the Tl atoms aligned with the Ta atoms in the vertical direction, as shown in Figure 1
Electronic structure TlTaSe2
The electronic band structures of TlTaSe2, both in the presence and absence of spin-orbit coupling (SOC), are shown in Figures 1(d
Superconducting properties of TlTaSe2
The electronic states residing at the Fermi level have substantial significance in dictating the superconducting behavior observed in materials.Anisotropic superconductivity is a distinctive phenomenon in various materials that can emerge due to Fermi surface sheet anisotropy.The anisotropic superconductivity observed in TlTaSe2 arises from the complex interplay between the distinct Fermi surfaces and phonon contributions of its constituent atoms.This interrelationship gives rise to intriguing and tunable superconducting behavior that is pivotal for understanding the material's unique properties.The phonon contributions originating from the atomic vibrations of the Tl, Ta, and Se atoms play a crucial role in determining the anisotropic nature of the superconducting state.[24,25] These atomic vibrations, characterized by different frequencies, reflect the distinct masses of the atoms, with heavier Tl and Ta atoms primarily contributing to lower-frequency vibrations and lighter Se atoms dominating the higher-frequency range.This imparts pronounced anisotropy to the superconducting properties of TlTaSe2.
Conclusions
In summary, the anisotropic superconductivity observed in TlTaSe2 can be viewed as an intricate interplay between the distinct Fermi surface sheets and the phonon contributions of its constituent atoms.This relationship underscores the material's versatile and tunable superconducting properties, making it an intriguing platform for exploring anisotropic superconductivity and its potential applications in emerging technologies.The intricate interplay between electronic structure, phonon dynamics, and Fermi surface topology deepens our understanding of superconducting mechanisms and opens doors to engineering novel materials with tailored anisotropic properties for advanced electronic and quantum devices.
FIG. 1. (a) Side and top views of the TlTaSe2 crystal structure. (b) Brillouin zone with high-symmetry
Use of colored agrotextiles and length of stay in the cultivation of yellow melons
The coverage of plants with agrotextiles of different colors and lengths of stay may influence the productivity of the crop. The objective of this study was to evaluate the influence of the use of colored agrotextiles and their length of stay on the cultivation of melon plants under the conditions of the semiarid region of Paraíba. The experiment was carried out at the Experimental Farm of the Federal University of Campina Grande, located in the municipality of São Domingos, PB. The treatments were distributed in a randomized block design in a 4 x 4 factorial scheme, with four replications, consisting of four colors of agrotextile (orange, white, gray, and blue) and four lengths of stay (15, 18, 21, and 24 days after transplanting). The following characteristics were evaluated: photosynthetically active radiation, average temperature, number of fruits per plant, average fruit mass, and total productivity. The use of colored agrotextiles associated with the length of stay changed the production characteristics of the yellow melon fruits. The highest productivity, number of fruits, and mass of the melon fruits were obtained when the plants were covered with the orange-colored agrotextile for 15, 18, and 24 days after transplanting, respectively. A prolonged stay of the agrotextile reduced the content of total soluble solids.
Introduction
Melon is a plant grown in several countries, and its cultivation has been recorded in Europe, Asia, Africa, and North and South America (Silva et al. 2018). According to FAO (2019), world melon production in 2017 was 31,948,349 tons. Of this total, Brazil produced 540,229 tons of fruit, and the Northeast region of Brazil, mainly Chapada do Apodi (RN) and Baixo Jaguaribe (CE), stood out as the main producing area, accounting for 72.4% of the melon produced in the country (IBGE, 2019).
This species is considered a C3-cycle plant; however, it demands high levels of solar radiation and temperature. In this context, the semiarid region of Northeast Brazil stands out as promising for melon cultivation because of the high radiation levels recorded there, which can exceed 3,000 h of sunlight per year (Pereira et al. 2015). On the other hand, excessive sunlight on plants can be detrimental to photosynthesis because the efficiency of the photosynthetic process is severely reduced under these conditions: when the leaves are exposed to more light than they can use, that is, above the saturation point, the photosynthetic apparatus is damaged and becomes inactive due to photoinhibition (Brant et al. 2011).
It is of fundamental importance to adopt technologies that mitigate the detrimental effects of high levels of solar radiation on plants. Melon producers use white-colored agrotextile. However, there is a need for research that can assess the association of agrotextile colors and the length of stay on the characteristics of plants such as growth and fruit production.
For Taiz and Zeiger (2017), the color spectrum can positively or negatively influence the functionality of plant organs. Thus, changes in the microclimate introduced in melon cultivation by the use of agrotextiles of different colors and lengths of stay, especially the reduction of solar radiation, wind speed, and air temperature and the increase in absolute humidity, can decrease the evaporative demand on the plants and allow them to increase stomatal conductance and, therefore, CO2 assimilation in comparison with open-field plants in the initial growth phase (Haijun et al. 2015). Saraiva and Rodrigues (2012), evaluating the initial development of 'Taiko' cucumber, found that colored meshes (blue, red, and black) influence the initial development and the physiological and metabolic activities. Costa et al. (2010) also observed changes in leaf anatomy, such as a thinner adaxial epidermis and palisade parenchyma and fewer stomata, and in morphology, such as plant height, in the species Ocimum selloi as a function of the color spectrum of the meshes (red and blue).
Thus, covering the plants in the cultivation row with colored meshes can alter the physiological responses of the plants, modifying the production of photoassimilates and the source-sink relations of the plants. The length of stay of the agrotextile can also influence the development of the crop; Santos et al. (2015), in a study with the melon crop, observed that the treatments with row cover influenced pulp firmness, which showed a linear increase of 35.2% compared to the control at 30 days, while titratable acidity, soluble solids, and total soluble sugars decreased by 33.3, 8.9, and 42.1%, respectively, with the increase in the coverage time of the plants. The objective of this experiment was to evaluate the melon production characteristics as a function of the association of colored agrotextiles and length of stay.
Methodology
The work was carried out from October 2018 to January 2019 at the Experimental Farm of the Federal University of Campina Grande (UFCG) in São Domingos, Paraíba, Brazil. A randomized block design in a 4 x 4 factorial with four replications was used. The treatments consisted of four colors (orange, white, gray, and blue) and four lengths of stay of the agrotextile (15, 18, 21, and 24 days after transplantation - DAT). Each plot contained ten plants, of which eight formed the useful (evaluation) area.
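For illustration, the treatment layout implied by this design can be generated as follows (a sketch only; the ordering and random seed are arbitrary and not taken from the paper):

```python
import random
from itertools import product

colors = ["orange", "white", "gray", "blue"]
stays = [15, 18, 21, 24]                       # days after transplanting (DAT)
treatments = list(product(colors, stays))      # 4 x 4 = 16 treatment combinations

random.seed(2018)                              # arbitrary seed, for reproducibility only
layout = {}
for block in range(1, 5):                      # four blocks (replications)
    order = treatments[:]
    random.shuffle(order)                      # each treatment appears once per block
    layout[block] = order

print(layout[1][:4])                           # first four plots of block 1
```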
On October 30, 2018, sowing took place in 128-cell expanded polystyrene trays filled with a commercial substrate indicated for the production of vegetable seedlings. A yellow melon hybrid of the Inodorus group, from the company Feltrin®, was transplanted when the seedlings presented the second true leaf, using a spacing of 2.0 x 0.5 m with one plant per hole.
The experimental area was 1600 m², in which plowing was performed and the planting furrows were subsequently opened, spaced 2.0 m apart, with beds raised to 0.20 m in height and 0.30 m in width for soil preparation. Planting and cover fertilization were managed according to the soil analysis and the recommendations for the crop (Cavalcanti et al. 2008). In the planting fertilization, 120 kg ha⁻¹ of P2O5 was applied entirely at the base using simple superphosphate, together with 10% of the N and K2O, in the forms of urea and potassium chloride, out of totals of 120 kg ha⁻¹ each. Three days after transplantation, cover fertilization was initiated, in which the remaining 90% of the N and K2O was applied via fertigation with daily applications over seven subsequent weeks. In each week of fertigation, the following nutrient proportions were applied: 1st week, 5.0% N and 10.0% K2O; 2nd week, 10.0% N and 10.0% K2O; 3rd week, 15.0% N and 15.0% K2O; 4th, 5th, and 6th weeks, 20.0% N and 18.0% K2O; 7th week, 10.0% N and 11.0% K2O.
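A short calculation makes the schedule concrete (a sketch; it assumes the weekly percentages refer to the fertigated fraction, i.e. 90% of 120 kg ha⁻¹ of each nutrient, which the text does not state explicitly):

```python
# Weekly cover-fertigation amounts under the stated assumption.
TOTAL_KG_HA = 120.0
fertigated = 0.90 * TOTAL_KG_HA                # 108 kg/ha of N and of K2O applied via fertigation

weekly_pct = {                                 # week: (% of N, % of K2O)
    1: (5, 10), 2: (10, 10), 3: (15, 15),
    4: (20, 18), 5: (20, 18), 6: (20, 18), 7: (10, 11),
}

for week, (n_pct, k_pct) in weekly_pct.items():
    n_kg = fertigated * n_pct / 100.0
    k_kg = fertigated * k_pct / 100.0
    print(f"week {week}: N = {n_kg:.1f} kg/ha, K2O = {k_kg:.1f} kg/ha")
```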
The melon was kept in a closed tunnel under the agrotextile of different colors, 1.38 m wide and with a grammage of 15 g m⁻², which was removed according to the proposed treatments. Irrigation was performed using the localized (drip) method, with drippers spaced 0.5 m apart and a flow rate of 2.0 L h⁻¹. After removing the agrotextile, manual weeding and preventive phytosanitary control were performed.
The harvest was carried out on January 15, 18, and 21, 2019. The fruits were harvested when they had an intense yellow color and uniform size. Weekly assessments were carried out until the removal of the agrotextile at 24 DAT for photosynthetically active radiation (RFA), measured with an Accupar LP-80 ceptometer, and for temperature and relative air humidity, measured with an HT-210 digital thermohygrometer. The following characteristics were evaluated at the harvest of the melon fruits: number of fruits per plant, by counting all the fruits produced in the useful area; average fruit mass (g fruit⁻¹), by weighing the fruits of the useful area of each treatment on a digital scale and dividing by the number of fruits; total productivity of the fruits produced in the useful area of the plot (t ha⁻¹), estimated per hectare from the experimental plot; and total soluble solids (SST), using a portable digital refractometer, model Atago PAL-1, with the values expressed as a percentage.
The collected data were submitted to analysis of variance in the SAEG 9.0 software at the 5% probability level. For the colors of the agrotextile, the Tukey test at 5% probability was used, and for the length of stay of the agrotextile on the plants, regression analysis was performed using the Table Curve 2D software.
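An equivalent analysis can be reproduced with open-source tools; the sketch below (illustrative only, with an assumed data frame 'df' holding one row per plot and the columns block, color, stay, and yield_t_ha) mirrors the ANOVA of the 4 x 4 factorial in randomized blocks:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def factorial_anova(df: pd.DataFrame) -> pd.DataFrame:
    # Block as a fixed effect plus the full color x stay factorial structure.
    model = ols("yield_t_ha ~ C(block) + C(color) * C(stay)", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)     # Type II ANOVA table

# print(factorial_anova(df))  # a significant color:stay interaction corresponds to p < 0.05
```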
Results
There was a significant effect of the interaction between the color of the agrotextile and the length of stay on the plants for the number of fruits per plant, the average fruit mass, and the total productivity (p < 0.05). For the variables related to climate, only the reduction in radiation and temperature below the agrotextile was quantified.
In this work, it is possible to observe that the blue, gray, orange, and white agrotextiles caused reductions of 73.3, 65.9, 61.5, and 36.2% in the photosynthetically active radiation below the agrotextile, respectively. This fact possibly led to the greater reductions in temperature below the agrotextile, of 27.3% and 23.5%, for the blue and gray colors. However, the use of the white agrotextile was more effective in reducing the temperature below it (22.7%) than the orange one (20.5%), possibly due to the greater reflectance and lower heat absorption of the white-colored agrotextile (Table 1). At 21 DAT, the largest number of fruits was obtained with the white agrotextile cover, and at 24 DAT there were no significant differences in the number of fruits per plant regardless of the color of the agrotextile (Table 2).
The average fruit mass was higher when the plants were covered with the gray agrotextile at 15 DAT, with the blue and gray agrotextiles at 18 DAT, and with the orange one at 24 DAT. At 21 DAT, there was no significant difference in average fruit mass due to the use of the agrotextile in its different colors (Table 2). The study of the length of stay within each color of the agrotextile is shown in Figure 1. The number of fruits per plant showed a quadratic response in the different colors, with maximum estimated values of 2.6, 1.9, 2.1, and 1.4 fruits plant⁻¹ obtained at 16.4, 18.2, 15, and 15 DAT for the orange, white, gray, and blue agrotextile colors, respectively.
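The optimum length of stay quoted for each color corresponds to the vertex of the fitted quadratic; a minimal sketch of that calculation (illustrative only, to be applied to the observed means) is:

```python
import numpy as np

def quadratic_optimum(dat, response):
    """Fit response = a*DAT^2 + b*DAT + c and return the DAT at the vertex and the fitted maximum."""
    a, b, c = np.polyfit(np.asarray(dat, float), np.asarray(response, float), 2)
    dat_opt = -b / (2.0 * a)
    return dat_opt, np.polyval([a, b, c], dat_opt)

# Usage: pass the mean number of fruits per plant at each removal time for one color, e.g.
# dat_opt, fruits_max = quadratic_optimum([15, 18, 21, 24], means_orange)
```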
As for the average fruit mass, an increasing linear response was recorded, with maximum estimated values of 2,031.6 and 1,595.9 g fruit⁻¹ at 24 DAT of permanence for the orange and white agrotextiles. Compared with removal of the agrotextile at 15 DAT, this represents increases in fruit mass of 30.3 and 16.4%, respectively. For the gray color, an inversely proportional behavior was found as the agrotextile permanence time increased up to 24 DAT: the maximum estimated value of 1,651.8 g fruit⁻¹ was obtained at 15 DAT, and the reduction in the average fruit mass thereafter was 6.8%. Lastly, the blue-colored agrotextile produced a quadratic response, with an estimated maximum value of 1,815.1 g fruit⁻¹ at 18.8 DAT of permanence on the plants (Figure 2).
In addition to the plant population in the area, the number of fruits per plant and the average fruit mass are two variables that significantly influence melon productivity. In the evaluation of this characteristic, greater total productivity of the yellow melon of the Inodorus group was observed with the permanence of the agrotextile on the plants until 15 DAT when the orange and gray colors were used, at 18 DAT with the orange color, and at 21 DAT with the orange and white colors. In comparison, at 24.0 DAT there were no significant differences in total productivity due to the use of the agrotextile in its different colors in yellow melon cultivation under the semiarid conditions of Paraíba (Table 3). Regarding the content of total soluble solids, no significant difference was observed in the extract from the fruit pulp due to the use of the different colors (Table 9). Up to 24.0 DAT, while the agrotextile remained on the plants, the crop was still in the initial flowering stage and the presence of fruits was not observed. Thus, the increase in the leaf area of plants grown under the different colors changed the growth of the fruits in terms of mass; however, it had less influence on the accumulation of soluble solids in the fruits. Taking into account the length of permanence, a quadratic response was found for total soluble solids, with an estimated maximum value of 12.5% obtained at 17.0 DAT, which corresponded to an increase of 0.1% in total soluble solids; from that time onwards, keeping the agrotextile over the plants led to a reduction of 8.5% in its value by 24.0 DAT (Figure 8).
Discussion
Among the factors related to climate, radiation and temperature are two climatic variables essential for the proper growth and development of plants. Despite being a C3 plant, the melon plant needs high levels of radiation and temperature to reach its maximum production potential. The significant reduction in radiation and temperature levels is directly related to the color, since the agrotextiles in blue and gray, compared with the white color, provide a condition of greater shading, affecting the passage of the sun's rays and, consequently, the temperature. According to Mahmood et al. (2018), the shading effect of colored screens reduces the amount of radiant energy received and can reduce the air temperature.
In view of the results found, we can observe that the permanence of the orange agrotextile for 15 and 18 DAT increased the number of fruits per plant and reduced their average mass. The inverse result was observed when the plants were covered with the gray color, in which a smaller number of fruits per plant and a higher average mass were observed at 15 and 18 DAT.
At 21 DAT, the coverage with white agrotextile stood out in increasing the number of fruits; however, there was no difference between the other agrotextile colors regarding the average mass. At 24 DAT, the result was the opposite, where no significant differences were observed in the number of fruits in the plant, but the mass of the fruit was more significant when the plants were covered with orange-colored agrotextiles. According to Pereira et al. (2017), the most significant number of fruits per plant generates competition between these and the vegetative organs, decreasing the latter's growth; however, this effect can be reduced when there is no limitation of solar radiation.
These results are directly related to the orange color, which has a wavelength range (590-620 nm) very close to that of red (620-750 nm) (Taiz; Zeiger, 2017). Tafoya et al. (2018) studied the effects of colored screens on cucumber cultivation and observed that the blue and red nets provided a greater photon flux density. According to these authors, these light levels favored photosynthesis, which led to an increase in biomass production, implying a larger leaf area, greater efficiency in the transport of photoassimilates, and a greater reserve of assimilates for later use in fruit filling.
The reduction of 37.6% in the number of fruits per plant observed in plants that received the blue-colored agrotextile may be due to the better condition, in the initial growth phase until 18 DAT, of incident radiation and temperature, and to their larger leaf area. However, Taiz and Zeiger (2017) state that the blue color provides better efficiency in stomatal opening and CO2 uptake, and consequently in the conversion into photoassimilates for the fruit, compared with the other material colors.
It is observed that the use of the agrotextile, regardless of the color, reduced the number of fruits when it remained on the melon plants until 24 DAT. This behavior may be associated with the shading of the plant canopy during the vegetative phase, which is responsible for the accumulation of photoassimilates that will be used in the production of flowers and, consequently, fruits (Pereira et al. 2017).
The greater fruit mass in plants that received the orange and white agrotextiles at 24 DAT was due to the lower number of fruits per plant. In melon, the increase in the number of fruits per plant contributes to a reduction in the average mass due to competition for assimilates (Queiroga et al. 2009). In addition, the larger leaf area (data not shown) in plants that received the orange agrotextile cover at 24 DAT may have increased the production and transport of photoassimilates directed to fruit growth. This fact may also have occurred due to the longer exposure time to the orange color, which presents a waveband (590-620 nm) very close to the red color (620-750 nm), from which the plant uses most of the energy received (Taiz; Zeiger 2017).
In the coverage of plants with the white agrotextile, there was a smaller reduction in radiation and temperature, of 36.2 and 22.7%, respectively, which led to a lower gain in fruit mass, of 16.4%. Mahmood et al. (2018) report that white-colored nets show high transmittance of solar radiation (above 60.0%), together with significant reflectance (24.0%) and low absorption (13.0% to 15.0%).
The more intensely colored agrotextiles, gray and blue, provided higher values of fruit mass when they remained on the plants for shorter periods, of 15 and 18.8 DAT, respectively. This fact may be associated with the modification of the spectral quality by the colored agrotextile used, which thus acts as a physiological tool modifying the microenvironment of the crops (Ilić; Fallik, 2017).
Plants covered with the gray and blue agrotextiles were subjected to greater shading, of 65.9 and 73.3%, and to a greater reduction in temperature below the cover, of 23.5 and 27.3%, respectively. These climatic data show the importance of studying shading in melon crops, especially under conditions of excessive solar radiation, and its effects on plant growth and development. Pereira et al. (2017), working with the melon crop under a white agrotextile, observed that removal of the row cover at the beginning of plant growth, at 20 DAT, was advantageous for the crop compared with removal at 24.0, 28.0, and 36.0 DAT, as it allowed the plant to be exposed to higher solar radiation, raising the average fruit mass.
Crop productivity depends on other factors, such as the number of fruits per plant and the mass of the fruits. In general, the highest values of crop productivity were recorded when the plants were covered with the orange agrotextile: at 15 and 18 DAT because of the larger number of fruits, and at 21 and 24 DAT because of their greater average mass under these conditions. In this case, it is evident that when the number of fruits per plant increases, a reduction in their mass is expected due to the greater intraspecific competition in the plant, especially between fruits in formation.
As observed for the orange-colored agrotextile, higher productivity values were also recorded with the permanence of the agrotextile for up to 15 DAT when the plants were covered with the gray color, due to the greater fruit mass observed in this condition, and at 21 DAT with the white color, due to the larger number of fruits per plant. However, covering the plants with the blue color reduced crop productivity until 21 DAT, mainly because of the lower averages for the number of fruits per plant. Lastly, at 24 DAT of permanence of the agrotextile on the plants, productivity did not vary significantly among the different colors because the number of fruits per plant did not vary, despite the higher average mass obtained from fruits of plants covered with the orange-colored agrotextile.
Oren-Shamir et al. (2001) observed that colored meshes present a difference in the transmittance spectrum of active photosynthetic radiation. Thus, the wavelength influences the growth and development of the plant, as it is directly connected with the production of photoassimilate that is allocated for the fixation and growth of the fruit.
In addition, the level of shading and quality of light may have influenced the lower yield of the crop in plants covered with blue agrotextile and the higher yield of the crop in plants covered with orange agrotextiles. According to Ombódi et al. (2015), the photoselective shading of red and yellow nets markedly increases productivity, improves the fruits of different vegetables, and reduces the infestation of pests and diseases. According to Taiz and Zeiger (2017), the blue mesh shows a transmittance peak in the blue-green region (400-540nm), while the red-orange mesh has higher transmittance for wavelengths greater than (590-620nm), which is a more suitable band for the cultivation of melon.
With the coverage using agrotextiles of lighter colors, in other words orange and white, for which the reduction of radiation below the cover was lower, greater productivity was found. In plants covered with the more intense colors, gray and blue, the shading affected crop productivity more. According to Stanghellini et al. (2011), restricting solar radiation and affecting the components of the energy balance, such as sensible and latent heat fluxes, can influence the growth, development, and production of crops.
In all colors of the agrotextile it was evidenced that its permanence until 24 DAT reduced the productivity of the melon with a more significant effect on the cover with blue colors. This fact may have occurred due to the longer time and level of shading that the plants were under the protection of the agrotextile. This lower light intensity probably reduced photosynthesis in the plant, with effects on the crop's productivity.
The total soluble solids content of the melon fruits was reduced by 8.5% with the delay in removing the agrotextile, as the plants remained shaded for a longer time. This behavior is probably due to the longer period of permanence, up to 24.0 DAT, during which the plant received less radiation, which contributed to reduced photosynthetic activity and to less growth, production, and translocation of the photoassimilates directed to the sweetening of the fruits.
In the proposed treatments, the average values of soluble solids obtained were above the minimum required by importers, 8.0% (FFV-23). It is essential to highlight that even the later withdrawal of the agrotextile did not leave the soluble solids below the import requirement. Thus, in markets demanding sweeter fruits, this can be a differentiating factor of the product with the consumer.
FAO (1990) cites a global solar radiation level of 8.4 MJ m-2 day-1 as the light compensation point for vegetables; this value is sufficient to guarantee the minimum production of photoassimilates needed for plant maintenance. The light saturation point, which sets the level of photosynthetically active radiation above which no further increase in CO2 assimilation occurs, must also be observed. In the present experiment, photosynthetically active radiation below the required level, as occurred under the different agrotextile colors and especially under the blue cover, can restrict photosynthesis, whereas levels above the saturation point can raise plant temperature excessively, with adverse effects on transpiration and photosynthetic rates, thereby reducing melon productivity.
Conclusion
The production characteristics of yellow melon fruits were changed by the use of colored agrotextiles in association with their length of stay over the plants. The highest productivity, number of fruits, and fruit mass were obtained when the agrotextile remained for 15.0, 18.0, and 24.0 days after transplanting, respectively, with the plants covered by the orange-colored agrotextile.
Local farmers use the white-colored agrotextile; however, this color proved less effective than orange in increasing crop productivity.
The soluble solids content was affected by the length of stay, but not by the colors.
On the Payoff Mechanisms in Peer-Assisted Services with Multiple Content Providers: Rationality and Fairness
This paper studies an incentive structure for cooperation and its stability in peer-assisted services when there exist multiple content providers, using a coalition game theoretic approach. We first consider a generalized coalition structure consisting of multiple providers with many assisting peers, where peers assist providers to reduce the operational cost in content distribution. To distribute the profit from cost reduction to players (i.e., providers and peers), we then establish a generalized formula for individual payoffs when a "Shapley-like" payoff mechanism is adopted. We show that the grand coalition is unstable, even when the operational cost functions are concave, which is in sharp contrast to the recently studied case of a single provider, where the grand coalition is stable. We also show that, irrespective of the stability of the grand coalition, there always exist coalition structures which do not converge to the grand coalition under a dynamic among coalition structures. Our results establish that a provider does not tend to cooperate with other providers in peer-assisted services and prefers to remain separate from them. Three facets of the noncooperative (selfish) providers are illustrated: (i) underpaid peers, (ii) service monopoly, and (iii) oscillatory coalition structure. Lastly, we propose a stable payoff mechanism which improves the fairness of profit-sharing by regulating the selfishness of the players and also grants the content providers a limited right of realistic bargaining. Our study opens many new questions, such as realistic and efficient incentive structures and the tradeoffs between fairness and individual providers' competition in peer-assisted services.
A. Motivation
The Internet is becoming more content-oriented, and the need for cost-effective and scalable distribution of contents has become central to the Internet. Uncoordinated peer-to-peer (P2P) systems, e.g., BitTorrent, have been successful in distributing contents, but the rights of the content owners are not well protected, and most of the P2P contents are in fact illegal. In response, a new type of service, called peer-assisted service, has received significant attention these days. In peer-assisted services, users commit a part of their resources to assist content providers in content distribution, with the objective of enjoying both the scalability/efficiency of P2P systems and the controllability of client-server systems. Examples of peer-assisted services include nano data centers [1] and IPTV [2], where a high potential for operational cost reduction has been observed. For instance, there are now 1.8 million IPTV subscribers in South Korea, and the financial sector forecasts that by 2014 the number of IPTV subscribers will reach 106 million [3]. However, it is clear that most users will not just "donate" their resources to content providers. Thus, the key factor in the success of peer-assisted services is how to (economically) incentivize users to commit their valuable resources and participate in the service.
One of the useful mathematical tools for studying the incentive-compatibility of peer-assisted services is coalition game theory, which covers how payoffs should be distributed and whether such a payoff scheme can be executed by rational individuals or not. In peer-assisted services, the "symbiosis" between providers and peers is sustained when (i) the offered payoff scheme guarantees a fair assessment of players' contributions under a provider-peer coalition and (ii) each individual has no incentive to exit from the coalition. In coalition game theory, the notions of the Shapley value and the core have been popularly applied to address (i) and (ii), respectively, when the entire set of players cooperates, referred to as the grand coalition. A recent paper by Misra et al. [4] demonstrates that the Shapley value approach is a promising payoff mechanism to provide the right incentives for cooperation in a single-provider peer-assisted service.
However, in practice, the Internet consists of multiple content providers, even if only giant providers are counted. In the multi-provider setting, users and providers are coupled in a more complex manner; thus the model becomes much more challenging, and even the cooperative game theoretic framework itself is unclear, e.g., the definition of the worth of a coalition. Also, the results and their implications in the multi-provider setting may experience drastic changes compared to the single-provider case.
The grand coalition is expected to be the "best" coalition in the peer-assisted service with multiple providers in that it provides the highest aggregate payoff. To illustrate, see an example in Fig. 1 with two providers (Google TV and iTunes) and a large number of peers. Consider two cooperation types: (i) separated, where there exists a fixed partition of peers for each provider, and (ii) coalescent, where each peer may assist any provider. In the separated case, a candidate payoff scheme is based on the Shapley value in each disconnected coalition. In the coalescent case, the Shapley value is also a candidate payoff scheme after a worth function of the grand coalition is defined, where a reasonable worth function (in Section III-A we establish that this definition is derived directly from an essential property of coalition) can be the total optimal profit, maximized over all combinations of peer partitions to each provider. Consequently, the total payoff for the coalescent case exceeds that for the separated case, unless the two partitions of both cases are equivalent. The Shapley value is defined by a few agreeable axioms, one of which is efficiency (discussed formally in Section II-C), meaning that every cent of coalition worth is distributed to players. Since a smaller worth is shared out among players in the separated case, at least one individual is underpaid as compared with the coalescent case. Thus, providers and users are recommended to form the grand coalition and be paid off based on the Shapley values. However, it is still questionable whether peers are willing to stay in the grand coalition and thus whether the consequent Shapley-value-based payoff mechanism is desirable in the multi-provider setting. In this paper, we anatomize incentive structures in peer-assisted services with multiple content providers and focus on stability issues from two different angles: stability at the Shapley value equilibrium and convergence to that equilibrium. We show that the Shapley payoff scheme may lead to an unstable coalition structure, and propose a different notion of payoff distribution scheme, the χ value, under which peers and providers remain in a stable coalition and better fairness is guaranteed.
B. Related Work
Incentive structures in P2P systems (e.g., BitTorrent) have been studied extensively. To prevent free-riders in P2P systems, who only download contents but upload nothing, from behaving selfishly, a number of incentive mechanisms suitable for the distribution of copy-free contents have been proposed (see [5] and references therein), using game theoretic approaches. Alternative approaches that exploit the potential of P2P systems for reducing the distribution (or operational) costs of copyrighted contents have recently been adopted by [1], [4]. To the best of our knowledge, the work by Misra et al. [4] is the first to study the profit-sharing mechanism (payoff mechanism) of peer-assisted services.
Coalition game theory has been applied to model diverse networking behaviors, where the main focus in most cases (e.g., [4]) was to study the stability of a specific equilibrium, i.e., the grand coalition, in connection with the notion of the core. Recently, Saad et al. [6], [7] discussed the stability and dynamics of endogenous formation of general coalition structures. In particular, [7] introduced a coalition game model for self-organizing agents (e.g., unmanned aerial vehicles) collecting data from arbitrarily located tasks in wireless networks and proved the stability of the proposed algorithm by using hedonic preference (and dominance). In this paper, we use the stability notion of Hart and Kurz [8] (see also [9]) to study the dynamics of coalition structures in peer-assisted services. The stability notion in [8] is based on the preferences of any arbitrary coalition, while hedonic coalition games are based on the preferences of individuals. Other subtle differences are described in [10].
C. Main Contributions and Organization
We summarize our main contributions as follows:
1) Following the preliminaries in Section II, in Section III we describe and propose the cooperative game theoretic framework of the peer-assisted service with multiple providers. After defining a worth function that is provably the unique feasible worth function satisfying two essential properties of a coalition game, i.e., feasibility and superadditivity, we provide a closed-form formula of the Shapley value for a general coalition with multiple providers and peers, where we take a fluid-limit approximation for mathematical tractability. This is a non-trivial generalization of the Shapley value for the single-provider case in [4]. In fact, our formula in Theorem 1 establishes the general Shapley value for distinguished multiple atomic players and infinitesimal players in the context of the Aumann-Shapley (A-S) prices [11] in coalition game theory.
2) In Section IV, we discuss in various ways that the Shapley payoff regime cannot incentivize rational players to form the grand coalition, implying that fair profit-sharing and opportunism of players cannot stand together. First, we prove that the Shapley value for the multiple-provider case is not in the core under mild conditions, e.g., when each provider's cost function is concave. This is in stark contrast to the single-provider case, where a concave cost function stabilizes the equilibrium. Second, we study the dynamic formation of coalitions in peer-assisted services by introducing the notion of stability defined by the seminal work of Hart and Kurz [8]. Finally, we show that, if we adopt a Shapley-like payoff mechanism, called the Aumann-Drèze value, then irrespective of the stability of the grand coalition, there always exist initial states which do not converge to the grand coalition.
3) In Section V, we present three examples stating the problems of the non-cooperative peer-assisted service: (i) the peers are underpaid compared to their Shapley payoffs, (ii) a provider paying the highest dividend to peers monopolizes all peers, and (iii) the Shapley value for each coalition gives rise to an oscillatory behavior of coalition structures. These examples suggest that the system with separated providers may be unstable as well as unfair in a peer-assisted service market.
4) In Section VI, as a partial solution to the problems of Shapley-like payoffs (i.e., Shapley and Aumann-Drèze), we propose an alternative payoff scheme, called the χ value [12]. This payoff mechanism is relatively fair in the sense that players, at the least, apportion the difference between the coalition worth and the sum of their fair shares, i.e., their Shapley payoffs, and it stabilizes the whole system. It is also practical in the sense that providers are granted a limited right of bargaining. That is, a provider may award an extra bonus to peers by cutting her dividend, competing with other providers in a fair way. More importantly, we show that authorities can effectively avoid unjust rivalries between providers by implementing a simplistic measure. After presenting a practical example of peer-assisted services with multiple providers in delay-tolerant networks in Section VII, we conclude this paper.
II. PRELIMINARIES
Since this paper investigates a multi-provider case, where a peer can choose any provider to assist, we start this section by defining a coalition game with a peer partition (i.e., a coalition structure) and introducing the payoff mechanism thereof.
A. Game with Coalition Structure
A game with coalition structure is a triple (N, v, P), where N is a player set and v : 2^N → R (2^N is the set of all subsets of N) is a worth function with v(∅) = 0. v(K) is called the worth of a coalition K ⊆ N. P is called a coalition structure for (N, v); it is a partition of N, where C(i) ∈ P denotes the coalition containing player i. For reference, a coalition structure P can be regarded as a set of disjoint coalitions. The grand coalition is the partition P = {N}. A value of player i is an operator φ_i(N, v, P) that assigns a payoff to player i. We define φ_K := Σ_{i∈K} φ_i for all K ⊆ N.
To conduct the equilibrium analysis of coalition games, the notion of core has been extensively used to study the stability of the grand coalition P = {N}:
Definition 1 (Core). The core of a game (N, v) is defined by Core(N, v) := {φ ∈ R^N : Σ_{i∈N} φ_i = v(N) and Σ_{i∈K} φ_i ≥ v(K) for all K ⊆ N}.
If a payoff vector φ(N, v) lies in the core, no player in N has an incentive to split off to form another coalition K, because the worth of the coalition K, v(K), is no more than the payoff sum Σ_{i∈K} φ_i(N, v). Note that the definition of the core hypothesizes that the grand coalition is already formed ex-ante. We can see the core as an analog of Nash equilibrium from noncooperative games. Precisely speaking, it should be viewed as an analog of strong Nash equilibrium, where no arbitrary coalition of players can create worth which is larger than what they receive in the grand coalition. If a payoff vector φ(N, v) lies in the core, then the grand coalition is stable with respect to any collusion to break the grand coalition.
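For small games, the core condition can be verified by direct enumeration. The following Python sketch is an illustration only (not part of the original analysis); it assumes the worth function is supplied as a dictionary keyed by frozensets of players.

```python
from itertools import combinations

def in_core(players, v, payoff, tol=1e-9):
    """Check whether a payoff vector lies in the core of the game (N, v).

    players: iterable of hashable player ids (the set N)
    v: dict mapping frozenset(K) -> worth of coalition K, with v[frozenset()] = 0
    payoff: dict mapping player id -> payoff
    """
    players = list(players)
    # Efficiency: the grand-coalition worth is fully distributed.
    if abs(sum(payoff[i] for i in players) - v[frozenset(players)]) > tol:
        return False
    # No coalition K may be worth more than what its members receive.
    for r in range(1, len(players) + 1):
        for K in combinations(players, r):
            if sum(payoff[i] for i in K) < v[frozenset(K)] - tol:
                return False
    return True
```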
B. Shapley Value and Aumann-Drèze Value
On the premise that the player set is not partitioned, i.e., P = {N}, the Shapley value, denoted by ϕ (not φ), is popularly used as a fair distribution of the grand coalition's worth to individual players, defined by:
ϕ_i(N, v) := Σ_{S ⊆ N\{i}} [ |S|! (|N| − |S| − 1)! / |N|! ] ( v(S ∪ {i}) − v(S) ).    (1)
Shapley [13] gives the following interpretation: "(i) Starting with a single member, the coalition adds one player at a time until everybody has been admitted. (ii) The order in which players are to join is determined by chance, with all arrangements equally probable. (iii) Each player, on his admission, demands and is promised the amount which his adherence contributes to the value of the coalition." The Shapley value quantifies the above, is axiomatized (see Section II-C), and has been treated as a worth distribution scheme. The beauty of the Shapley value lies in that the payoff "summarizes" in one number all the possibilities of each player's contribution in every coalition structure. Given a coalition structure P ≠ {N}, one can obtain the Aumann-Drèze value (A-D value) [14] of player i, also denoted by ϕ, by taking C(i), the coalition containing player i, to be the player set and computing the Shapley value of player i in the reduced game (C(i), v). It is easy to see that the A-D value can be construed as a direct extension of the Shapley value to a game with coalition structure. Note that both the Shapley value and the A-D value are denoted by ϕ because the only difference is the underlying coalition structure P.
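For illustration, a direct (factorial-time) implementation of (1) and of the A-D value is sketched below; it is intended only for small finite games and again takes the worth function as a dictionary over frozensets.

```python
import math
from itertools import permutations

def shapley(players, v):
    """Shapley value of the non-partitioned game: each player's marginal
    contribution averaged over all equally probable orderings, as in (1)."""
    players = list(players)
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        coalition = set()
        for i in order:
            before = v[frozenset(coalition)]
            coalition.add(i)
            phi[i] += v[frozenset(coalition)] - before
    n_fact = math.factorial(len(players))
    return {i: phi[i] / n_fact for i in players}

def aumann_dreze(partition, v):
    """A-D value: the Shapley value computed inside each coalition C of the
    coalition structure, i.e., of the reduced game (C, v)."""
    payoff = {}
    for C in partition:
        payoff.update(shapley(C, v))
    return payoff
```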
Axiom 3 (Additivity, ADD). For all coalition functions v and w, φ(N, v + w, P) = φ(N, v, P) + φ(N, w, P).
Recall that the basic premise of the Shapley value is that the player set is not partitioned, i.e., P = {N}. It is well-known [12], [13] that the Shapley value, defined in (1), is uniquely characterized by CE, CS, ADD and NP for P = {N}. The A-D value is also uniquely characterized by CE, CS, ADD and NP (Axioms 1-4), but in this case for an arbitrary coalition structure P [14]. In the literature, e.g., [6], [15], the A-D value has been used to analyze static games where a coalition structure is exogenously given.
Definition 2 (Coalition Independent, CI). If i ∈ C ⊆ N, C ∈ P and C ∈ P′, then φ_i(N, v, P) = φ_i(N, v, P′).
From the definition of the A-D value, the payoff of player i in coalition C(i) is affected neither by the player set N nor by the coalitions C ∈ P, C ≠ C(i). Note that only C(i) contains the player i. Thus, it is easy to prove that the A-D value is coalition independent. From CI of the A-D value, in order to decide the payoffs of a game with general coalition structure P, it suffices to decide the payoffs of the players within each coalition, say C ∈ P, without considering the other coalitions C′ ∈ P, C′ ≠ C. In other words, once we decide the payoffs of a coalition C ∈ P, the payoffs remain unchanged even though other coalitions C′ ∈ P, C′ ≠ C, vary. Thus, for any given coalition structure P, any coalition C ∈ P falls into one of two cases in terms of the number of providers in C: (i) one provider, or (ii) two or more providers, as depicted in Fig. 1.
Yet another reason why CI attracts our attention is that it enables us to define the stability of a game with coalition structure in the following simplistic way:
Definition 3 (Stable Coalition Structure [8]). We say that a coalition structure P′ blocks P, where P′, P ∈ P(N), with respect to φ if and only if there exists some C ∈ P′ such that φ_i(N, v, P′) > φ_i(N, v, P) for all i ∈ C. In this case, we also say that C blocks P. If there does not exist any P′ which blocks P, then P is called stable.
Due to CI of the A-D value, all stability notions defined by the seminal work of Hart and Kurz [8] coincide with the above simplistic definition, as discussed by Tutic [9]. Definition 3 can be intuitively interpreted as follows: if there exists any subset of players C who can improve their payoffs by moving away from the current coalition structure, they will form a new coalition C. In other words, if a coalition structure P has any blocking coalition C, some rational players will break P to increase their payoffs. The basic premise here is that players are not clairvoyant, i.e., they are interested only in improving their instant payoffs in a myopic way. If a payoff vector lies in the core, the grand coalition is stable in the sense of Definition 3, but the converse is not necessarily true (see Fig. 2).
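Definition 3 suggests a brute-force stability check for small games. The sketch below enumerates all coalition structures and tests whether any of them blocks a given one; the payoff mechanism (e.g., the A-D value above) is passed in as a callable, and blocking is read as every member of the deviating coalition strictly improving, per Definition 3.

```python
def set_partitions(elems):
    """Enumerate all partitions (coalition structures) of a list of players."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in set_partitions(rest):
        for k, cell in enumerate(smaller):
            yield smaller[:k] + [cell + [first]] + smaller[k + 1:]
        yield [[first]] + smaller

def blocks(p_new, p_old, payoff_fn, tol=1e-9):
    """True if some coalition C in p_new makes all of its members strictly
    better off than they were under p_old (Definition 3)."""
    pay_old, pay_new = payoff_fn(p_old), payoff_fn(p_new)
    return any(all(pay_new[i] > pay_old[i] + tol for i in C) for C in p_new)

def stable_structures(players, payoff_fn):
    """Return the coalition structures not blocked by any other structure."""
    all_p = list(set_partitions(list(players)))
    return [p for p in all_p
            if not any(blocks(q, p, payoff_fn) for q in all_p if q != p)]
```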
D. Comparison with Other Values
In a particular category of games, called voting games or simple games, the Banzhaf value as well as the Shapley value (also known as the Shapley-Shubik index in this context) has been used in the literature (see, e.g., [16] and references therein). While the Shapley value has been extensively studied in many papers, there are no similar results for the Banzhaf value. For instance, the Shapley value is proven to lie in the core for a special type of games, called convex games, whereas there is no equivalent result for the Banzhaf value. Moreover, the Banzhaf value violates the efficiency axiom CE in Section II-C for the coalition structure P = {N}, leading to inefficient sharing of the grand coalition worth.
As compared with the Aumann-Drèze value, a newer value, referred to as the Owen value (see, e.g., [15, Chapter 8.8] or [17, Chapter XII]), has emerged based on an alternative viewpoint on coalition, where a coalition forms not to share the coalition worth, but only to maximize its bargaining power with regard to the division of the worth of the grand coalition. In other words, players form a labor union (coalition) to obtain a better bargaining position leading to a larger payoff, implying that the coalition efficiency axiom CE is also violated. A delicate premise of this approach is that players must form the grand coalition, the worth of which is in fact the largest worth in superadditive games (see Definition 5), and bargain with each other at the same time. Also, in the context of P2P systems, whether it is more reasonable to nullify CE so that a portion of the worth of a coalition (peers and providers) C ∈ P becomes transferable to other coalitions C′ ∈ P, C′ ≠ C, remains an open economic question.
III. COALITION GAME IN PEER-ASSISTED SERVICES
In this section, we first define a coalition game in a peer-assisted service with multiple content providers by classifying the types of coalition structures as separated, where a coalition includes only one provider, and coalescent, where a coalition is allowed to include more than one provider (see Fig. 1). To define the coalition game, we will define a worth function of an arbitrary coalition S ⊆ N for these two cases.
A. Worth Function in Peer-Assisted Services
Assume that the players N are divided into two sets, the set of content providers Z := {p_1, · · · , p_ζ} and the set of peers H := {n_1, · · · , n_η}, i.e., N = Z ∪ H. We also assume that the peers are homogeneous, e.g., they have the same computing powers, disk cache sizes, and upload bandwidths. Later, we discuss that our results can be readily extended to nonhomogeneous peers. The set of peers assisting providers is denoted by H̄ ⊆ H, and x := |H̄|/η is the fraction of assisting peers. We define the worth of a coalition S to be the amount of cost reduction due to cooperative distribution of the contents by the players in S, in both the separated and coalescent cases.
Separated case: Denote by Ω^η_p(x(S)) the operational cost of a provider p when the coalition S consists of the single provider p and x(S)·η assisting peers. Since the operational cost cannot be negative and may decrease with the number of assisting peers, we assume that Ω^η_p(·) is positive and nonincreasing, to simplify the exposition. Note that, from the homogeneity assumption on peers, the cost function depends only on the fraction of assisting peers. Then, we define the worth function v̂(S) for a coalition S having a single provider as
v̂(S) := Ω^η_p(0) − Ω^η_p(x(S)),    (2)
where Ω^η_p(0) corresponds to the cost when there are no assisting peers. For a coalition S with no provider, we simply have v̂(S) := 0. For notational simplicity, x(S) is henceforth denoted by x, unless confusion arises.
Coalescent case:
In contrast to the separated case, where a coalition includes a single provider, the worth for the coalescent case is not yet clear, since the amount of cost reduction may differ depending on which peers assist which providers. One reasonable definition is the maximum worth over all peer partitions, i.e., the worth for the coalescent case is defined by
v(S) := max_{P ∈ P(S)} Σ_{C ∈ P} v̂(C),    (3)
for a coalition with at least two providers, where P(S) denotes the set of all partitions of S, and v(S) := v̂(S) for a coalition S with at most one provider. The definition above implies that we view a coalition containing more than one provider as the most productive coalition, whose worth is maximized by choosing the optimal partition P* among all possible partitions of S. Note that (3) is consistent with the definition (2) for |Z ∩ S| = 1, i.e., v(S) = v̂(S) for |Z ∩ S| = 1.
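For a small, finite coalition, the worth in (3) can be evaluated by brute force. The sketch below assigns each of the coalition's peers to one of its providers (or to none) and keeps the assignment with the largest total cost reduction; for illustration it uses peer counts instead of the population fraction x, which is a simplification and not the paper's fluid model.

```python
from itertools import product

def worth(providers, peers, omega):
    """Worth of a coalition S = providers ∪ peers: the largest total cost
    reduction over all assignments of the coalition's peers to its providers.
    omega[p](k) is provider p's operational cost with k assisting peers, so
    omega[p](0) - omega[p](k) is p's cost reduction."""
    providers, peers = list(providers), list(peers)
    if not providers:
        return 0.0
    best = 0.0
    # Each peer assists exactly one of the coalition's providers, or nobody.
    for assignment in product(providers + [None], repeat=len(peers)):
        counts = {p: 0 for p in providers}
        for a in assignment:
            if a is not None:
                counts[a] += 1
        reduction = sum(omega[p](0) - omega[p](k) for p, k in counts.items())
        best = max(best, reduction)
    return best
```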
Five remarks are in order. First, as opposed to [4], where v̂({p}) = ηR − Ω^η_p(0) (R is the subscription fee paid by any peer), we simply assume that v̂({p}) = 0. Note that, as discussed in [15, Chapter 2.2.1], it is no loss of generality to assume that, initially, each provider has earned no money. In our context, this means that it does not matter how large a fraction of peers is subscribing to each provider, because each peer has already paid the subscription fee to providers ex-ante.
Second, Ω^η_p(x) may not be decreasing because, for example, the electricity expense of the computers and the maintenance cost of the hard disks of peers may exceed the cost reduction due to peers' assistance in content distribution; e.g., the Annualized Failure Rate (AFR) of hard disk drives is over 8.6% for three-year-old ones [18].
Third, the worth function in peer-assisted services can reflect the diversity of peers. It is not difficult to extend our result to the case where peers belong to distinct classes. For example, peers may be distinguished by different upload bandwidths and different hard disk cache sizes. A point at issue for the multiple provider case is whether peers who are not subscribing to the content of a provider may be allowed to assist the provider or not. On the assumption that the content is ciphered and not decipherable by the peers who do not know its password which is given only to the subscribers, providers will allow those peers to assist the content distribution. Otherwise, we can easily reflect this issue by dividing the peers into a number of classes where each class is a set of peers subscribing to a certain content.
Fourth, it should be clearly understood that our worth function (3) does not encompass more than just the peer-partition optimization. That is, we speculate that cooperation among providers might lead to further expenses cut by optimizing their network resources. We recognize the lack of this 'added bonus' to be the major weakness in our model.
Lastly, it should be noted that the worth function in (3) is selected in order to satisfy two properties. First of all, it follows from the definition of v in (3) that no other coalition function v ′ (·) can be greater than v(·), i.e., v(·) ≥ v ′ (·) because v is the total cost reduction that is maximized over all possible peer partitions to each provider.
Definition 4 (Feasibility). For all worth functions v′ and all coalitions S ⊆ N, a feasible worth function v′ satisfies v′(S) ≤ v(S), where v is given by (3).
The second property, superadditivity, is one of the most elementary properties, which ensures that the core is nonempty by appealing to Bondareva-Shapley Theorem [15, Theorem 3.1.4].
Definition 5 (Superadditivity). A worth function v is superadditive if v(S ∪ T) ≥ v(S) + v(T) for all disjoint coalitions S, T ⊆ N.
The following lemma holds by the fact that a feasible worth function cannot be greater than (3), i.e., the largest worth.
Lemma 1. When the worth for the separated case is given by (2), there exists, for the coalescent case, a unique worth function that is both superadditive and feasible, given by (3).
Proof: Suppose we have a superadditive worth function v′. Firstly, it follows directly from the assumption that the worth function for the separated case is (2) ...
In light of this lemma, we can restate that our objective in this paper is to analyze the incentive structure of peer-assisted services when the worth of a coalition is feasible and superadditive. This objective in turn implies the form of the worth function in (3).
B. Fluid Aumann-Drèze Value for Multi-Provider Coalitions
So far we have defined the worth of coalitions. Now let us distribute the worth to the players for a given coalition structure P. Recall that the payoffs of players in a coalition are independent of other coalitions by the definition of the A-D payoff. Pick a coalition C without loss of generality, and denote the set of providers in C by Z̄ ⊆ Z. With slight notational abuse, the set of peers assisting Z̄ is denoted by H̄. Once we find the A-D payoff for a coalition consisting of an arbitrary provider set Z̄ ⊆ Z and assisting peer set H̄ ⊆ H, the payoffs for the separated and coalescent cases in Fig. 1 follow from the substitutions Z̄ = {p} and Z̄ = Z, respectively. In light of our discussion in Section II-B, it is more reasonable to call a Shapley-like payoff mechanism 'A-D payoff' and 'Shapley payoff' respectively for the partitioned and non-partitioned games (N, v, {Z̄ ∪ H̄, · · · }) and (N, v, {Z ∪ H}).
Fluid Limit: We adopt the limit axioms for a large population of users to overcome the computational hardness of the A-D payoffs. Define Ω_p(x) := lim_{η→∞} Ω^η_p(x)/η, which is the asymptotic operational cost per peer in the system with a large number of peers. We drop the superscript η from notations to denote their limits as η → ∞. From the assumption Ω^η_p(x) > 0, we have Ω_p(x) ≥ 0. To avoid trivial cases, we also assume that Ω_p(x) is not constant in the interval x ∈ [0, 1] for any p ∈ Z. We also introduce the payoff of each provider per user, defined as ϕ̄^η_p := (1/η) ϕ^η_p. We now derive the fluid-limit equations of the payoffs, shown in Fig. 3, which are obtained as η → ∞. The proof of the following theorem is given in Appendix A.
Theorem 1 (A-D Payoff for Multiple Providers). As η → ∞, the A-D payoffs of providers and peers under an arbitrary coalition C = Z̄ ∪ H̄ converge to (FluidAD1) in Fig. 3.
The following corollaries are immediate as special cases of Theorem 1, which we will use in Section V. It also establishes the A-D values for distinguished multiple atomic players (the providers) and infinitesimal players (the peers), in the context of the Aumann-Shapley (A-S) prices [11] in coalition game theory.
Our formula for the peers is interpreted as follows. Take the second line of (FluidAD2) as an example. Recall the definition of the Shapley value (1). The payoff of peer n is the marginal cost reduction v(S ∪ {n}) − v(S), averaged over all equally probable arrangements, i.e., orders of players. It is also implied by (1) that the expectation of the marginal cost is computed under the assumption that the events |S| = y and |S| = y′ for y ≠ y′ are equally probable, i.e., P(|S| = y) = P(|S| = y′). Therefore, in our context of the infinite-player game in Theorem 1, for every value of ux along the interval [0, x], the subset S ⊆ Z̄ ∪ H̄ contains a fraction ux of the peers. More importantly, the probability that each provider is a member of S is simply u, because the number of peers in S, ηux, is infinite as η → ∞, so that the size of S is not affected by whether a provider belongs to S or not. Therefore, the marginal cost reduction of each peer on the condition that both providers are contained in S becomes −u² (dM^{p,q}_Ω/dx)(ux). Likewise, the marginal cost reduction of each peer on the condition that only one provider is in the coalition is obtained analogously from the corresponding single-provider cost term.
IV. INSTABILITY OF THE GRAND COALITION
In this section, we study the stability of the grand coalition to see if rational players are willing to form the grand coalition, only under which they can be paid their respective fair Shapley payoffs. The key message of this section is that the rational behavior of the providers makes the Shapley value approach unworkable, because the major premise of the Shapley value, the grand coalition, is not formed in multi-provider games.
A. Stability of the Grand Coalition
Guaranteeing the stability of a payoff vector has been an important topic in coalition game theory. For the single-provider case, |Z| = 1, it was shown in [4, Theorem 4.2] that, if the cost function is decreasing and concave, the Shapley incentive structure lies in the core of the game. What about |Z| ≥ 2? Is the grand coalition stable for the multi-provider case? Prior to addressing this question, we first define the following.
Definition 6 (Noncontributing Provider). Provider p ∈ Z is noncontributing if M^Z_Ω(1) − M^{Z\{p}}_Ω(1) = Ω_p(0).    (6)
To understand this better, note that the above expression is equivalent to Σ_{i∈Z} Ω_i(0) − M^Z_Ω(1) = Σ_{i∈Z\{p}} Ω_i(0) − M^{Z\{p}}_Ω(1), which implies that there is no difference in the total cost reduction, irrespective of whether the provider p is in the provider set or not. Interestingly, if all cost functions are concave, there exists at least one noncontributing provider.
To prove this, recall the definition of M^Z_Ω(·):
M^Z_Ω(x) := min { Σ_{i∈Z} Ω_i(y_i) : Σ_{i∈Z} y_i ≤ x, y_i ≥ 0 }.
Since the summation of concave functions is concave, and the minimum of a concave function over a convex feasible region Y(x) is attained at an extreme point of Y(x), as shown in [19, Theorem 3.4.7], we can see that the solutions of the above minimization are the extreme points of {(y_1, · · · , y_{|Z|}) | Σ_{i∈Z} y_i ≤ x, y_i ≥ 0}, which in turn implies y_i = 0 for |Z| − 1 providers in Z. Note that the condition |Z| ≥ 2 is necessary here. We are ready to state the following theorem, a direct consequence of Theorem 1. Its proof is in Appendix B.
Theorem 2 (Shapley Payoff Not in the Core). If there exists a noncontributing provider, the Shapley payoff for the game (Z ∪ H, v) does not lie in the core.
It follows from Lemma 2 that, if all operational cost functions are concave and |Z| ≥ 2, the Shapley payoff does not lie in the core. This result appears to be in good agreement with our usual intuition. If there is a provider who does not contribute to the coalition at all in the sense of (6) and is still being paid due to her potential for imaginary contribution assessed by the Shapley formula (1), which is not actually exploited in the current coalition, other players may improve their payoff sum by expelling the noncontributing provider.
The condition |Z| ≥ 2 plays an essential role in the theorem. For |Z| ≥ 2, the concavity of the cost functions leads to the Shapley value not lying in the core, whereas, for the case |Z| = 1, the concavity of the cost function is proven to make the Shapley incentive structure lie in the core [4, Theorem 4.2].
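The extreme-point argument behind this discussion is easy to see numerically. The sketch below evaluates M^Z_Ω for two providers by a simple grid search over peer allocations; the quadratic cost shapes are assumed for illustration only, but they are decreasing and concave, so the minimum total cost is attained by giving all peers to one provider, leaving the other provider noncontributing in the sense of (6).

```python
import numpy as np

def M(cost_fns, x, steps=2001):
    """Minimized total operational cost M^Z_Omega(x) for two providers:
    split a peer fraction x between them so that the summed cost is smallest
    (grid search over the allocation to the first provider)."""
    omega_p, omega_q = cost_fns
    ys = np.linspace(0.0, x, steps)            # fraction given to provider p
    totals = omega_p(ys) + omega_q(x - ys)
    k = int(np.argmin(totals))
    return float(totals[k]), float(ys[k])

# Assumed toy costs, decreasing and concave on [0, 1].
omega_p = lambda y: 1.0 - 0.9 * y ** 2
omega_q = lambda y: 1.0 - 0.5 * y ** 2

m_pq, y_p = M((omega_p, omega_q), x=1.0)
# The optimum sits at an extreme point: all peers go to p (y_p == 1), so q is
# noncontributing: M^{p,q}(1) equals M^{p}(1) + Omega_q(0).
print(m_pq, y_p, omega_p(1.0) + omega_q(0.0))   # approximately 1.1, 1.0, 1.1
```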
B. Convergence to the Grand Coalition
The notion of the core lends itself to the stability analysis of the grand coalition on the assumption that the players are already in the equilibrium, i.e., the grand coalition. However, Theorem 2 still leaves further questions unanswered. In particular, for non-concave cost functions, whether the Shapley value lies in the core remains an open problem. We rather argue here that, whether the Shapley value lies in the core or not, the grand coalition is unlikely to occur, by showing that the grand coalition is not a global attractor under some conditions.
To study the convergence of a game with coalition structure to the grand coalition, let us recall Definition 3. It is interesting that, though the notion of stability was not used in [4], one main argument of that work was that the system with one provider would converge to a full sharing mode, i.e., the grand coalition, hinting at the importance of the following convergence result with multiple providers. The proof of the following theorem is given in Appendix C.
Theorem 3 (A-D Payoff Doesn't Lead to Grand Coalition).
Suppose |Z| ≥ 2 and Ω_p(y) is not constant in the interval y ∈ [0, x] for any p ∈ Z, where x = |H̄|/|H|. Then, for all p ∈ Z and n ∈ H̄, the A-D payoff of provider p in the coalition {p} ∪ H̄ is strictly greater than that in any coalition T ∪ H̄ with {p} ⊊ T ⊆ Z, whereas the A-D payoff of peer n in any coalition T ∪ H̄ with T ⊊ Z is strictly smaller than that in Z ∪ H̄. In plain words, a provider who is in cooperation with a peer set will receive the highest dividend when she cooperates only with the peers, excluding other providers, whereas each peer wants to cooperate with as many providers as possible. It is surprising that, for the multiple-provider case, i.e., |Z| ≥ 2, each provider benefits from forming a single-provider coalition whether the cost function is concave or not. There are no positive incentives for providers to cooperate with each other under the implementation of A-D payoffs. On the contrary, a peer always loses by leaving the grand coalition.
Upon the condition that each provider begins with a single-provider coalition with a sufficiently large number of peers, one cannot reach the grand coalition, because some single-provider coalitions are already stable in the sense of Definition 3. That is, the grand coalition is not the global attractor. For instance, take P = {{p} ∪ H, · · · } as the current coalition structure, where all peers are possessed by provider p. Then it follows from Theorem 3 that players cannot make any transition from P to {Φ ∪ H, · · · }, where Φ ⊆ Z is any superset of {p}, because provider p will not agree to do so.
V. CRITIQUE OF A-D PAYOFF FOR SEPARATE PROVIDERS
The discussion so far has focused on the stability of the grand coalition. The result in Theorem 2 suggests that if there is a noncontributing (free-riding) provider, which is true even for concave cost functions with multiple providers, the grand coalition will not be formed. The situation is aggravated by Theorem 3, stating that single-provider coalitions (i.e., the separated case) will persist if providers are rational. We now illustrate the weak points of the A-D payoff under single-provider coalitions with three representative examples. We can see that each peer n will be paid 21/32.
B. Instability of A-D Payoff Mechanism
The last example illustrates that the A-D payoff can even induce an analog of the limit cycle in nonlinear systems, i.e., a closed trajectory with the property that other trajectories spiral into it as time approaches infinity.
Example 3 (Oscillation).
Let us consider a game with two providers and two peers, where N = {p_1, p_2, n_1, n_2}. If {n_1}, {n_2} and {n_1, n_2} assist the content distribution of p_1, the reduction of the distribution cost is respectively 10$, 9$ and 11$ per month. However, the hard disk maintenance cost incurred by a peer is 5$. In the meantime, if {n_1}, {n_2} and {n_1, n_2} assist the content distribution of p_2, the reduction of the distribution cost is respectively 6$, 3$ and 13$ per month. In this case, the hard disk maintenance cost incurred by a peer is assumed to be 2$, owing to the smaller contents of p_2 as opposed to those of p_1.
For simplicity, we omit the computation of the A-D payoffs for all coalition structures and the stability analysis (see the Appendix and Table 1 of [20] for details). We first observe that the Shapley payoff of this example does not lie in the core. As time tends to infinity, the A-D payoff exhibits an oscillation of the partition P among the four recurrent coalition structures shown in Fig. 6, where, for notational simplicity, we adopt a simplified expression for a coalition structure P: a coalition {a, b, c} ∈ P is denoted by abc and each singleton set {i} is denoted by i. The evolution of the coalition structure is governed by a simple rule: if there exist blocking coalitions (see Definition 3), then an arbitrary one of them will be formed.
Let us begin with the partition {p_1, p_2 n_1 n_2}. Player p_1 could have achieved the maximum payoff if he had formed a coalition only with n_1. However, player n_1 will remain in the current coalition because he cannot improve his payoff by moving away from it. Instead, player n_2 breaks the coalition p_2 n_1 n_2 so that n_2 and p_1 can form the coalition p_1 n_2 for their benefit. As soon as the coalition p_2 n_1 n_2 is broken, p_1 betrays n_2 to increase his payoff by colluding with n_1. How this behavior plays out in large-scale systems is not clear, as reported in the literature [9].
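The worths in Example 3 are small enough to enumerate. Reusing the shapley, aumann_dreze, set_partitions, blocks and stable_structures helpers sketched in Section II, the lines below build the worth function of Example 3 (with the coalescent worth taken as the best assignment of peers to providers, in the spirit of (3)) and test every coalition structure for stability under the A-D payoff; the paper reports that none is stable, which is the oscillation of Fig. 6.

```python
from itertools import combinations, product
# Assumes shapley/aumann_dreze and set_partitions/blocks/stable_structures
# from the earlier sketches are already defined.

players = ["p1", "p2", "n1", "n2"]
providers, peers = {"p1", "p2"}, {"n1", "n2"}

# Net cost reductions of Example 3: distribution-cost saving minus the
# per-peer maintenance cost (5$ per peer for p1, 2$ per peer for p2).
savings = {
    frozenset({"p1", "n1"}): 10 - 5,
    frozenset({"p1", "n2"}): 9 - 5,
    frozenset({"p1", "n1", "n2"}): 11 - 2 * 5,
    frozenset({"p2", "n1"}): 6 - 2,
    frozenset({"p2", "n2"}): 3 - 2,
    frozenset({"p2", "n1", "n2"}): 13 - 2 * 2,
}

def v_hat(S):
    """Single-provider worth: table lookup, 0 for coalitions without both a
    provider and at least one peer."""
    return savings.get(frozenset(S), 0)

def build_worth():
    """Coalescent worth: best assignment of a coalition's peers to its
    providers (a peer may also be left unassigned)."""
    v = {}
    for r in range(len(players) + 1):
        for S in combinations(players, r):
            S = frozenset(S)
            provs, ps = sorted(S & providers), sorted(S & peers)
            if len(provs) <= 1:
                v[S] = v_hat(S)
                continue
            best = 0
            for assign in product(provs + [None], repeat=len(ps)):
                tot = sum(v_hat({p} | {n for n, a in zip(ps, assign) if a == p})
                          for p in provs)
                best = max(best, tot)
            v[S] = best
    return v

v = build_worth()
ad_payoff = lambda P: aumann_dreze(P, v)
print(stable_structures(players, ad_payoff))  # the paper reports none is stable (Fig. 6)
```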
VI. A FAIR, BARGAINING, AND STABLE PAYOFF MECHANISM FOR PEER-ASSISTED SERVICES
The key messages from the examples in Section V imply that the A-D value in the separated case gives rise to unfairness, monopoly, and even oscillation. Also, it turns out that some players' coalition worth exceeds the Shapley payoffs which they are paid in the grand coalition (Theorem 2). Thus, the Shapley payoff scheme does not seem to be executable in practice, because it is impossible to make all players happy, unequivocally. That being said, the fairness of profit-sharing and the opportunism of players are difficult to stand together. Then, it is more reasonable to come up with a compromising payoff mechanism that (i) forces players to apportion the difference between the coalition worth and the sum of their fair shares, (ii) grants providers a limited right of bargaining, and (iii) stabilizes the whole system. We will use a slightly different notion of payoff mechanism, called the χ value, originally proposed by Casajus [12].
Fig. 7. Fluid χ payoff formula for multi-provider coalitions.
A. An Axiomatic Characterization of χ Value
The χ value is characterized by a similar set of axioms to those used for the A-D value. The only difference is that (i) NP is weakened to GNP, causing a deficiency in the axiomatic characterization, which is made up for by WSP:
Axiom 5 (Grand Coalition Null Player, GNP). If player i is a null player of the non-partitioned game (N, v), i.e., v(S ∪ {i}) = v(S) for all S ⊆ N \ {i}, then φ_i(N, v, P) = 0.
Axiom 6 (Weighted Splitting, WSP). If P′ is finer than P (i.e., C′(i) ⊆ C(i), ∀i ∈ N) and j ∈ P′(i), then (φ_i(N, v, P) − φ_i(N, v, P′)) / w_i = (φ_j(N, v, P) − φ_j(N, v, P′)) / w_j.
The cornerstone of the χ value is the very observation that, as the grand coalition P = {N} is broken into two or more coalitions, player i now has another option to ally with coalitions other than C(i) ∈ P, and this outside option must be assessed. To allow the assessment of the outside options, it is inevitable to weaken NP (see Section II-C) to GNP: under GNP alone, a player may receive a positive payoff so far as he contributes to the worth of the grand coalition, even though he does not contribute to that of the current coalition (which NP would forbid). In the end, it is all about how to valuate the outside option, and the χ value's choice is to stick to the Shapley value by equally dividing the difference between the coalition worth and the sum of Shapley values, i.e., WSP with equal weights.
Recalling the definition ϕ_K(N, v) := Σ_{i∈K} ϕ_i(N, v) in Section II-A, we present the following theorem (see [12], [21] for the proof).
Theorem 4 (χ Value). The χ value is uniquely characterized by CE, CS, ADD, GNP and WSP, and is given by
χ_i(N, v, P) = ϕ_i(N, v) + (w_i / Σ_{j∈C(i)} w_j) ( v(C(i)) − ϕ_{C(i)}(N, v) ),
where ϕ_i is the Shapley value of player i for the non-partitioned game (N, v) = (N, v, {N}).
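Given the Shapley payoffs of the non-partitioned game, the χ value in Theorem 4 is straightforward to compute. The sketch below splits each coalition's surplus or deficit in proportion to the members' weights; this proportional split is assumed here as one reading consistent with WSP, and with all weights equal to one it reduces to the equal split described in the text.

```python
def chi_value(partition, v, shapley_payoff, weights):
    """Chi value: start from each player's Shapley payoff in (N, v) and split
    the coalition's surplus or deficit v(C) - sum of Shapley payoffs in C
    among C's members in proportion to their weights w_i."""
    chi = {}
    for C in partition:
        surplus = v[frozenset(C)] - sum(shapley_payoff[i] for i in C)
        w_total = sum(weights[i] for i in C)
        for i in C:
            chi[i] = shapley_payoff[i] + weights[i] / w_total * surplus
    return chi
```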
B. Fluid χ Value for Multi-Provider Coalitions
Recall N = Z ∪ H, Z̄ ⊆ Z, H̄ ⊆ H and x = |H̄|/η. To compute the χ payoff for the multiple-provider case, we first establish in the following theorem a fluid χ value in line with the analysis in Section III-B with the limit axioms.
Theorem 5 (χ Payoff for Multiple Providers). As η tends to infinity, the χ payoffs of providers and peers under an arbitrary coalition C = Z̄ ∪ H̄ converge to (FluidChi) in Fig. 7, where the Shapley payoffs ϕ^Z_i(1) are given in (FluidAD1) in Fig. 3.
To intuitively interpret the χ value, it is crucial to know the role of Axiom WSP and its weights w_i. In our context, because of fairness between peers, it is more reasonable to set w_i = 1 for i ∈ H: it does not make sense to differentiate payoffs between peers, due to the peer-homogeneity assumption in Section III-A. On the contrary, we will clarify in Sections VI-C and VI-D why the weights of providers w_i, i ∈ Z, do not necessarily have to be 1. The essential difference between the A-D value and the χ value lies in WSP. Interpretation of WSP: It implies that, if peer i loses, say ∆_i, when the coalition structure changes, e.g., from the grand coalition P = {N} to a finer coalition structure P′ ≠ {N}, the provider p ∈ C(i) will lose ∆_i × w_p. There are two implications of this weighted splitting. First, since the payoff of each player i is computed based on the baseline, i.e., the Shapley value, and the surplus or deficit incurred by the formation of the coalition C′(i) is equally distributed for w_p = 1, the χ value leads to a fair share of the profit. Secondly, a provider may now bargain with peers over the dividend rate by setting w_p to any positive number. We elaborate on these two implications in the following subsections.
C. Fairness: Surplus-Sharing
On the basis of the first implication of WSP, χ value is fairer than A-D value in the following sense: Definition 7 (Surplus-Sharing). A value φ of game (N, v, P) is surplus-sharing if the following condition holds: if the coalition worth of coalition C ∈ P is greater than, equal to, or less than the sum of Shapley values of players in C, i.e., i∈C φ i (N, v, P) i∈C ϕ i (N, v), then the payoff of player i ∈ C is greater than, equal to, or less than the Shapley value of player i, respectively, i.e., φ i (N, v, P) ϕ i (N, v), for all i ∈ C and for all C ∈ P.
Since we proved in Theorem 3 that, for |Z| ≥ 2, the payoff of provider p in the coalition {p} ∪ H exceeds her Shapley value and that of a peer n ∈ H is smaller than his, it is clear from this definition that the A-D value is not surplus-sharing for |Z| ≥ 2, whereas the χ value is surplus-sharing for any Z: as seen from (7) of the χ payoff, whenever the coalition worth is larger than the Shapley sum of the players in the coalition, all players in the coalition are paid more, and vice versa. For instance, we can see from Fig. 8 that if the coalition is formed by provider q and an x > 0.5625 fraction of peers, all members of the coalition are paid more than their respective Shapley payoffs.
As shown in Fig. 9, the monopoly phenomenon of Example 2 for the case of A-D payoff is still observed for the case of χ value. Regarding Example 1, as shown in Fig. 8, χ payoff even induces the monopoly by q, which did not exist for the case of A-D payoff.
D. Bargaining over the Dividend Rate
Another implication of WSP is that a provider can bargain with peers over the division of the profit and loss by setting w_i to a nonnegative real value. For instance, consider the case when the coalition worth exceeds the Shapley sum of the players in the coalition, e.g., v(C(p)) > ϕ_{C(p)} in (7), where p ∈ Z is the only provider in the coalition C(p). In this case, a provider may award an extra bonus to peers by setting w_p < 1, or make more profit by setting w_p > 1. For a coalition worth smaller than the sum of Shapley payoffs, a provider may compensate peers for the loss by using w_p > 1. Setting w_p = 1 guarantees fair profit-sharing between provider p and the peers, whereas provider p may be willing to use w_p ≠ 1 for bargaining.
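As a small numerical illustration of this bargaining knob, consider a hypothetical single-provider coalition with worth 6 whose members' Shapley payoffs sum to 5 (all numbers invented for illustration); varying w_p shifts how much of the surplus of 1 the provider keeps, using the chi_value sketch above.

```python
partition = [["p", "n1", "n2"]]
v = {frozenset(["p", "n1", "n2"]): 6.0}
sh = {"p": 2.0, "n1": 1.5, "n2": 1.5}       # hypothetical Shapley payoffs

for w_p in (0.5, 1.0, 3.0):                 # provider's bargaining weight
    pay = chi_value(partition, v, sh, {"p": w_p, "n1": 1.0, "n2": 1.0})
    print(w_p, pay)   # larger w_p keeps more of the surplus with the provider
```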
Although w_p can be viewed as a flexible knob to balance the fairness of the system and the bargaining powers of providers, regulators need to control the providers by introducing upper and lower bounds on w_p, which may depend on whether v(C(p)) > ϕ_{C(p)} or not, because w_p has opposite meanings in the two cases. For example, providers may use weights satisfying w̲_p ≤ w_p ≤ w̄_p. The two bounds, w̲_p and w̄_p, can be viewed as a preventive measure taken by the authorities to avoid unfair rivalries between providers.
Adopting the non-identical weights w_p = 0.1 and w_q = 3, we revisit Example 1. Unlike Fig. 8, where provider q monopolizes all peers because χ^{q}_q(1) and χ^{q}_i(1) for i ∈ H are the biggest possible payoffs for q and for any peer, the monopoly for this set of weights is broken, as shown in Fig. 10. Now providers p and q will possess 0.6994 and 0.3006 fractions of peers, respectively. It is remarkable that the χ payoffs are still surplus-sharing, as in Figs. 8 and 9.
E. Stability of Coalition Structures
The χ value of the game in Example 3 with equal weights w_i = 1 for all i ∈ N is shown in Table I. As discussed in [12], NP is not suitable for a value reflecting outside options. For example, let us consider the partition {p_1 p_2, n_1, n_2}. For the case of the A-D value, the payoffs of both providers p_1 and p_2 are 0. However, as we observe from Example 3, the best p_1 can do is to ally with n_1 to reduce her operational cost by v({p_1, n_1}) = 5, whereas the best p_2 can do is to reduce hers by v({p_2, n_1, n_2}) = 9. In other words, p_1 should release p_2 so that p_2 can create her worth, because p_2 has a worthier outside option; to reflect this, the χ value implementation "punishes" p_1 by giving her a negative payoff χ_{p_1} = −1.
We also observe from Table I that players who can be better off by leaving the current coalition are paid more than others. For example, consider the partition {p_1 n_2, p_2, n_1}. For the case of A-D payoffs, p_1 and n_2 received the same payoff of 2 (see Table 1 in [20]). However, in Table I, n_2 is paid more than p_1 because n_2 has the potential for creating the worthiest coalition, p_1 p_2 n_1 n_2 or p_2 n_1 n_2, i.e., v(·) = 9. Though n_2 will not be able to break the partition {p_1 n_2, p_2, n_1} according to the stability defined in Definition 3, n_2 is paid more than p_1 essentially for its assessed potential. In this case, the final form of the coalition structure after its endogenous evolution is the state {p_1 n_2, p_2 n_1}. There are now two absorbing states, {p_1 n_1, p_2, n_2} and {p_1 n_2, p_2 n_1}, as shown in Table I, which are stable in the sense of Definition 3. On the contrary, there does not exist any stable state for the case of the A-D payoff, as shown in Fig. 6 (see also Section V-B and Table 1 in [20]). A more general result [12, Theorem 6.1] is that, if we adopt the χ value to distribute the profit of the peer-assisted services, the system always has at least one stable coalition structure, irrespective of the number of providers. It is also remarkable that the following theorem holds without any restriction on the operational cost Ω_p(·), whereas we assumed that Ω_p(·) is nonincreasing in Section III.
Theorem 6 (Stability of χ Payoff). For the χ value, there always exists a stable coalition structure P.
Also, it follows from [12,Corollary 6.4] that the instability of the grand coalition cannot be improved:
Corollary 3 (Stability of Grand Coalition Preserved).
The grand coalition of χ value is stable if and only if the Shapley value lies in the core.
To summarize, even if we adopt χ value, the instability of the grand coalition for the Shapley payoff which we observed in Theorem 2 remains unchanged. However, it is guaranteed that there exists a stable coalition structure for χ value.
VII. APPLICATION TO DELAY-TOLERANT NETWORKS
In this section, we present a concrete example of peer-assisted services in delay-tolerant networks, where mobile users share certain contents with each other in a peer-to-peer fashion [22]: whenever two mobile users meet, a user whose content is more recent pushes it to the other whose content is outdated. We consider here a single-class case, using the method in [22].
We assume that there exist two providers, p and q, whose contents differ. Users who are subscribing to the content of a provider are assumed to assist the provider in any case. The fractions of users subscribing to each provider are denoted by x^0_p and x^0_q. As discussed in Section III-A, we also assume that a non-subscribing user is allowed to assist at most one provider. Suppose that the content providers p and q push content updates to the users who are assisting them with the rates μ_p and μ_q, respectively, and every user meets other users with the aggregate rate λ. Then it follows from the analysis in [22, Section 5.1] that, if an x_p ≥ x^0_p fraction of users are assisting provider p, then for a user who is subscribing to provider p, the expected age of the content, Ḡ_p, and the outage probability, P^C_p, that the age is larger than G^max_p follow from the analysis in [22]; both expressions can be easily derived by using integration by parts. A provider may guarantee subscribers a certain level of quality of service by imposing constraints such as (i) Ḡ_p ≤ 1 min or (ii) P^C_p ≤ 0.01 for G^max_p = 10 min, of which we use the former here.
For instance, the cost function of provider p can be computed by solving the following optimization problem over μ_p:
min_{μ_p} x_p μ_p subject to Ḡ_p ≤ g_p,
where x_p μ_p corresponds to the average cost per user. The solution of this problem yields provider p's cost function, in which we drop the subscript p from x_p. Suppose x^0_p = 0.4 and x^0_q = 0.3. If providers p and q use g_p = 5/λ and g_q = 10/λ, i.e., provider p has decided to maintain a lower average age of the content than that of provider q, we get the cost functions Ω_p(x + x^0_p)/λ and Ω_q(x + x^0_q)/λ, as shown in Fig. 11. By computing the equations in (FluidAD2) and (FluidChi), it is not difficult to see that provider p monopolizes the remaining fraction of users, 1 − x^0_p − x^0_q = 0.3, whether we adopt the A-D payoff or the χ payoff. Nonetheless, users can receive more under the χ payoff than under the A-D payoff due to the surplus-sharing property discussed in Section VI-C.
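The per-provider optimization above can be carried out numerically once the age expression from [22] is available. The sketch below uses bisection on the push rate under the assumption that the expected age is decreasing in μ_p; the specific age model used here (age proportional to 1/(λ x μ)) is only a stand-in for illustration and is not the expression from [22].

```python
def min_push_rate(age_fn, g_p, mu_lo=1e-9, mu_hi=1e6, tol=1e-9):
    """Smallest push rate mu with age_fn(mu) <= g_p, assuming age_fn is
    decreasing in mu (bisection on the binding constraint)."""
    if age_fn(mu_hi) > g_p:
        raise ValueError("age target cannot be met within the search range")
    while mu_hi - mu_lo > tol:
        mid = 0.5 * (mu_lo + mu_hi)
        if age_fn(mid) <= g_p:
            mu_hi = mid
        else:
            mu_lo = mid
    return mu_hi

lam, x_p, g_p = 1.0, 0.7, 5.0              # g_p = 5 / lambda with lambda = 1
mu_star = min_push_rate(lambda mu: 1.0 / (lam * x_p * mu), g_p)
cost_per_user = x_p * mu_star              # the objective x_p * mu_p at the optimum
print(mu_star, cost_per_user)
```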
VIII. CONCLUDING REMARKS AND FUTURE WORK
A quote from an interview of BBC iPlayer with CNET UK [23]: "Some people didn't like their upload bandwidth being used. It was clearly a concern for us, and we want to make sure that everyone is happy, unequivocally, using iPlayer." In this paper, we have first studied the incentive structure in peer-assisted services with multiple providers, where the popular Shapley-value-based scheme might be in conflict with the pursuit of profits by rational content providers and peers. The key messages from our analysis are summarized as follows. First, even though it is fair to pay peers more because they become relatively more useful as the number of peer-assisted services increases, the content providers will not admit that peers should receive their fair shares. The providers tend to persist in single-provider coalitions. In the sense of the classical stability notion, the 'core', the cooperation would have been broken even if we had begun with the grand coalition as the initial condition. Second, we have illustrated further problems that arise when we use the Shapley-like incentive for the exclusive single-provider coalitions. These results suggest that the profit-sharing system, the Shapley value, and hence its fairness axioms, are not compatible with the selfishness of the content providers. We have proposed an alternative, realistic incentive structure in peer-assisted services, called the χ value, which reflects a trade-off between fairness and the rationality of individuals. Moreover, the weights of the χ value can serve as a flexible knob enabling providers to bargain with peers over the dividend rate, and at the same time as a preventive measure to avoid cutthroat or unfair competition between providers. However, we recognize the limitation of these results, which are based on the assumption that there is no additional cost reduction other than that achieved from the peer-partition optimization. We surmise that providers in cooperation can cut expenses further by pooling and optimizing their resources and by traffic engineering, which will transform their cost functions.
The question remains open how the ramifications of this type of cooperation can be quantified in peer-assisted services.
where the last expression follows by integrating the last term of (12) by parts. From (10) and (13), ϕ^{Ξ′}_n(x) is rearranged as (14). From the assumption, ϕ^{Ξ′\{p}}_n(x) is given by (FluidAD1) for Z = Ξ′ \ {p}, which is plugged into the last term of (14) to yield (15). To reduce the double integral of (15), we use (16), where we used the change of variable ut = τ and changed the order of the double integration with respect to u and τ. Plugging (16) into (15), the first term of the RHS can be decomposed further, and thus we obtain (19). Integrating (9) with respect to x and using (19), we get the desired expression.
B. Proof of Theorem 2
To prove the theorem, we need to show that the condition for the core in Definition 1 is violated; it suffices to show that Σ_{i∈(Z∪H)\{p}} ϕ_i(Z ∪ H, v) < v((Z \ {p}) ∪ H) for a noncontributing provider p. This means that the payoff of p ∈ Z is greater than the marginal increase of the limit worth, i.e., ϕ_p(Z ∪ H, v) > v(Z ∪ H) − v((Z \ {p}) ∪ H).
Characterisation of close visual binaries from the AstraLux Large M Dwarf Survey
We present VLT/SINFONI J, H+K spectra of seven close visual pairs in M dwarf binary/triple systems, discovered or observed by the AstraLux M dwarf survey. We determine the spectral types to within 1.0 subclasses from comparison to template spectra and the strength of K-band water absorption, and derive effective temperatures. The results are compared to optical spectral types of the unresolved binary/multiple systems, and we confirm that our photometric method to derive spectral types in the AstraLux M dwarf survey is accurate. We look for signs of youth such as chromospheric activity and low surface gravity, and find an age in the range 0.25-1 Gyr for the GJ 852 system. Strong Li absorption is detected in optical spectra of the triple system J024902 obtained with FEROS at the ESO-MPG 2.2m telescope. The equivalent width of the absorption suggests an age consistent with the beta Pic moving group. However, further observations are needed to establish group membership. Ongoing orbital monitoring will provide dynamical masses and thus calibration of evolutionary models for low mass stars.
INTRODUCTION
M dwarfs in multiple systems can provide valuable insight into the structure, formation and evolution of very low-mass stars and brown dwarfs, through their multiplicity characteristics as well as their physical and orbital properties (e.g., Burgasser et al. 2007; Goodwin et al. 2007; Janson et al. 2007; Duchêne & Kraus 2013). Despite being the most common stars in our neighbourhood, fundamental physical characteristics such as mass, radius, luminosity, and relations between these properties, are not as well constrained for mid- to late-M type dwarfs as for solar-type and intermediate mass stars. Detailed studies of orbital elements of individual low-mass binaries, and dynamical masses, are needed for empirical calibration of models for low-mass stars, and significant effort has therefore been made in recent years to better characterise in particular ultracool dwarfs (e.g. Bouy et al. 2008; Bonnefoy et al. 2009; Konopacky et al. 2010; Dupuy et al. 2010; Schlieder et al. 2014; Zhou et al. 2014). In addition, the recent discoveries and dedicated surveys for M dwarf planets require accurate stellar physical properties in order to derive reliable planetary parameters, furthering the interest in characterising M dwarfs (e.g., Johnson et al. 2012; Mann et al. 2012; Mann, Gaidos & Ansdell 2013; Muirhead et al. 2012, 2014; Dressing & Charbonneau 2013; Newton et al. 2014, 2015; Gaidos et al. 2014; Alonso-Floriano et al. 2015; Bowler et al. 2015).
Atmospheric and evolutionary models are particularly poorly constrained for young (≲100 Myr) M dwarfs, of which only a handful of binaries have measured dynamical masses (see, e.g., Close et al. 2005; Bonnefoy et al. 2009). In order to better constrain multiplicity properties and identify young binaries suitable for dynamical mass measurements, we carried out the AstraLux large M dwarf survey: a Lucky Imaging multiplicity survey of 761 young, nearby late-K and M dwarfs (Bergfors et al. 2010; Janson et al. 2012), supplemented by 286 mid- to late-M dwarfs in an extension of the survey (Janson et al. 2014a). From this survey we selected seven pairs in binary or triple systems for the spectroscopic observations presented in this paper, to better characterise the stars with respect to spectral types and youth. A large fraction of the AstraLux targets, including five of the binaries studied here, have recently been kinematically linked to young associations (Malo et al. 2013, 2014a; Rodriguez et al. 2013; Schlieder et al., in preparation). Observed properties such as spectral type, surface gravity and luminosity can be compared to evolutionary models of pre-main sequence stars to yield component mass and system age estimates for close binaries. Ongoing astrometric and radial velocity monitoring will provide a better understanding of these objects from well determined orbits and dynamical masses within a few years, which will then be compared to the results presented here. This paper is organised as follows: After a short introduction in Section 1, Section 2 describes the target selection, observations and data reduction procedure. In Section 3, the individual component spectral types and effective temperatures are determined and compared to previous estimates. We also measure the equivalent widths of gravity sensitive features and compare to old field stars and to stars in young associations. We discuss our results and additional youth indicators in Section 4, and compare the observed and derived MK vs. Teff of GJ 852 BC and J061610 to theoretical isochrones to infer individual masses and system ages. We end this report with a summary of our results and conclusions in Section 5.
Target selection and observations
Seven nearby M dwarf binaries or close pairs in hierarchical triple systems were selected from the target list of the AstraLux M dwarf survey as good candidates for follow-up near-infrared spectroscopy and characterisation. These are listed in Table 1. The selected targets had all been observed in at least two epochs and were confirmed as physically bound via common proper motion. They all had a projected separation of a < 8 AU and primaries with photometric spectral types M3.5 or later derived from AstraLux observations in SDSS i′- and z′-band. The stars in the AstraLux survey are suspected to be young, based on their coronal activity and low tangential velocity, and have spectroscopic distances of less than 52 pc from the Sun (Riaz, Gizis & Harvin 2006). The GJ 852 BC system has a USNO parallax distance of only 10 pc (Harrington & Dahn 1980).
The targets were observed in service mode with the adaptive optics fed Spectrograph for INtegral Field Observations in the Near Infrared (SINFONI, Bonnet et al. 2004) at the VLT Unit Telescope 4 (Yepun). SINFONI consists of the SPIFFI integral field spectrograph (Eisenhauer et al. 2003) together with the Multi-Application Curvature Adaptive Optics module (MACAO, Bonnet et al. 2003). We used the J (λ = 1.1 − 1.4 µm) and H + K (λ = 1.45 − 2.45 µm) gratings, with resolving powers of R ≈ 2000 and 1500 respectively, and the target itself as a natural guide star. Pre-slit optics were selected depending on binary separation to provide a spaxel scale of 12.5 mas × 25 mas, corresponding to a Field of View (FoV) of 0.8 arcsec², for all the targets with angular separations 0.″1 − 0.″2 (see Table 1). For the wider pair, GJ 852 BC, the 125 mas × 250 mas spaxel preoptics were used, corresponding to a FoV of 8 arcsec². Each target was observed in a dither sequence with small offsets between eight science exposures, and three sky dither points at the beginning, middle and end of science acquisition. For each observation a telluric standard star of spectral type B or early A was observed at similar airmass. All observations were performed at airmass close to 1.0. Table 1 lists the observational details: which components were observed (GJ 852 BC, J053018 AB and J024902 BC are part of triple systems with a wider companion), their angular separation as measured by Bergfors et al. (2010); Janson et al. (2012), the date of observation, integration times and number of integrations, measured Signal-to-Noise ratio (S/N, see Sect. 3.1) and the spectral type of the telluric standard.
Data reduction
J and H + K band datacubes were built from a set of raw data and associated calibration frames using the SINFONI data reduction pipeline version 2.2.5 (Abuter et al. 2006). Some of the raw frames were affected by stripes on slitlet #25 (the so-called odd/even effect) and by dark horizontal lines. These electronic artefacts were properly removed using custom scripts before providing the frames to the pipeline. The data reduction was checked by eye and a set of quality control parameters was compared to reference values.
Though the binaries are successfully resolved by MACAO, cross-contamination of their spectra is an issue that must be addressed. We used a custom spectral extraction tool to deblend the flux of the components, and consequently their spectra, slice by slice in each of the datacubes. The tool is a modified version of the algorithm presented in Bonnefoy et al. (2009). It first estimates the positions of the sources inside the FoV, which usually drift due to atmospheric refraction. We then applied to each slice a modified version of the Dumas et al. (2001) CLEAN algorithm to retrieve the individual flux of the sources. The algorithm requires the PSF at the time of the observation and for each cube wavelength. To provide that, we considered two different approaches. We first used a scaled version of the telluric standard star data cubes observed immediately after our targets (hereafter PSFstd). Alternatively, we built the PSF by duplicating the profile (hereafter PSFdup) of the brightest binary component (or of the component farthest from the FoV edges). We re-estimated the position of the sources, and re-built the PSF in case PSFdup was chosen, applying a second layer of CLEAN. For each input cube, an extracted cube for each binary component and a residual map that allows the efficiency of the extraction process to be monitored were produced. PSFdup provides a more accurate extraction. In contrast, the PSF shape built following PSFstd is less appropriate, but the resulting spectra have slightly higher S/N in some cases. Since our spectral analysis is based mainly on the continuum shape (see Section 3.1), we used the PSFdup reduced spectra for our analysis.
The science target and telluric standard spectra were extracted using the same aperture sizes for target and standard star so as to minimise differential flux losses. Aperture sizes of 225 mas were considered optimal for obtaining high S/N for the small plate scale observations, and a 900 mas aperture was used for GJ 852 BC. For each set of observations, the strong Br-series absorption lines in H + K and the Paβ line in J-band in our late-B to early-A telluric standard spectra were fitted with Voigt functions and subtracted from the standard star spectrum before it was divided by a blackbody function of corresponding temperature. The science spectrum was then divided by the resulting spectral response, containing only remaining instrumental and atmospheric features, to obtain the final spectrum.
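The correction step described in the last two sentences can be summarised in a short numerical sketch. The snippet below is a simplified illustration under stated assumptions, not the reduction code used here: it assumes the hydrogen lines have already been removed from the standard-star spectrum, and it uses a Planck function at an assumed temperature for the blackbody division.

```python
import numpy as np

H_PLANCK = 6.626e-34   # J s
C_LIGHT = 2.998e8      # m / s
K_BOLTZ = 1.381e-23    # J / K

def planck_lambda(wave_um, t_eff):
    """Blackbody spectral radiance B_lambda (arbitrary scaling) at wavelengths in microns."""
    lam = wave_um * 1e-6
    return (2 * H_PLANCK * C_LIGHT**2 / lam**5 /
            (np.exp(H_PLANCK * C_LIGHT / (lam * K_BOLTZ * t_eff)) - 1.0))

def telluric_correct(wave_um, science_flux, standard_flux, t_standard=10000.0):
    """Divide the standard (H lines already removed) by a blackbody to isolate the
    instrumental + atmospheric response, then divide the science spectrum by it."""
    response = standard_flux / planck_lambda(wave_um, t_standard)
    response /= np.nanmedian(response)          # keep the science flux scale unchanged
    return science_flux / response

# Hypothetical example with synthetic arrays covering the H+K grating range:
wave = np.linspace(1.45, 2.45, 2048)            # microns
science = np.ones_like(wave)
standard = planck_lambda(wave, 10000.0) * 0.9   # fake, featureless standard spectrum
corrected = telluric_correct(wave, science, standard)
```

The assumed standard-star temperature of 10 000 K is a placeholder; in practice it would be chosen to match the late-B to early-A telluric standard actually observed.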
Spectral type and effective temperature
The strong atomic and molecular features and temperature sensitive continuum regions make K-band the optimal choice for determining spectral types of cool stars in the near-infrared. As a first estimate, we performed a visual comparison of the K-band spectral shapes to template SpeX spectra obtained from the IRTF Spectral Library (Cushing, Rayner & Vacca 2005; Rayner, Cushing & Vacca 2009). We also compared the strengths of some of the most prominent atomic and molecular absorption features in H- and K-band suitable for spectroscopic classification, such as Na, Ca, Al, Mg, Si, Fe, and CO and FeH band heads (Cushing, Rayner & Vacca 2005), to the IRTF templates.
The S/N was measured in each wavelength band in small regions without prominent spectral features, as suggested by Covey et al. (2010), and is listed in Table 1. Note that these values are representative and vary over the spectral range. The low S/N of the J040805 and J061610 spectra makes it difficult to assign precise spectral types to these targets based on absorption feature strengths, and we estimated rough spectral types for these stars from the continuum shapes in the K-band.
Figures 1-4 show our J and H + K band spectra plotted together with the SpeX templates. For this and the following analysis, we used the spectra extracted using the PSFdup method since it best preserves the individual spectral shapes. The J-band spectra of J024902 and J040805 are of low S/N (≲20) and are omitted from Figure 1 as no atomic features could be confidently identified.
The initial estimate is complemented by a more quantitative spectral type analysis in which we calculated the H2O−K2 index defined by Rojas-Ayala et al. (2012) (Eq. 1). The index is a modified version of the Covey et al. (2010) H2O−K index which takes into account the Mg I and Ti I atomic features that affect the measurements in bright spectra. It measures the temperature dependent strength of K-band water absorption and is independent of gravity and metallicity for 3000 K ≲ Teff ≲ 3800 K. We measured the median flux in each wavelength region in Eq. 1 and estimated errors with a Monte Carlo simulation. Random Gaussian noise based on the S/N was added to a set of 10 000 median flux measurements, from which the mean H2O−K2 index was calculated with error bars corresponding to the standard deviation of the resulting distribution. The indices and corresponding spectral types are listed in Table 2, together with the spectral types derived from i′ − z′ by Bergfors et al. (2010); Janson et al. (2012) and the visually estimated K-band spectral types.
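As a rough illustration of this measurement, the Python sketch below computes a ratio-of-ratios water index from median fluxes in three K-band windows and estimates its uncertainty by the Monte Carlo procedure just described. The window boundaries used here are placeholders standing in for the regions defined in Eq. 1 of Rojas-Ayala et al. (2012), and the spectrum is synthetic.

```python
import numpy as np

# Placeholder K-band windows (microns); the exact limits are those of Eq. 1
# in Rojas-Ayala et al. (2012) and should be taken from that definition.
W1, W2, W3 = (2.070, 2.090), (2.235, 2.255), (2.360, 2.380)

def median_flux(wave, flux, window):
    lo, hi = window
    return np.median(flux[(wave >= lo) & (wave <= hi)])

def h2o_k2(wave, flux):
    f1, f2, f3 = (median_flux(wave, flux, w) for w in (W1, W2, W3))
    return (f1 / f2) / (f2 / f3)

def h2o_k2_with_error(wave, flux, snr, n_trials=10000, seed=0):
    """Monte Carlo error: perturb the spectrum with Gaussian noise sized by the S/N."""
    rng = np.random.default_rng(seed)
    sigma = flux / snr
    trials = [h2o_k2(wave, flux + rng.normal(0.0, sigma)) for _ in range(n_trials)]
    return np.mean(trials), np.std(trials)

# Synthetic demonstration spectrum:
wave = np.linspace(2.0, 2.4, 1500)
flux = np.ones_like(wave)
print(h2o_k2_with_error(wave, flux, snr=50, n_trials=2000))
```

The returned mean and standard deviation play the roles of the index value and its error bar quoted in Table 2.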
We derive the effective temperatures using the spectral type -effective temperature relations for 600 Myr old M dwarfs from Kraus & Hillenbrand (2007). The results are listed in Table 2 and assume errors in T eff directly corresponding to the estimated errors of the adopted spectral type.
Comparison of near-infrared spectral types to AstraLux optical photometry

In order to derive approximate spectral types for the binary or multiple systems discovered in the AstraLux M dwarf survey and exclude background contaminants, each target was observed in SDSS i′- and z′-band and individual spectral types were derived from i′ − z′ colours and the integrated JHK spectral types, with estimated errors of ±1 spectral subtype (see Daemgen et al. 2009; Bergfors et al. 2010). Obtaining near-infrared spectral types allows us to check the photometric spectral types of Bergfors et al. (2010); Janson et al. (2012, 2014a), and identify potential biases in the method.

Table 2. Photometric and near-infrared spectral types, and inferred effective temperatures.

We find that, with the exception of J021330 B, all near-infrared spectral types agree with the photometrically derived spectral types within the photometric ±1 subclass error, and in most cases to within 0.5 subclasses (see Table 2). This confirms that the photometric method used in the AstraLux large M dwarf survey in general provides accurate spectral types.
FEROS observations and data reduction
For some of our targets spatially unresolved optical spectroscopy exists, obtained with the Fiberfed Extended Range Optical Spectrograph (FEROS, Kaufer et al. 1999) mounted to the ESO-MPG 2.2 m telescope at La Silla Observatory. J053018 and J061345 were observed within the ESO programmes 086.A-9014 and 089.A-9013 (Bergfors et al., in preparation), and J024902 within programme 088.A-9032 (Schlieder et al., in preparation). Additional FEROS spectra of J021330 were retrieved from the ESO archive (programme ID 091.C-0216, PI: Rodriguez). FEROS provides spectra covering λ = 3500 − 9200 Å across 39 orders at R ≈ 48000, using two 2.″0 optical fibres separated by 2.′9. The targets were observed in 'object+sky' mode with one fibre on the star and the other on sky, and at airmass close to 1.0. Observational details are provided in Table 3. The data were reduced using the FEROS Data Reduction System (DRS) within the ESO-MIDAS package. The package follows standard spectroscopic reduction procedures which include flat-fielding, background subtraction, bad pixel correction, optimal order extraction, and wavelength calibration using ThAr lamp lines. The software also computes the barycentric velocity correction and re-bins and merges the orders to produce a continuous spectrum. Calibrations used during the reduction were acquired in daytime.
For J024902, the RV standard GJ 1094 was observed on the previous night as an internal calibration check.
Optical spectral types
We determined integrated optical spectral types for these systems by comparing TiO and CaH band head strengths to M dwarf spectral templates from the Sloan Digital Sky Survey (Bochanski et al. 2007). Section 4.1.2 provides details on additional analysis performed for J024902.
An earlier optical spectral type of ∼ M2 is found for the M4+M4 system J024902 compared to the photometric resolved optical spectral types, due to the inclusion of the early M primary in the FEROS fibre aperture (J024902 A, ρA,BC ≈ 0.5 ′′ , see also Sect. 4.1.2). The optical spectral types are otherwise fully consistent with the AstraLux photometric spectral types. Our optical and near-infrared spectral types agree within errors, which is to be expected on average.
Surface gravity and chromospheric activity
Age is one of the most difficult stellar parameters to determine. A combination of several properties is usually required to determine youth, since many signs may suggest, however not establish by themselves, that a star is young. For M dwarfs, such spectral features can include chromospheric and coronal activity (emission features, in particular strong Hα emission, X-ray emission, flares), signs of accretion discs (e.g. photometric excess, forbidden O I emission lines), and Li absorption in the optical spectrum at 6708 Å. Other signs of youth include low surface gravity, low tangential velocity, and kinematic properties consistent with known Young Moving Groups (YMGs) or associations.
Low surface gravity can be measured in medium resolution spectra such as ours for spectral types M5 or later using gravity sensitive alkali lines in the near-infrared (see e.g. Gorlova et al. 2003; McGovern et al. 2004; Kirkpatrick et al. 2006, 2007). We measured the equivalent widths (EW) of the gravity sensitive Na I doublet at 1.138 µm and the K I doublets at 1.169, 1.177 µm and 1.243, 1.253 µm with associated errors as described in Bonnefoy et al. (2014), with the pseudocontinuum wavelength regions reported therein. For a given spectral subtype, low surface gravity, hence youth, can be seen in the reduced strength of the alkali lines compared to main sequence stars. Table 4 lists the measured EWs for all stars in our sample, with the exception of J040805 and J024902 for which the quality of the spectra was not sufficient for precise measurements. These are also plotted in Fig. 5, together with EWs for field dwarfs from the IRTF SpeX library and <10 Myr old stars from Manara et al. (2013) for comparison. We measured the EWs for all stars in our sample for consistency, even though the only stars in our target sample that are classified as ∼M5 or later are the stars in the GJ 852 BC system and J061610 B. For the latter, the measured Na I at 1.138 µm is slightly weaker than for the field main sequence template, and the K I at 1.169 and 1.253 µm overlap with the young template measurement, suggesting intermediate surface gravity. We see no indication of particularly weak alkali lines in the J-band spectra for the other targets, and hence no signs of low surface gravity and youth.
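A minimal sketch of an equivalent-width measurement of this kind follows. It assumes a flat pseudocontinuum estimated from the median flux in two side windows and integrates 1 − F/Fc across the line; the wavelength windows and the synthetic line are hypothetical illustrations, not the regions adopted from Bonnefoy et al. (2014).

```python
import numpy as np

def equivalent_width(wave, flux, line_window, cont_windows):
    """EW (in the units of wave) of an absorption line against a flat pseudocontinuum."""
    in_cont = np.zeros_like(wave, dtype=bool)
    for lo, hi in cont_windows:
        in_cont |= (wave >= lo) & (wave <= hi)
    f_cont = np.median(flux[in_cont])                 # flat pseudocontinuum level

    lo, hi = line_window
    in_line = (wave >= lo) & (wave <= hi)
    # EW = integral of (1 - F / Fc) d(lambda) over the line window
    return np.trapz(1.0 - flux[in_line] / f_cont, wave[in_line])

# Hypothetical windows (microns) around the 1.138 um Na I doublet, with a fake line:
wave = np.linspace(1.10, 1.18, 4000)
flux = 1.0 - 0.3 * np.exp(-0.5 * ((wave - 1.138) / 0.0015) ** 2)
ew_um = equivalent_width(wave, flux,
                         line_window=(1.134, 1.142),
                         cont_windows=[(1.120, 1.130), (1.146, 1.156)])
print(f"EW = {ew_um * 1e4:.2f} Angstrom")
```

Error bars would in practice be propagated from the spectrum noise, for example with the same Monte Carlo approach used for the H2O−K2 index above.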
No emission lines indicating youth, such as Brγ or Paβ, can be confidently detected in our near-infrared spectra. Weak 'bumps' can be seen in the J-band spectra of J061345 and J061610 at roughly the position of Paβ, however these features likely arise from incomplete H-absorption line subtraction in the telluric spectra during data reduction. Similar 'bumps' are seen around the Brγ absorption feature in K-band for GJ 852 BC.
Candidate members of YMGs
Based on the selection of targets from their strong coronal emission and low tangential velocity, Riaz, Gizis & Harvin (2006) estimated that their catalog, from which the AstraLux target sample was obtained, consisted of mainly stars younger than ≈600 Myr, and using the age-velocity relation of Holmberg, Nordström & Andersen (2009) we derived an upper age for our AstraLux sample of 1 Gyr in Bergfors et al. (2010). Since then, the kinematics of several of our original large survey targets have been investigated for kinematic membership in YMGs and some are candidate members of e.g. the AB Dor Moving Group (MG), the β Pic MG, and the Argus and Columba associations (Malo et al. 2013, 2014a; Rodriguez et al. 2013; Schlieder et al., in preparation), including five of our SINFONI targets:

J021330: A convergent point analysis by Rodriguez et al. (2013) yielded a high probability (87%) of β Pic MG membership, however, their analysis using the BANYAN statistical software (Malo et al. 2013) placed this target as a field system. The updated BANYAN II web tool (Malo et al. 2013; Gagné et al. 2014), which assumes a refined prior based on the expected populations, places this system as part of the field population. FEROS spectra obtained from the ESO archive show Balmer line and Ca II H & K emission, but no Li absorption.
J024902: See Sect. 4.1.2.

J053018: This triple system is flagged by Malo et al. (2014a) as a candidate member of the AB Dor MG, with a probability of 97.7% when including radial velocity measurements in the analysis. Using the BANYAN II tool, we find a 73.9% probability of kinematic membership using the refined prior. Additional integrated FEROS spectra show that the system is chromospherically active with strong Balmer line and Ca II H & K emission, but no visible Li absorption (Bergfors et al., in preparation).
J061345: The binary has 99.99% probability of belonging to the Argus association when radial velocity is included (Malo et al. 2014a), decreasing to 76.1% with the BANYAN II tool. No Li-absorption is visible in integrated FEROS spectra (Bergfors et al., in preparation).
J061610: Using the BANYAN II tool with non-uniform priors, we find a 4% probability that the system belongs to the β Pic MG (see also Janson et al. 2014b), while if using uniform prior the probability increases to 99.6%. No radial velocity measurements were included in either analysis. The convergent point analysis tool of Rodriguez et al. (2013) finds a probability of 84% of β Pic MG membership.
Our BANYAN II analysis of the GJ 852 and J040805 systems places them as field objects.
FEROS observations of the J024902 system
The CASTOFFS survey to identify young, low-mass stars near the Sun included 2MASS J02490228-1029220 ABC as a candidate of the β Pic MG (Schlieder et al. 2012, Schlieder et al. in preparation). The star was selected as a candidate on the basis of consistent position and proper motion, and strong X-ray and UV emission.
Radial and Rotational Velocity: To measure the radial velocity (RV) and rotational velocity of J024902 ABC, we used IDL software to perform a cross-correlation (CC) analysis (Bender & Simon 2008) with a suite of K5 − M5 RV templates taken from Prato et al. (2002). These template stars were observed during FEROS runs in 2011 December and 2012 October and were reduced following the same methods as the science target. We measured the RV of J024902 ABC across portions of four echelle orders chosen to be free of strong telluric absorption. The average RV across the four orders was 17.1 ± 1.1 km s⁻¹, where the dominant source of error is a 1 km s⁻¹ systematic introduced by the use of empirical RV templates. The CC function in each case was strong and single peaked, showing no indication of a tight, spectroscopic binary. The strongest peak was found when using the M2.5 star GJ 752 A as a template. We also measured a projected rotational velocity of v sin i = 11 ± 3 km s⁻¹ by cross-correlating our spectrum with rotationally broadened templates.

Spectral type: Our RV analysis indicated that J024902 ABC had a spectral type of ∼M2 from the best match RV template. As a further check, we smoothed our spectrum to R ≈ 1000 and performed a visual comparison to the SDSS M dwarf spectral type templates from The Hammer IDL spectral typing suite. Although our spectrum is not flux calibrated, our visual comparison of the strength of atomic and molecular absorption features between 5500 and 7000 Å yields a spectral type of M2 ± 1. We presume this to be the spectral type of the primary, J024902 A, which contributes more flux than the B and C components.
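The cross-correlation idea can be illustrated with a simple toy sketch: resample template and target onto a common log-wavelength grid, cross-correlate, and convert the peak lag into a velocity. This is a schematic stand-in for, not a reproduction of, the IDL pipeline used here, and the spectra below are synthetic.

```python
import numpy as np

C_KMS = 299792.458

def rv_from_cross_correlation(wave_t, flux_t, wave_s, flux_s, n=4096):
    """Estimate the velocity shift of a science spectrum relative to a template."""
    # Common log-lambda grid: a constant step corresponds to a constant velocity step.
    lo = max(wave_t.min(), wave_s.min())
    hi = min(wave_t.max(), wave_s.max())
    loglam = np.linspace(np.log(lo), np.log(hi), n)
    dv = (loglam[1] - loglam[0]) * C_KMS            # km/s per pixel

    t = np.interp(np.exp(loglam), wave_t, flux_t)
    s = np.interp(np.exp(loglam), wave_s, flux_s)
    t -= t.mean()
    s -= s.mean()

    cc = np.correlate(s, t, mode="full")
    lag = np.argmax(cc) - (n - 1)                   # pixels; positive = redshifted
    return lag * dv

# Toy example: the "science" spectrum is the template redshifted by ~17 km/s.
wave = np.linspace(6000.0, 6500.0, 8000)            # Angstrom
template = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 6250.0) / 0.5) ** 2)
shifted = np.interp(wave, wave * (1 + 17.0 / C_KMS), template)
print(rv_from_cross_correlation(wave, template, wave, shifted))
```

A real measurement would fit the correlation peak for sub-pixel precision, use observed RV standards as templates, and broaden the templates rotationally to estimate v sin i, as described above.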
Age indicators and kinematics: The FEROS spectrum of J024902 ABC is rich with emission from the hydrogen Balmer series (Hα, Hβ, Hγ, Hδ, Hǫ) and other signatures of strong magnetic activity. Additionally, the spectrum exhibits strong Li absorption at 6707.8 Å with EW = 250 ± 30 mÅ (see Fig. 6). Taking into account the relative fluxes of an M2 and two M4 components at 6707.8 Å, the Li strength as a function of temperature, and the Li depletion rate for M2 and M4 dwarfs at ages ≳20 Myr (da Silva et al. 2009), we conclude that the only possible contribution to the Li absorption feature must come from the M2 component. In addition, the Li depletion boundary is in between spectral types M4 and M5 in the β Pic MG (Binks & Jeffries 2014; Malo et al. 2014b), and contribution from the lower mass companions is therefore not possible at β Pic or older ages. A comparison of our measurement and Li absorption strengths for M1−M2 type stars of different ages is shown in Figure 7. The EW of Li in J024902 A is weaker than in M1−M2 type members of the ≈10 Myr TW Hydrae association, but stronger than members of the ≈40 Myr old Tucana-Horologium association (Kraus et al. 2014). The Li strength is consistent with that of early-M stars in the β Pic MG, suggesting a similar age of ≈20 Myr (Mentuch et al. 2008; Mamajek & Bell 2014).
The position and proper motion of the star combined with our RV measurement reveals that the UVW Galactic velocities of the system are consistent with the β Pic group distribution for distances between 60 − 80 pc. However, at these distances J024902 ABC is discrepant with the β Pic group XYZ Galactic position distribution by ≈ 20 pc in both X and Z. This is likely why the Bayesian probability estimator BANYAN II provides negligible probability of membership in β Pic or any other MG using the available kinematics. Although the combination of consistent partial kinematics and strong evidence for youth make a compelling case for J024902 ABC to be a member of the β Pic MG, final group membership assignment will require a parallax measurement and a better understanding of the full XYZ distribution of the β Pic group. Nevertheless, the unambiguous detection of Li in the optical spectrum of the system indicates an age < 40 Myr and the system warrants further study.
Hertzsprung-Russell diagram
Comparison with evolutionary models can provide additional constraints on the system ages and mass estimates to be compared to dynamical masses in the future. We compared the positions of the two systems containing components of spectral type M5 or later, J061610 and GJ 852 BC, in an MK vs. Teff diagram to the Baraffe et al. (2015, hereafter BHAC15) isochrones.
The GJ 852 BC system consists of an M4.5 and an M7.5 star and has a trigonometric parallax of 96 ± 6 mas (Harrington & Dahn 1980). This system is of particular interest for orbital monitoring and age determination, since both young and late-type M dwarf binaries with dynamical masses and well characterised spectra are rare. We measured the K-band flux ratio between the BC components and converted to individual apparent magnitudes using integrated 2MASS photometry (Cutri et al. 2003). The individual component absolute K-band magnitudes and derived effective temperatures are plotted together with BHAC15 isochrones and iso-mass contours in Fig. 8. Assuming co-evality, the system is likely older than ≈250 Myr, consistent with the findings from the analysis of gravity sensitive features in the previous section which showed no sign of low surface gravity. For an age ≲300 Myr, GJ 852 C has a model mass below the hydrogen burning limit (0.072 M⊙).
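The bookkeeping behind placing the components in the diagram, splitting the integrated magnitude into component magnitudes using the measured flux ratio and converting to absolute magnitudes with the parallax, can be written in a few lines. The sketch below reuses the parallax quoted above; the integrated K magnitude and flux ratio are placeholder values for illustration only.

```python
import math

def component_magnitudes(m_integrated, flux_ratio_ba):
    """Split an unresolved magnitude into two components given f_B / f_A (<= 1)."""
    m_a = m_integrated + 2.5 * math.log10(1.0 + flux_ratio_ba)
    m_b = m_a - 2.5 * math.log10(flux_ratio_ba)
    return m_a, m_b

def absolute_magnitude(m_apparent, parallax_mas):
    distance_pc = 1000.0 / parallax_mas
    return m_apparent - 5.0 * math.log10(distance_pc / 10.0)

# GJ 852 BC: parallax 96 +/- 6 mas (Harrington & Dahn 1980).
# The integrated K magnitude and flux ratio below are made-up placeholders.
m_k_integrated = 7.0
f_c_over_b = 0.25
m_b, m_c = component_magnitudes(m_k_integrated, f_c_over_b)
print(absolute_magnitude(m_b, 96.0), absolute_magnitude(m_c, 96.0))
```

The resulting MK values, paired with the effective temperatures derived in Section 3, are what is compared against the BHAC15 isochrones and iso-mass contours.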
Because of its intermediate surface gravity result in the EW analysis and its debated β Pic MG membership (Malo et al. 2013; Janson et al. 2014b), we also put the J061610 system on an H-R diagram, despite its lack of a trigonometric parallax measurement. We assumed the spectroscopic (statistical) distance of 47 ± 7 pc derived by Malo et al. (2013) and plot the derived MK vs. Teff and BHAC15 isochrones in Figure 9. Assuming co-evality and considering the more stringent constraints set by the primary star parameters, we find a 1σ system age of ≲50 Myr. At this age, both components in the system have masses above the hydrogen burning limit. Various age estimates for the β Pic MG in the literature based on different methods such as the lithium depletion boundary, kinematics and isochronal fitting constrain the age to the range 10−40 Myr (see e.g. Mamajek & Bell 2014, for a summary). A system age in this interval cannot be rejected from this analysis. We note that for young ages our observed spectral types correspond to higher Teff than we assumed using the Kraus & Hillenbrand (2007) empirical relations for Praesepe (≈600 Myr), shifting the stars to the left in the figure and implying a higher age.
SUMMARY AND CONCLUSIONS
Our near-infrared spectral analysis of seven close binaries from the AstraLux M dwarf survey provides spectral types for all stars and constraints on surface gravity and age for two systems: GJ 852 BC and J061610. The spectral types derived from visual comparison of spectral energy distributions and absorption features, and from the Rojas-Ayala et al. (2012) H2O−K2 index, were found to be consistent with the estimates determined from i′ − z′ colours in the AstraLux M dwarf survey, thereby validating the photometric method. Additional optical spectra of J024902 provide radial and rotational velocities, the primary star spectral type, and age constraints.
The ages of these systems are of particular value for future dynamical mass studies. Upper limits of 1 Gyr were derived in Bergfors et al. (2010), and lower limits were here explored for the two systems containing components with spectral type M5 or later from EW measurements of gravity sensitive alkali lines and comparison of observed parameters with theoretical isochrones.
From comparison with the BHAC15 evolutionary models we found a 1σ lower age limit of 250 Myr for GJ 852. This intermediate age lower limit is consistent with the analysis of gravity sensitive J-band alkali lines and the shape of the H + K spectra, neither of which showed any sign of low gravity and hence ongoing contraction. For an age ≲300 Myr we find that the model-predicted mass of the C component is below the hydrogen burning limit.
The J061610 system lacks a trigonometric parallax measurement; however, assuming the spectroscopic distance derived by Malo et al. (2013), we found a 1σ system age of ≲50 Myr. An age consistent with the β Pic MG, in which the system is a kinematic member candidate, cannot be ruled out from this analysis. The EW analysis suggests intermediate surface gravity. Future measurements (e.g. the Li EW, radial velocity and trigonometric parallax) would provide firmer constraints on age and model-predicted masses.
An analysis of optical spectra and kinematics of the J024902 system suggests an age of ≲40 Myr, based on the strong Li absorption. Further data, such as a parallax, are needed to establish membership in the β Pic MG.
The ultimate goal of this binary study is to better constrain evolutionary models, which have been shown to systematically underpredict masses for M ≲ 0.5 M⊙ objects (Hillenbrand & White 2004). The AstraLux Large M Dwarf Survey, from which these binaries were selected for spectroscopic characterisation, discovered ≈200 nearby, low-mass binary systems, many of which belong to YMGs. Ongoing orbital monitoring together with precise parallactic distances obtained with Gaia (Perryman et al. 2001) will within a few years provide dynamical masses for a large number of binaries, providing stringent constraints on evolutionary models for young, low-mass stars.
Overexpression of pressure-responsive miRNA-5703 inhibits pressure-induced growth and metastasis of liver cancer
A vast majority of liver cancers coexist with cirrhosis and/or portal hypertension. A high-pressure tumour microenvironment may lead to malignant progression of liver cancer. Through quantitative reverse transcription-polymerase chain reaction, we found that miRNA-5703 was expressed at low levels in HepG2 and Huh-7 cells and in pressure-treated MHCC97H-implanted mouse cancer tissues, while its potential target gene, the sarcoma gene (SRC), was highly expressed. The expression of miRNA-5703 was higher in liver cancer tissues from Barcelona Clinic Liver Cancer (BCLC) stage A1 patients than in those from BCLC stage A2-D patients, whereas SRC showed the opposite expression pattern. Bioinformatics analysis, luciferase reporter assay, and western blotting were performed to verify the relationship between miRNA-5703 and its potential target SRC. Using intravital imaging and immunohistochemistry, we demonstrated that pressure promotes tumour growth in a subcutaneous tumourigenesis model in nude mice, and that overexpression of miRNA-5703 significantly downregulated Ki67 and upregulated NM23 in mouse tumour tissues, implying blockade of tumour growth and metastasis. The activation of proliferation, migration, and invasion of HepG2 and Huh-7 cells by pressure, and their inhibition by overexpression of miRNA-5703, were observed by cell counting kit-8 assay, flow cycle assay, transwell assay, and wound healing assay. After intervention with pressure, inhibitors, and lentivirus, SRC, focal adhesion kinase (FAK), phosphatidylinositol 3-kinase (PI3K), serum/glucocorticoid regulated kinase-3 (SGK3), phosphoinositide dependent protein kinase 1 (PDK1), and paxillin were upregulated, and forkhead box O1 (FOXO1) and cyclin dependent kinase inhibitor 1B (P27Kip1) were downregulated in pressure-loaded hepatoma cells; these changes could be reversed by overexpression of miRNA-5703 or SRC knockdown. In conclusion, upregulation of miRNA-5703 inhibited pressure-induced growth and metastasis by suppressing the SRC-FAK-FOXO1 axis and the SRC-paxillin axis. This novel perspective may aid the development of mechano-inspired anticancer drugs for liver cancer.
Introduction
Liver cancer is currently the third leading cause of cancer-related deaths worldwide [1]. The mortality rate of liver cancer is 8.2% in both sexes combined [2]. In recent years, immunosuppressants have attracted much attention in the treatment of liver cancer, but compared to other cancers, the response rate of liver cancer is much lower [3]. The tumour microenvironment (TME) is a key interfering factor in immunotherapy for liver cancer [4], and the interdependence of biological, chemical, and physical cues affects the TME [5]. However, the role of physical cues in liver cancer has not been definitively elucidated.
Portal hypertension (PHTN) is a recognised risk factor [6] and an independent predictor of liver cancer [7,8]. The incidence of liver cancer and mortality due to liver cancer in patients with PHTN is greater than in patients without PHTN [6]. In a median follow-up of 58 months, 12.2% of patients with cirrhosis and PHTN (without varices) developed liver cancer, and receiver operating characteristic curves identified that those who had a hepatic venous pressure gradient above 10 mmHg had a 6-fold increase in liver cancer incidence [8]. PHTN has also been proven to be a notable risk factor for late recurrence after hepatectomy (hazard ratio: 2.424; 95% confidence interval: 1.644-3.574; P < 0.001) [9]. Recently, an increasing number of investigations using non-invasive examinations, such as computed tomography [10] and spleen stiffness measurement [7], have helped to assess the degree of PHTN and predict the occurrence, development, and recurrence of liver cancer. It has been hypothesised that modifications to the TME associated with PHTN might increase the growth potential of liver cancer and maintain a certain level of resistance to chemoembolisation or promote processes of evasion [6]. PHTN increases the biological pressure in hepatic sinuses [11][12][13], leading to higher pressure on cancer cells [14], and destroys the mechanical balance of the TME. The mechanism by which pressure regulates the proliferation and metastasis of hepatoma cells remains poorly understood.
Pressure has been shown to be an important factor that influences the growth or aggression of solid tumours [15,16]. Zeng et al. suggested that pressure activation of malignant colon cells accelerated tumour development [17]. Basson et al. applied pressures between 0 and 80 mmHg to breast cancer cell lines (MCF-7), prostate cancer cell lines (MLL, PC3), and colon cancer cell lines (SW620, Caco-2, CT-26) for 24 h, and observed that the proliferation of all these pressurised cell lines was stimulated [18]. Goetz et al. found that mechanical transduction, which is dictated by the forces generated by intercellular adhesion, cell contraction, and the TME, is a key determinant of tumour progression [19]. Fernández-Sánchez et al. identified that the mechanical pressure exerted by tumour growth onto non-tumourous adjacent epithelium increased the crypt enlargement accompanying the formation of early tumourous aberrant crypt foci, which suggests that mechanical activation of signalling pathways may function in tumour heterogeneity [20]. However, there have been no reports concerning the function of mechanical pressure in liver cancer.
Previous studies on mechanical-responsive miRNAs have mainly focused on cardiovascular and orthopaedic diseases. Mechanically sensitive miR-181b-5p expression was suppressed by shear stress, which elevated levels of its target gene STAT, leading to the onset of atherosclerosis [21]. miRNA-181a inhibits inflammatory responses in mice with intervertebral disc degeneration by inhibiting the ERK pathway [22]. Abnormal compressive forces could regulate the expression of miRNA-221, -222, -21, and -27 in articular cartilage, and the identification of these forces has the potential to improve understanding of how they impact tissue homeostasis [23]. Determining whether pressure-responsive miRNAs are involved in the development of liver cancer is an innovative project worthy of exploration.
Autophosphorylation of SRC (Y418) is an early response to mechanical transduction and leads to the activation of FAK (Y397). This finding suggests that SRC may be a messenger that initiates and spreads the growth signal of cells under stress stimulation, and FAK is one of its downstream targets [24]. PI3K catalyzes the phosphorylation of phosphatidylinositol-4,5-diphosphate (PIP2) to phosphatidylinositol-3,4,5-triphosphate (PIP3). PIP3 binds to the signal protein PDK1, which contains a pleckstrin homology (PH) domain. PDK1 promotes the phosphorylation of SGK3 T-loop residues and activates SGK3 [25]. As a gene that inhibits proliferation, FOXO1 is inactivated by phosphorylation, which promotes cell proliferation. FOXO1 has a positive regulatory effect on P27Kip1. P27Kip1 is a cell cycle regulator encoded by CDKN1B [26]; its function has long been related to inhibiting cell-cycle progression. Under integrin binding or growth factor stimulation, paxillin is phosphorylated by FAK and SRC to create a binding site for the adaptor protein Crk [27]. Paxillin promotes cell adhesion and movement through hyperphosphorylation of casein/serine phosphorylation sites [28]. Depending on the cell environment, these activities may lead to cancer and stimulate cancer progression and metastasis.
In an earlier report [14], we concluded that exerting a pressure of 15 mmHg on liver cancer cell lines (HepG2, Huh-7, and MHCC97H) for 24 h could promote the proliferation, migration, and invasion of the cells. Under this pressurisation, five pressure-responsive miRNAs and 10,150 mRNAs were screened by miRNA and mRNA microarray, respectively. In this study, we investigated the pathophysiological role and possible mechanism of pressure-responsive miRNA-5703 in the growth and metastasis of liver cancer, using in vivo and in vitro experiments, and observed that overexpression of miRNA-5703 may suppress the mechanically dependent development of liver cancer, which provides a theoretical basis for the combined application of miRNA-5703 overexpression with immunosuppressants.
miRNA and mRNA expression profiling
The differentially expressed miRNAs and mRNAs screened from the pressurised and unpressurised HepG2 cells are available in the Gene Expression Omnibus (GEO) repository (www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE119881 and www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE120194). The Kyoto Encyclopedia of Genes and Genomes (KEGG) database (www.genome.jp/kegg/annotation) was used for genome/metagenome annotation, and a bubble chart was produced using KOBAS 3.0 (Peking University, Beijing, China).
Cell culture and reagents
The HepG2 cell line was purchased from the American Type Culture Collection (HB-8065; ATCC, Manassas, VA, USA). The Huh-7 cell line was purchased from the Japanese Collection of Research Bioresources Cell Bank (0403; JCRB, Manassas, VA, USA). The MHCC97H and HL-7702 cell lines were purchased from the Shanghai Cell Bank of the Chinese Academy of Sciences (HB-8065; Shanghai, China). No infection was detected in any of the cell lines by mycoplasma testing. Short tandem repeat analysis was used to authenticate the cell lines. All cells were cultured at 37 °C under 5% CO2 in Dulbecco's Modified Eagle's Medium (DMEM; HyClone, Logan, UT, USA) containing 10% foetal bovine serum (Gibco Life Technologies, Carlsbad, CA, USA), 1 mM sodium pyruvate, 2 mM glutamine, 100 U/mL penicillin G, 100 mg/mL streptomycin, and 10 mM HEPES (Sigma, San Francisco, CA, US). Cells were treated with 0.2% trypsin and 0.02% EDTA (Thermo Fisher Scientific, Waltham, MA, USA) when collected for experiments.
Pressure loading
The 2-dimensional (2D) pressure loading system was designed and manufactured by our research group, and the protocol can be found in our previous papers [14,29,30]. The Flexcell-5000 Compression System (Flexcell International Corp, Burlington, NC, US) was used to exert pressure on the 3D cultured cells. The methods of cell 3D culture and pressure loading have been described in our previous paper [14].
Cell Counting Kit-8 assay
A Cell Counting Kit-8 (CCK-8) assay was conducted using a kit (Dojindo Laboratories, Kumamoto, Japan) to detect cell proliferation, and the protocol used has been detailed in our previous paper [14].
Cell cycle analysis by flow cytometry
A cell cycle assay kit (MULTI SCIENCES, Hangzhou, Zhejiang, China) was used to detect the cell cycle, and the details of the methods have been reported in our previous paper [14].
Transwell assay
Transwell® Permeable Supports consisting of Snapwell™ and Netwell™ inserts (Corning, Corelle, NY, US) were used to perform the cell invasion assay, and the protocol used was the same as that used in our previous paper [14].
Wound healing assay
Here, 5 × 10⁵ pre-treated cells were seeded into each well of a 24-well plate (Corning, Corelle, New York, USA), and the details of the methods have been reported in our previous paper [14].
Immunofluorescence staining
A cover slide was placed in each well of a 6-well plate (Corning, Corelle, New York, USA) and 7 × 10⁴ cells were added per well for overnight cultivation. Subsequently, after fixing with 4% paraformaldehyde (Sinopharm Chemical Reagent Co. Ltd., Shanghai, China) for 30 min, the cells were blocked with 3% bovine serum albumin (BSA) for 30 min after membrane permeabilisation. They were then incubated with a 1:300 dilution of primary antibodies overnight at 4 °C. Then, a 1:300 dilution of secondary antibodies was added and incubated for 1 h. DAPI (G1012, Servicebio, Wuhan, China) was used to stain nuclei. The stained cells were observed and photographed using a fluorescence microscope (DMi8, Leica, Wetzlar, Germany) after being sealed with a sealing reagent that inhibited fluorescence quenching.
Cytoskeleton staining
For cytoskeleton staining, a cover slide was placed in each well of a 6-well plate and 7 × 10 4 cells were added per well for overnight cultivation. The next day, the culture medium was discarded and the cells on the climbing slides were washed thrice with phosphate-buffered saline (PBS; HyClone, Logan, UT, US) for 5 min each. Then, 1 mL 4% paraformaldehyde (Sinopharm Chemical Reagent Co. Ltd., Shanghai, China) was added to the pores to fix the cells for 30 min. The cells were then washed thrice with PBS for 5 min each. The cells were stained with 80 mL 1:300 phalloidin (Servicebio, Wuhan, China) at room temperature (25 ℃) for 2 h. The cells were then washed with PBS thrice for 5 min each. Finally, 500 μL DAPI was used to stain the nuclei for 15 min. The staining solution was discarded and the cells were washed thrice with PBS for 5 min each. The slides were observed and photographed using a fluorescence microscope (DMi8, Leica, Wetzlar, Germany) after being sealed with a sealing reagent to inhibit fluorescence quenching.
Dual-Luciferase reporter gene assay
HEK293T cells were co-transfected with pMIR-Report-SRC 3′-UTR (wild-type or mutant), miR-5703 mimic, or the control. The cells were harvested 24 h later, and a luciferase assay was performed using the Dual-Luciferase Reporter Assay System (E1910, Promega, WI, US) according to the manufacturer's instructions. Luciferase activity was expressed as a percentage of that of the control. The universal sequencing forward and reverse primers of the H11164 and H11165 vectors (OBIO, Shanghai, China) were Luc-C-F (5′-GAGGAGTTGTGTTTGTGGAC-3′) and M13F (5′-TGTAAAACGACGGCCAGT-3′). The binding site was a 7mer-m8, which was predicted using the TargetScan website (http://www.targetscan.org/vert_71/).
Western blotting
Cells were lysed in RIPA lysis buffer (Servicebio, Wuhan, China). The lysates were centrifuged at 16099.2 x g for 10 min, and the protein concentrations were measured by the standard BCA assay (Pierce TM BCA Protein Assay Kit, New York, USA) according to the manufacturer's instructions. Equal amounts of protein were separated using a PAGE Gel Fast Preparation Kit (Beyotime Biotechnology, Shanghai, China) according to the manufacturer's instructions. The membranes were then blocked in PBST containing 3% BSA for 1 h at room temperature. Then, they were probed with primary antibody at 4 °C overnight and incubated with the secondary antibody for 1 h at room temperature. The protein bands were detected using the ECL western blotting Detection Reagent (Pierce, New York, USA). The film was exposed at various times depending on the antibody.
RNA extraction and quantitative reverse transcription PCR
mRNA and miRNA levels were quantified by reverse transcription quantitative PCR (RT-qPCR) as previously described [14]. The primers (wcgene Biotech, Shanghai, China) used in this study are listed in Tables 1 and 2. Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and U6 were used as controls for mRNA and miRNA detection, respectively. Relative gene expression was calculated using the 2^−ΔΔCq method [31] (sketched below) after normalisation to the expression of GAPDH or U6 small nuclear RNA, and the results were statistically analysed as mean ± standard deviation (SD).

Table 1. Primer sequences of miRNA-5703 used for reverse transcription-quantitative polymerase chain reaction.
Gene name | Forward primer | Reverse primer
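A minimal sketch of the 2^−ΔΔCq calculation referenced above follows. GAPDH is used as the reference gene, as in the text, but the Cq values and sample labels are made-up placeholders.

```python
import statistics

def relative_expression(cq_target, cq_reference, cq_target_ctrl, cq_reference_ctrl):
    """Fold change of a target gene versus the control condition by the 2^-ddCq method."""
    d_cq_sample = cq_target - cq_reference            # normalise to the reference gene
    d_cq_control = cq_target_ctrl - cq_reference_ctrl
    dd_cq = d_cq_sample - d_cq_control
    return 2.0 ** (-dd_cq)

# Hypothetical triplicate Cq values for SRC in pressurised vs. control cells:
src_pressure = [22.1, 22.3, 22.0]
gapdh_pressure = [17.5, 17.6, 17.4]
src_control = [24.8, 24.9, 25.0]
gapdh_control = [17.5, 17.4, 17.6]

fold = relative_expression(statistics.mean(src_pressure), statistics.mean(gapdh_pressure),
                           statistics.mean(src_control), statistics.mean(gapdh_control))
print(f"SRC fold change under pressure: {fold:.2f}")
```

The same calculation applies to miRNA-5703 with U6 in place of GAPDH as the reference.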
Cell adhesion assay
Matrigel (Corning, Corelle, New York, USA) and serum-free DMEM medium (HyClone, Logan, UT, USA) were mixed to form a basement membrane of 10 μg/mL. Six 96-well plates (Corning, Corelle, New York, USA) were laid with 60 μL/well of the basement membrane, and the plates were placed in an ultra-clean cabinet for 24 h. Then, serum-free DMEM was added to the wells. After 1 h, the Matrigel matrix was washed away. HepG2, Huh-7, and MHCC97H cells were inoculated into six 96-well plates at a density of 4,000 cells/well. For each cell line, one plate was cultured under environmental pressure, and the other plate was pressurised to 15 mmHg for culture. After 24 h, the medium was discarded, and the cells were washed thrice with PBS (HyClone, Logan, UT, US). The cells were fixed with 100 μL methanol (Aladdin, Shanghai, China) for 15 min and then the methanol was discarded. Then, 100 μL of Hoechst 33258 dye solution (Beyotime Biotechnology, Shanghai, China) was added, incubated for 15 min, and washed out with H2O2 (Aladdin, Shanghai, China). The number of cells was recorded and photographed using a fluorescence microscope (CX31; Olympus Corporation, Tokyo, Japan).
Plasmid construction and lentivirus packaging
The overexpressed miRNA-5703 plasmid and the sarcoma gene (SRC) intervention plasmid used for lentivirus packaging were constructed by Shanghai Collariaceae Biology (Shanghai, China). Based on the principle of base complementary pairing, three interfering target sequences were designed (5′-GACAGACCTGTCCTTCAAGAA-3′; 5′-GCTCGGCTCATTGAAGACAAT-3′; 5′-GGCTCCAGATTGTCAACAACA-3′) and primers were synthesised. The primers were annealed and ligated into a pLenR-GPH interference vector (Zorin, Shanghai, China). The plasmid was digested with restriction enzymes to ensure the correct size of the cleaved band. DH5α competent cells were used to transform the interference plasmid and amplify it in bacteria. The plasmid was then extracted and sequenced.
Tumour formation in nude mice
Twenty-four nude mice were divided into four groups, each treated with subcutaneous injections under the armpit as follows: 1) control (conventionally cultured MHCC97H cells); 2) pressure (MHCC97H cells pressurised at 15 mmHg for 24 h); 3) pressure + vector (pressurised cells transduced with the empty vector); and 4) pressure + miRNA-5703 (+) (pressurised cells overexpressing miRNA-5703).
Hematoxylin-eosin staining
Liver tissues were fixed in 4% paraformaldehyde solution for 24 h. The dehydration process was conducted with the conventional gradient alcohol for 1 min each time, followed by two 5 min washes with xylene, wax dipping, paraffin embedding, and slicing (4 μm slices). Paraffin slices were routinely dewaxed in water and cleaned with xylene for 5 min thrice, with 100% ethanol for 10 min twice, and twice with 95% ethanol for 10 min. Next, they were washed twice in dH2O for 5 min. The sections were stained with haematoxylin for 10 min. The slices were cleaned with dH2O and then differentiated with 1% hydrochloric acid alcohol for 5 s. After cleaning with dH2O, the slices were treated with ammonia water and stained with eosin solution for 3 min. The tissues were dehydrated with 95% ethanol twice for 5 min, and then they were placed in xylene twice for 5 min. The slides were then removed and dried. Finally, they were sealed with neutral gum, and the histopathological changes of the liver tissues were observed under an optical microscope (CX31, Olympus Corporation, Tokyo, Japan).
Statistical analyses
Results were compared using Student's t-test, and the data are expressed as means ± SD of at least three independent experiments. All P values were calculated using two-tailed tests, obtained using the SPSS software (version 16.0; SPSS, Chicago, IL). Statistical significance was set at P < 0.05.
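For completeness, a small sketch of the two-tailed Student's t-test on three biological replicates, as described above, is shown below; the values are placeholders and SciPy is used here instead of SPSS.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate readouts (e.g. normalised CCK-8 absorbance).
control = np.array([1.00, 0.96, 1.04])
pressure = np.array([1.35, 1.41, 1.29])

t_stat, p_value = stats.ttest_ind(control, pressure)   # two-tailed by default
print(f"mean ± SD (control):  {control.mean():.2f} ± {control.std(ddof=1):.2f}")
print(f"mean ± SD (pressure): {pressure.mean():.2f} ± {pressure.std(ddof=1):.2f}")
print(f"t = {t_stat:.2f}, two-tailed P = {p_value:.4f}")
```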
KEGG pathway analysis of the 1,309 target genes revealed the relative importance of the pathways, of which the top three cancer growth- and metastasis-related pathways were the focal adhesion (FA) pathway, the PI3K pathway, and the FOXO pathway (Fig. 1C). By investigating the KEGG pathway maps, we found that the common upstream gene of the three pathways was SRC, which is also one of the predicted target genes of miRNA-5703, a miRNA downregulated under pressure loading. An mRNA microarray was used to detect changes in HepG2 cells cultured for 24 h (control group) and cells under 15 mmHg of pressure for 24 h (pressure-treated group). A cluster heat map of differentially expressed genes screened by microarray that correlated with the above three pathways is shown in Fig. 1D. Among them, protein tyrosine kinase-2 (PTK2), phosphatidylinositol 3-kinase (PI3K, PIK3R1), SRC, and serum/glucocorticoid regulated kinase-3 (SGK3) were upregulated in pressurised liver cancer cells; forkhead box O1 (FOXO1) and cyclin dependent kinase inhibitor 1B (CDKN1B, P27Kip1) were downregulated; phosphoinositide dependent protein kinase 1 (PDK1) showed no statistically significant changes in expression. We verified the expression of these genes in HepG2 and Huh-7 cell lines by RT-qPCR (Fig. 1E, P < 0.05 or P < 0.01).
Verification of SRC as a target of miRNA-5703
The expression of miRNA-5703 and SRC detected by the miRNA and mRNA microarrays is shown in Fig. 1F and 1G, respectively. The expression of miRNA-5703 in pressurised HepG2 cells was downregulated to approximately 0.68 times the level in the control group (Fig. 1F), and the expression of SRC was upregulated to approximately 8.87 times the level in the control group (Fig. 1G). We then verified the microarray results in three liver cancer cell lines (HepG2, Huh-7, and MHCC97H) by RT-qPCR (Fig. 2A, P < 0.05, P < 0.01, or P < 0.001, and 2B, P < 0.05), and the results were consistent with those of the microarray. In addition, there was no significant change in their expression in the normal liver cell line HL-7702 (Fig. 2A, P > 0.05, and 2B, P > 0.05).
Western blotting showed that SRC expression was inhibited in both HepG2 and Huh-7 cells after miRNA-5703 overexpression (Fig. 2C and 2D, P < 0.05). The miRNA-5703 binding site was predicted using the TargetScan website (Fig. 2E), and it was a 7mer-m8 positioned at nucleotides 1,745-1,751 of the SRC 3′-UTR. Wild-type plasmids and plasmids with a mutated binding site were transfected into HEK293T cells. The light signals produced by firefly luciferase and by Renilla luciferase were detected, and their ratio was calculated. The ratio decreased by 43.69% after overexpression of miRNA-5703 with the wild-type binding site in SRC (Fig. 2F, P < 0.001), while there was no significant difference in the ratio before and after overexpression of miRNA-5703 when the binding site was mutated (Fig. 2F, P > 0.05).
Twenty-two cases of liver cancer tissues and adjacent normal tissues (more than 5 cm away from cancer tissues) were collected from patients with liver cancer that were classified as BCLC stage A1 (without PHTN). Twenty-five cases of liver cancer tissues and adjacent normal tissues were collected from patients with liver cancer that were classified as stage A2-D (with PHTN). The expression of miR-5703 in liver cancer tissues from patients in stage A2-D was suppressed relative to that of tissues from patients in stage A1 (Fig. 2G, P < 0.05), but not in the adjacent normal tissues (Fig. 2H, P > 0.05). The mRNA levels of SRC in liver cancer tissues from patients in stage A2-D was higher than that of tissues from patients in stage A1 (Fig. 2I, P < 0.05), but not in the adjacent normal tissues (Fig. 2J, P > 0.05). In the 22 paired tissues of patients in stage A1, 19 (86.4%) showed higher SRC expression in liver cancer tissues than in adjacent normal tissues, of which 12 (63.1%) highly expressed SRC (> 2 times) and 7 (36.8%) did not have as high expression levels (< 2 times; Fig. 2K). In 25 paired tissues of patients in stage A2-D, the expression of SRC in 22 (88%) of liver cancer tissues was higher than that in adjacent normal tissues, of which 19 pairs (86.3%) highly expressed SRC (> 2 times), while 3 pairs (15.7%) did not have as high expression levels (< 2 times; Fig. 2L).
Overexpression of miRNA-5703 inhibits pressure-induced proliferation of hepatoma cells
Flow cytometry was used to assess the cell cycle distribution of the HepG2 and Huh-7 cell lines. As shown in Fig. 3A, overexpression of miRNA-5703 significantly reduced the proportion of HepG2 and Huh-7 cells in the S phase (P < 0.01), and knocking down SRC had similar effects (P < 0.01). The HepG2 and Huh-7 cell lines were categorized into four groups based on the experimental conditions: control (conventional culture), pressure (culture under 15 mmHg of pressure), pressure + miRNA-5703 (+) (culture under 15 mmHg of pressure with overexpression of miRNA-5703), and pressure + SRC (-) (culture under 15 mmHg of pressure after knocking down SRC). The CCK-8 assay was used to detect the proliferation of the liver cancer cells (Fig. 3B, P < 0.05, P < 0.01). The results showed that overexpression of miRNA-5703 significantly inhibited the proliferation of HepG2 and Huh-7 cells, and knocking down SRC achieved similar results. The CCK-8 assay showed that pressure loading had no effect on the growth of HL-7702 cells (Fig. 3C, P > 0.05).
Overexpression of miRNA-5703 inhibits pressure-induced cell proliferation via the SRC-FAK-FOXO1 pathway
Using western blotting, we found that 15 mmHg pressure loading promoted the expression of SRC, FAK, PI3K, and SGK3, and inhibited the expression of FOXO1 and P27Kip1 (Fig. 4A and 4B), which was consistent with the results of the microarray illustrated in Fig. 1D. Meanwhile, the expression of phosphorylated SRC (pSRC; Y418), pFAK (Y397), pPDK1 (S241), and pSGK3 (T320) was upregulated, which indicated that pressure not only increased the transcription of SRC, FAK, PIK3R1, and SGK3 at the mRNA level, but also activated the phosphorylation of these proteins. It should be noted that the expression of PDK1 at the mRNA level is not regulated by pressure, but pressure can promote phosphorylation of PDK1 at the protein level. Pressure-loaded cells were treated with herbimycin A (pSRC inhibitor), GSK2256098 (pFAK inhibitor), LY294002 (PI3K inhibitor), PHT-427 (PDK1 inhibitor) and EMD638683 (SGK3 inhibitor; Fig. 4A and 4B), and the relationship between upstream and downstream proteins in the cascade reaction was examined; the results of the grey analysis are shown as histograms in Fig. S1 (HepG2) and Fig. S2 (Huh-7).
The peripheral membrane protein GM130, which is strongly attached to the Golgi membrane, is involved in controlling cell polarisation and directed cell migration [32]. The expression of GM130 was also increased in the HepG2, Huh-7, and HL-7702 cells, as detected by an immunofluorescence assay (Fig. 7A). The cell adhesion assay showed that the number of HepG2, Huh-7, and MHCC97H cells adhering to the matrix was significantly increased after pressure loading (Fig. 7B). The actin microfilaments of HepG2 and Huh-7 cells were stained with phalloidin. The results showed that pressure increased the number of cytoskeletal microfilaments, while overexpression of miRNA-5703 significantly reduced it (Fig. 8A, P < 0.05, P < 0.01, P < 0.001, or P > 0.05). Immunofluorescence showed that the expression of the FA protein vinculin was upregulated by pressure, while overexpression of miRNA-5703 significantly inhibited the expression of vinculin in cells under pressure (Fig. 8B, P < 0.05, P < 0.01, or P > 0.05).
Overexpression of miRNA-5703 inhibits pressure-induced cell migration and invasion via the SRC-paxillin pathway
Under 15 mmHg of pressure, herbimycin A and GSK2256098 inhibited paxillin expression (P < 0.05), while LY294002, PHT-427, and EMD638683 had no significant effect on it, indicating that paxillin lies downstream of SRC and FAK rather than of the PI3K axis. Consistent with this, 15 mmHg pressure loading promoted the expression of SRC and FAK in HepG2 and Huh-7 cells and remarkably upregulated the expression of paxillin (Fig. 9A, P < 0.05, or P < 0.01). Overexpression of miRNA-5703 or knockdown of SRC inhibited the expression of paxillin, which suggests that this may be the mechanism by which miRNA-5703 inhibits tumour metastasis (Fig. 9B, P < 0.05, or P < 0.01).
The MHCC97H cells were divided into four groups based on experimental conditions: control (conventional culture), pressure (15 mmHg, 24 h), pressure + vector (15 mmHg, 24 h + vector), and pressure + miRNA-5703 (+) (15 mmHg, 24 h + miRNA-5703 overexpression). The cells were injected subcutaneously into mice and imaged in vivo 10 d post-injection. We found that tumour size and weight increased in the pressure-loaded cell group, but decreased in the miRNA-5703 overexpression group (Fig. 10A, P < 0.05, P < 0.001, or P > 0.05). Hematoxylin-eosin (HE) staining of liver cancer and normal liver tissue is shown in Fig. 10B. Western blotting suggested that the pressure-upregulated SRC was suppressed in the miRNA-5703 overexpression group (Fig. 10C, P < 0.05, P < 0.01, or P > 0.05).
The immunohistochemical assay showed that the expression of Ki67 was upregulated in tumours in the pressurised group and significantly downregulated in tumours in the miRNA-5703 overexpression group, while the expression of NM23 showed the opposite pattern (Fig. 10D, P < 0.05, or P > 0.05). Ki67 is an antigen indispensable for cell proliferation and associated with mitosis; in clinical practice, Ki67 is used to label cells in the proliferation cycle [33]. NM23 is highly expressed in well-differentiated tumours, inhibits tumour cell metastasis, and is negatively correlated with lymph node metastasis; in the clinic, detection of NM23 expression by immunohistochemistry is an important method for judging the metastatic ability of tumours [34]. The expression of miRNA-5703 in tumours of the four groups was assessed by RT-qPCR (Fig. 10E, P < 0.05, P < 0.001, or P > 0.05). A schematic diagram of the cascade is shown in Fig. 11.
Discussion
In recent years, mechano-inspired anticancer drugs that target components of transduction and mechanosensitive signalling pathways have emerged. Etaracizumab, volociximab, and cilengitide are undergoing clinical trials and are aimed at preventing the pro-metastatic signalling transduction associated with integrin-mediated sensing of various mechanical cues in the TME [16]. Changes in the liver mechanical microenvironment caused by cirrhosis and PHTN may be involved in the regulation of liver cancer growth and metastasis. However, to date, there has been no research on a drug that targets the mechanical environment of liver cancer, which is continuously enveloped by a stiff matrix TME and hypertension from the portal system. Our findings suggest that miRNA intervention may prevent tumour promotion by extracellular matrix stiffness and suppress the development of liver cancer. These findings provide novel information that can potentially be utilized for the development of new drugs. (Figure note: the transwell assay showed no significant change in the mobility of HL-7702 cells after pressure loading.) SRC was found to be the shared upstream protein of the top three enriched pathways (the FA, FOXO, and PI3K pathways) by GO analysis of the 1,309 predicted target genes of miRNA-5703. Consistent with previous observations in human colon cancer and breast cancer cell lines [35][36][37][38], reducing SRC or inhibiting the FAK/PI3K axis blocked pressure-stimulated cell adhesion in liver cancer cell lines. Deformation of the liver cancer cell membrane caused pressure-activated expression of SRC and pSRC (Y418), which could bind FA, leading to the upregulation of FAK and pFAK (Y397) and boosting the function of FA [39]. Therefore, pressure not only upregulated the expression of total SRC and FAK, but also promoted their phosphorylation, which maximised their activation effect.
Pressure-responsive miRNA-5703, which was identified by miRNA microarray, was predicted to target SRC. The expression of miRNA-5703 in liver cancer tissues of patients with PHTN was significantly lower than that in patients without PHTN, while the expression of SRC showed the opposite pattern. Increased translation of SRC in vivo activates downstream FAK and PI3K, which inactivate FOXOs by activating PDK1 to phosphorylate SGK3; the resulting suppression of FOXO1 relieves its control of P27Kip1 and cyclin D1 and ultimately promotes cell proliferation. SRC also activates the cell-motility-associated proteins vinculin, paxillin, and actin, which are important for the normal function of the cytoskeleton. The C-terminal region of paxillin contains four LIM domains, which anchor paxillin to FA [40]. The N-terminal region of paxillin is rich in protein-protein interaction sites, allowing it to bind a variety of proteins, including tyrosine kinases such as SRC and FAK [41], structural proteins such as vinculin and actopaxin, and actin regulators [42]. Our study found that physical distortion of the cytoskeleton transfers mechanical loads through actin- and microtubule-associated molecules to initiate intracellular signalling, and overexpression of miRNA-5703 may counteract this effect.
Our pressure-induced subcutaneous tumour-forming mouse model differs from liver cancer arising in the context of PHTN in cirrhosis. However, previous studies have suggested that it is feasible to implant cancer cells or stem cells loaded with pressure [43] or shear stress [44,45] in vitro into nude mice, observe tumour formation in vivo, and detect the expression of related proteins. Building on these studies, we used 3D cell culture technology to mix cells into biogels to better simulate the tumour mechanical microenvironment in vivo. Tumour size increased normally after implantation, and the protein expression in each group was significantly different.
Our study has some limitations. Although the role of pressure in promoting the migration and invasion of hepatoma cells has been fully confirmed in vitro, we only detected the expression of NM23 in subcutaneous tumours in vivo. In situ tumourigenesis in nude mice is necessary to further explore the effect of pressure on liver cancer metastasis, and further studies are needed. In addition, whether the cancer-promoting effect of pressure is related to membrane channel proteins is still unknown. Our previous microarray results confirmed that the mRNA levels of integrin subunits (αv, α3, α6, α11, β4, β6, and β7) were upregulated in pressurised hepatoma cells (GEO database: GSE120194) [14]. Integrins have been studied extensively in the mechanical signalling of cancer cells and cancer-associated fibroblasts, but not in liver cancer. In addition, the ion channel protein PIEZO2 and its target protein NOTCH1/2 were upregulated (GEO database: GSE120194) [14]. PIEZO is a mechanically sensitive channel protein that has recently been implicated in the development and progression of various cancers [46]. The function of these membrane proteins, and whether pressure-responsive miRNAs act on them, requires further study.
Image processing method in implementation of handwriting identification for Japanese katakana characters
Japanese is a popular language spoken internationally and ranks among the ten most commonly used languages in the world. Pattern recognition techniques have developed over time and are often used to solve identification problems; they are applied, for example, to the identification of handwriting and images. Japanese Katakana handwriting, for all its complexity, turns out to have strict writing rules. In practice, inaccuracies occur in the writing of Katakana letters, caused by the many variations and procedures of writing Katakana. The procedure for writing Katakana letters has its own rules, especially regarding the number of strokes. Therefore, in this research, the authors implemented a Recurrent Neural Network to identify words based on Katakana handwriting.
Introduction
According to the list of languages ranked by number of native speakers, Japanese is ranked 9th, after Chinese, Hindi, Spanish, English, Bengali, Arabic, Russian, and Portuguese [1]. Humans have various ways of learning a foreign language, for example self-study, actively speaking the language, or taking courses. A foreign language can also be learnt digitally through a computer. Therefore, machines (computers) are required to master the characters of these languages to ease the task of learning a foreign language. Pattern recognition techniques are used for handwriting recognition, image recognition, and similar tasks. Pattern recognition aims to identify an object (e.g. handwriting) as belonging to a particular class, based on a particular pattern [2]. An artificial neural network is an artificial-intelligence model of the human brain that tries to simulate the brain's learning process. The term "artificial" is used because this neural network is implemented as a computer program able to carry out a large number of calculations during the learning process [3]. A Recurrent Neural Network (RNN) is a neural network with feedback connections to its neurons, so that information from the inputs can flow in multiple directions [4]. A Recurrent Neural Network has at least one feedback loop. RNNs have excellent representational capabilities and can address the weaknesses of feed-forward networks [5].
Katakana, with all its complexity, turns out to have strict writing rules [6]. The rule is called stroke order, i.e. the sequence of strokes. The writing technique, besides producing beautiful writing, is also very useful as a method of memorizing the Katakana letters, which saves memory in our brain. Artificial neural networks have the ability to identify letters from handwriting. Therefore, in this research, the authors utilized a recurrent artificial neural network to identify letters from handwritten Katakana. In practice, inaccuracies occur in the writing of Katakana letters, caused by the many variations and procedures of writing Katakana. The procedure for writing Katakana letters has its own rules, especially regarding the number of strokes. Therefore, an approach is needed to identify handwritten Japanese Katakana letters.
There have been numerous information technology studies in the field of identification, such as an artificial neural network implementation of the back-propagation method for Japanese Katakana and Hiragana handwriting [1] and the research entitled "Diagonal Feature Extraction Based Handwritten Character System Using Neural Network" [7]. Another previous study that applied the Recurrent Neural Network method is entitled "Nonlinear Identification Using Recurrent Neural Network and Kalman Dead-Zone Filter" [5]. Although numerous studies on word identification have been conducted, further development is still needed in the field of handwriting recognition, especially for written Katakana.
Image
An image is a two-dimensional medium composed of a group of pixels, which are the smallest parts of the image. In general, the image is formed as a regular grid of squares, so that the horizontal and vertical distances between pixels are the same in all parts of the image [2]. As the output of a data recording system, an image can be optical (a photograph), analogue (a video signal, such as the picture on a television monitor), or digital (directly stored on magnetic tape).
A digital image is the visual representation of an object after numerous data transformations from various forms of numeric sequences [8]. Feature extraction is the process of measuring normalized data to generate feature values. The feature values are used by the classifier to map input units to output target units; they facilitate classification because they are easy to distinguish [7]. In general, a feature is any measurement result that can be obtained; features can also describe the characteristics of the monitored object [8]. An example of a low-level feature is signal intensity. Features can be symbolic, numeric, or both.
Dataset
The sample data consist of the 46 Katakana letters written on A4-size paper using black-ink markers. Sample data were acquired from 13 people who understand Katakana letters: 10 people each wrote the 46 Katakana letters once for the training data, and the other 3 wrote them for the testing data. In total, 460 sample images were used as training data and 138 sample images were used as testing data. A writing sample of the Katakana letters is shown in Figure 1.
Recurrent neural network
A Recurrent Neural Network (RNN) is a neural network with feedback connections to its neurons, so that information from the inputs can flow in multiple directions [4]. A Recurrent Neural Network has at least one feedback loop. RNNs have excellent representational capabilities and can address the weaknesses of feed-forward networks [5]. The output of an RNN depends not only on the current input, but also on past inputs to the network. This allows past events to be included in the computation.
Letter identification process
Several phases are performed in the identification process of the Katakana letters, namely:
• Normalization, which is part of radiometric correction, eliminates the differences between two or more images taken at different times or locations by referring to the image considered the best and most correct. In other words, the function of normalization is to obtain data with a mean of zero and a standard deviation equal to one.
• After the image pre-processing, the feature extraction phase is performed. The feature extraction used was Diagonal Based Feature Extraction, performed to obtain the feature values used as inputs for the Recurrent Neural Network input layer. Diagonal Based Feature Extraction is an offline method for handwritten character identification [7]. Every character image, which has a size of 60 x 90 pixels, is divided into 54 equal zones, each measuring 10 x 10 pixels. Figure 7 shows the details of the zone partition (a code sketch of this zoning appears after this list).
• Afterwards, the classification of the handwriting image whose feature values have been extracted is performed using the Recurrent Neural Network. The flow chart of the Recurrent Neural Network used in this research can be seen in Figure 8. The explanation of Figure 8 is as follows:
• The initial input is the image acquired from the image pre-processing stage, whose feature values have already been extracted.
• State vector input. The state vector is the desired target for the training and testing images; the state at step k is a vector x(k) ∈ ℝⁿ.
• Calculate the initial weight values.
• Calculate the values of the hidden layer.
• Calculate the values of φ and σ using the sigmoid function.
• Calculate the network output value.
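The update formulas for these steps are not recoverable from the extracted text, so the following is a minimal sketch of the pipeline the steps describe: diagonal-based zoning features from a 60 x 90 binary character image feeding an Elman-style recurrent cell with sigmoid activations. The zone-averaging rule, hidden-layer size, and all weight names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def diagonal_zone_features(img):
    """Diagonal-based feature extraction (sketch).
    `img` is a 60x90 binary image (90 rows x 60 columns); it is split
    into 54 zones of 10x10 pixels, and each zone contributes one
    feature: the mean of its 19 diagonal sums (an assumed variant of
    the method in [7])."""
    assert img.shape == (90, 60)
    feats = []
    for r in range(0, 90, 10):
        for c in range(0, 60, 10):
            zone = img[r:r + 10, c:c + 10]
            # sums along the 19 diagonals of a 10x10 zone
            diags = [np.trace(zone, offset=k) for k in range(-9, 10)]
            feats.append(np.mean(diags))
    return np.array(feats)  # 54 features per character image

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ElmanRNN:
    """Minimal Elman-style recurrent cell with sigmoid activations."""
    def __init__(self, n_in=54, n_hidden=30, n_out=46, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))       # input -> hidden
        self.W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden feedback
        self.W_out = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output
        self.h = np.zeros(n_hidden)                            # recurrent state

    def step(self, x):
        self.h = sigmoid(self.W_in @ x + self.W_rec @ self.h)
        return sigmoid(self.W_out @ self.h)  # scores for the 46 letters

img = (np.random.rand(90, 60) > 0.5).astype(float)  # stand-in binary image
rnn = ElmanRNN()
scores = rnn.step(diagonal_zone_features(img))
print(int(np.argmax(scores)))  # index of the predicted Katakana letter
```

In this sketch the 54 zone features play the role of the network input x(k), and the recurrent state h carries information from previous inputs, matching the RNN property described above.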
Result and analysis
The number of Katakana letter images trained is 460, consisting of 10 sample images for each of the 46 Katakana letters. In this section, the test results for the 3 testing samples of each letter are presented; they are shown in Table 1. From the test results, the accuracy rate can be calculated as the number of correctly identified test images divided by the total number of test images, multiplied by 100%. It can be concluded that the accuracy rate of pattern recognition for the tested characters is 86.1952%.
Conclusion and future research
Based on the literature review, implementation, and testing conducted, the following conclusions can be drawn:
Genome-wide association study of clinically defined gout identifies multiple risk loci and its association with clinical subtypes
Objective Gout, caused by hyperuricaemia, is a multifactorial disease. Although genome-wide association studies (GWASs) of gout have been reported, they included self-reported gout cases in which clinical information was insufficient. Therefore, the relationship between genetic variation and clinical subtypes of gout remains unclear. Here, we first performed a GWAS of clinically defined gout cases only. Methods A GWAS was conducted with 945 patients with clinically defined gout and 1213 controls in a Japanese male population, followed by replication study of 1048 clinically defined cases and 1334 controls. Results Five gout susceptibility loci were identified at the genome-wide significance level (p<5.0×10−8), which contained well-known urate transporter genes (ABCG2 and SLC2A9) and additional genes: rs1260326 (p=1.9×10−12; OR=1.36) of GCKR (a gene for glucose and lipid metabolism), rs2188380 (p=1.6×10−23; OR=1.75) of MYL2-CUX2 (genes associated with cholesterol and diabetes mellitus) and rs4073582 (p=6.4×10−9; OR=1.66) of CNIH-2 (a gene for regulation of glutamate signalling). The latter two are identified as novel gout loci. Furthermore, among the identified single-nucleotide polymorphisms (SNPs), we demonstrated that the SNPs of ABCG2 and SLC2A9 were differentially associated with types of gout and clinical parameters underlying specific subtypes (renal underexcretion type and renal overload type). The effect of the risk allele of each SNP on clinical parameters showed significant linear relationships with the ratio of the case–control ORs for two distinct types of gout (r=0.96 [p=4.8×10−4] for urate clearance and r=0.96 [p=5.0×10−4] for urinary urate excretion). Conclusions Our findings provide clues to better understand the pathogenesis of gout and will be useful for development of companion diagnostics.
Among the 1993 clinically defined cases, 1613 patients had information on clinical parameters (UUE and FEUA) available to classify their clinical subtypes.
Supplementary table notes: We analyzed 1993 cases and 1334 controls whose genotype data for rs72552713 and rs2231142 were available. A1 is the risk-associated allele and A2 is the non-risk-associated allele. Logistic regression analyses were performed using a multivariate model including the two SNPs, which are located on different haplotypes.
Coef., regression coefficient; CI, confidence interval; LL, lower limit; UL, upper limit. We performed multivariate linear regression analyses in which all 7 SNPs, alcohol drinking, and BMI were included in the model.
We performed multivariate logistic regression analyses using replication-stage samples. These data were adjusted with rs504915 of NRXN2.
Supplementary Methods
Genotyping and quality control
At the GWAS stage, the data sets were filtered individually on the basis of single nucleotide polymorphism (SNP) genotype missing call rates (>1%) and Hardy-Weinberg equilibrium (HWE) in controls (p<1.0×10−6). We confirmed that all subjects showed high genotype call rates (>98%). Pairwise identity by state was evaluated in order to identify pairs of individuals with cryptic relatedness. We confirmed that there was no pair showing cryptic relatedness greater than expected for second-degree relatives. We performed principal component analysis including our GWAS data set together with HapMap phase II samples. 1 2 As a result, we excluded one gout patient as a population outlier who was presumed to be of mixed origin (East Asian and European) (supplementary figure S2).
Statistical analyses for GWAS
We conducted an association analysis using a 2×2 contingency table based on the allele frequency. For each of the filtered SNPs, the p value of association was assessed by the χ² test, and the odds ratio (OR) and 95% confidence interval (95% CI) were calculated. The quantile-quantile plot and the genomic inflation factor (λ) were used to assess the presence of systematic bias in the test statistics due to potential population stratification. After excluding SNPs within 500 kb of the SNPs reaching the genome-wide significance threshold (p<5.0×10−8), λ was 1.054, indicating only subtle inflation of the p values (supplementary figure S3).
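The allele-based test described here reduces to a χ² test on a 2×2 table of allele counts, plus an odds ratio with a Wald 95% CI. A minimal sketch with invented counts (not the study's data):

```python
import math
from scipy.stats import chi2_contingency

# Hypothetical allele counts: rows = cases/controls, cols = A1/A2 alleles.
table = [[600, 1290],    # cases:    risk allele, other allele
         [560, 1866]]    # controls: risk allele, other allele

chi2, p, dof, _ = chi2_contingency(table, correction=False)

a, b = table[0]
c, d = table[1]
or_hat = (a * d) / (b * c)                    # allelic odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # SE of log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se)   # Wald 95% CI
hi = math.exp(math.log(or_hat) + 1.96 * se)

print(f"chi2={chi2:.2f}, p={p:.3g}, OR={or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```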
Imputation
For the imputation, the 1000 Genomes reference panel (the East Asian population) was obtained from the phase 1 release (16 March 2012, http://www.1000genomes.org/announcements/updated-integrated-phase-1-release-calls-2012-03-16), and we ran a logistic regression analysis based on imputation dosages via MACH2DAT. 3 We included all SNPs with estimated r² > 0.9 and minor allele frequency ≥ 0.01 in the analysis.
Analysis of the two dysfunctional SNPs of ABCG2
Genotyping of the two ABCG2 SNPs (rs72552713 and rs2231142) was performed with an allelic discrimination assay (Custom TaqMan Assay, Applied Biosystems) on a LightCycler 480 instrument (Roche).
Estimation of variance explained by identified SNPs
For each variant identified in our GWAS, we calculated the percentage of variance explained. We used the liability threshold model from quantitative genetics. 5 In this model, the liability of a binary disease trait on an unobserved continuous scale is assumed to be normally distributed with a mean of zero and a variance of one, and individuals whose liabilities surpass a threshold develop the disease. To calculate the percentage of variance explained, we assumed that the prevalence of gout was 1.1%, based on the estimate in the Japanese male population. 4 6 We assumed that SNP effects were additive on the logistic scale. Under the rare disease assumption, we approximated the relative risk by the odds ratio obtained in this study. For the identified SNPs, we used the allele frequencies of the East Asian population in the 1000 Genomes Project. 7
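The liability threshold quantities used here follow directly from the standard normal model: with prevalence K, the threshold is t = Φ⁻¹(1 − K) and the mean liability of affected individuals is φ(t)/K. A minimal sketch of these textbook quantities (not the authors' code):

```python
from scipy.stats import norm

K = 0.011                      # assumed gout prevalence (1.1%)
t = norm.ppf(1 - K)            # liability threshold: P(L > t) = K
i = norm.pdf(t) / K            # mean liability of affected individuals

print(f"threshold t = {t:.3f}, mean case liability i = {i:.3f}")
```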
Subtype analysis
We investigated the magnitude of the associations between the identified SNPs and the types of gout by examining type-specific ORs and performing a case-subtype heterogeneity test. Fractional excretion of urate (FEUA) and urinary urate excretion (UUE) were measured for each patient as described previously, 8 and all cases were classified based on the criteria (supplementary figure S1). To estimate gout type-specific ORs, the frequency of a SNP in each type was compared with the frequency in controls using logistic regression. To assess whether the estimated type-specific ORs were significantly different, the frequencies of the SNP were compared between types by dichotomous logistic regression (the case-subtype heterogeneity test). 9 For these subtype analyses, the effects of alcohol drinking, body mass index (BMI), and all the identified SNPs were considered in the model. When evaluating the effects of the risk allele of each SNP on the clinical parameters (FEUA and UUE), a linear regression analysis was performed, defining the SNP genotype predictor variable x as the number of risk alleles associated with gout risk. All logistic and linear regression analyses were performed using STATA version 11.0.
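In the additive coding used for this linear regression, each genotype enters as x = 0, 1, or 2 copies of the risk allele. A minimal sketch of that step with invented genotypes and FEUA values (scipy standing in for STATA):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical data: risk-allele counts (0/1/2) and FEUA (%) per patient.
risk_alleles = np.array([0, 1, 2, 1, 0, 2, 1, 1, 2, 0])
feua = np.array([6.1, 5.4, 4.8, 5.6, 6.3, 4.5, 5.2, 5.5, 4.9, 6.0])

fit = linregress(risk_alleles, feua)
print(f"effect per risk allele = {fit.slope:.3f} "
      f"(p = {fit.pvalue:.3g}, r = {fit.rvalue:.2f})")
```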
Relativistic nucleon–nucleon potentials in a spin-dependent three-dimensional approach
The matrix elements of relativistic nucleon–nucleon (NN) potentials are calculated directly from the nonrelativistic potentials as a function of relative NN momentum vectors, without a partial wave decomposition. To this aim, the quadratic operator relation between the relativistic and nonrelativistic NN potentials is formulated in momentum-helicity basis states. It leads to a single integral equation for the two-nucleon (2N) spin-singlet state, and four coupled integral equations for two-nucleon spin-triplet states, which are solved by an iterative method. Our numerical analysis indicates that the relativistic NN potential obtained using CD-Bonn potential reproduces the deuteron binding energy and neutron-proton elastic scattering differential and total cross-sections with high accuracy.
Although Einstein's theory of special relativity dates from the early twentieth century, there are still open issues when considering systems that contain more than two nucleons, since in such systems each pair of nucleons is influenced by the presence and motion of the other nucleons. As the three- and four-nucleon bound and scattering problems can be numerically solved with controlled errors, they provide an ideal theoretical laboratory for investigating relativistic effects in few-nucleon systems. Several techniques have been developed to study relativistic effects in few-body systems. Among them, the Faddeev-Yakubovsky method provides an exact numerical treatment of three- and four-nucleon systems.
The inputs for the relativistic Faddeev-Yakubovsky equations are the fully-off-shell (FOS) relativistic 2N t-matrices, which can be obtained with two different approaches. In the first approach, the FOS relativistic 2N t-matrices are obtained directly from the nonrelativistic 2N t-matrices by applying a two-step process. In the first step, using an analytical relation proposed by Coester et al., the relativistic right-half-shell (RHS) t-matrices are obtained from the nonrelativistic RHS t-matrices 1. In the second step, the FOS relativistic t-matrices are obtained from the RHS t-matrices by solving a first resolvent equation proposed by Keister et al. 2. This approach has been successfully implemented in few-body bound- and scattering-state calculations in a three-dimensional (3D) scheme 3-8, without using a partial wave (PW) decomposition.
In the second approach, the relativistic FOS t-matrices are calculated by solving the relativistic Lippmann-Schwinger (LS) equation with relativistic 2N potentials. The input relativistic NN potentials can be obtained from the nonrelativistic potentials by solving a quadratic equation, using an iterative scheme proposed by Kamada and Glöckle 9. We have recently implemented this iterative technique in a 3D scheme to calculate the matrix elements of relativistic two-body (2B) potentials for the spin-independent Malfliet-Tjon (MT) potential as a function of the magnitudes of the 2B relative momenta and the angle between them. To do so, we formulated the quadratic operator relation between the nonrelativistic and relativistic NN potentials in momentum space, leading to a 3D integral equation 10. We successfully implemented this iterative approach to calculate the matrix elements of the boosted 2B potential from the MT potential and to study relativistic effects in a 3B bound state 11. Our numerical results showed that the relativistic effects lead to a 2% reduction in the 3B binding energy for the MT potential. Our exact and detailed numerical studies of relativistic effects in 3B bound states using the spin-independent MT potential demonstrate that direct integrations in the 3D scheme achieve the same results as a PW method, and they pave the way for an extension to realistic interactions with a more complicated spin-isospin dependence. Considering modern nucleon-nucleon (NN) potentials, including spin and isospin degrees of freedom, and calculating the relativistic NN potentials from realistic NN potentials is the task we address in this paper. This is the first step toward our goal of a fully relativistic treatment of the triton and Helium-3 bound-state properties and our long-term interest in studying scattering problems at the few-GeV energy scale in a 3D scheme. We show that the representation of the quadratic equation in momentum-helicity basis states leads to a single 3D integral equation for the NN singlet spin state and four coupled 3D integral equations for the triplet spin states. The single and coupled integral equations are solved using the mentioned iterative scheme, and the matrix elements of the relativistic NN potentials are obtained from the CD-Bonn potential 12. Our numerical analysis indicates that the calculated relativistic potential reproduces the deuteron binding energy and the differential and total cross-sections of neutron-proton (np) elastic scattering with very high accuracy. The motivation for using the 3D scheme and implementing a direct integration method is to replace the discrete angular momentum quantum numbers of a PW representation with continuous angle variables and to include all partial-wave components to infinite order, independent of the energy scale of the problem. Consequently, the 3D representation avoids the very involved angular momentum algebra for permutations, transformations, and few-nucleon forces, and, in contrast to the PW approach, the number of equations in the 3D representation is energy independent. The 3D scheme has been successfully implemented in a series of few-body bound- and scattering-state calculations by different few-body groups, from the Ohio-Bochum collaboration 4,5,7,13-25 to the Tehran 26-32 and Kraków 33-41 groups.
In the "Relativistic NN potentials in a momentum helicity representation" section, we present the 3D formalism for the relationship between relativistic and nonrelativistic NN potentials. By projecting the quadratic relation between nonrelativistic and relativistic NN potentials in momentum helicity basis states, we obtain the matrix elements of relativistic NN potentials as a function of the magnitude of 2N relative momenta, the angle between them, and the helicity eigenvalues. We derive a single integral equation for NN total spin state s = 0 and four coupled integral equations for s = 1 . In the "Calculation of relativistic NN interactions" section, we present our numerical results for the matrix elements of relativistic NN potential obtained from CD-Bonn potential in different spin and isospin channels. In the "Numerical tests for the relativistic NN potentials" section, we test the obtained relativistic potential by calculating and comparing deuteron binding energy and differential and total cross-sections of np elastic scattering with corresponding nonrelativistic results. Finally, a conclusion and outlook are provided in the "Conclusion and outlook" section.
Relativistic NN potentials in a momentum helicity representation
In this section, we show how to obtain the matrix elements of relativistic NN interactions in a 3D scheme from the nonrelativistic interactions by solving a nonlinear equation derived by Kamada and Glöckle. The relativistic interactions are designed to accurately reproduce the NN bound and scattering observables. To this aim and to check the accuracy of obtained relativistic interactions, as we show in the "Numerical tests for the relativistic NN potentials" section, one needs to solve the homogeneous and inhomogeneous LS integral equations (19), (21), and (22) in a momentum helicity representation to calculate relativistic deuteron binding energy and the scattering amplitudes to obtain the differential and total cross-sections.
The relativistic and nonrelativistic NN potentials, i.e., $V_r$ and $V_{nr}$, are related by a quadratic operator equation 9,

$$\omega(p)\,V_r + V_r\,\omega(p) + V_r\,V_r = 4m\,V_{nr}, \qquad (1)$$

where $m$ is the mass of the nucleons, $p$ is the relative momentum of the two nucleons, and $\omega(p) = 2E(p) = 2\sqrt{m^2 + p^2}$. To calculate the relativistic NN potential $V_r$ from a nonrelativistic potential $V_{nr}$ in a 3D representation, we represent Eq. (1) in momentum-helicity basis states. The antisymmetrized momentum-helicity basis states $|\mathbf{p};\hat{p}\,s\lambda;t\rangle^{\pi a}$ for a 2N system with total spin and isospin $s$ and $t$ and relative momentum $\mathbf{p}$ are introduced as in Eq. (2) of Ref. 16, where $\hat{p}$ is the unit momentum operator, $\lambda$ is the eigenvalue of the helicity operator $\mathbf{s}\cdot\hat{p}$, and the parity eigenstates, with eigenvalues $\eta_\pi = \pm 1$, are $|\mathbf{p};\hat{p}\,s\lambda\rangle^{\pi} = \frac{1}{\sqrt{2}}(1 + \eta_\pi P_\pi)\,|\mathbf{p};\hat{p}\,s\lambda\rangle$. The 2N helicity basis states are orthogonal and normalized, with the completeness relation

$$\sum_{s\pi t\lambda} \int d\mathbf{p}\; |\mathbf{p};\hat{p}\,s\lambda;t\rangle^{\pi a}\,\frac{1}{4}\,{}^{\pi a}\langle\mathbf{p};\hat{p}\,s\lambda;t| = 1. \qquad (3)$$

The matrix elements of the nonrelativistic and relativistic NN potentials in the 2N helicity basis states introduced in Eq. (2) are given as

$$V^{\pi st}_{nr,\lambda\lambda'}(\mathbf{p},\mathbf{p}') \equiv {}^{\pi a}\langle\mathbf{p};\hat{p}\,s\lambda;t|\,V_{nr}\,|\mathbf{p}';\hat{p}'\,s\lambda';t\rangle^{\pi a}, \qquad (4)$$

and correspondingly for $V^{\pi st}_{r,\lambda\lambda'}(\mathbf{p},\mathbf{p}')$. The representation of the quadratic relation of Eq. (1) in the 2N helicity basis states reads

$${}^{\pi a}\langle\mathbf{p};\hat{p}\,s\lambda;t|\,4mV_{nr}\,|\mathbf{p}';\hat{p}'\,s\lambda';t\rangle^{\pi a} = {}^{\pi a}\langle\mathbf{p};\hat{p}\,s\lambda;t|\,\omega(p)V_r\,|\mathbf{p}';\hat{p}'\,s\lambda';t\rangle^{\pi a} + {}^{\pi a}\langle\mathbf{p};\hat{p}\,s\lambda;t|\,V_r\,\omega(p)\,|\mathbf{p}';\hat{p}'\,s\lambda';t\rangle^{\pi a} + {}^{\pi a}\langle\mathbf{p};\hat{p}\,s\lambda;t|\,V_r\cdot V_r\,|\mathbf{p}';\hat{p}'\,s\lambda';t\rangle^{\pi a}. \qquad (6)$$

The first and second terms on the right-hand side of Eq. (6) can be evaluated straightforwardly. For the evaluation of the third term, the completeness relation of Eq. (3) must be inserted. With these considerations, Eq. (6) reads

$$4m\,V^{\pi st}_{nr,\lambda\lambda'}(\mathbf{p},\mathbf{p}') = \bigl[\omega(p) + \omega(p')\bigr]\,V^{\pi st}_{r,\lambda\lambda'}(\mathbf{p},\mathbf{p}') + \sum_{\lambda''} \int d\mathbf{p}''\,\frac{1}{4}\,V^{\pi st}_{r,\lambda\lambda''}(\mathbf{p},\mathbf{p}'')\,V^{\pi st}_{r,\lambda''\lambda'}(\mathbf{p}'',\mathbf{p}'). \qquad (7)$$

The symmetry properties of the NN potentials, given in Eq. (8), allow the negative-helicity components of the potential to be obtained from the positive ones; the same relations hold for the relativistic potential $V^{\pi st}_{r,\lambda\lambda''}(\mathbf{p},\mathbf{p}'')$. Using the properties of Eq. (8), one can show that the $\lambda'' = -1$ and $\lambda'' = +1$ contributions to the kernel integral of Eq. (7) are equal. For the 2N singlet spin state, $s = 0$, Eq. (7) leads to a single integral equation for the matrix elements of the relativistic potential $V^{\pi 0t}_{r,00}(p,p')$, given in Eq. (10). For the triplet spin states, $s = 1$, Eq. (7) leads to four coupled integral equations, corresponding to the helicity eigenvalues $\lambda, \lambda' = 0, +1$, for the matrix elements $V^{\pi 1t}_{r,00}(p,p')$, $V^{\pi 1t}_{r,01}(p,p')$, $V^{\pi 1t}_{r,10}(p,p')$, and $V^{\pi 1t}_{r,11}(p,p')$, given in Eq. (11). Equations (10) and (11) must be solved for each value of the 2N total isospin $t = 0, 1$. For their numerical solution, choosing the momentum vector $\mathbf{p}'$ parallel to the $z$-axis allows the azimuthal angular dependence of the matrix elements of the nonrelativistic and relativistic potentials to be factored out as an exponential phase, as in Eqs. (12) and (13), where $x \equiv \hat{p}\cdot\hat{p}'$ and $x'' \equiv \hat{p}''\cdot\hat{p}'$. The matrix elements $V^{\pi st}_{r,\lambda\lambda''}(p,p'')$ appearing in the kernel can then be obtained from the azimuthal integration of Eq. (16), and Eqs. (10) and (11) reduce to the single and coupled equations (14) and (15) that we solve numerically.
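Equation (1) can be traced back to matching the square of the relativistic 2N mass operator, $\omega(p) + V_r$, to the nonrelativistic dynamics. The short derivation below is a sketch of that standard construction underlying Ref. 9, not a reproduction of the paper's own steps:

```latex
% Require the squared mass operator to reproduce the nonrelativistic
% Schroedinger operator, (omega(p) + V_r)^2 = 4(m^2 + p^2) + 4 m V_nr,
% and use omega(p)^2 = 4(m^2 + p^2):
\begin{align*}
  \bigl(\omega(p) + V_r\bigr)^2 &= 4\bigl(m^2 + p^2\bigr) + 4m\,V_{nr} \\
  \omega(p)^2 + \omega(p)\,V_r + V_r\,\omega(p) + V_r^2
    &= \omega(p)^2 + 4m\,V_{nr} \\
  \Rightarrow\quad \omega(p)\,V_r + V_r\,\omega(p) + V_r^2 &= 4m\,V_{nr},
\end{align*}
```

which is exactly the quadratic relation of Eq. (1).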
Calculation of relativistic NN interactions
To calculate the matrix elements of the relativistic potentials for the different spin-isospin (s, t) channels, we solve the integral equations (14) and (15) using an iterative method proposed by Kamada and Glöckle 9. The iteration stops when the maximal difference between the matrix elements of the relativistic potential $V^{\pi st}_{r,\lambda\lambda'}(p, p', x)$ obtained from two successive iterations drops below $10^{-6}$ MeV fm³. After each iteration, to obtain the matrix elements $V^{\pi st,\lambda'}_{r,\lambda\lambda''}(p, p'', x, x'')$ that appear in the kernel of Eqs. (14) and (15), we need to perform the azimuthal angle integration of Eq. (16). To speed up the convergence of the iteration in solving Eqs. (14) and (15), and in some (s, t) channels even to be able to reach convergence at all, we use a weighted average of the relativistic potentials obtained from two successive iterations, with weighting parameters α and β as defined in Eq. (18). The input to our calculations is a 3D form of the CD-Bonn potential in the momentum-helicity representation, obtained from the summation of the partial-wave matrix elements of the potential up to total angular momentum j_max = 20. For the discretization of the continuous momentum and angle variables, we use Gauss-Legendre quadrature: a combination of hyperbolic and linear mappings with 120 mesh points is used for the momentum variables, and a linear mapping with 40 mesh points for the azimuthal and polar angle variables. In our calculations, we use the nucleon mass m = … . Our numerical analysis indicates that for the (s = 0, t = 0) channel, convergence can be reached only for α = β = 1, whereas for (s = 0, t = 1) and (s = 1, t = 0) the fastest convergence is reached with α = 2, β = 1. For (s = 1, t = 1) the fastest convergence is reached with α = 3, β = 1. It should be mentioned that Kamada and Glöckle used α = β = 1 in their calculations to obtain relativistic potentials from the AV18, CD-Bonn, and Nijm I, II potentials 9. Table 1 lists the number of iterations N_iter needed to reach convergence in the solution of Eqs. (14) and (15) for the calculation of the relativistic potential in the different spin and isospin channels from the CD-Bonn potential, as a function of the weight-averaging parameters α and β defined in Eq. (18). In Figs. 1, 2, 3, and 4, the matrix elements of the relativistic potential obtained from the CD-Bonn potential in the different spin-isospin channels are compared with the corresponding nonrelativistic potentials, and the differences between them are also shown. While the nonrelativistic and relativistic potentials show similar structures, the difference between them is significant and of the same order of magnitude as the potentials themselves.
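The structure of this iteration is easiest to see in a toy spin-independent, s-wave setting. The sketch below solves the discretized form of Eq. (1) with the same fixed-point scheme and weighted averaging; the mesh, the Gaussian model potential, and all parameter values are illustrative assumptions, not the CD-Bonn input used in the paper:

```python
import numpy as np

hbarc = 197.327          # MeV fm
m = 938.9                # nucleon mass in MeV (illustrative value)

# Gauss-Legendre mesh for the relative momentum in fm^-1, mapped to [0, 10].
n = 80
x, w = np.polynomial.legendre.leggauss(n)
p = 5.0 * (x + 1.0)      # nodes
wp = 5.0 * w             # weights

def omega(p_fm):
    """Relativistic 2N energy in MeV for momentum given in fm^-1."""
    return 2.0 * np.sqrt(m**2 + (hbarc * p_fm)**2)

# Made-up nonrelativistic potential matrix V_nr(p, p') in MeV fm^3.
P, Pp = np.meshgrid(p, p, indexing="ij")
V_nr = -10.0 * np.exp(-(P - Pp)**2) * np.exp(-0.1 * (P + Pp))

measure = wp * p**2                             # p''^2 dp'' quadrature measure
denom = omega(p)[:, None] + omega(p)[None, :]   # omega(p) + omega(p')
V = 4.0 * m * V_nr / denom                      # first-order starting guess
alpha, beta = 2.0, 1.0                          # weighted-average parameters

for it in range(500):
    # kernel integral: (V.V)(p,p') = sum_k w_k p_k^2 V(p,p_k) V(p_k,p')
    VV = (V * measure[None, :]) @ V
    V_new = (4.0 * m * V_nr - VV) / denom
    V_next = (alpha * V_new + beta * V) / (alpha + beta)
    if np.max(np.abs(V_next - V)) < 1e-6:       # MeV fm^3 tolerance
        V = V_next
        print(f"converged after {it + 1} iterations")
        break
    V = V_next
```

The helicity sums and the 1/4 factor of Eq. (7) are omitted here because the toy potential is spin independent; only the fixed-point structure and the α, β averaging are illustrated.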
Numerical tests for the relativistic NN potentials
In this section, we present two numerical tests for NN bound and scattering states, which show the validity of our formalism and the accuracy of calculated relativistic potentials in the 3D scheme.
Deuteron binding energy and wave function.
To test the accuracy of the calculated relativistic NN potential in the (s = 1, t = 0) channel, we calculate the deuteron binding energy and wave function for both the nonrelativistic CD-Bonn potential and the obtained relativistic potential. The relativistic form of the homogeneous LS equation describing the deuteron binding energy $E_d = m_d - 2m$ and wave function $\psi_{M_d}(p)$ is given by the coupled integral equations (19) for the wave function components $\psi_0$ and $\psi_1$. In the nonrelativistic form, the free propagator is replaced by $(E_d - p^2/m)^{-1}$, and the relativistic NN potential is replaced by the nonrelativistic potential $V^{+110}_{nr,M_d\lambda'}(p, p')$ 17. In Table 2, we present our numerical results for the deuteron binding energies obtained from both the CD-Bonn and the relativistic potentials. The relative percentage difference of 0.06% indicates excellent agreement between the relativistic and nonrelativistic deuteron binding energies. In Fig. 5, we show the deuteron wave function components calculated for both the relativistic and the nonrelativistic CD-Bonn potentials. As we can see, the constructed relativistic potential reproduces the deuteron binding energy and wave function obtained with the CD-Bonn potential with high accuracy. np elastic scattering. For the second numerical test, we calculate the differential and total cross-sections of np elastic scattering for the relativistic potential constructed from the CD-Bonn potential. To describe relativistic np elastic scattering in momentum-helicity space, the relativistic form of the inhomogeneous LS equations for the 2N t-matrices in the singlet and triplet spin states can be obtained as in Eqs. (21) and (22), where $m_{s_i}$ and $m_{t_i}$ are the spin and isospin projections of the single nucleons along the quantization z-axis, and the coefficients C are Clebsch-Gordan coefficients. In Fig. 6, we show the differential cross-sections of np elastic scattering, calculated for the relativistic and nonrelativistic CD-Bonn potentials, for the projectile energies E_lab = 50, 96, 143, and 200 MeV. Total cross-sections provide a more detailed comparison between the relativistic and nonrelativistic potentials. In Table 3, we present our numerical results for the total cross-sections of np elastic scattering, obtained with the relativistic and nonrelativistic CD-Bonn potentials, as a function of the incident projectile energy E_lab. As we can see, the maximum relative percentage difference is less than 0.01%, indicating that the relativistic total cross-sections are in excellent agreement with the corresponding nonrelativistic ones. Kamada and Glöckle have shown in Ref. 9 that the relativistic potential obtained from the AV18 potential, in a PW decomposition, reproduces the nonrelativistic phase shifts to five significant figures for projectile energies in the domain (1-350) MeV. As one can see in Table 3, our nonrelativistic and relativistic cross-section results also agree to five significant figures for incident projectile energies in the broader domain (0.001-750) MeV. We are therefore convinced that the 3D formulation and calculation of the relativistic potential provide the same accuracy as a PW calculation.
Moreover, in a prior study calculating relativistic potentials from the spin-independent MT potential 10, which lacks the spin and isospin complexity of CD-Bonn, we obtained a relative percentage difference of 0.06% between the nonrelativistic and relativistic deuteron binding energies and a maximum relative percentage difference of 0.007% in the two-body total elastic scattering cross-sections, which can be compared with the 0.06% and 0.01% relative percentage differences obtained in this study for the CD-Bonn potential. This comparison indicates that calculating relativistic NN interactions from realistic interactions in a 3D scheme provides almost the same accuracy as a spin-independent calculation.
Conclusion and outlook
In this paper, the quadratic equation connecting the relativistic and nonrelativistic NN potentials is formulated in momentum-helicity space as a single three-dimensional integral equation for the 2N singlet spin state and four coupled three-dimensional integral equations for the triplet spin states. In our numerical calculations, we implement the CD-Bonn potential to obtain the matrix elements of the relativistic potential as a function of the magnitudes of the 2N relative momenta, the angle between them, and the spin and isospin quantum numbers. The quadratic integral equations are solved using an iterative scheme. Our numerical results indicate that the relativistic NN potential calculated from the CD-Bonn potential reproduces the 2N observables, namely the deuteron binding energy and the differential and total cross-sections of np elastic scattering, with high accuracy. The implementation of the relativistic NN potentials in a relativistic description of the triton binding energy and wave function is currently underway.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
K$^0_{\rm S}$ and $\rm \Lambda$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
The ALICE measurement of K$^0_{\rm S}$ and $\rm\Lambda$ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV is presented. The transverse momentum ($p_{\rm T}$) spectra are shown for several collision centrality intervals and in the $p_{\rm T}$ range from 0.4 GeV/$c$ (0.6 GeV/$c$ for $\rm\Lambda$) to 12 GeV/$c$. The $p_{\rm T}$ dependence of the $\rm \Lambda$/K$^0_{\rm S}$ ratios exhibits maxima in the vicinity of 3 GeV/$c$, and the positions of the maxima shift towards higher $p_{\rm T}$ with increasing collision centrality. The magnitude of these maxima increases by almost a factor of three between the most peripheral and most central Pb-Pb collisions. This baryon excess at intermediate $p_{\rm T}$ is not observed in pp interactions at $\sqrt{s} = 0.9$ TeV and at $\sqrt{s} = 7$ TeV. Qualitatively, the baryon enhancement in heavy-ion collisions is expected from radial flow. However, the measured $p_{\rm T}$ spectra above 2 GeV/$c$ progressively decouple from hydrodynamical-model calculations. For higher values of $p_{\rm T}$, models that incorporate the influence of the medium on the fragmentation and hadronization processes describe qualitatively the $p_{\rm T}$ dependence of the $\rm\Lambda$/K$^0_{\rm S}$ ratio.
The evolution of the baryon-to-meson ratio with collision energy, comparisons with pp events, and a study of the centrality dependence in nucleus-nucleus collisions provide additional information about this "baryon anomaly" [8]. In Pb-Pb collisions at Large Hadron Collider (LHC) energies, which are around 14 times higher than those at RHIC, the maximum of the Λ/K0S ratio is expected to shift towards higher pT because of increased partonic radial flow [4,5]. In contrast, the Λ/K0S ratio measured in elementary pp collisions should not change significantly with the center-of-mass energy, since particle production there is presumably dominated by fragmentation processes.
The relative contribution of different hadronization mechanisms changes with hadron momentum. While at intermediate p T recombination might be dominating, fragmentation could take over at higher p T , depending on the underlying momentum distributions of the quarks. For this reason it is important to identify baryons and mesons in a wide momentum range. The topological decay reconstruction of K 0 S and Λ provides an opportunity to extend the baryon and meson identification from low to high transverse momenta, which can not easily be achieved using other particle identification methods without introducing additional systematic effects.
In this Letter we present the K0S and Λ transverse-momentum spectra and the Λ/K0S ratios from Pb-Pb collisions at √s_NN = 2.76 TeV, recorded during the November 2010 heavy-ion run of the LHC. The pT dependence of the Λ/K0S ratios is compared with pp results obtained at √s = 0.9 and 7 TeV, which bracket the Pb-Pb measurements in energy.
A description of the ALICE apparatus can be found in [9]. For the analysis presented here, we used the Time Projection Chamber (TPC) and the Inner Tracking System (ITS) to reconstruct charged particle tracks within the pseudo-rapidity interval of |η| < 0.9. Particle momenta were determined from the track curvature in a magnetic field of 0.5 T. The two VZERO scintillator counters, covering pseudorapidity ranges of 2.8 < η < 5.1 (VZERO-A) and −3.7 < η < −1.7 (VZERO-C), provided a signal proportional to the number of charged particles in these acceptance regions. The VZERO detectors together with the two innermost Silicon Pixel Detector (SPD) layers of the ITS, positioned at radii of 3.9 and 7.6 cm (acceptance |η| < 2.0 and |η| < 1.4 respectively), were used as an interaction trigger.
To select a pure sample of hadronic interactions, only events with at least one particle hit in each of the three trigger detectors (SPD, VZERO-A and VZERO-C) were accepted offline. The selected events were required to have reconstructed primary vertices with a position along the beam direction within ±10 cm of the nominal center of the detector, to ensure a uniform acceptance in pseudo-rapidity for the particles under study. The events were then classified according to the collision centrality, based on the sum of the amplitudes in the VZERO counters fitted with a Glauber model description of the collisions, as discussed in [10]. After these selections, we retained for the final analysis 13 million events in the collision centrality range from 0 to 90% of the nuclear cross-section.
The weakly decaying neutral hadrons (K 0 S and Λ) were reconstructed using their distinctive V-shaped decay topology in the channels (and branching ratios) K 0 S → π + π − (69.2%) and Λ→ pπ − (63.9%) [11]. The reconstruction method forms so-called V0 decay candidates and the details are described in [12]. Because of the large combinatorial background in Pb-Pb collisions, a number of topological selections had to be more restrictive than those used in the pp analysis. In particular, the cuts on the minimum distance of closest approach between the V0 decay products and on the minimum cosine of the V0 pointing angle (the angle between the line connecting the primary and V0 vertices and the V0 momentum vector) [12] were changed to one standard deviation and to 0.998, respectively. In addition, we retained only the V0 candidates reconstructed in a rapidity window of |y| < 0.5, with their decay-product tracks within the acceptance window |η| < 0.8. To further suppress the background, we kept only V0 candidates satisfying the cut on the proper decay length l T ·m/p T < 3 cτ (4 cτ), where l T and m are the V0 transverse decay length and nominal Λ (K 0 S ) mass [11], and cτ is 7.89 cm (2.68 cm) for Λ (K 0 S ) [11]. For the Λ candidates with p T < 1.2 GeV/c, a conservative three-standard-deviation particle-identification cut on the difference between the specific energy loss (dE/dx) measured in the TPC and the expected energy loss as defined by a momentum-dependent parameterization of the Bethe-Bloch curve was applied for the proton decay-product tracks. To reduce the contamination of Λ reconstructed as K 0 S , an additional selection was applied in the Armenteros-Podolanski variables [13] of K 0 S candidates, rejecting candidates with p arm T < 0.2 × |α arm |. Here, p arm T is the projection of the positively (or negatively) charged decay-product momentum on the plane perpendicular to the V0 momentum. The decay asymmetry parameter α arm is defined as α arm = (p + − p − )/(p + + p − ), where p + (p − ) is the projection of the positively (negatively) charged decay-product momentum on the momentum of the V0. The minimal radius of the fiducial volume of the secondary vertex reconstruction was chosen to be 5 cm to minimize systematic effects introduced by efficiency corrections. It was verified that the decay-length distributions reconstructed within this volume were exponential and agreed with the cτ values given in the literature [11].
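The proper-decay-length and Armenteros-Podolanski selections quoted above are plain kinematic cuts on each V0 candidate. A hedged sketch of their application (invented candidate values and function names, not ALICE analysis code):

```python
def passes_v0_cuts(lt_cm, m_gev, pt_gevc, pt_arm, alpha_arm,
                   ctau_cm, n_ctau, is_k0s):
    """Apply the proper-decay-length cut l_T * m / p_T < n * c*tau
    and, for K0S candidates only, the Armenteros-Podolanski cut
    rejecting p_T^arm < 0.2 * |alpha_arm|."""
    proper_length = lt_cm * m_gev / pt_gevc
    if proper_length >= n_ctau * ctau_cm:
        return False
    if is_k0s and pt_arm < 0.2 * abs(alpha_arm):
        return False
    return True

# Hypothetical K0S candidate: transverse decay length 4 cm, pT = 1.5 GeV/c.
print(passes_v0_cuts(lt_cm=4.0, m_gev=0.4976, pt_gevc=1.5,
                     pt_arm=0.18, alpha_arm=0.3,
                     ctau_cm=2.68, n_ctau=4, is_k0s=True))
```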
The raw yield in each pT bin was extracted from the invariant-mass distribution obtained for this momentum bin. Examples of such distributions are shown in Fig. 1. The raw yield was calculated by subtracting a fit to the background from the total number of V0 candidates in the peak region. This region was ±5σ for K0S, and ±(3.5σ + 2 MeV/c²) for Λ (to better account for tails in the mass distribution at low pT). The σ was obtained by a Gaussian fit to the mass peaks. The background was determined by fitting polynomials of first or second order to side-band regions to the left and right of the peak region. The overall reconstruction efficiency corrections were extracted from a procedure based on HIJING events [14], using GEANT3 [15] to transport the simulated particles, followed by a full calculation of the detector responses and reconstruction done with the ALICE simulation and reconstruction framework [16]. The estimated efficiency included the geometrical acceptance of the detectors, the track reconstruction efficiency, the efficiency of the applied topological selection cuts, and the branching ratios of the V0 decays. The typical efficiencies for both particles were about 30% for pT > 4 GeV/c, dropping to 0 at pT ∼ 0.3 GeV/c. The efficiencies did not change with the event centrality for pT above a few GeV/c. However, at lower pT, they were found to depend on the event centrality: for Λ at pT < 0.9 GeV/c, the difference is about a factor of 2 between the 0-5% and 80-90% centrality intervals. This was because the distributions of the topological variables used in the selections changed with centrality, whereas the corresponding threshold cut values were kept constant. The effect was well reproduced by the Monte Carlo simulations. The final momentum spectra were therefore corrected in each centrality bin separately.
The spectra of Λ were in addition corrected for the feed-down contribution coming from the weak decays of Ξ − and Ξ 0 . For this purpose, a two-dimensional response matrix, correlating the p T of the detected decay Λ with the p T of the decayed Ξ, was generated from Monte-Carlo simulations. By normalizing this matrix to the measured Ξ − spectra [17], the distributions of the feed-down Λ were determined and subtracted from the inclusive Λ spectra. The phase space distribution and total yield for the Ξ 0 were assumed to be the same as for the Ξ − . The feed-down correction thus obtained was found to be a smooth function of p T with a maximum of about 23% at p T ∼ 1 GeV/c and monotonically decreasing to 0% at p T > 12 GeV/c. As a function of centrality, this correction changed by only a few per cent.
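Schematically, the feed-down correction folds the measured Ξ− spectrum with the normalized response matrix and subtracts the result from the inclusive Λ spectrum; the Ξ0 contribution is included by doubling, following the equal-yield assumption above. A minimal sketch with invented numbers (not the ALICE matrices or spectra):

```python
import numpy as np

# Hypothetical 3-bin example: rows = pT bins of the decay Lambda,
# columns = pT bins of the mother Xi. Each column distributes one
# reconstructed Xi's decay Lambdas over the Lambda pT bins.
response = np.array([[0.6, 0.2, 0.0],
                     [0.3, 0.5, 0.2],
                     [0.1, 0.3, 0.8]])

xi_yield = np.array([40.0, 25.0, 10.0])        # measured Xi- spectrum (a.u.)
xi0_factor = 2.0                               # assume equal Xi0 and Xi- yields

feeddown = xi0_factor * response @ xi_yield    # Lambdas from Xi- and Xi0
inclusive_lambda = np.array([300.0, 260.0, 150.0])
primary_lambda = inclusive_lambda - feeddown   # feed-down-subtracted spectrum

print(primary_lambda, feeddown / inclusive_lambda)  # spectrum, correction size
```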
Since the ratio Ξ − /Ω − in Pb-Pb collisions at √ s NN = 2.76 TeV was measured to be about 6 [18], and taking into account that the branching ratio Ω − → ΛK − is 67.8% [11], the feed-down contribution from decays of Ω − baryons would be about 1%, which is negligible compared with other sources of uncertainty (see below). We also did not correct the Λ spectra for the feed-down from the non-weak decays of Σ 0 and of the Σ(1385) family.
The fraction of Λ's produced in hadronic interactions with the detector material was estimated using the detailed Monte Carlo simulations mentioned above. Since this fraction was found to be less than 1%, it was neglected.
The following main sources of systematic uncertainty were considered: raw yield extraction, feed-down, efficiency corrections, and the uncertainty on the amount of crossed material. These were added in quadrature to yield the overall systematic uncertainty on the p T spectra for all centralities.
The systematic uncertainties on the raw yields were estimated by using different functional shapes for the background and by varying the fitting range. Over the considered momentum range, the obtained raw yields varied within 3% for K 0 S and 4-7% for Λ. As a measure for the systematic uncertainty of the feed-down correction, we used the spread of the values determined for different centrality ranges with respect to the feed-down correction estimated for minimum bias events. This deviation was found to be about 5% relative to the overall Λ yield.
The systematic uncertainty associated with the efficiency correction was evaluated by varying one-by-one the topological, track-selection and PID cuts. The cut variations were chosen such that the extracted uncorrected yield of the K 0 S and Λ would change by 10%. To measure the systematic uncertainty related to each cut, we used as a reference the corrected spectrum obtained with the nominal cut values. For Λ, the feed-down correction was re-evaluated and taken into account for every variation of the cut on the cosine of the pointing angle. The overall p T -dependent systematic uncertainty associated with the efficiency correction was then estimated by choosing the maximal (over all cut variations) deviation between varied and nominal spectra values obtained in each momentum bin. For the momentum range considered, this systematic uncertainty was determined to be 4-6% for both K 0 S and Λ. The systematic uncertainty introduced because of possible imperfections in the description of the detector material in the simulations was estimated in [12] and amounted to 1.1-1.5% for K 0 S and 1.6-3.4% for Λ. Since the systematic uncertainties related to the efficiency correction are correlated for the Λ and K 0 S spectra, they partially cancel in the Λ/K 0 S ratios. These uncertainties were evaluated by dividing Λ and K 0 S spectra obtained with the same cut variations and were found to be half the size of those that would be obtained if the uncertainties of the Λ and K 0 S spectra were assumed to be uncorrelated. Altogether, over the considered momentum range, the maximal systematic uncertainty for the measured Λ/K 0 S ratios was found to be about 10%.
[Fig. 3 caption, fragment (left-panel text missing): ... [20]. Right: Selected Λ/K 0 S ratios as a function of p T compared with Λ/K 0 S and Λ̄/K 0 S ratios measured in Au-Au collisions at √ s NN = 200 GeV [21]. The solid, dashed and dot-dashed lines show the corresponding ratios from a hydrodynamical model [22,23,24], a recombination model [25] and the EPOS model [26], respectively.]
The transverse-momentum spectra of K 0 S obtained in different centrality intervals were compared with the spectra of charged kaons also measured by ALICE [27]. The two sets of spectra agree within the systematic uncertainties.
The corrected p T spectra are shown on a logarithmic scale in Fig. 2 (left). The spectra were fitted using the blast-wave parameterization described in [19]. The resulting curves are superimposed in Fig. 2 (right), with a linear scale and for a restricted momentum range, to emphasize the low-p T region. The fit range in p T was from the lowest measured point up to 2.5 GeV/c (1.6 GeV/c) for Λ (K 0 S ). The fitting functions were used to extrapolate the spectra to zero p T and extract the integrated particle yields dN/dy. The results are given in Table 1. The systematic uncertainties of the integrated yields were determined by shifting the data points of the spectra simultaneously within their individual systematic uncertainties and reapplying the fitting and integration procedure. In addition, an extrapolation uncertainty was estimated by using alternative (polynomial, exponential and Lévy-Tsallis [28,29]) functions fitted to the low-momentum part of the spectrum, and the corresponding difference in the obtained values was added in quadrature.
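The extrapolation-plus-integration step can be sketched as follows. For simplicity, the example below uses a Lévy-Tsallis function (one of the alternative shapes mentioned above) instead of the blast-wave parameterization, and a toy spectrum instead of the measured data:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

def levy_tsallis(pt, dndy, C, n, m0=0.497611):
    """Levy-Tsallis parameterization of dN/(dpT dy) for a particle of mass m0."""
    mt = np.sqrt(pt**2 + m0**2)
    norm = dndy * (n - 1) * (n - 2) / (n * C * (n * C + m0 * (n - 2)))
    return norm * pt * (1 + (mt - m0) / (n * C)) ** (-n)

def integrated_yield(pt_centers, spectrum, pt_min):
    """Sum the measured spectrum and add the fitted extrapolation below pt_min."""
    popt, _ = curve_fit(levy_tsallis, pt_centers, spectrum,
                        p0=[10., 0.3, 10.],
                        bounds=([0., 0.01, 2.1], [np.inf, 10., 100.]))
    widths = np.gradient(pt_centers)              # approximate bin widths
    measured = np.sum(spectrum * widths)
    extrapolated, _ = quad(levy_tsallis, 0.0, pt_min, args=tuple(popt))
    total = measured + extrapolated
    return total, extrapolated / total

# toy K0s-like spectrum: bin centres (GeV/c) and dN/(dpT dy) values
pt = np.arange(0.45, 6.0, 0.1)
toy = levy_tsallis(pt, 30., 0.35, 8.)
total, extrap_frac = integrated_yield(pt, toy, pt_min=pt[0] - 0.05)
print(round(total, 2), round(extrap_frac, 3))
```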
The p T dependence of the Λ/K 0 S ratios, formed for each centrality interval by a division of the respective measured p T spectra, is presented in Fig. 3 (left panel). For comparison, the same ratios measured in minimum bias pp collisions at √ s = 0.9 [12] and 7 TeV [20] are plotted as well.
[Table 1 caption: Integrated yields, dN/dy, for Λ and K 0 S with uncertainties, which are dominantly systematic. A blast-wave fit is used to extrapolate to zero p T ; fractions of the extrapolated yield are specified. Ratios of integrated yields, Λ/K 0 S , for each centrality bin are shown with the total uncertainty, mainly from systematic sources.]
The Λ/K 0 S ratios observed in pp events at √ s = 0.9 and 7 TeV agree within uncertainties over the presented p T range, and they bound in energy the Pb-Pb results reported here. The ratio measured in the most peripheral Pb-Pb collisions is compatible with the pp measurement, where there is a maximum of about 0.55 at p T ∼ 2 GeV/c. As the centrality of the Pb-Pb collisions increases, the maximum value of the ratio also increases and its position shifts towards higher momenta. The ratio peaks at a value of about 1.6 at p T ∼ 3.2 GeV/c for the most central Pb-Pb collisions. This observation may be contrasted to the ratio of the integrated Λ and K 0 S yields which does not change with centrality (Table 1). At momenta above p T ∼ 7 GeV/c, the Λ/K 0 S ratio is independent of collision centrality and p T , within the uncertainties, and compatible with that measured in pp events.
A comparison with similar measurements performed by the STAR Collaboration in Au-Au collisions at √ s NN = 200 GeV is shown in Fig. 3 (right panel). Since the anti-baryon-to-baryon ratio at the LHC is consistent with unity for all transverse momenta [30,31], the Λ/K 0 S and Λ̄/K 0 S ratios are identical and we show only the former. The STAR Λ/K 0 S and Λ̄/K 0 S ratios shown are constructed by dividing the corresponding p T spectra taken from [21]. The quoted 15% p T -independent feed-down contribution was subtracted from the Λ and Λ̄ spectra. The shapes of the Λ/K 0 S and Λ̄/K 0 S distributions are the same, but they are offset by about 20%, with peak values around 10% higher and lower, respectively, than the ALICE data. This comparison between LHC and RHIC data shows that the position of the maximum shifts towards higher transverse momenta as the beam energy increases. It is also seen that the baryon enhancement in central nucleus-nucleus collisions at the LHC decreases less rapidly with p T , and, at p T ∼ 6 GeV/c, it is a factor of two higher compared with that at RHIC. Also shown in the right panel of Fig. 3 is a hydrodynamical model calculation [22,23,24] for the most central collisions, which describes the Λ/K 0 S ratio up to p T of about 2 GeV/c rather well, but for higher p T progressively deviates from the data. Such a deviation between the calculations and the measurements is already seen in the comparison of the p T spectra [27]. The agreement for other charged particles is improved when the hydrodynamical calculations are coupled to a final-state re-scattering model [25]. Therefore it would be interesting to compare these data and their centrality evolution with such a treatment. For higher p T , a recombination model calculation [5] is presented (Fig. 3, right panel). It approximately reproduces the shape, but overestimates the baryon enhancement by about 15%. In the right panel of Fig. 3, we also show a comparison of the EPOS model calculations [26] with the current data. This model takes into account the interaction between jets and the hydrodynamically expanding medium and arrives at a good description of the data.
In conclusion, we note that the excess of baryons at intermediate p T , which exhibits such a strong centrality dependence in Pb-Pb collisions at √ s NN = 2.76 TeV, does not reveal itself in pp collisions at center-of-mass energies up to √ s = 7 TeV. At p T > 7 GeV/c, the Λ/K 0 S ratios measured in Pb-Pb events for different centralities all merge together and agree with the dependence observed in pp collisions. This agreement between collision systems suggests that the relative fragmentation into Λ and K 0 S hadrons at high p T , even in central collisions, is vacuum-like and not modified by the medium. In the future, it would be interesting to extend the measurements to higher transverse momenta to see whether the nuclear modification factor behaves in the same way as the one for charged particles [32].
As the collision energy and centrality increase, the maximum of the Λ(Λ̄)/K 0 S ratio shifts towards higher p T , which is in qualitative agreement with the effect of increased radial flow, as predicted in [4]. The ratio of integrated Λ and K 0 S yields does not, within uncertainties, change with centrality and is equal to that measured in pp collisions at 0.9 and 7 TeV. This suggests that the baryon enhancement at intermediate p T is predominantly due to a re-distribution of baryons and mesons over the momentum range rather than due to an additional baryon production channel progressively opening up in more central heavy-ion collisions. This centrality dependence may be challenging for theoretical models which try to disentangle the quark-recombination contributions from the radial-flow effect and which, in addition, will need to describe other particle spectra and their p T -dependent ratios.
The width of the baryon enhancement peak increases with the beam energy. However, contrary to expectations [7], the effect at the LHC is still restricted to an intermediate-momentum range and is not observed at high p T . This puts constraints on parameters of particle production models involving coalescence of quarks generated in hard parton interactions [33].
Qualitatively, the baryon enhancement presented here as the p T dependence of the Λ/K 0 S ratios is described in the low-p T region (below 2 GeV/c) by collective hydrodynamical radial flow. In the high-p T region (above 7-8 GeV/c), it is very similar to the pp results, indicating that there it is dominated by hard processes and fragmentation. Our data provide evidence for the need to include the effect of the hydrodynamical expansion of the medium formed in Pb-Pb collisions on the mechanisms of fragmentation and hadronization.
local public transport planning in poland – geographical input
This paper concentrates on the geographical contribution to public transport planning in Poland with a special regard to transport services of general interest. The authors draw on the newly enacted Polish legislative acts concerning public transportation: the Act of 16 December 2010 on public transport and the Regulation of 25 May 2011 on the detailed scope of sustainable development plan of public transport. According to these legal acts, authorities of the largest local and regional governments in Poland are obliged to prepare public transport plans by March 2014. In order to provide useful guidelines that would ameliorate the preparation of public transportation plans by these authorities, the authors demonstrate some effective examples of geographical analyses utilising sample cases of a medium-sized city (Gdynia) and a medium-sized poviat (Krosno poviat). The authors explain how to delineate the network of public transport of general interest in these administrative units along with route categorisation. Additionally, some principles of the city area division into public transportation sectors – a spatial unit facilitating public transport planning – are presented on the example of Gdynia.
introduction
Pursuant to the Act of 16 December 2010 on public transport, selected local governments in Poland must prepare and follow sustainable development plans of public transport (public transport plans), which institute numerous regulations to help transport organisers run, manage and finance transport services of general interest. The reasons behind adopting the new law include insufficient and/or ineffective provision of public transport services in many Polish poviats and cities. These disadvantages have been widely presented in the literature as negative by-products of liberalisation on the public transport market (Bogdanowicz, 1996; Menes, 2001; Bergel, 2008). The term 'insufficient' denotes poor frequencies, deficiencies of bus and railway runs during off-peak hours, and inadequate routes as compared to the needs and expectations of the local population. The term 'ineffective' means service provision which is both unreliable and overburdened by high demand. As a result, commercial carriers take over local markets, causing chaotic development of suburban bus transport. Polish geographers have frequently investigated the changes of the local public transport markets (Dej, Kołoś, 2009; Kretowicz, 2010).
A public transport plan is expected to maximise the utilisation of bus, tram and train services by improving the frequency of services and the spatial coverage of public transport systems. The above legislation is most pertinent to the following types of areas: those of high public transport needs (uncoordinated operation of carriers, lack of intermodal integration, overlapping links and routes, inadequate stop locations, high variation of needs and preferences), and those of low public transport needs (suffering from deficiencies or absence of bus/railway links as a result of low profitability) (Chaberko, Kretowicz, 2011). Such areas are most predisposed for public transport of general interest, i.e. the non-commercial segment of public transport services organised and financed by a public body.
The demand for public transport services is shaped by the intensity and patterns of commuting, school travels and the distribution of services versus places of residence. In Poland, 2.3 million people commute on a daily basis to work located beyond their gmina of residence (Dojazdy do pracy..., 2010). The types of main trip attractions are incorporated into the Regulation of 25 May 2011 on the detailed scope of sustainable development plan of public transport and reflected by the term 'public utility institutions'. Considering the social structure of underserved areas, transport services of general interest should above all concern those with a high proportion of economically disadvantaged population who cannot afford to purchase and utilise cars, and persons with reduced mobility (the disabled, the elderly, etc.). The most disadvantaged localities include those located peripherally, i.e. far from major roads and railways, often near voivodeship and poviat administrative borders. Besides, service provision gaps frequently occur in regions of dispersed settlement and numerous small localities.
The main goal of this paper is to provide practical guidelines for local governments in order to support public transport planning on a local level with the aid of geographical analyses. First, the authors review the definitions of public transport services of general interest in the Polish law. Next, they propound fundamental rules to divide an urban area into public transport sectors. Finally, they demonstrate how to delineate the network of transport services of general interest in poviats and cities on the examples of the city of Gdynia and the Krosno poviat. All the above issues remain in keeping with geographical and spatial analyses, which are necessary for public transport planning on a local and regional level alongside the transport modelling techniques.
public transport network of general interest
In introducing services of general interest, the main objective of the European Union has been to direct structural policies to prevent vulnerable social groups or regions from being excluded from access to essential services (White Paper on services of general interest, 2004). These services are universal, as the authorities must deliver them irrespective of low profitability on the free market. For this reason, accessibility, affordability and sufficient quality remain at the core of their provision. The main goal of public transport organisers regarding the delivery of services of general interest is to counteract social exclusion and devise a policy to tackle its causes. L. Pickup and G. Giuliano (2005) suggest three factors among the causes of social exclusion where transport policy has a clear influence: poor access to services, hopelessness resulting from health or disability problems exacerbated by transport barriers, and polarised or fragmented communities affected by mobility disadvantage. According to the fundamental legal acts concerning local governments in Poland, gmina, poviat and voivodeship authorities are responsible for satisfying collective community needs in the field of public transport. Pursuant to the Act of 20 December 1996 on public utilities management, these needs must be satisfied constantly and incessantly by services whose provision is to be ensured by these authorities. Having been defined in this fashion, services of general interest also incorporate regional and local public transport. Transport services of general interest are defined in the Act of 16 December 2010 on public transport as services available universally and delivered by a publicly-subsidised transport operator in order to constantly and incessantly satisfy population needs. The Lisbon Treaty, signed in 2007 by the 27 EU Member States, recognises so-called services of general economic interest (SGEI). These services are delivered obligatorily with respect to equality and solidarity and are provided as closely as possible to the needs of the users. It seems that the most important hallmarks of services of general interest are mentioned only in the latter document: affordability (regardless of service profitability) and mandatory provision, especially when entities operating on the free market fail to fulfil community service. These characteristics indicate that public transport is principally oriented towards local inhabitants, while private transport rarely operates in unprofitable conditions and avoids serving certain areas and the most disadvantaged groups of the population.
Concerning the above, the organisation and financing of public transport by a public body is required to satisfy the needs of all inhabitants sufficiently and universally. Nevertheless, public funds must not be equivalent to economic benefits for carriers, as stated in Article 107 of the Treaty on the Functioning of the European Union. Such aid granted by the state or local government would disturb free competition. Simultaneously, the state financial support cannot be selective, i.e. directed to one privileged carrier. The only exception that proves the rule concerns refunds paid to the operator to offset losses incurred by providing unprofitable services. This refund not only covers the net costs accumulated by the carrier through discharging public service obligations, but also takes into account the revenue generated thereby and a reasonable income (The Regulation (EC) No. 1370/2007..., 2007). All of the above regulations have been adopted by the Act of 16 December 2010 on public transport and operate as a common law in Poland.
public transport services of general interest in cities
Although the domain of public transport planning intertwines with the urban transport modelling techniques, it concentrates on different goals. The main product of the urban transport engineering is a transport model constructed in order to facilitate vehicle and passenger flows, harmonise individual and public transport and optimise costs and benefits of the transport system by travel demand forecasting; hence the focus is on traffic management embracing all users. In this paper, attention is given to public transport, i.e. one part of mass transport (excluding taxi, school transport and worker transport) that comes under the Act of 16 December 2010 on public transport (Krych, Kaczkowski, 2010). The authors are particularly interested in public transport services of general interest as a non-commercial segment of public transport. According to the above Act and the Regulation of 25 May 2011 on the detailed scope of sustainable development plan of public transport the foremost element of this plan includes a transport network to be served by carriers delivering public transport services of general interest. Originally, these services are to be available universally, being delivered constantly and incessantly by a public transport operator subordinate to a public transport organiser. In large cities public transport has been operating according to this model since the early 1990s, but presently this obligation concerns all municipalities, especially those where authorities wish to run transport services of general interest.
For the purposes of delivering services of general interest it is crucial to determine the range and shape of a transport network in accordance with the needs of all inhabitants. Assumedly, this network should embrace all residential areas so that, for the great majority of the population, the walking distance from the residence to bus/tram/train stops remains convenient: under 300-400 metres, and in special cases under 500 metres, in urban areas (Majewski, Beim, 2008), and 750-1,000 metres in rural areas (Fitzpatrick et al., 1996). The shape of an urban transport network of general interest ought to facilitate movements between city districts and the city centre along the routes served by the operator of public transport, whereas the shape of a poviat public transport network is to increase the access to poviat and gmina seats for all of the inhabitants.
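A simple way to check such a coverage criterion is to compute the share of the population whose nearest stop lies within the assumed walking distance. The sketch below (Python) does this with straight-line distances on projected coordinates and entirely illustrative residence and stop locations; a real analysis would rather use network (street) distances:

```python
import numpy as np

def coverage_share(residences, populations, stops, max_dist_m=400.0):
    """Share of population whose nearest stop lies within max_dist_m.

    residences, stops : arrays of (x, y) coordinates in metres (projected CRS)
    populations       : number of inhabitants assigned to each residence point
    """
    residences = np.asarray(residences, dtype=float)
    stops = np.asarray(stops, dtype=float)
    populations = np.asarray(populations, dtype=float)
    # distance from every residence point to every stop
    dists = np.linalg.norm(residences[:, None, :] - stops[None, :, :], axis=2)
    covered = dists.min(axis=1) <= max_dist_m
    return populations[covered].sum() / populations.sum()

# illustrative residence blocks (x, y in metres), their populations and stops
blocks = [(0, 0), (350, 120), (900, 40), (1500, 700)]
people = [120, 300, 80, 60]
stops = [(100, 0), (1000, 100)]
print(f"{coverage_share(blocks, people, stops):.0%} of residents within 400 m")
```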
Although the term 'the network of public transport of general interest' is fairly new in the legal system of Poland, such networks have existed in cities for years, demarcated by the lines served by urban public transport carrier(s). Hence, the cities that already run regular urban public transport would not need to delineate a new network. It is assumed that this network is pre-established by existing bus, tram and trolleybus lines. However, the preparation of a public transport plan creates a good opportunity to analyse and evaluate this network and, if necessary, reconstruct it or make the necessary changes adjusting its shape to the rapidly changing urban environment. It is particularly essential to examine whether some distant urban areas or city hinterlands remain underserved by public transport regardless of high local demand. Such a situation is common, as new residential areas or housing estates, new workplaces or large service areas are presently being located far from the city centres. Moreover, it is not enough to determine the range and shape of a transport network of general interest, as the vehicles must run with a frequency well adjusted to the local demand in order to deliver transport services effectively. Therefore, the choices of minimum and maximum frequencies, i.e. the construction of timetables for daily and weekly peak and off-peak times, are imperative for the optimal performance of public transport.
public transport sectors
The terms 'traffic analysis zone' or 'transport analysis zone' (TAZ) as well as 'traffic analysis district' (TAD) have been coined by transport planners as the basic geographic units for the purposes of transport forecasting and modelling. A group of combined TAZs creates a traffic analysis district. The most accurate definition of TAZs and methods of their delineation are proposed by the US National Cooperative Highway Research Program (1998). This definition reads: Geographic areas dividing the planning region into relatively similar areas of land use and land activity. Zones represent the origins and destinations of travel activity within the region… every household, place of employment, shopping center, and other activity… are first aggregated into zones and then further simplified into a single node called a centroid.
In the USA, these zones are created by combining census blocks using a set of rational criteria so that socio-economic data for each zone are always available. Hence, a traffic analysis zone (TAZ) is defined by the US Census Bureau as a special area delineated by state and/or local transport officials for tabulating traffic-related data, especially journey-to-work and place-of-work statistics (US Census Bureau, 2012). TAZs are utilised in the popular four-step transport model procedure, which includes trip generation (travels generated by every zone), trip distribution (travels from every zone to every other zone), mode choice or modal split (for each pair of zones) and route assignment (of trips between the origin zone and destination zone by a particular mode to a route). The US National Cooperative Highway Research Program (2012) has recently issued a report that comprehensively guides transport planners through the whole process. This institution argues that despite numerous recent extensions, the traditional four-step model will continue to be used for many years, especially in small- and medium-sized urban areas.
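To make the zone-based logic concrete, the sketch below implements the trip-distribution step of the four-step procedure as a doubly-constrained gravity model for three illustrative TAZs; the productions, attractions, travel times and the impedance parameter are assumptions chosen only for demonstration:

```python
import numpy as np

def gravity_distribution(productions, attractions, cost, beta=0.1, iters=50):
    """Doubly-constrained gravity model: trips T[i, j] between zones i and j."""
    deterrence = np.exp(-beta * cost)          # impedance function of travel cost
    trips = np.outer(productions, attractions) * deterrence
    for _ in range(iters):                     # iterative proportional fitting
        trips *= (productions / trips.sum(axis=1))[:, None]
        trips *= (attractions / trips.sum(axis=0))[None, :]
    return trips

# three illustrative TAZs: trip productions, attractions and travel times (min)
P = np.array([500., 300., 200.])
A = np.array([400., 400., 200.])
cost = np.array([[5., 15., 25.],
                 [15., 5., 10.],
                 [25., 10., 5.]])
print(np.round(gravity_distribution(P, A, cost), 1))
```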
There have been numerous approaches to divide regions into traffic analysis zones in Poland ranging from manual (Analiza ruchu w gminie Łomianki..., 2008) to the GPS and GIS-based software tools (Celiński et al., 2009;Szarata, 2010). It is desirable to utilise a large number of traffic analysis zones in transport planning and engineering in order to increase precision of the modelling procedure (Celiński et al., 2009). This number is cost-dependent and thus changeable -for example, during the 2005 research, the city of Warsaw was divided into 774 traffic analysis zones and thousands of passengers and households were interviewed in order to conduct proper analyses (Warszawskie badanie ruchu, 2005). The Upper Silesian region was divided into 185 traffic analysis zones during the latest comprehensive traffic analyses (Karoń et al., 2010). The transport modelling techniques have been also used in smaller spatial scales such as a site level in order to detect changes in passenger flows near large commercial investments (Szarata, 2010).
Geographical reference to traffic analysis zones requires utilising geographical criteria in their delineation. Most TAZs in Poland are outlined drawing on demographic and land use data. The authors suggest the public transport sector as a major unit for the delineation of a public transport network of general interest. This unit would facilitate rational local public transport planning according to the Act of 16 December 2010 on public transport. Similar sectors have been used recently in transport strategies and plans designed by the main towns of regions, e.g. Koszalin (2007), Olsztyn (2010) and Piotrków Trybunalski (2012), although no criteria for their demarcation have been mentioned. The main goal of the public transport sector is to enhance precision and transfer the planning of public transport of general interest to a lower level of administrative division. The authors utilise geographical criteria in their delineation alongside the TAZ, which remains the major unit in traffic forecasts and modelling. The latter analyses are also mandatory for a public transport plan (Jastrzębski, 2009). In keeping with the main aim of this paper, the authors underline practical solutions and suggest simple methods within the financial and organisational reach of local governments.
A complex spatial structure of large cities and numerous public transport routes make it impossible to plan public transport of general interest in a city without dividing it into smaller units. This division should be performed according to such criteria as land use and concentration of potential demand. Importantly, public transport sectors ought to be as homogenous as possible and encompass areas with passengers travelling towards one place, node or main artery. The boundaries of public transport sectors must correspond to the functions of particular districts (e.g. residential, commercial, industrial) and existing obstacles (e.g. natural barriers such as rivers, hills, lakes). Unfortunately, it is hard to delineate public transport sectors based on these criteria, as the shape of public transport sectors does not correspond to the existing administrative borders (census blocks, districts, housing estates etc.). Worse still, only administrative areas offer the statistical data necessary to conduct further analyses (e.g. demographic and social data). To sum up, the main criteria to delineate public transport sectors include: (a) the shape of the transport network – connected with the main routes (roads, arteries, railways) which determine passenger flows; (b) functional criteria – connected with districts of various functions (city centre, commercial district, industrial district, residential areas); (c) morphologic criteria – connected with districts of diverse development and land use (densely populated city centre, large housing estates, old rural areas incorporated into the city, suburbia); (d) natural and man-made barriers – rivers, hills, mountain ranges, railways, ports; (e) administrative criteria – public transport sectors must correspond to the administrative division (districts, census tracts, housing estates, other units).
A few general rules regarding the above criteria can be formulated. The boundaries of public transport sectors must not coincide with the main transport arteries, nor run perpendicular to natural or man-made barriers. Although the first four criteria are most suitable, in practice the administrative criterion is the most utilised. This situation results from the availability of demographic, social and economic data collected for administrative units. Therefore, traffic analysis zones are often coincident with census tracts or urban units, and the boundaries of public transport sectors correspond to district boundaries. When demarcated in this manner, public transport sectors rarely align optimally with the existing public transport routes.
fig. 1. Division of Gdynia into public transport sectors
Source: Authors' own work
Figure 1 presents a sample division of the city of Gdynia into seven public transport sectors. The functional city centre is distinguished as the main service area where the major routes meet (A). The public transport sectors B and C encompass densely built-up residential and industrial areas. These sectors are delineated according to the course of the main routes from the city centre and the Rapid Urban Railway – the most important route in the entire urban area of the Tri-City (Gdańsk, Gdynia, Sopot). Residential districts with multi-family housing located to the north of the city centre and isolated by the port basin and harbour form the public transport sector D. The sector E embraces large housing estates distant from the city centre but linked with the central area by the main artery through the sector C. The sector F is the least uniform of all as it covers a large housing estate located nearby the city centre, housing subdivisions and former rural areas located in the western part of the city. Yet still, all this area is served by one main road leading to the city centre. The boundary between the sectors F and C is delineated along the railway route – this does not contradict the above rules as the route remains of little importance for public transport in Gdynia and should be regarded rather as a barrier than a major conduit. There are only two smaller housing estates in the sector G – the majority of travels from this sector occur along one road leading mostly through the sector B.
categorisation of public transport routes in Gdynia
The main goal of a public transport plan is not to design or redesign transport routes and timetables in detail. However, this document should determine universal standards of transport services delivered within the network of general interest. These standards include, e.g., the frequencies of bus, tram and train runs and other parameters. In order to ascribe appropriate frequencies to particular sections of routes, these sections need to be categorised. Each category would have different frequency standards (measured in minutes), separately for peak hours, off-peak hours and weekends. Other parameters that may come in handy include the utilisation of low-floor means of public transport. Route categorisation is not a mandatory element of a public transport plan; however, it seems to be a simple, useful and practical solution for municipal authorities while making decisions on the spatial distribution of public transport services of general interest.
The first stage of planning requires appropriate data gathered for each public transport sector (e.g. the socio-demographic structure of the local population, spatial structure, main functions, land use, public transport routes that run through each sector and current transport provision). The main traffic generators (trip attractions), such as large job providers, service areas, schools and universities, hospitals and shopping centres, should also be taken into account. As one of the principal objectives incorporated in the Act of 16 December 2010 on public transport is to satisfy the needs of the disabled, it is mandatory to approximate their number in each sector and determine common destinations of their journeys.
The main part of planning a network of transport services of general interest embraces the minimum and maximum frequencies of bus, tram and train runs along different routes (sections of routes). This can only be done by means of passenger flow counts as well as in-vehicle and household surveys conducted among the local population (Travel Survey Manual, 1996). The results of this research enable transport planners to attribute accurate frequencies to each category. It seems practical to utilise three to four categories. The first category encompasses the main public transport routes in a sector that link the largest housing estates with the city centre or other important travel destinations located outside this sector. The second category embraces supplementary routes (crucial for inter-district transport), other routes to the city centre, or alternatively feeder routes. The third and fourth categories include routes significant only for one sector, linking the least populous areas with the main routes, interchange nodes or district centres. The sample characteristics for further route categorisation within a transport network of general interest are presented in Table 1 on the example of public transport in the sector D in Gdynia. Obviously, this table may be extended as more information is gathered, e.g. from the public running records or other sources.
The most significant elements of the spatial development of Gdynia regarding public transport in the sector D are large housing estates, which give this part of the city a residential character. The densely built-up area contrasts with the single-family housing located on the fringes of the large housing estates. There are also some small areas of residential function located to the north, while the southern part of the sector D comprises an extensive industrial area (the port and shipyard). The most important institution in the area is a university complex (Gdynia Maritime University). Figure 2 presents a sample categorisation of routes within the public transport sector D in Gdynia for the purposes of a transport network of general interest. With the results of the surveys among the population and public transport users in Gdynia, it is possible to assign frequencies to the route categories that form the network of general interest.
Table 1. Characteristics of the public transport sector D in Gdynia:
Spatial development – southern part: industrial and port area; central part: large housing estates and single-family housing located on the fringes; dispersed single-family housing in the northern part of the area.
Functions and main traffic generators – residential function (bedroom community); port and industrial function in the southern sector (shipyard, container terminal, heat and power station); Maritime University, maritime terminal, sea passenger terminal, naval base, four secondary schools, two healthcare units, social welfare centre.
Socio-demographic population structure – high population density (except the northern part of the area), stable population dynamics (population increase in Oksywie), high or average share of workers.
The following route categorisation in the public transport sector D in Gdynia is suggested: (a) category I of high frequencies – links large housing estates and the city centre, plus one route serving the Gdynia Maritime University (possibly with options concerning the academic year and holidays); (b) category II of moderate frequencies – includes routes within housing estates which link more distant parts of these estates to the main roads, plus one route to the south-western part of the city linking the sector D with an interchange node allowing passengers to transfer to the Rapid Urban Railway; (c) category III of low frequencies – includes routes to smaller single-family housing estates of the lowest demand for public transport. Priority in the utilisation of low-floor vehicles is granted to the category I as it shows the highest passenger flows and serves the routes passing by the medical and social assistance centres. No priority in those terms is granted to the category III as these routes serve less populated areas of higher car utilisation and the lowest concentration of persons of reduced mobility.
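The categorisation logic described above can be encoded as a simple rule-based function that maps route sections to categories and headway standards. The passenger-flow thresholds and headways in the sketch below are illustrative assumptions, not the standards adopted for Gdynia:

```python
def categorise_route(peak_passengers_per_hour, links_city_centre, serves_university=False):
    """Assign a route section to category I-III using simple, illustrative rules."""
    if links_city_centre and (peak_passengers_per_hour >= 600 or serves_university):
        return "I"
    if peak_passengers_per_hour >= 250:
        return "II"
    return "III"

# illustrative headway standards in minutes: (peak, off-peak, weekend)
HEADWAYS = {"I": (10, 20, 30), "II": (20, 30, 60), "III": (60, 90, 120)}

sections = [
    ("large housing estate - city centre", 800, True, False),
    ("Maritime University link",           300, True, True),
    ("estate interior - main road",        320, False, False),
    ("single-family estate feeder",         90, False, False),
]
for name, flow, to_centre, uni in sections:
    cat = categorise_route(flow, to_centre, uni)
    print(f"{name}: category {cat}, headways {HEADWAYS[cat]} min")
```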
The decisions concerning vehicle frequencies included in the public transport plan for each fragment of the transport network of general interest, for different peak and off-peak hours and days, must be binding for public transport organisers during vehicle routing and timetable construction. The operator of public transport would only provide services of general interest based on the contract made with the organiser. This carrier is not to interfere with routes and timetables.
public transport network of general interest in poviats
According to the new law regarding public transport, new functions have been ceded to poviats, namely the organisation, management and planning of public transportation. These responsibilities are new for this unit of administrative division, yet poviat authorities have performed some tasks connected with management and control since the Act of 5 June 1998 on poviat government and the Act of 6 September 2001 on road transport came into force. It is plain that the new obligations are particularly focused on the areas located within poviat boundaries where public transport demand is not fully satisfied by the free market. These areas include sparsely populated and peripherally located localities underserved (or not served at all) by public and commercial bus and railway carriers. The inhabitants of these areas used to be protected by the still current Act of 25 September 1981 on state enterprises, which unfortunately lost its significance along with the transformation of urban public carriers and local State Bus Companies (PKS – Przedsiębiorstwo Komunikacji Samochodowej) into commercial companies. This legislation, among other things, obliges the state to subsidise unprofitable public transport links run by the public companies. However, the number of privatised, commercialised or communalised PKS companies has been increasing consistently since then (Taylor, Ciechański, 2008). As a result, these companies have gradually limited, suspended or discontinued the most unprofitable links and bus runs (Kretowicz, 2009, 2010). For this reason, the problem of public transport of general interest is given precedence by the newly-enacted legal acts. The main aim of the new regulations with respect to poviats is to provide sufficient access to public transport for those most economically deprived and for persons of reduced mobility living in un(der)served and peripheral areas. The other objective, namely coordination, integration and control over the scattered supply and provision of proper passenger information, is more significant in urban and metropolitan areas, although it may still be vital for densely populated rural areas located in the southern and south-eastern parts of Poland. Organisation of public transport on a poviat level can bring numerous benefits to the local population and municipal authorities. When organised and planned locally, public transport improves accessibility to the poviat and gmina seats, enhances home-to-work passenger flows and helps satisfy everyday needs of the population residing far from the community centres. Accordingly, transport services of general interest in peripheral areas prevent transport and social exclusion (especially among households with no individual transport), as well as encourage inhabitants to transfer from their own vehicles to public modes of travel. This policy limits the costs of everyday travelling, reduces traffic bottleneck effects, especially on the entrance roads to larger towns and cities, and to some extent solves parking problems in the poviat seats.
poviats opposed to public transport obligations
As stated previously, the Act of 16 December 2010 on public transport burdens poviat governments with a number of new responsibilities which have never before been performed by these public bodies. Hitherto, the poviat responsibilities concerning public transport have included licence and permit issuing, carrier control, and the accounting and distribution of state subsidies to carriers on account of concessionary fares. Importantly, the regulatory functions performed by poviat authorities on a local public transport market are limited, yet these elements introduced regulatory competition in the early 2000s. Additionally, the only mandatory document loosely connected with public transport planning is the so-called public transport market analysis, prepared by poviats either with the introduction of new lines or once a year. This analysis is not of strictly planning character, but it is utilised to justify decisions to grant or revoke permits to run public transport services. As evidenced in 2011 by the Supreme Audit Office of Poland, as many as 89% of poviats failed to prepare the public transport market analysis regularly, and most prepared this document negligently (Informacja o wynikach kontroli..., 2011). In light of the new legislation, poviats become public transport organisers and take over the organisation, management and planning of public transport on a local level. Notably, poviats may also become the organisers of public transport of general interest. If they intend to run transport services of general interest, the most populous poviats (of at least 80,000 inhabitants) or their unions (of at least 120,000 inhabitants) have to prepare a sustainable public transport development plan. Preparation of this plan, coupled with the organisation and financing of public transport, may overload poviats' capabilities and strain their budgets. Consequently, most poviats will probably not be interested in organising public transport services of general interest. There is also a common contention that the demand for public transport within a poviat territory can be fully satisfied by the free market and that no additional aid from local governments is necessary. At all events, if the new obligations do not entail extra funds from the state budget, the poviat transport of general interest will not be run on a large scale.
Those few poviat authorities which do plan to organise and finance public transport of general interest must take the public transport plan very seriously. If they fail to prepare this document before March 2014, there is a risk that the financing of public transport of general interest will be limited (no contracts with public transport operators for a period longer than 3 years). In addition, the regulations concerning reduced-fare rides are going to change – they are to be different for the routes served by transport of general interest. If poviat governments do not decide to run these services, the local population may lose the opportunity to utilise reduced-fare tickets (available only from operators delivering services of general interest).
Irrespective of the above, public transport plans are not expected to radically change this segment of the public transport market that is not a service of general interest (Szczerbaciuk, 2012).
network of public transport of general interest in the Krosno poviat
Large cities are best prepared to perform the role of a public transport organiser, as they have been delivering public transport services by means of subservient authorities established and appointed to organise public transport in urban areas since the 1990s. For this reason, the urban network of public transport of general interest is practically intact and coincides with the lines served by the urban carrier(s). Delineation of such a network in poviats remains a challenge. There is no single simple way of deciding which routes to include in the network, although a few general directions can be distinguished. Thus, the network of public transport of general interest might include: (a) all routes which are currently and regularly served by any means of public transport; (b) all routes currently served by a public carrier – this concerns poviats with their own public transport run by an urban carrier in the town (city) and beyond its limits; (c) only the routes of the greatest importance for poviats, i.e. those linking gmina seats with the poviat seat and carrying the largest passenger flows; (d) only the routes in localities which have never been served by public transportation or those deprived thereof in recent years; (e) a combination of the above. The decision on the direction, or combination of directions, optimal for a particular poviat should be based on passenger flow counts and demand forecasts. Until now, poviats have neither been obliged nor needed to conduct such research. Hence, the authorities cannot utilise any previous information or data (as opposed to the cities). For this reason, poviats can only rely on the data provided by public transport carriers (if these operate in the area), whereas it is difficult to obtain reliable and consistent data on passenger flows (e.g. from ticket sales data) from every private carrier.
It is highly probable that complex research and traffic counts to measure actual and potential transport demand will be too expensive for most poviat budgets. Hence, such field studies conducted in unison with the public transport plan significantly increase its total cost. Instead, geographic research may to some extent replace costly analyses. In order to determine the demand for public transport (including the transport services of general interest) and the significance of routes, poviat officials may utilise: (a) population distribution analysis (population density) on a locality level; (b) the distribution of main job providers, middle and high schools and other trip generators, along with an estimation of daily commuting and travels; (c) analysis of the residence and travel destinations of persons of reduced mobility; (d) analysis of the current public transport offer (all public and private carriers); (e) recognition of the 'transport exclusion' phenomenon, i.e. localities with no or very poor public transport as measured by the number of daily and weekend links (a simple check of this kind is sketched below); (f) marketing analyses via the internet in order to evaluate current opinions on public transport operation and passenger behaviour (travel directions, modal split); (g) a household survey concerning travel behaviour conducted on a representative sample of the population. In rural areas this research can be done in the form of a school or village council survey, or any other mass contact with the local population.
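For point (e), the check announced above can be as simple as flagging every locality whose timetable offer falls below minimum thresholds of weekday and weekend links; the thresholds and localities used here are purely illustrative:

```python
def transport_excluded(localities, min_weekday_links=4, min_weekend_links=2):
    """Return localities whose public transport offer falls below the thresholds.

    localities: list of (name, weekday_links, weekend_links) tuples, e.g. counted
    from the timetables of all public and commercial carriers serving the area.
    """
    return [name for name, weekday, weekend in localities
            if weekday < min_weekday_links or weekend < min_weekend_links]

offer = [("Locality A", 12, 4), ("Locality B", 3, 0), ("Locality C", 6, 1)]
print(transport_excluded(offer))   # -> ['Locality B', 'Locality C']
```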
The aforementioned research fields include the distribution of large job providers along with their range and intensity of commuting. An example of such examination is presented in Figure 3 for the Krosno poviat.
In order to perform this analysis, data have been collected on the location of large companies (by localities and employment, extracted from the Local Databank of the Central Statistical Office of Poland) and on the employees' locality of residence (data provided by the companies). There is a significant dominance of the poviat seat on the local labour market in the Krosno poviat – 2/3 of all enterprises of above 50 workers are located in Krosno. The most intense commuting concerns the gminas adjoining the town boundary and in some localities amounts to 200 persons per 1,000 inhabitants. This figure is lower in gminas located in the south-eastern (Iwonicz-Zdrój and Rymanów gminas) and western (Jedlicze gmina) parts of the poviat. This is due to the higher number of jobs available in these gminas because of the employment in the oil industry (Jedlicze oil refinery) and tourist resorts (hotels and boarding houses located in Iwonicz-Zdrój and Rymanów-Zdrój).
fig. 3. Number of the employed by enterprises of above 50 workers (left) and commuting intensity to Krosno (right) in the localities of the Krosno poviat (data for 2010)
Source: Authors' own work based on the Local Databank of the Central Statistical Office of Poland and Kaczkowski (2008)
This type of analysis enables transport planners to estimate potential commuter flows; however, when unsupported with car ownership or car utilisation data, it fails to recognise the number of commuters making use of public transport. A survey conducted on a representative sample of commuters (e.g. in the companies) makes it possible to expand the reasoning to the entire local population. When supplemented with information on travels to school (available in school records), this data provides the most important and regular passenger flows in this poviat. A similar analysis can be performed for other smaller towns in the area, which would render the overall depiction of the most significant passenger flows there.
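The commuting-intensity indicator used in Figure 3 (commuters to the poviat seat per 1,000 inhabitants of each locality) can be computed directly from employer records and locality populations. The sketch below uses invented counts and populations for a few example localities and is only meant to show the calculation:

```python
from collections import Counter

def commuting_intensity(employee_residences, locality_population):
    """Commuters to the poviat seat per 1,000 inhabitants, by locality of residence.

    employee_residences : list of residence localities, one entry per commuter,
                          taken from the records of the surveyed companies
    locality_population : dict locality -> number of inhabitants
    """
    commuters = Counter(employee_residences)
    return {loc: 1000.0 * commuters.get(loc, 0) / pop
            for loc, pop in locality_population.items()}

residences = ["Chorkówka"] * 180 + ["Zręcin"] * 95 + ["Świerzowa Polska"] * 60
population = {"Chorkówka": 1900, "Zręcin": 1200, "Świerzowa Polska": 1100}
for loc, rate in commuting_intensity(residences, population).items():
    print(f"{loc}: {rate:.0f} commuters per 1,000 inhabitants")
```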
categorisation of public transport routes in the Krosno poviat
There is no need to divide poviats into public transport sectors, as gminas reflect functional catchment areas on a local scale and localities can be regarded as public transport zones. The first stage of planning requires appropriate data collected for each gmina. This geographically referenced data include the socio-demographic population composition, spatial structure and development, the main functions and land use etc., as well as the public transport routes that run through each gmina. The main traffic generators (trip attractions), such as large job providers, service areas, schools and universities, hospitals and shopping centres, should also be taken into account. As one of the principal objectives incorporated in the Act of 16 December 2010 on public transport is to satisfy the needs of the disabled, it is mandatory to discern their number in each gmina and determine common destinations of their journeys. The above information should be supplemented with the current provision of public and private bus and railway carriers. The authors employ the example of the Chorkówka gmina located to the east of Krosno. The sample depiction of the preliminary characteristics for further route categorisation is presented in Table 3 and Figure 4. Obviously, the table may be extended as more information is gathered, e.g. from public running records or other external sources. As in the case of Gdynia, this categorisation plays a major role in delineating a transport network of general interest. Table 4 presents only sample frequencies (the real values should be based upon population surveys and traffic counts) in order to depict the outcome of route categorisation.
Table 3. Characteristics of the Chorkówka gmina:
Spatial development – new residential single-family housing in the localities adjoining Krosno (suburbanisation processes); in the remaining part of the Chorkówka gmina, dispersed single-family homesteads; good availability of healthcare units; uneven distribution of secondary schools (intense school travels).
Functions and main traffic generators – agricultural and service function of the gmina; the absence of large job providers (intense commuting to Krosno); indispensable coordination between school transport and public transport of general interest.
Socio-demographic population structure – population increase in the eastern part of the gmina (Krosno's suburban zone), high population density, average share of the elderly as compared to the whole poviat.
The main part of planning a network of transport services of general interest embraces the minimum and maximum frequencies of bus, tram and train lines. This can only be done by means of passenger flow counts as well as in-vehicle and household surveys conducted among the local population. These surveys have not been widely performed for poviats. The results of this research enable transport planners to attribute accurate frequencies to each category.
It seems practical to utilise three to four categories. The first category encompasses the main public transport routes in a gmina which link the largest localities with the poviat seat or other important travel destinations located outside this gmina. The second category embraces supplementary routes which are nevertheless crucial for inter-gmina transport, or alternative routes to the poviat seat. The third and fourth categories include the routes significant only for one gmina, linking the least populated areas with the main routes, interchange nodes or the gmina seat. The sample depiction of preliminary characteristics for further categorisation of routes included in the transport network of general interest is presented in Figure 4 by means of the Chorkówka gmina, which adjoins the town of Krosno.
The main goal of public transport of general interest is to supplement the existing commercial bus transport. This supplementation is understood as directing the vehicles of public carriers to the routes underserved or served inefficiently by the commercial carriers. For the major category I, the role of publicly-funded transport should only be to support commercial carriers in off-peak hours (in the mornings, afternoons, evenings, and at the weekends). The routes of category II are to link gminas with other larger localities and other gminas (excluding Krosno). Along these routes, public transport of general interest should also reinforce public transport in off-peak periods and provide bus transport on the routes affected by a severe scarcity of evening and weekend bus runs.
The smallest localities in the Chorkówka community, located along the routes unserved or underserved by commercial carriers, are included in category III or IV. The role of public transport of general interest is to provide a 'minimum' number of bus runs to prevent these localities from transport exclusion. Accordingly, the publicly-funded carrier should deliver transport services at the most important times of the day: the morning and afternoon peaks (commuting and school travels). If the poviat finances allow, these services should also be provided in off-peak hours and at weekends.
conclusions
The new legislative acts concerning public transportation pose a challenge to local authorities as they oblige them to plan, organise and manage the public transport market. These public bodies have not been fully prepared to perform such functions. Transport services of general interest remain of great importance for the local population (especially those deprived of public transport links). Even when the necessary funds to finance the provision of public transport of general interest have been acquired, planning it and deciding on its spatial distribution in the area remain difficult. Worse still, a public transport plan can be prepared in many ways, as no uniform directives are included in the legislation (especially concerning the extent and level of detail of its mandatory sections). It seems that the preparation of this document may be successfully supported by geographic analyses with the utilisation of simple methods and available data. This especially concerns geographic elements significant for the preparation of the plan. The main method includes cataloguing characteristics of spatial development, infrastructure, population structures and public transport services in the area under consideration. This kind of analysis enables the planners to demarcate problem areas of insufficient transport provision and identify the main passenger flows; thus, when supported with traffic counts, surveys and marketing analysis, the descriptive part of a public transport plan remains a foundation of decision-making in the following sections. A different approach should be used for cities as compared to poviats, although in the aforementioned legislation there are no separate regulations concerning administrative units. In the crucial part of the plan – the delineation of a network of public transport services of general interest – the authors suggest route categorisation with basic standards ascribed to each category. The mere delineation is not expected to be complex in cities (it requires minor changes), whereas in poviats such a network must be demarcated manually. For the latter areas, the authors propose public transport services of general interest as complementary to commercial public transportation, aimed predominantly at certain places or off-peak hours.
To recap, the involvement of geographers in public transport planning is strongly recommended and at least as justified as the participation of the other specialists in the process, i.e. transport engineers, economists and lawyers, all collaborating with the local government. By investigating a range of spatial phenomena (the natural environment, demographic and socio-economic population structures, and the organisation of transport in space), geographers address the problem most comprehensively and provide a synthetic foundation for further analyses.
Technical viability of the YF MAC-HD ELISA kit for use in yellow fever-endemic regions
Yellow fever (YF), an arboviral disease, affects an estimated 200,000 people and causes 30,000 deaths per year and recently has caused major epidemics in Africa and South America. Timely and accurate diagnosis of YF is critical for managing outbreaks and implementing vaccination campaigns. A YF immunoglobulin M (IgM) antibody-capture (MAC) enzyme-linked immunosorbent assay (ELISA) kit, the YF MAC-HD, was successfully introduced starting in 2018 to laboratories in Africa and South America. The YF MAC-HD kit can be performed in 3.5 hours, test up to 24 samples, and includes all reagents necessary to perform the test, except for water used to dilute wash buffer. In 2018 and 2019, a total of 56 laboratory personnel from 39 countries in Africa and South America were trained to use the kit during workshops, followed by take-home YF IgM proficiency testing (PT) exercises. Participants received either a 10- or 20-sample YF PT panel and performed testing using the YF MAC-HD kit. All countries obtained 90% or higher correct results. These results verified the technical viability and transferability of YF MAC-HD kit use for laboratories in YF-endemic countries.
Yellow fever is a vaccine-preventable disease transmitted by mosquitoes that annually affects an estimated 200,000 people and causes 30,000 deaths. Being able to quickly and accurately identify people infected with yellow fever virus by laboratory confirmation is critical for managing outbreaks and vaccination campaigns. A test developed by the Centers for Disease Control and Prevention can identify yellow fever antibodies in a person's blood within approximately four hours. To ensure this test could be used in laboratories located in regions with yellow fever transmission, workshops were held to train laboratorians from Africa and South America on how to use the test. Laboratorians were then given a panel of samples and performed the test in their own laboratories. Of the 39 countries that performed testing, all scored 90% or higher concordant results, demonstrating the successful transfer of this yellow fever antibody detection test.
Introduction
Yellow fever (YF) is an arboviral disease endemic in tropical and subtropical areas of Africa and the Americas. It is estimated to cause 200,000 cases and 30,000 deaths annually, with 90% occurring in Africa [1,2]. The causative agent of YF, yellow fever virus (YFV), is a single-stranded RNA virus from the genus Flavivirus, family Flaviviridae, and is primarily spread via Aedes spp. and Haemagogus spp. mosquitoes [3]. Most individuals who become infected with YFV are either asymptomatic or develop a mild, non-specific illness that may consist of fever, headache, body aches, fatigue, nausea, or vomiting [1,3,4]. Approximately 5-26% of symptomatic individuals, however, develop more severe YF disease, consisting of high fever, jaundice, bleeding, shock, and organ failure; of those who develop severe YF disease, 30-60% will die [4,5].
YF epidemics have occurred in recent years in both Africa and the Americas. In South America, from 2016 to June 2020, multiple sylvatic YF epidemics and epizootics occurred in Brazil, notably near the large urban areas of São Paulo and Rio de Janeiro [6,7,8]. These Brazilian YF epidemics led to at least 2,278 confirmed human cases and 777 deaths [6,9]. On the African continent, YF outbreaks were reported in Angola and the Democratic Republic of the Congo in 2015 and 2016, leading to more than 7,334 suspected cases, 962 of which were laboratory-confirmed, and 393 registered deaths [10]. From January to December 2019, Nigeria reported 4,288 suspected yellow fever cases, 227 of which were laboratory-confirmed, and 231 deaths [11,12]. Despite an effective and safe vaccine that prevents YF disease and induces probably lifelong protective immunity, many people living in or near at-risk territory remain unvaccinated [13].
In 2017, the World Health Organization (WHO) launched the Global Strategy to Eliminate Yellow Fever Epidemics (EYE) 2017-2026, a global coalition of countries and partners to help combat the increased risk of YF epidemics [13]. One of the strategic objectives of the EYE Strategy is to contain outbreaks rapidly [13], the success of which hinges, in part, on high quality YF diagnostic testing. The need for rapid and accurate YF diagnostic tests is of paramount importance because in many people early symptoms of YF are clinically indistinguishable from those caused by many other acute infections. However, there is currently a lack of validated commercially available serological and molecular assays [14]. Laboratories use nonstandardized in-house YF assays and countries often rely on a small number of reference laboratories to perform YF testing [4,15]. The availability of validated, standardized YF assays would greatly aid in filling these critical diagnostic gaps [14].
Immunoglobulin M (IgM) antibody testing for YF is one of the primary diagnostic methods used because IgM is the initial humoral isotype response generated following an infection [15]. Due to cross-reactivity with other flaviviruses [16], YF IgM testing is used as a primary screening method, and YF IgM positive results require confirmatory neutralization testing. In 2015, a standardized YF IgM antibody capture (MAC) enzyme-linked immunosorbent assay (ELISA) kit (YF MAC-HD) was developed by the Centers for Disease Control and Prevention (CDC) that could be completed in approximately 3.5 hours [15]. Each kit can be used to test up to 24 serum samples, and all the required reagents, except for water in which to dilute the wash buffer, are included in the kit [15]. Development of the YF MAC-HD kit was based on the CDC YF MAC-ELISA, an in-house test that uses fourteen individual reagents and commercially sourced components and requires an overnight antigen incubation step [17]. However, obtaining all these individual reagents and components often poses challenges for countries that are resource-limited, and stock-outs are common. Additionally, critical reagents such as antigen and conjugate need to be titrated by the testing laboratories, which may lead to standardization challenges. Implementation of the YF MAC-HD kit would help streamline and standardize testing across laboratories, mitigate reagent issues, eliminate the need for titration and validation of individual reagents, and shorten the testing time to 3.5 hours. If the kit proves viable for use in high-risk YF-endemic regions, it could lead to more reliable YF surveillance and outbreak response.
YF IgM proficiency testing (PT) panels were developed and used to assess the effectiveness of trainings that introduced the YF MAC-HD kit and to determine the extent to which the YF MAC-HD kit technology is transferable to, and technically viable in, laboratories of YF-affected regions. This report describes how the performance and technical viability of the kit in YF-endemic settings were evaluated.
Ethics statement
Residual human specimens were used according to the Centers for Disease Control and Prevention Institutional Review Board protocol 6773. Formal consent was not obtained from the training participants due to anonymization of results of participants and countries.
The YF MAC-HD kit was produced using Good Documentation Practices (GDocP) by the Bio-pharmaceutical Manufacturing & Academic Resource Center (BioMARC), a non-profit biologics contract development and manufacturing organization owned and operated by Colorado State University (CSU). The kit was first introduced to laboratory experts during a YF MAC-HD training workshop held in Fort Collins, Colorado, USA in May 2018. Five laboratory experts from five countries in Africa and five laboratory experts from four countries in South America at high or moderate risk of YF attended this week-long workshop where they trained and performed testing using the kit. Test results were calculated both manually and by using an Excel workbook with embedded formulae to automate the calculations.
At the end of the workshop, 10 kits were donated and shipped to each country for use in proficiency testing, staff training, and testing of archived samples. A 20-sample YF PT panel accompanied the kits; the panel consisted of 11 serum samples of varying high (5), medium (4), and low (2) YF IgM positivity and 9 IgM-negative serum samples. The YF IgM positive samples had previously been confirmed via neutralization assays and were selected based on their respective P/N (defined as the mean optical density (OD) of the sample reacted on YF antigen divided by the mean OD of the negative control reacted on YF antigen) and NBR (non-specific background reaction; defined as the mean OD of the sample reacted on YF antigen divided by the mean OD of the sample reacted on normal antigen) values. When tested at the CDC, high, medium, and low YF IgM positive samples had approximate P/N values of >11, 6-8, and 3-4, respectively, and all had NBR values of ≥1.5. The negative IgM samples had P/N values of <1.5. Each vial contained 25 µl of serum that had been heat-inactivated at 56 °C for 30 minutes to help reduce sample infectivity. Reference results were obtained at the CDC using the YF MAC-HD kit. Four of the five African countries and three of the four South American countries received the kit/panel shipments. Two countries were unable to receive the kits and panels due to shipping challenges.
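As a concrete illustration of these calculations, the short sketch below computes P/N and NBR values from raw OD readings and applies cut-offs in line with the description above. The function names, the exact interpretation thresholds and the example ODs are illustrative assumptions, not part of the kit's official instructions for use.

```python
def p_over_n(sample_od_yf, negative_control_od_yf):
    """P/N: mean OD of the sample on YF antigen / mean OD of the negative control on YF antigen."""
    return sample_od_yf / negative_control_od_yf

def nbr(sample_od_yf, sample_od_normal):
    """NBR: mean OD of the sample on YF antigen / mean OD of the same sample on normal antigen."""
    return sample_od_yf / sample_od_normal

def interpret(sample_od_yf, sample_od_normal, negative_control_od_yf,
              pn_positive=3.0, pn_negative=2.0, nbr_cutoff=1.5):
    """Toy interpretation rule (positive / negative / equivocal) with assumed thresholds."""
    pn = p_over_n(sample_od_yf, negative_control_od_yf)
    specific = nbr(sample_od_yf, sample_od_normal) >= nbr_cutoff
    if pn >= pn_positive and specific:
        return "positive"
    if pn < pn_negative:
        return "negative"
    return "equivocal"

# Example with made-up OD readings
print(interpret(sample_od_yf=1.20, sample_od_normal=0.15, negative_control_od_yf=0.10))
```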
In July and August 2019, two follow-up workshops were held in Africa for the purpose of YF diagnostic capacity-building, during which the YF MAC-HD kit was introduced to 33 African national laboratories. The two workshops each lasted five days and were held at the Centre Pasteur Cameroun in Yaounde, Cameroon, where instruction was conducted in English, and at the Institut Pasteur de Dakar in Dakar, Senegal, where instruction was conducted in French. A combined 46 laboratorians from 33 African countries at high, moderate, or potential risk of YF attended the workshops, where they trained on YF diagnostic testing methods including the YF MAC-HD kit (Fig 1). Kit instructions for use were provided in English, French, Spanish and Portuguese. At the end of the workshops, eight kits and a 10-sample YF PT panel, prepared similarly to the 2018 YF PT panel, were provided to each participating laboratory.
Of note, in March 2020, a third workshop was held at the Instituto de Diagnóstico y Referencia Epidemiológicos "Dr. Manuel Martínez Báez" (InDRE) in Mexico City, Mexico, for purposes of YF diagnostic capacity-building in Central and South America. This five-day workshop hosted a combined 20 laboratorians from 13 countries and, similar to the 2019 African workshops, eight kits and a 10-sample PT panel were provided to each participating laboratory at the end of the workshop. Unfortunately, due to the COVID-19 pandemic, complete YF MAC-HD PT data from all participating laboratories could not be obtained because laboratories shifted their focus to SARS-CoV-2 testing. Complete YF MAC-HD PT results from this workshop will be reported at a later date.
For both 2018 and 2019 YF PT panels, participants were instructed to test the panel samples in single replicates, and if the laboratory routinely used or had access to a YF IgM in-house positive control (IHPC), the IHPC was included as an additional sample. Participants reported their results in a CDC-provided PT worksheet that captured laboratory information, kit lot, plate washing method, and sample and kit control results. The sample and control information included OD, P/N, and NBR values, along with result interpretation. A correct result was defined as performing accurate P/N and NBR calculations, along with obtaining the final correct overall interpretation (positive, negative, equivocal) for each sample. The final percentage score was calculated as the number of correct sample results obtained compared to the total number of samples tested in the PT panel. Participants were instructed to submit their results to the CDC and their respective WHO regional laboratory coordinators within three weeks after returning to their laboratory.
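In other words, the final PT score is simply the fraction of panel samples with correct calculations and a correct overall interpretation. A short sketch of that arithmetic, using hypothetical reported results for a 10-sample panel, is shown below.

```python
# Hypothetical reference vs reported interpretations for a 10-sample PT panel
reference = ["positive", "negative", "positive", "negative", "negative",
             "positive", "equivocal", "negative", "positive", "negative"]
reported  = ["positive", "negative", "positive", "negative", "negative",
             "positive", "equivocal", "negative", "negative", "negative"]

# Score = correct results / total samples, as a percentage
score = sum(a == b for a, b in zip(reference, reported)) / len(reference) * 100
print(f"PT score: {score:.0f}%")  # -> 90%
```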
Additionally, the YF MAC-HD kit instructions for use included specific instructions on how to perform three plate washing methods (manual hand washing using a multichannel pipette, automated washing using a strip-well washer, or an automatic 96-well head manifold washer) in order to accommodate the different plate washing methods used in the laboratories. YF antigen OD results from the 2019 PT exercise were analyzed according to the various plate washing methods to determine whether differences were seen in ODs when the three methods were compared.
Results/Discussion
A summary of the YF PT exercises for both 2018 and 2019 is shown in Table 1. In 2018, all seven countries that received kits and tested the 20-sample YF PT panel scored 100% on the PT. Six of these seven countries (86%) included an IHPC with the PT. In 2019, 32 of the 33 countries submitted PT results for the 10-sample YF PT panel. Of these, 31 countries scored 100% and one country scored 90%. For the country that scored 90%, the kits and PT panel had been delayed by three days in arriving at the home laboratory due to a flight cancellation. Fifteen of these 32 countries (47%) included an IHPC. Additionally, all countries in 2018, and almost all countries in 2019 (three failed to do so), submitted correct results interpretations. All countries received their scores during follow-up, which included recommendations for corrective action and, where necessary, steps to assure understanding of proper YF MAC-HD results calculations.
The individual laboratory OD and P/N values for both the 2018 and 2019 PT exercises were plotted to demonstrate the variability observed between laboratories (Fig 2). As expected, more variability was generally reported in the OD and P/N values for the positive samples than for the negative samples. Coefficient of variation (CV) values were calculated for the positive control on YF antigen (PCVA) and negative control on YF antigen (NCVA) OD values as representative samples: 2018 PCVA CV = 23.0%; 2018 NCVA CV = 21.8%; 2019 PCVA CV = 29.9%; 2019 NCVA CV = 33.2%. Even though higher-than-ideal variability was reported in OD and P/N values across laboratory results, likely due to fluctuations in local testing conditions such as high room temperature in the testing laboratory, it is important to note that the interpretations for the PT samples and controls (i.e., positive, negative, equivocal) did not change and were in concordance with reference results.
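For reference, the coefficient of variation quoted here is simply the standard deviation of the OD values across laboratories divided by their mean, expressed as a percentage. The snippet below shows this calculation on made-up OD readings; the values are illustrative, not data from the study.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Made-up positive-control ODs on YF antigen from several laboratories
pcva_ods = [1.10, 0.95, 1.30, 1.05, 0.80, 1.25]
print(f"PCVA CV = {coefficient_of_variation(pcva_ods):.1f}%")
```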
During submission of PT results, participants were requested to report the plate washing method they used during YF MAC-HD PT testing. Of the three plate washing methods listed in the YF MAC-HD instructions for use, manual hand washing appeared to produce sample ODs with the least variation (Fig 3). CV values were again calculated for the PCVA and NCVA ODs as representative samples: manual hand washing, PCVA CV = 22.5%, NCVA CV = 26.5%; automated strip-well washing, PCVA CV = 38.4%, NCVA CV = 34.1%; automatic 96-well head washing, PCVA CV = 34.6%, NCVA CV = 41.7%. The lower variability reported with manual hand washing was not entirely unexpected, given that the specific brand and type of automatic plate washer can vary across laboratories, leading to increased OD variation. Again, it is important to note that the interpretations for the samples and controls did not change and were in concordance with reference results.

Table 1. Summary of the 2018 and 2019 YF PT exercises.
Total countries issued YF PT: 2018, 9; 2019, 33
Total countries responding with PT results: 2018, 7 (a); 2019, 32
PT results of responding countries: 2018, 7 of 7 scored 100%; 2019, 31 of 32 scored 100% and 1 of 32 scored 90%
Total responding countries using IHPC: 2018, 6; 2019, 15
Percent responding countries using IHPC: 2018, 86%; 2019, 47%
Countries that required follow-up due to incorrect results calculations: 2018, 0; 2019, 3
Abbreviations: YF, yellow fever; PT, proficiency testing; IHPC, in-house positive control. (a) Two countries were unable to receive kits due to shipping difficulties.

Limitations of this study include the lack of verification of whether laboratory personnel who did not attend the workshops could successfully use the YF MAC-HD kit. Also, when laboratories performed and submitted their PT results to the CDC, the participants were not required to manually calculate the results. Although some participants provided both manual and automated calculations, the ability to measure the capability of all trainees to perform manual calculations was limited. Additionally, inter-laboratory variation may have contributed to the variable OD and P/N values reported herein; for example, the participants were not required to report the room temperature of their laboratories, which might have helped indicate whether laboratory temperature indeed contributed to the higher-than-ideal OD and P/N CV values described above. Also, given that laboratories in developing countries sometimes encounter challenging environments, compliance with WHO testing performance criteria is often difficult for these laboratories. Lastly, this manuscript addresses the technical viability of the YF MAC-HD kit in laboratories; operational viability, such as distribution, continuity of supply, and costs, is outside its scope. Nevertheless, mechanisms have recently been initiated as part of the EYE Strategy to address operational challenges to YF diagnostic testing [18].
Implementation and use of the YF MAC-HD kit is currently focused primarily on national laboratories, rather than regional/local laboratories. The use of the YF MAC-HD kit at the regional/local level, while beneficial to surveillance, would require support for infrastructure including the appropriate equipment and reliable power. These are currently available only on a limited basis. Rapid diagnostic tests for YF such as lateral flow assays may be more applicable for use in these laboratories without the burden of improving infrastructure.
The collective results from the 2018 and 2019 YF PT exercises described here demonstrate the successful transferability of the YF MAC-HD kit methods. These data also show that the kit was used successfully in the two continents where YF is endemic and can be used correctly under the different and sometimes challenging laboratory conditions encountered at national laboratories in these regions. Accommodation of both routine and outbreak testing is often challenging with non-standardized, in-house YF serological assays, due to the difficulty of sourcing individual reagents and reagent stock-outs. Each YF MAC-HD kit can be used to test up to 24 serum samples per plate, whereas the current in-house CDC YF MAC-ELISA accommodates only eight samples per plate. The technical viability of the YF MAC-HD kit demonstrated here lends confidence that, during future outbreaks, surge capacity testing should be more easily attainable. Kit production capacity estimates indicate that projected testing volumes could be met.
Thirty-eight of 39 laboratories that submitted PT results for the YF MAC-HD kit scored 100%, one laboratory scored 90%, and approximately half of all laboratories performed the good quality control practice of using an IHPC. These data provide confidence that if the YF MAC-HD kit becomes available for routine use, it will allow laboratories in countries of high and medium YF risk to perform YF surveillance correctly and more easily than using the currently used assays. The implementation of the standardized YF MAC-HD kit will help better inform YF vaccination campaigns, leading to more efficient YF outbreak management.
Selection of emission detection ranges for the laser method of plant stress revealing at a fluorescence excitation wavelength of 355 nm
The paper considers the development of a laser fluorescent method for the detection of plant stress conditions. The results of experimental studies of laser-induced fluorescence spectra of plants in normal and various stress conditions caused by various pollutants in the soil are presented for the laser wavelength of fluorescence excitation of 355 nm. A comparative analysis of various options has been carried out for choosing the spectral ranges of laser-induced fluorescent radiation plant registration. It is shown that for the task of monitoring the state of plants, the most effective (from the point of view of reliability of correct detection of stress conditions) ranges of fluorescent radiation registration are spectral ranges with central wavelengths of 685 and 740 nm.
Introduction
The methods of laser remote sensing are most efficient for the operative control of the natural environment [1].
One of the most promising applications of laser methods is fluorescent monitoring of vegetation (see, for example, [2][3][4][5][6][7][8][9]). An excess or lack of water, soil pollutants, plant diseases, nutrient deficiencies and other factors prevent plants from developing normally (stress conditions of vegetation). Laser fluorescence techniques are effective methods for detecting such stress conditions; the physical basis of most of these methods is a change in the fluorescence spectrum of plants under stress.
A promising option for a device to monitor the state of vegetation is a laser fluorimeter that detects fluorescent radiation in two narrow spectral ranges and uses the ratio of fluorescence intensities in these ranges as the information parameter.

From the point of view of laser energy characteristics and eye safety, the third harmonic of a neodymium-doped yttrium aluminum garnet (YAG:Nd) laser, with a wavelength of 355 nm, is of the greatest interest for building laser sensing equipment.

However, the choice of the most effective spectral ranges in which to register the fluorescent radiation of plants remains unclear.
Laboratory facility for conducting experiments
A laboratory facility was created to study the spectra of laser-induced fluorescence. The block diagram of the facility is shown in figure 1. The third harmonic of a YAG:Nd laser was used as the source of fluorescence excitation. The main parameters of the installation are shown in table 1; the diameter of the laser spot in the plane of the sample was 20 mm. The radiation detection system was built around a polychromator and a highly sensitive matrix detector (ICCD) with a brightness amplifier based on an image intensifier tube. It allowed spectra to be recorded in the range of 295-750 nm with a resolution of 5 nm.
Calibration of the equipment included wavelength calibration of the polychromator and sensitivity calibration of the radiation detection system. The output power of the laser was monitored, and the measurement results were normalized to this value. The calibration was additionally verified against the Raman scattering spectrum of distilled water.

To control the laboratory facility, specialized software was developed in the LabVIEW programming environment.
Experimental results
Experimental studies of fluorescence spectra were carried out for different plant species in both normal and stressed states. Universal soil was used for planting.

As a result of experimental measurements on the laboratory facility at a fluorescence excitation wavelength of 355 nm, laser-induced fluorescence spectra of mustard, maize, alfalfa, and moss were obtained in the normal state and under stress conditions caused by putting pollutants into the soil (various petroleum products such as diesel fuel and gasoline, as well as coolant for cars). The peak at the wavelength of 532 nm in the fluorescence spectra corresponds to the second harmonic of the yttrium-aluminum garnet laser. Figure 2 shows that the spectrum of laser-induced fluorescence is distorted for a plant under stress (caused, in particular, by anthropogenic pollution of the soil). Thus, analysis of the shape of laser-induced fluorescence spectra excited at the eye-safe wavelength of 355 nm allows vegetation stress states to be detected. Fluorescence intensity ratios R were calculated for registration bands with a spectral bandwidth of 10 nm. Figure 3 shows the results of processing the measured fluorescence spectra: it presents the ratios R (for the band-pair options i = 1-17 described above) for mustard.
Analysis of the results of measured fluorescence spectra processing
The curve with diamond-shaped markers corresponds to plants in the normal condition; the curve with circular markers corresponds to a plant under stress caused by the introduction of A-95 gasoline into the soil (20 ml of gasoline was poured into a container of 100 mm x 60 mm x 40 mm; measurements were taken 20 hours after the pollutant entered the soil). The numbered plant samples (cf. figure 5 and table 2) are: (4) corn 1 hour after pouring A-95 gasoline into the soil (20 ml); (5) corn 1 day after pouring A-95 gasoline into the soil (20 ml); (6) alfalfa 1 day after adding diesel fuel to the soil (10 ml); (7) alfalfa 7 days after adding diesel fuel to the soil (10 ml); (8) alfalfa 15 days after adding diesel fuel to the soil (10 ml); (9) alfalfa 7 days after adding diesel fuel to the soil (20 ml); (10) alfalfa 12 days after adding diesel fuel to the soil (20 ml); (11) alfalfa 15 days after adding diesel fuel to the soil (20 ml); (12) alfalfa 18 days after adding diesel fuel to the soil (5 ml); (13) moss 1 hour after adding diesel fuel to the soil (20 ml).
On the basis of the fluorescence spectra obtained, mathematical modelling was performed to address the problem of detecting stress states of vegetation. The recorded fluorescence emission intensities were assumed to be random variables whose mean values were taken from the experimental fluorescence spectra. Measurement noise was assumed to be normally distributed with zero mean and a relative rms value of 1-10%. The spectral width of the registration ranges was taken as 10 nm.
The decision that a plant is in a stress state was made when the condition R685/740 > Rth was satisfied, where R685/740 is the ratio of the fluorescence intensities registered in the bands centred at 685 nm and 740 nm, and Rth is a threshold value chosen midway between the values of this parameter for plants in the normal and stressed states.
The results of mathematical modelling of the probability of correct detection Pc of plant stress states and the probability of false alarms Pf, for a relative root-mean-square measurement error of δ = 2%, are given in table 2.
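A minimal sketch of such a simulation is given below, assuming the noise model described above (normally distributed, zero mean, relative rms error δ) and the decision rule R685/740 > Rth with the threshold midway between the two states. The mean band intensities used here are hypothetical, not values measured in the study.

```python
import random

def simulate_detection(i685_normal, i740_normal, i685_stress, i740_stress,
                       delta=0.02, n_trials=100_000, seed=0):
    """Monte Carlo estimate of correct-detection (Pc) and false-alarm (Pf) probabilities
    for the rule R = I685/I740 > Rth, with Rth midway between the two states."""
    rng = random.Random(seed)
    r_normal = i685_normal / i740_normal
    r_stress = i685_stress / i740_stress
    r_th = 0.5 * (r_normal + r_stress)

    def noisy(mean):
        # Relative, zero-mean Gaussian measurement error with rms value delta
        return mean * (1.0 + rng.gauss(0.0, delta))

    detections = sum(noisy(i685_stress) / noisy(i740_stress) > r_th for _ in range(n_trials))
    false_alarms = sum(noisy(i685_normal) / noisy(i740_normal) > r_th for _ in range(n_trials))
    return detections / n_trials, false_alarms / n_trials

# Hypothetical mean band intensities (arbitrary units) for a healthy and a stressed plant
pc, pf = simulate_detection(i685_normal=0.8, i740_normal=1.0, i685_stress=1.1, i740_stress=1.0)
print(f"Pc = {pc:.3f}, Pf = {pf:.4f}")
```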
The plant numbers in table 2 correspond to the numbers in figure 5. The results shown in table 2 demonstrate the high reliability of detecting stress conditions of plants using spectral ranges with central wavelengths of 685 and 740 nm.

In summary, experimental studies of the laser-induced fluorescence spectra of plants in normal and stress states caused by various pollutants in the soil have been carried out. A comparative analysis of various choices of spectral ranges for recording fluorescent radiation shows that, for the task of monitoring plant conditions, the spectral ranges with central wavelengths of 685 and 740 nm are the most effective (in terms of the reliability of correct detection of stress conditions). The results of mathematical modelling show that, in most situations, the laser fluorescence method (using registration ranges with central wavelengths of 685 and 740 nm) makes it possible to detect areas of vegetation under stress due to soil contamination with a probability of correct detection close to 100 percent and a probability of false alarms of the order of a few percent down to tenths of a percent.
Association between hypertriglyceridemic-waist phenotype and non-alcoholic fatty liver disease: a general population-based study
Background Hypertriglyceridemic-waist (HTGW) phenotype has been proposed as a practical tool for screening the risk of cardiovascular diseases and glycemic metabolic disease. This study sought to investigate the relationship between HTGW phenotype and non-alcoholic fatty liver disease (NAFLD). Methods A total of 14,251 subjects who took part in health screening were enrolled in the study and NAFLD was diagnosed by abdominal ultrasound. According to triglyceride (TG) and waist circumference, the study population was divided into four phenotypes, in which HTGW phenotype was defined as TG ≥ 1.7 mmol/L and male waist circumference ≥ 90 cm or female waist circumference ≥ 80 cm. Multivariate logistic regression analysis was used to evaluate the relationship between HTGW phenotype and NAFLD. Results In the current study, 2.43% of the subjects had HTGW phenotype, while the prevalence of NAFLD in subjects with HTGW phenotype was 77.81%. After full adjustment for covariates, compared with people with normal waist circumference and TG levels, the risk of NAFLD in people with normal TG levels but enlarged waist circumference increased by 39% [OR:1.39, 95%CI: 1.15, 1.68], in people with normal waist circumference but elevated TG levels increased by 96% [OR:1.96, 95%CI: 1.65, 2.33], and in subjects with HTGW phenotype increased by 160% [OR:2.60, 95%CI: 1.88, 3.58]. Additionally, further analysis suggested that there were significant interactions between age, height, BMI and NAFLD risk associated with TGW phenotypes. Receiver operating characteristic curves analysis suggested that the combination of TG and waist circumference further improved the diagnostic value for NAFLD. Conclusions HTGW phenotype is associated with NAFLD risk in the general population, which may be a novel and accessible indicator for NAFLD screening. Supplementary Information The online version contains supplementary material available at 10.1186/s12944-022-01660-8.
NAFLD is associated with an increased risk of cardiovascular disease and chronic kidney disease, and negatively affects health-related quality of life in patients [2][3][4]. It has been reported that the current global prevalence of NAFLD is about 25.24% (27.37% in Asia) [5], while the public awareness rate is only about 6.3-18% [6,7]. Compared with diabetes and hypertension (prevalence 9.3 and 31.1%, awareness rates 30-74.8% and 32.3-67%, respectively) [8][9][10][11], NAFLD therefore has a very low awareness rate despite its high prevalence. This dangerous situation was also explicitly mentioned in a recent statement of the NAFLD Consensus Consortium, which advocated the development of a NAFLD public health roadmap to advance the global public health agenda for NAFLD [12]. As health workers, to promote the early formulation of a NAFLD public health strategy and improve the awareness rate of NAFLD, we aimed in the current study to find a simple, low-cost, and effective method for NAFLD screening in a large population.
The hypertriglyceridemic waist (HTGW) phenotype is a physical feature classified by triglycerides (TG) and waist circumference which is characterized by enlarged waist circumference and elevated TG levels. HTGW phenotype and its concept were first noticed and studied by Lemieux et al. [13]. According to their early description, they found that men with HTGW phenotype will have an increased risk of atherosclerosis, and most people with HTGW phenotype had an obvious metabolic disorder. In this context, a growing number of scholars have conducted an in-depth analysis of the HTGW phenotype. Many studies have shown that the HTGW phenotype was not only associated with coronary artery disease, but also closely related to pancreatitis, metabolic syndrome, diabetes, pre-diabetes, hyperuricemia, ischemic stroke, chronic nephropathy, and visceral obesity [14][15][16][17][18][19][20][21]. Also, in the recent study by Blackburn et al., it was found that the HTGW phenotype had the same ability to identify adverse metabolic characteristics as the National Cholesterol Education Program-Adult Treatment Panel III standard and International Diabetes Federation (IDF) standard [22]. All these pieces of evidence suggested that the HTGW phenotype may be an adverse phenotype for metabolic-related diseases. At present, several studies have specifically assessed the association between HTGW phenotype and NAFLD in children and adolescents, premenopausal and postmenopausal women, and overweight/ obese people [23][24][25]. However, the association between HTGW phenotype and the risk of NAFLD in the general population is not clear. Therefore, through the secondary analysis of the large longitudinal cohort of NAGALA, this study aims to further evaluate the performance of the HTGW phenotype as a screening tool for the risk of NAFLD in the general population.
Methods
The datasets used in this study have been made publicly available in the Dryad data repository (source data uploaded by Professor Okamura) and can be accessed through https://doi.org/10.5061/dryad.1n6c4 [26]. According to the user terms of the Dryad database, the Dryad dataset can be used for academic research, but not for commercial purposes.
Study population and design
The current study is a secondary analysis of population data from the NAGALA cohort, whose study design has been published elsewhere [27]. In short, the NAGALA cohort was established by Murakami Memorial Hospital in Japan in 1994 and has continued to the present day, enrolling adults who attended health screening at the hospital's physical examination center and conducting a series of epidemiological studies mainly on diabetes and NAFLD. The focus of this study was to investigate the performance of the HTGW phenotype in assessing the risk of NAFLD. For this objective, we extracted data from 20,944 subjects in the NAGALA cohort from 1994 to 2015, and excluded subjects with the following characteristics: (1) excessive drinking (n = 1952; male ≥210 g/w or female ≥140 g/w) [28]; (2) viral/alcoholic hepatitis, diabetes, or impaired fasting glucose (n = 1547; based on self-reported diagnosis or abnormal blood glucose found at the baseline survey); (3) missing baseline information (n = 873); and (4) receiving medication at baseline (n = 2321). Informed consent for data use has been described in previous studies with the subjects' authorization [27]. Additionally, the current research protocol was approved by the Institutional Review Committee of Jiangxi Provincial People's Hospital (review No: 2021-066), and the entire research process followed the Declaration of Helsinki.
Anthropometric and laboratory measurements
The general data were collected by trained medical staff using standardized health questionnaires. The recorded information comprised socio-demographic characteristics (age and sex), living habits (smoking, drinking, and habit of exercise), disease history (diabetes and liver disease), and general body measurements (height, weight, waist circumference, and blood pressure). Body mass index (BMI) was calculated as weight (kg) divided by height squared (m²). Having an exercise habit was defined as regular participation in any type of sport more than once a week. Drinking status: during the baseline visit, the subjects' weekly alcohol intake was evaluated and classified; weekly alcohol consumption of less than 40 g was defined as no or minimal drinking, 40-139 g as light drinking, and 140-209 g as moderate drinking. Smoking status: during the baseline interview, subjects were divided into three groups (non-smoking, past smoking, and current smoking) according to their smoking history.
Diagnosis of NAFLD
NAFLD was scored and diagnosed by experienced gastroenterologists based on the four abdominal ultrasonographic features of vascular blurring, hepatorenal echo contrast, deep attenuation, and liver brightness without knowing the subjects' other examination results [29].
Statistical analysis
Based on the IDF standard [30], the subjects were divided into four triglyceride waist circumference (TGW) phenotypes (Table 1), and the baseline characteristics of the subjects were summarized according to the different TGW phenotypes. Before between-group comparisons, the distribution pattern of continuous variables was judged from Q-Q plots, and the variables were described as mean (standard deviation, SD) or median (quartiles 1-3) according to their distribution. Differences between groups for continuous data with a normal or approximately normal distribution were compared by one-way ANOVA, with Tukey's HSD test as the post hoc test. Differences between groups for continuous data with a skewed distribution were compared by the Kruskal-Wallis H test, with the Steel-Dwass test as the post hoc test. Categorical variables were described as n (%) and compared between groups using the chi-square test.
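As a simple illustration of this grouping, the sketch below assigns the four TGW phenotypes using the cut-offs stated earlier in the paper (TG ≥ 1.7 mmol/L; waist circumference ≥ 90 cm for men and ≥ 80 cm for women, per the IDF criteria). The function and label abbreviations follow the paper's terminology, but the code itself is our own shorthand, not taken from the study's analysis scripts.

```python
def tgw_phenotype(tg_mmol_l: float, waist_cm: float, sex: str) -> str:
    """Classify a subject into one of the four triglyceride-waist (TGW) phenotypes.

    NTNW: normal TG, normal waist; NTEW: normal TG, enlarged waist;
    ETNW: elevated TG, normal waist; HTGW: elevated TG, enlarged waist.
    """
    elevated_tg = tg_mmol_l >= 1.7
    enlarged_waist = waist_cm >= (90 if sex == "male" else 80)
    if elevated_tg and enlarged_waist:
        return "HTGW"
    if elevated_tg:
        return "ETNW"
    if enlarged_waist:
        return "NTEW"
    return "NTNW"

# Example subject
print(tgw_phenotype(tg_mmol_l=2.1, waist_cm=93, sex="male"))  # -> "HTGW"
```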
To systematically account for potential confounders, after collinearity diagnosis of the covariates (Supplementary Table 1) [31], we tested the effects of the four TGW phenotypes on NAFLD in four stepwise adjusted multivariable logistic regression models [32], with results expressed as odds ratios (OR) and 95% confidence intervals (CI). In addition, considering the obvious physical differences between the sexes, we also evaluated the effect of TGW phenotypes on NAFLD in men and women separately in the four multivariate logistic regression models. Model 1 was adjusted for BMI, age, and sex; model 2 was additionally adjusted for habit of exercise and height; model 3 was further adjusted for SBP, drinking status, and smoking status; model 4 was additionally adjusted for HbA1c, FPG, TC, and HDL-C. We also performed exploratory analyses of the associations between the different TGW phenotypes and NAFLD in different populations by logistic regression (based on model 4), and used the likelihood ratio test to detect interactions between the TGW phenotypes and covariates. Finally, to further verify the diagnostic performance of TG, waist circumference, and the combination of the two indices for NAFLD, we constructed receiver operating characteristic (ROC) curves and calculated the corresponding areas under the curve (AUC). The DeLong test was used to compare the combined TG and waist circumference index with waist circumference and TG alone.
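To illustrate the kind of comparison described here, the sketch below computes the AUC for waist circumference, TG, and their combination (a combined score obtained from a logistic regression on both variables) using scikit-learn on synthetic data. It is a schematic stand-in for the study's analysis, which was performed in R with the DeLong test; all data in the snippet are simulated and the in-sample AUCs are for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
nafld = rng.binomial(1, 0.18, n)                        # simulated outcome
waist = rng.normal(78 + 10 * nafld, 8, n)               # cm, on average higher in NAFLD
tg = rng.lognormal(np.log(1.1) + 0.45 * nafld, 0.4, n)  # mmol/L, on average higher in NAFLD

# Combined score: predicted probability from a logistic regression on both predictors
X = np.column_stack([waist, tg])
combined = LogisticRegression(max_iter=1000).fit(X, nafld).predict_proba(X)[:, 1]

print("AUC waist      :", round(roc_auc_score(nafld, waist), 3))
print("AUC TG         :", round(roc_auc_score(nafld, tg), 3))
print("AUC combination:", round(roc_auc_score(nafld, combined), 3))
```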
All analyses were conducted using the statistical software R language (version 3.4.3) and Empower (R) (version 2.0). For all analyses, P values < 0.05 (two-sided) were considered statistically significant.
Baseline characteristics of subjects with different TGW phenotypes
A total of 14,251 subjects who participated in health screening were included in this study, and they were divided into groups according to the different TGW phenotypes (Table 2). Compared with subjects with the NTNW phenotype, subjects in the other phenotype groups generally had higher TC, weight, AST, waist circumference, FPG, and blood pressure. Furthermore, in the ETNW group and the HTGW group, the proportion of males was significantly higher, as was the number of subjects with smoking and drinking habits.
TGW phenotypes and NAFLD
The relationship between TGW phenotypes and NAFLD is shown in Table 3. Whether in men, women, or the whole population, subjects with the ETNW, NTEW and HTGW phenotypes had a significantly higher risk of developing NAFLD than subjects with the NTNW phenotype. Table 4 shows the associations between TGW phenotypes and NAFLD in different populations. As can be seen, among all covariates, only age, height and BMI were found to have an interaction effect on the NAFLD risk associated with the TGW phenotypes. Compared with young people with the NTNW phenotype, people with any other TGW phenotype in the same or higher age groups had a higher risk of NAFLD. Compared with non-obese people with the NTNW phenotype, both non-obese and obese people with other TGW phenotypes had a higher risk of NAFLD. Compared with people of short stature with the NTNW phenotype, taller people with the NTNW phenotype had a lower NAFLD risk, while people with other TGW phenotypes had a relatively higher NAFLD risk. Table 5 shows the AUC of TG, waist circumference, and the combination of the two for identifying NAFLD: the AUC was 0.7969 for TG and 0.8610 for waist circumference, and increased to 0.8803 when the two were combined. Compared with TG or waist circumference alone, the combination of the two enhanced the diagnostic ability for NAFLD.
Discussion
In this large epidemiological study of 14,251 people from the general population, we found that, compared with people with normal waist circumference and TG levels, hypertriglyceridemia alone or increased waist circumference alone significantly increased the risk of NAFLD, while the coexistence of hypertriglyceridemia and an enlarged waist circumference increased the risk of NAFLD even further. It is well known that hypertriglyceridemia and central obesity are important risk factors for NAFLD [33,34]. In the presence of hypertriglyceridemia or increased waist circumference, insulin resistance is a common metabolic change; it leads to excessive release of fatty acids from adipose tissue and up-regulates the transcription of genes that promote hepatic de novo lipogenesis, and these reactions can result in hepatic steatosis [35]. In the current study, we further confirmed in a large sample that either hypertriglyceridemia alone or central obesity alone significantly increased the risk of NAFLD [ETNW: OR: 1.96, 95%CI: 1.65, 2.33; NTEW: OR: 1.39, 95%CI: 1.15, 1.68]. Additionally, it is worth mentioning that the OR associated with NAFLD risk in subjects with the ETNW phenotype was higher than that in subjects with the NTEW phenotype.
HTGW is a special state characterized by both an enlarged waist circumference and elevated TG levels [13]. This special phenotype may be a sign of lipid spillover caused by relative defects in adipose tissue [36]. Past studies have provided a great deal of evidence that this particular phenotype is closely related to a variety of metabolic diseases [14][15][16][17][18][19][20][21][22]. However, at present, research on the association between the HTGW phenotype and NAFLD remains limited [25].
Considering that these similar studies had some particularities in their population selection and relatively small sample sizes, this study further explored the relationship between the HTGW phenotype and NAFLD in the general population on the basis of a larger sample. According to the current results, we found that when central obesity and hypertriglyceridemia appeared simultaneously, the risk of NAFLD in the general population increased further (compared with NTNW, NTEW, and ETNW; all P < 0.05). This conclusion is also applicable to other metabolism-related diseases [14][15][16][17][18][19][20][21]. We also analyzed the associations between TGW phenotypes and NAFLD in men and women separately. In both sexes, the risk pattern between TGW phenotypes and NAFLD was consistent with that of the whole population. However, it is worth noting that, compared with male subjects, female subjects with the HTGW phenotype had a relatively higher risk of NAFLD. Similar gender-stratified findings have been reported in TGW phenotype-related studies by Ren et al. and Chen et al., in which Ren et al. assessed gender differences in the association between diabetes and the HTGW phenotype [16], while Chen et al. analyzed gender differences between hyperuricemia and the HTGW phenotype [18]. In addition, some different results have been shown in other studies related to TGW phenotypes: in a follow-up study of 4081 subjects with stroke, Wang et al. indicated that the HTGW phenotype was only associated with future stroke events in women [19], while in another study on risk assessment of chronic kidney disease, only the male HTGW phenotype was associated with chronic kidney disease [20]. In general, there are some differences across disease risk assessments involving the HTGW phenotype, and more studies are needed to further validate these results. According to the current gender-stratified results, women with the HTGW phenotype should pay particular attention to screening for NAFLD.
The current study also included an exploratory analysis of the interactions between the covariates and the NAFLD risk associated with the TGW phenotypes. The results showed significant interactions between age, height, BMI and the TGW phenotypes, and elderly people, people of short stature, and overweight/obese people with the HTGW phenotype appeared to have the highest risk of NAFLD. Generally speaking, aging, short stature, and overweight/obesity often indicate potential adverse metabolic characteristics [37][38][39]. Therefore, these people should pay more attention to NAFLD screening.
The high prevalence of the HTGW phenotype in some common diseases also deserves special attention. According to published literature data, a substantial proportion of subjects with the HTGW phenotype had hyperuricemia [18], and 17-28.1% had chronic kidney disease [20,43]. Additionally, according to Lemieux et al., more than 80% of male subjects with the HTGW phenotype had abnormal metabolic characteristics that promote atherosclerosis [13]. These findings convey an intuitive message that the HTGW phenotype is a very adverse metabolic feature. In the current study, we found that nearly 80% of subjects with the HTGW phenotype had NAFLD. Given the high prevalence of the HTGW phenotype in multiple metabolic diseases, we suggest that the HTGW phenotype should be incorporated into screening programs for NAFLD and other metabolic diseases. Furthermore, it is worth noting that the HTGW phenotype is closely related to cardiovascular risk, and cardiovascular events are the main cause of mortality and morbidity in patients with NAFLD [2,13]. Therefore, we speculate that the HTGW phenotype may be useful for predicting cardiovascular events in patients with NAFLD; this needs to be confirmed by further research.
Study strength and limitation
The biggest strength of the current study is that it confirmed, in a large sample, that the HTGW phenotype is associated with an increased risk of NAFLD in the general population. These results further expand the current research evidence and provide useful data for the application of the HTGW phenotype to NAFLD screening in the general population. Limitations: (1) The design adopted in the current study was cross-sectional, so whether there is a causal relationship between the HTGW phenotype and NAFLD needs to be confirmed in longitudinal studies. In addition, the current dataset lacked follow-up information on cardiovascular disease, so we could not further evaluate the associations between TGW phenotypes and future cardiovascular events in the NAFLD population.
(2) At present, the gold standard for NAFLD diagnosis is still liver biopsy, but in the current study NAFLD was diagnosed by abdominal ultrasound, which inevitably misses some patients with mild hepatic steatosis [44]. (3) Although a large number of confounding factors were adjusted for in the current research, some unmeasured or unmeasurable confounding factors may still partially affect the results. (4) The correlation between the HTGW phenotype and liver fibrosis could not be analyzed further because some parameters needed to calculate non-invasive fibrosis scores are absent from the public dataset analyzed here.
Conclusion
In summary, the general population with ETNW, NTEW, and HTGW phenotypes had a significantly increased risk of NAFLD compared with the population with the NTNW phenotype, and those with the HTGW phenotype had the highest risk of NAFLD. The findings of this study provided evidence of the association between HTGW phenotype and NAFLD in the general population, and these findings may have important public health implications for the early diagnosis and intervention of NAFLD.
Diagnosis of external ventricular drainage related infections with real-time 16S PCR and third-generation 16S sequencing
Abstract Objective Investigate the performance of real-time 16S PCR and third-generation 16S sequencing in the diagnosis of external ventricular drain related infections (EVDRI). Methods Subjects with suspected EVDRI were prospectively included at Uppsala University Hospital. Subjects were included into three groups: subjects with negative CSF culture with and without antibiotic treatment, and subjects with positive CSF culture, respectively. CSF was analysed with real-time 16S PCR and third-generation 16S sequencing. Real-time 16S PCR positivity/negativity and the number of 16S sequence reads were compared between groups. For culture-positive subjects, species identification by third-generation sequencing and routine culture was compared. Results 84 subjects were included. There were 18, 44 and 22 subjects in the three groups. Real-time PCR was positive in 17 of 22 subjects in the culture-positive group and negative in 61 of the 62 subjects in the two culture-negative groups. The sensitivity and specificity of real-time 16S PCR compared to culture were estimated at 77% and 98%, respectively. Species identification by 16S sequencing and culture was concordant in 20 of 22 subjects. The number of 16S sequence reads was significantly higher in the culture-positive group than in both culture-negative groups (p < 0.001). There was no significant difference in the number of 16S sequences between the two culture-negative groups. Conclusions Real-time 16S PCR predicts culture results with sufficient reliability. Third-generation 16S sequencing could enhance sensitivity and species identification in the diagnostics of EVD-related infections. False negative culture results appear to be uncommon in patients with suspected EVDRI.
Introduction
Clinical symptoms and changes in cerebrospinal fluid (CSF) biomarkers associated with central nervous system (CNS) infections are unspecific and hard to interpret in neurosurgical patients, especially in critically ill patients with external ventricular drains (EVDs) and suspicion of EVD-related infection (EVDRI). This leads to high rates of empirical antibiotic treatment and missed diagnoses [1-3]. The microbiological gold standard of bacterial culture on CSF may also lack diagnostic performance in this setting, as sensitivity can be hampered by ongoing antimicrobial therapy. In addition, the common findings of bacteria that are considered potential contaminants with low virulence, such as coagulase-negative staphylococci (CoNS), reduce specificity [4].
Polymerase chain reaction (PCR) is used to determine whether a specific deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) sequence is present in a sample. The 16S rRNA gene in bacteria codes for the RNA component of the 30S subunit of the bacterial ribosome. This gene contains regions that are highly conserved as well as regions that are highly variable between different species of bacteria. This makes it a useful target for PCR, since the conserved regions can serve as targets for PCR primers that amplify a broad spectrum of bacteria, while sequencing of the variable regions in the resulting amplicon can often identify which species of bacteria is present in the sample [5]. Hence, 16S PCR has become a widespread diagnostic method for a wide range of bacterial infections. Next-generation sequencing (NGS) based technologies such as Nanopore sequencing have greatly improved the capacity and bandwidth of nucleotide sequencing. Through direct detection of bacterial pathogens by sequencing of the 16S gene, sensitivity and species identification could potentially be improved, turnaround times reduced, and polymicrobial infections detected [6].
CSF 16S PCR is used in many centres for the aetiological diagnosis of nosocomial CNS infections in neurosurgical patients, but analyses of the performance of PCR-based diagnostic methods in this setting are limited and the results so far are somewhat divergent. Generally, PCR results have shown good agreement with bacterial culture results in patients with positive CSF cultures, but the rate of PCR positivity in culture-negative samples has varied [7][8][9][10][11][12]. The turnaround time for PCR-based methods is usually shorter than for culture. This means that, if the results are reliable, the use of empirical antimicrobial therapy could be reduced while ensuring adequate and early therapy for patients with infections. A recently published study by Jang and co-workers suggests that detection of 16S with the third-generation Nanopore sequencing technology enhances the detection and etiologic diagnosis of bacterial meningitis after neurosurgery [13].
The main objective of this study was to investigate the diagnostic performance of real-time 16S PCR and third-generation 16S metagenomic sequencing with the long-read Nanopore technology on CSF from neurosurgical patients with culture-confirmed EVDRI. Secondary aims were to investigate the rate of real-time 16S PCR positivity and the number of 16S sequence reads in Nanopore sequencing in CSF samples from patients with clinically suspected infection but negative CSF cultures and, furthermore, to study whether there were differences between subjects without and with ongoing antibiotic treatment at the time of sampling, the latter indicating a risk of false negative culture results.
Study design and setting
This was a prospective observational study of patients in the neuro-intensive care unit (neuro-ICU) and neuro-intermediary care unit (neuro-IMCU) with suspected infection at the time of sampling. Subjects were included between February 2018 and November 2021 at the neuro-ICU or neuro-IMCU at Uppsala University Hospital, Uppsala, Sweden. This regional centre for neuro-ICU care, serving parts of middle Sweden with a catchment area of approximately 2 million inhabitants, is supported daily by infectious disease specialists.
Inclusion and exclusion criteria are listed in Table 1. In addition to subjects with an EVD, subjects with an externalised ventriculoperitoneal (VP) shunt were eligible for inclusion, but no such subjects were included. Inclusion was conducted prospectively, with all study subjects with a sufficient volume of CSF in the clinical sampling considered eligible for inclusion. However, the inclusion rate was at times reduced during vacation periods due to capacity issues. When the culture-negative subject groups reached their pre-allocated sizes, only culture-positive subjects were included.
Subjects were divided into three groups: (1) subjects with negative CSF culture in the study sample and no antibiotic treatment at sampling, (2) subjects with negative culture in the study sample and ongoing antibiotic treatment at sampling and, lastly, (3) subjects with positive CSF culture in the study sample. Subjects with a negative CSF culture at study sampling and a previous non-study CSF sample with positive culture were however excluded from the group assignment. Subjects with confirmed or suspected community acquired acute bacterial meningitis (ABM) were also excluded from the group assignment. On inclusion, a CSF sample of 2 ml taken at the time of clinical sampling was collected for this study. Samples were frozen at −20 °C within 4 h from sampling. The sample was shortly thereafter thawed, aliquoted and frozen at −70 °C. After completion of the clinical bacterial culture the subjects were allocated to the pre-defined study groups. For subjects with repeated study samples where several were culture positive, the first culture positive sample was used. Similarly, for subjects with repeated culture negative study samples, the first culture negative sample was used.
EVD catheters and CSF sampling
EVD insertion was performed in the operating room under sterile conditions.EVDs without anti-bacterial coating were used (HanniSet, Xtrans, Smith Medical GmbH, Glasbrunn, Germany).Catheters were inserted by a right sided frontal burr hole and tunnelled subcutaneously approximately five cm from the incision site.The EVD was connected to an external draining system with a pressure monitoring device (HanniSet, Xtrans, Smith Medical GmbH, Glasbrunn, Germany or VentrEX, Neuromedex, Hamburg, Germany).Two grams of cloxacillin was administered intravenously at the start of surgery.Clindamycin, 600 mg, was used if the subject was allergic to penicillin.The catheters were not routinely exchanged.CSF samples were taken under aseptic conditions in accordance with clinical routine.
Cultures, CSF cytochemistry and blood sample analysis
Cultures were performed at the Department of Clinical Microbiology and Hospital hygiene at Uppsala University Hospital according to local clinical procedure.All CSF samples were cultured both on solid agar and in broth.
Agar plates were incubated at 35 °C in CO2 and anaerobically for 8 days. Broth cultures were performed through the inoculation of 2 mL of CSF in pediatric blood culture flasks (PF-BacT/ALERT PF Plus [bioMérieux Inc., Durham, NC, USA]) supplemented with 4 mL of horse blood and incubated for 10 days at 35 °C in an incubator with automated growth detection (BacT/ALERT VirtuO [bioMérieux Inc., Durham, NC, USA]). Species identification was performed using normal procedures at the laboratory, i.e. matrix-assisted laser desorption/ionization–time of flight (MALDI-TOF) (Bruker Daltonics) and phenotypic tests.
All cytochemical analysis of CSF and blood/plasma was performed at the Department of Clinical Chemistry and Pharmacology at Uppsala University Hospital, as part of the standard clinical procedure.
16S rRNA gene analysis
Analysis of the 16S rRNA gene was performed at the Department of Laboratory Medicine, Clinical Microbiology at Örebro University Hospital after study inclusion was completed. In addition to the study samples, 25 negative controls (anonymized clinical CSF samples with leukocyte count <4 × 10^6/L) were included in the analysis. DNA was extracted from 200 µL of CSF that had been pretreated with 100 units of mutanolysin (Sigma-Aldrich, St Louis, MO, USA) at 37 °C for 30 min; extraction was performed with the MagDEA Dx kit on MagLEAD 12gC (Precision System Science Co., Ltd., Chiba, Japan). The elution volume was set to 50 µL. In each extraction run, a positive control (containing Staphylococcus haemolyticus suspended in NaCl, for the Nanopore sequencing) and a negative control (containing only reagents) were included.
Library preparation for long-read 16S metagenomic sequencing of the entire 1500 bp 16S rRNA gene, including variable regions (VR) 1-9, was performed using the 16S Barcoding kit 1-24, SQK-16S024 (Oxford Nanopore Technologies, Oxford, England) with a slightly modified protocol: the PCR annealing temperature was lowered from 55 °C to 52 °C and the number of PCR cycles was increased from 25 to 40. The PCR amplification was performed on a Veriti Thermal Cycler (Thermo Fisher Scientific, Waltham, MA, USA). A Qubit fluorometer (ThermoFisher Scientific) was used to quantify the barcoded libraries. Depending on the DNA concentration, either 0.5 µL (>10 ng/µL), 1 µL (1-10 ng/µL) or 2 µL (<1 ng/µL) of each barcoded library was pooled. From this pool, 10 µL was used for sequencing. Sequencing was run for 12 h on a R9.4.1 flow cell in a GridION instrument (Oxford Nanopore Technologies) using super-accurate basecalling. The sequence data were uploaded to the 1928 platform (1928 Diagnostics, Gothenburg, Sweden) for taxonomic classification. All sequencing reads were trimmed and filtered on amplicon length (1200-1700 bp for V1-V9) and compared against each other to determine strain-level representatives, which were subsequently mapped against the SILVA (v138.1) reference database for taxonomic assignment. Up to 100 000 reads were used for taxonomic classification and the results were presented at species level with relative abundance.
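The length-filtering and relative-abundance summary described above can be illustrated with a short script. This is a simplified sketch, not the 1928 Diagnostics pipeline; the FASTQ path and the per-read taxonomic assignment file (one species label per read) are hypothetical placeholders, and only the 1200-1700 bp window and the 100 000-read cap are taken from the text.

```python
# Sketch of the post-sequencing filtering/summary step described above.
# Assumptions: reads.fastq holds basecalled 16S amplicon reads and
# assignments.tsv (read_id <TAB> species) holds hypothetical per-read
# taxonomic assignments; neither file name comes from the study.
from collections import Counter

MIN_LEN, MAX_LEN = 1200, 1700   # expected V1-V9 amplicon length window
MAX_READS = 100_000             # cap used for taxonomic classification

def read_lengths(fastq_path):
    """Yield (read_id, sequence length) for each 4-line FASTQ record."""
    with open(fastq_path) as fh:
        while True:
            header = fh.readline()
            if not header:
                break
            seq = fh.readline().rstrip("\n")
            fh.readline()  # '+' separator line
            fh.readline()  # quality line
            yield header[1:].split()[0], len(seq)

# Keep only reads within the expected full-length 16S amplicon range.
kept = {rid for rid, n in read_lengths("reads.fastq") if MIN_LEN <= n <= MAX_LEN}

# Summarise relative abundance of the (hypothetical) species assignments.
counts = Counter()
n_counted = 0
with open("assignments.tsv") as fh:
    for line in fh:
        rid, species = line.rstrip("\n").split("\t")
        if rid in kept and n_counted < MAX_READS:
            counts[species] += 1
            n_counted += 1

total = sum(counts.values())
for species, n in counts.most_common():
    print(f"{species}\t{n}\t{100 * n / total:.1f}%")
```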
Each real-time PCR reaction (20 µL) contained 2 µL of LightCycler FastStart DNA Master SYBR Green I (Roche Diagnostics), 4 mM MgCl2, 0.3 µM of the respective primers and 5 µL of DNA template. A positive PCR control (DNA from Streptococcus pneumoniae) and the negative extraction control were included in each PCR run. Samples with a crossing point (Cp) value that was lower (1.5 cycles or more) than the negative extraction control were considered PCR positive. All samples were tested for inhibition by adding a spike-in control (Streptococcus pneumoniae DNA), 1 µL to 4 µL of sample, in a separate reaction. Sequencing for species identification was not performed on these samples, as the main reason for performing this analysis was to evaluate the bacterial load.
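The positivity rule of the real-time assay amounts to a simple comparison of crossing point values. The sketch below only restates that decision logic; the 1.5-cycle margin is the one given above, while the function and variable names are illustrative and not taken from the study.

```python
def pcr_result(sample_cp, neg_control_cp, margin=1.5):
    """Classify a real-time 16S PCR run.

    A sample is called positive when its crossing point (Cp) is at least
    `margin` cycles lower than the negative extraction control, i.e. the
    signal appears clearly earlier than any background amplification.
    Samples without a Cp (no amplification) are reported negative.
    """
    if sample_cp is None:
        return "negative"
    return "positive" if (neg_control_cp - sample_cp) >= margin else "negative"

# Example: Cp 28.3 vs. a negative control Cp of 35.0 -> positive
print(pcr_result(28.3, 35.0))
```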
Clinical data
Results from CSF cultures and cytochemistry (polymorphonuclear and mononuclear leukocytes, erythrocytes, CSF/plasma-glucose ratio, lactate and albumin), blood cultures and cultures from normally sterile materials such as extracted EVDs or ventriculoperitoneal shunt components were extracted from electronic patient records. Additionally, results from analysis of biomarkers of inflammation and organ dysfunction in blood samples (C-reactive protein (CRP), leukocytes, thrombocytes, alanine aminotransferase (ALAT) and creatinine) collected in connection to CSF sampling were also extracted. Finally, information regarding gender, age, comorbidities, cause of admission, duration of neuro-ICU care, duration of EVD treatment, number of CSF samples collected, and antimicrobial treatment was also collected from the electronic patient records. Whether a positive culture was regarded as infection or colonization/contamination during clinical management was defined based on whether the prescribed antibiotic treatment indicated a clinical decision to treat an EVD-related infection. Cases where no antibiotic treatment in appropriate CNS dosing directed at the bacteria cultured in CSF was initiated, or such treatment was discontinued after <7 days, were defined as clinically assessed colonization/contamination. Cases where antibiotics directed against the cultured bacteria in appropriate CNS dosing were prescribed for ≥7 days were defined as clinically assessed infection.
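The clinical classification rule above can be expressed compactly. The helper below is only a restatement of the stated 7-day criterion; the argument names are illustrative and not taken from the study database.

```python
def classify_culture_finding(adequate_cns_dosing: bool, treatment_days: int) -> str:
    """Apply the study definition to a positive CSF culture.

    A finding counts as a clinically assessed infection only if antibiotics
    directed against the cultured bacteria, in appropriate CNS dosing, were
    given for at least 7 days; otherwise it is regarded as
    colonization/contamination.
    """
    if adequate_cns_dosing and treatment_days >= 7:
        return "infection"
    return "colonization/contamination"

print(classify_culture_finding(True, 10))   # infection
print(classify_culture_finding(True, 3))    # colonization/contamination
print(classify_culture_finding(False, 14))  # colonization/contamination
```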
Statistical analysis
Results are presented as medians (interquartile ranges [IQR]), medians (range), absolute numbers (%) or means as appropriate. The Mann-Whitney U-test was used for all group-wise comparisons. P-values below 0.05 were considered statistically significant. Statistical analysis and graphic visualisation were performed using R version 4.2.2 and RStudio version 2022.07.2.
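The group-wise comparisons were run in R; an equivalent comparison can be sketched in Python with SciPy's Mann-Whitney U implementation. The read-count vectors below are made-up illustrations, not study data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical 16S read counts per subject (not study data).
culture_positive = [99930, 87000, 100000, 2935, 95000]
culture_negative = [120, 540, 310, 2200, 80, 1500]

stat, p_value = mannwhitneyu(culture_positive, culture_negative,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```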
Ethical considerations
All procedures performed involving human participants in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.The study was approved by the regional ethics review board in Uppsala (2017/177).Subjects were included in the study after informed consent or, when this was not possible due to the condition of the subject, after consultation with next of kin.Written information was given both to study subjects and, when needed, to next of kin.If additional CSF collection at clinical sampling was considered a risk, inclusion and study sampling were not performed.Results from real-time 16S PCR and 16S Nanopore sequencing were not available to the treating physician and, hence, did not affect the clinical management of the study subjects.Personal data was collected only by JW and pseudonymized before data analysis to minimise the compromise of subjects' integrity.
Study population
In total, 96 subjects were included in the study. After screening of electronic patient records, 12 patients were excluded from the group allocation. Of the 84 remaining subjects, 22 were culture positive and 62 were culture negative. Of these, 18 had no ongoing antibiotic treatment at the time of sampling, while 44 had. The group allocation process is illustrated by a flow chart in Figure 1.
The clinical characteristics and levels of CSF and blood biomarkers of inflammation and organ dysfunction in the study groups are summarised in Table 2.
Microbiology
The microbiological findings of the subjects with positive CSF culture are summarised in Table 3.
Results of 16S Nanopore sequencing and real-time 16S PCR (positive/negative) for the groups are displayed in Figure 2. Using real-time 16S PCR, 17 of the 22 culture positive subjects were positive and 61 of the 62 culture negative subjects were negative. The resulting sensitivity and specificity were 77.3% (95% CI 54.6-92.2%) and 98.4% (95% CI 91.5-100%), respectively. The median number of 16S sequence reads was significantly higher in the culture positive group than in the culture negative group without antibiotics and the culture negative group with antibiotics (p < 0.001). There was no statistically significant difference in the number of 16S sequence reads between the negative control group and the culture negative groups without and with antibiotics (p = 0.59 and p = 0.78, respectively).
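The reported sensitivity and specificity follow directly from the counts (17/22 and 61/62). The snippet below recomputes them with exact (Clopper-Pearson) binomial confidence intervals; this is our own check using SciPy, not the authors' statistical code, and small differences from the reported intervals may occur depending on the CI method used.

```python
from scipy.stats import beta

def proportion_with_ci(successes, n, alpha=0.05):
    """Point estimate and exact Clopper-Pearson 95% CI for a proportion."""
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return successes / n, lower, upper

sens = proportion_with_ci(17, 22)   # PCR positive among culture positive subjects
spec = proportion_with_ci(61, 62)   # PCR negative among culture negative subjects
print("sensitivity: %.1f%% (%.1f-%.1f%%)" % tuple(100 * v for v in sens))
print("specificity: %.1f%% (%.1f-%.1f%%)" % tuple(100 * v for v in spec))
```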
Characteristics of culture positive samples
Species identification by 16S Nanopore sequencing was in agreement with species identification by MALDI-TOF from cultured colonies in 20 of the 22 culture positive subjects. Discrepant findings were one case where Staphylococcus aureus was identified in culture while Nanopore sequencing showed high numbers of Staphylococcus haemolyticus and Staphylococcus epidermidis sequences, and one case where Micrococcus luteus was identified in culture while Nanopore sequencing showed low numbers of unclassified and Acinetobacter tandoii (a gram-negative rod normally isolated from sludge) sequences.
Real-time 16S PCR was positive in 17 of 22 samples. The samples that were negative in real-time PCR had significantly lower numbers of 16S reads in Nanopore sequencing compared to real-time 16S PCR positive samples (median 2935 and 99930, p = 0.0016). These samples were also to a larger extent (4/5) only positive in broth cultures compared to samples positive in real-time 16S PCR (5/17).
Based on prescribed antibiotic therapy, culture findings were assessed as infection in 17 of 22 subjects. In these subjects, real-time 16S PCR was positive in 15 of 17 and 16S Nanopore sequencing showed read numbers approaching 100 000 in 16 of 17. In the five subjects where culture findings were clinically assessed as contaminations, real-time 16S PCR was positive in two subjects, and 16S Nanopore sequencing showed high numbers (approximately 100 000) of sequence reads for one subject, low numbers (<500) for two subjects and intermediate numbers (8864 and 19 043) for two subjects. Details of species identification of culture positive samples are displayed in Table 4.
Characteristics of culture negative samples with high numbers of 16S sequence reads
An analysis of culture negative samples with numbers of 16S reads exceeding the third quartile of the negative controls (3369 reads) was performed to assess whether there were signs of culture negative infections among subjects with or without antibiotic treatment at the time of sampling. There were 17 culture negative subjects with more than 3369 16S sequence reads; 12 were treated with antibiotics at the time of sampling and 5 were not. Real-time 16S PCR was negative in all cases except one, with a Cp 3.68 lower than the negative control. This case also had the highest number of 16S sequence reads (30 258) of all culture negative cases. The sequences in this sample were assigned to numerous bacterial species, of which Variovorax paradoxus and Massilia eurypsychrophila showed the highest relative abundance (20.4% and 18.8%, respectively). Details are summarised in Table 5.
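The cut-off used here, the third quartile of the read counts in the negative controls, can be reproduced as follows. The arrays are placeholders, not study data; only the quartile-based thresholding mirrors the analysis.

```python
import numpy as np

# Hypothetical read counts (not study data).
negative_controls = np.array([0, 12, 150, 600, 900, 1800, 2500, 3400, 5200])
culture_negative_samples = np.array([40, 3369, 5100, 220, 30258, 760, 4100])

cutoff = np.percentile(negative_controls, 75)        # third quartile of the controls
flagged = culture_negative_samples[culture_negative_samples > cutoff]
print(f"cut-off = {cutoff:.0f} reads, {flagged.size} samples above it: {flagged}")
```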
Characteristics of samples excluded from group allocation
Real-time 16S PCR and 16S Nanopore sequencing were also performed on the subjects excluded at group allocation. To further explore the performance of these methods in the diagnosis of CNS infection, the results were analysed and are presented in Supplementary table 1.
Discussion
This study investigates the potential of real-time 16S PCR and 16S Nanopore sequencing for diagnosing EVDRI.The data shows that samples positive in CSF culture were to a large extent also positive in real-time 16S PCR and yielded high numbers of 16S DNA sequence reads in Nanopore sequencing.In cases with positive culture and negative real-time 16S PCR, the number of 16S sequence reads in Nanopore sequencing were generally low, which could potentially indicate a false positive culture as the cases were also more likely to be assessed as contaminations by the treating clinician.Subjects with negative culture were almost exclusively negative in real-time 16S PCR and had significantly lower numbers of 16S DNA sequences in Nanopore sequencing, regardless of whether antibiotic treatment was ongoing or not at time of sampling.Hence, our findings do not support a common occurrence of false-negative CSF cultures in this setting.Also, species identification with Nanopore sequencing was generally concordant with species identification with phenotypic methods from cultured bacteria.
To date, there have been few studies of the role of real-time 16S PCR in diagnosing EVDRI and other healthcare-associated CNS infections. Despite this, the method is commonly used in clinical practice, and the lack of scientific evaluation can make interpretation of its results challenging for the clinician. Earlier studies have used PCR assays not fully comparable to modern real-time 16S PCR assays and showed conflicting results [7-10]. To our knowledge, the most recent publication on real-time 16S PCR for diagnosing bacterial meningitis in neurosurgical patients was published in 2021 by Perdigão and co-authors [12]. In that study, a self-developed 16S PCR protocol was used to analyse clinical CSF samples from 43 subjects with varying degrees of clinical suspicion of EVDRI. The rate of 16S PCR positivity was between 40-60% in culture negative subjects and the authors concluded that real-time 16S PCR could be used to identify non-cultivable microorganisms in bacterial meningitis post neurosurgery. Our results differ from those of Perdigão and co-authors in that we observed only one instance of real-time 16S PCR positivity among the 62 culture negative subjects in our study. This illustrates that, depending on the protocol used for DNA extraction and PCR amplification, the sensitivity and specificity of a PCR assay can vary. It could be argued that the real-time PCR protocol in our study could be lacking in sensitivity, explaining this discrepancy. However, the fact that most culture positive subjects were also positive in real-time 16S PCR speaks against this. Also, the use of Nanopore sequencing in our study adds another dimension to the data that provides insights regarding true or false PCR and culture negativity. Most PCR negative samples displayed low 16S sequence read numbers comparable to the negative controls, and the reads were often classified to several bacterial species with low pathogenicity. Contrastingly, most PCR positive subjects exhibited maximum 16S sequence read numbers, generally from a single bacterial species concordant with the cultured species. We believe these data support that real-time 16S negativity reflects true negativity in most cases in our study. There were five culture positive cases that were negative in real-time 16S PCR, which reduces the sensitivity to 77%. However, the numbers of 16S sequences in Nanopore sequencing were significantly lower in these samples than in other culture positive cases, and the culture findings were clinically assessed as contamination in three of the five cases, suggesting that one or several of these cases might be false positive cultures reflecting contamination during sampling or sample handling. If this is the case, 16S PCR sensitivity for infection would be higher if a better gold standard were used for comparison.
A more recent study by Jang and co-workers used 16S Nanopore sequencing in the diagnosis of bacterial meningitis after neurosurgery [13]. In this prospective study of 178 neurosurgical patients, bacterial culture and 16S PCR, with subsequent Nanopore sequencing of PCR positive samples, were performed on 285 CSF samples. In 14.4% of samples it was determined that bacterial CNS infection was present, and in 56.1% of these the presumed causative pathogen was found by 16S Nanopore sequencing but not in culture, indicating that the culture was false negative. The authors concluded that 16S PCR followed by Nanopore sequencing enhanced detection of bacterial meningitis post neurosurgery. The difference in findings and resulting conclusions compared to our study is interesting and may have several explanations. Firstly, the culture method used was not described, leaving questions about the reference method. Secondly, the PCR protocol used by Jang et al. may be more sensitive, even though our data from Nanopore sequencing do not indicate this. Thirdly, determining whether infection was present or not when PCR/sequencing results were available might result in an overestimation of the diagnostic performance of the PCR/sequencing.
In the study by Jang et al., 16S PCR of the complete 16S gene was performed and Nanopore sequencing was run for 2-3 h on positive samples. Although attractive from a cost point of view, their method seems to require extensive hands-on laboratory work and thus might not be as rapid in real-life clinical settings as in the study setting. In our study, 16S Nanopore sequencing was performed on all samples, directly from the sample, which simplifies the workflow and reduces the risk of laboratory contamination. However, further optimisation of the workflow, definition of cut-offs and interpretation guidelines are needed to allow implementation in a clinical setting.
The principal strengths of this study are the prospective inclusion, the relatively large proportion of culture positive cases and the combined use of real-time 16S PCR and Nanopore sequencing, which gives insights on the performance of both methods as well as of bacterial culture regarding sensitivity and specificity. There are also several limitations. One is the fact that all culture-positive samples were included in the culture positive group without regard to whether they were clinically assessed as contaminations or genuine infections. This is evident, for example, in the case with Micrococcus luteus in culture. This is seldom a relevant finding and was not regarded as such by the treating clinician. Including such subjects in the culture positive group underestimates the sensitivity of real-time 16S PCR and 16S Nanopore sequencing, but to reduce the risk of bias in selecting which subjects to include, we opted for a protocol including all culture positive cases without regard to their interpretation.
Another point of concern is that there is no standard criterion for what a positive or negative 16S Nanopore result is.Nanopore sequencing gives quantitative information regarding the number of DNA sequence reads.This number is influenced by several factors like DNA-extraction, PCR amplification and sequencing time.Also, the number of 16S gene copies varies between bacterial species which can also affect number of 16S sequences in the PCR product [14].However, the number of 16S sequence reads still reflects the relative amount of bacterial DNA in the original sample and this information combined with the information from the species identification could be used by clinical microbiologists to interpret whether a sample is positive or negative.The interpretation will also need to be made in relation to the clinical specimen analysed and the presence of contaminating DNA (the 'Labome').Thus, establishing a cut-off for positivity is challenging and it is likely that some results will need to be interpreted as uncertain and the interpretation of 16S Nanopore results will require specific competence from both clinical microbiologists and treating clinicians.Further studies and method evaluation can clarify how the method should be implemented and combined with other diagnostic methods for maximised clinical utility.
Figure 1. Flow chart of patients in the study. EVD = external ventricular drain.
Table 1. Inclusion and exclusion criteria. *Only applied to culture negative samples.
Table 2. Epidemiological and laboratory parameters of the study groups.
Table 3. Microbiology of culture positive subjects.
Table 4. Characteristics of culture positive samples. *Klebsiella aerogenes was formerly named Enterobacter aerogenes. **In the follow-up samples, 16S Nanopore sequencing identified the sequences as originating from Staphylococcus aureus. ***The subject developed CNS infection with Enterococcus faecalis despite ongoing intravenous and intrathecal vancomycin.
Table 5. Details of culture negative samples with high numbers of 16S reads. Bacteria with a relative abundance of ≥10% are listed.
Mesh-adapted stress analysis of multilayered plates using a layerwise model
This paper proposes a new finite-element modelling of a recent layerwise model for multilayered plates. This layerwise model is built from a specific 3D stress-field expansion along the thickness direction and involves, in particular, interlaminar transverse shear and out-of-plane stresses as generalized stresses. Its main feature is that 3D equilibrium equations and free-edge boundary conditions are directly taken into account into the stress-based construction of the model. A dual displacement-based finite-element discretization is implemented using the FEniCS software package and a remeshing strategy is proposed based on a novel error indicator. The error indicator is built based on the 3D stress field directly deduced from the layerwise generalized stresses and compared to a reconstructed stress field based on the model generalized displacements. The proposed error indicator is shown to identify the most critical parts of a laminate structure associated with complex 3D stress fields such as boundaries or stress concentration/singularity regions (near free-edges or delamination fronts). Through the combination of thickness discretization and in-plane mesh refinement in regions of interest, the proposed framework therefore offers an attractive alternative to 3D solid finite elements for an accurate prediction of stress states in composite laminates.
Introduction
Multilayered plates have very interesting mechanical properties that make them widely used in aerospace, automotive, telecommunication and civil engineering structures. A multilayered plate is represented as a pile of homogenized anisotropic plies made of fiber-reinforced composites. However, owing to the highly anisotropic and heterogeneous nature of such laminates, the prediction of their overall properties is a challenging task. One of the major issues in the design and analysis of such plates is related to free-edge effects. It has been proved that the differences in the elastic properties of adjacent layers generally result in highly concentrated interlaminar stresses near free edges [1][2][3][4][5]. Many models were derived to accurately capture these free-edge effects. Highly detailed three-dimensional (3D) finite-element models are computationally expensive and will only result in accurate stress predictions for sufficiently refined meshes since they rely on displacement interpolations. Two-dimensional plate models have therefore been introduced in order to simplify these computations while trying to keep a sufficiently accurate description of local 3D stress fields.
Equivalent single layer (ESL) models represent the laminate as an equivalent homogeneous plate. Many ESL models based on higher order theories have been proposed in the literature [6][7][8][9][10][11][12][13][14] and are usually derived using two main approaches: asymptotic approaches and axiomatic approaches. The first class derives the plate model from the full 3D formulation of the problem, assuming the thickness of the plate goes to zero and using an asymptotic expansion in which the leading order leads to Kirchhoff-Love plate theory [15]. The second approach is based on assuming a priori 3D fields, and the plate theory is derived by integration through the thickness and variational tools [16][17][18]. Although ESL models can provide acceptable results for the laminate global response, they may lead to very inaccurate estimations of the local response, especially near free edges.
Layerwise models, in which each layer is considered as an independent plate, have therefore been proposed to improve the local stress representation [19][20][21][22][23][24]. Layerwise models have been proved to be a very good alternative to 3D models since interpolation choices along the z-direction take into account the specificities of the laminate. The interested reader can refer to [25,26] for a general overview of such models and to [27,28] for a recent comparison between ESL, zig-zag and layerwise models.
Following the ideas of Pagano's model [29], a layerwise model named LS1 was developed in [30][31][32][33][34][35][36][37][38][39][40]. In this model, the laminate is considered as a superposition of Reissner-Mindlin plates linked together by interfacial stresses which are considered as additional generalized stresses. The main difference between the LS1 model and other models is that LS1 is a stress-based approach, while other models are either displacement or mixed stress/displacement approaches.
However, the LS1 model presents some conceptual drawbacks since, for instance, 3D stress-free boundary conditions cannot be exactly fulfilled. Second, the model is derived by means of the Hellinger-Reissner mixed variational principle, so that there is no theoretical guarantee of convergence to the 3D model as the number of mathematical layers per physical layer increases. Generalizing upon the same ideas, a statically compatible layerwise model (SCLS1) was introduced in [41], in which the divergence of the interlaminar transverse shear stresses is introduced as an additional generalized stress. Doing so, the SCLS1 model produces a 3D stress field satisfying the local 3D balance equations and boundary conditions provided that their 2D plate counterparts are satisfied. The model can therefore be derived by means of the minimum complementary potential energy principle, ensuring the convergence of its refined version to the exact 3D model as the number of mathematical layers per physical layer increases. Aiming at providing an operational tool for stress analysis in multilayered plates, this paper is concerned with the development of a mesh adaptation strategy based on an error indicator built from the local 3D stress field and a reconstructed 3D displacement field.
The paper is organized as follows: the SCLS1 model equations and finite-element implementation are discussed first. The following section is dedicated to the reconstruction of the 3D displacement and to the error indicator computation used in the mesh adaptation. Finally, the last section illustrates the efficiency of the method in capturing regions of interest in various configurations.
The SCLS1 model
In this section, the equations of the SCLS1 model for elastic multilayered plates are recalled.This model is derived from the 3D continuum equations by considering Statically Compatible Layerwise Stresses with first-order membrane stress approximations per layer in the thickness direction.The generalized stresses of the proposed model are actually those of a Reissner-Mindlin plate per layer in addition to the inter-laminar shear and normal stresses at the interfaces between layers and the divergences of these inter-laminar shear stresses.The plate kinematics is then obtained through duality arguments.
Problem description and notations
We consider a linear elastic multilayered plate composed of n monoclinic elastic layers. The plate occupies the 3D domain Ω = ω × ]h_1^-, h_n^+[, where ω ⊂ R^2 is the middle surface of the plate. In the following, x and y are the in-plane coordinates and z is the out-of-plane coordinate. The following notations are introduced:
• The superscripts i and j, j+1 indicate layer i and the interface between layers j and j+1, with 1 ≤ i ≤ n and 1 ≤ j ≤ n−1, respectively. By extension, the superscript 0,1 refers to the lower face ω^- = ω × {h_1^-} and the superscript n, n+1 refers to the upper face ω^+ = ω × {h_n^+}.
• S^i = (S^i_klmn) is the fourth-order 3D compliance tensor of layer i, with the minor and major symmetries S^i_klmn = S^i_lkmn = S^i_klnm = S^i_mnkl, and it is positive definite. Its inverse is the 3D elasticity stiffness tensor, denoted (C^i_klmn) for layer i. The tensor (C^i_klmn) possesses the same symmetries as (S^i_klmn) and is also positive definite.
• S^i is monoclinic in direction z: S^i_αβγ3 = S^i_α333 = 0.
• σ_αβ(x, y, z) are the in-plane stress components, σ_α3(x, y, z) are the transverse shear stresses and σ_33(x, y, z) is the normal stress.
• ε_αβ(x, y, z) are the in-plane strain components, ε_α3(x, y, z) are the transverse shear strains and ε_33(x, y, z) is the normal strain.
• u_α(x, y, z) are the in-plane 3D displacement components and u_3(x, y, z) is the normal 3D displacement component.
The plate is loaded on its upper face ω^+ and lower face ω^- with the distributed surface forces T^+ = (T^+_k) and T^- = (T^-_k), respectively. The lateral boundary is decomposed into two complementary parts: a free part ∂ω_T, where the stress vector is set to zero, and a restrained part ∂ω_u, where the displacement u = (u_k) is set to zero. Here, the subsets ∂ω_T and ∂ω_u form a partition of ∂ω, and n = (n_k) is the outer normal to ∂ω_T.
Equations of the 3D model
The 3D elastic problem is to find a statically compatible stress field σ = (σ_kl) and a kinematically compatible strain field ε = (ε_kl) which comply with the constitutive equation, where a stress field σ is said to be statically compatible if it complies with the equilibrium equations, the stress conditions on the lower and upper faces and the conditions on the lateral boundary. A strain field ε is kinematically compatible if there exists a displacement field u = (u_k) complying with the displacement conditions on the lateral boundary and such that ε is the symmetric gradient of u.
The statics of the SCLS1 model
The SCLS1 model assumes a specific form of the 3D stresses in layer i, expanded on P^i_k, k = 0, 1, 2, 3, the orthogonal Legendre-like polynomial basis defined on layer i, where N^i_αβ, M^i_αβ and Q^i_α are, respectively, the classical membrane forces, bending moments and shear forces in layer i, and τ^{j,j+1}_α and ν^{j,j+1} are the transverse shear and out-of-plane normal stresses at the interface j, j+1. π^{j,j+1} is an additional variable whose interpretation will appear later.
Equilibrium equations
The 3D stress field σ^3D will comply with the 3D equilibrium equations (2) if and only if a set of generalized equilibrium equations, denoted (11), holds true for all (x, y) in ω, for all i = 1, ..., n and j = 0, ..., n. The last of these equations gives the interpretation of π^{j,j+1}, which is equal to the divergence of the interlaminar shear stress vector τ^{j,j+1} = (τ^{j,j+1}_α). Stress boundary conditions also have to be enforced in addition to Eq. (11). The lateral boundary conditions σ^3D_ij n_j = 0 on ∂Ω_T are equivalent to a set of conditions on the generalized stresses, denoted (12), for i = 1, ..., n and j = 0, ..., n, and the boundary conditions (3) on the upper and lower faces are written as (13) and (14), respectively. It should be noticed that the boundary conditions (12) and (13) cannot be simultaneously verified unless T^±_α n_α = 0 on ∂ω_T, which will be assumed in the sequel. Moreover, additional relations follow from the last equations of (11) for j = 0 and j = n. Finally, the stress field σ^3D is statically compatible when it complies with the generalized equilibrium equations on ω: (11) for i = 1, ..., n and j = 1, ..., n−1, (13) and (14), and with the generalized stress-free boundary conditions on ∂ω_T: (12) for i = 1, ..., n and j = 1, ..., n−1.
Generalized displacements and strains
The SCLS1 generalized displacements are U^i_α(x, y), U^i_3(x, y), Φ^i_α(x, y) and V^{j,j+1}(x, y): respectively the two in-plane displacements, the vertical displacement and the two bending rotations of layer i, while V^{j,j+1} is a kinematical variable, associated with interface j, j+1, having the dimension of an area. For i = 1, ..., n and j = 1, ..., n−1, the generalized displacements are related to the 3D displacement field through integrals of the 3D displacement across the thickness of each layer. The generalized strains, dual of the generalized stresses (N^i_αβ, M^i_αβ, Q^i_α, τ^{j,j+1}_α, ν^{j,j+1}, π^{j,j+1}), for i = 1, ..., n and j = 1, ..., n−1, are expressed in terms of the generalized displacements.
The SCLS1 model constitutive equations
The constitutive equations of the SCLS1 model are derived using the stress energy associated with σ^3D. They are given, for 1 ≤ i ≤ n and 1 ≤ j ≤ n−1, by:
• Bending constitutive equations of layer i:
• Transverse shear constitutive equation of layer i:
• Shear constitutive equation of interface j, j + 1:
Finite element discretization
A finite-element discretization of the SCLS1 model was proposed in [41] using the MPFEAP in-house software described in [36]. In our numerical study, the SCLS1 multilayered plate model has been implemented in the open-source finite-element package FEniCS [42,43]. The FEniCS Project is a collection of free and open-source software components with the common goal of enabling automated solutions of differential equations. The components provide scientific computing tools for working with computational meshes, finite-element variational formulations of ordinary and partial differential equations, and numerical linear algebra. We therefore benefit from the FEniCS high-level domain-specific language for implementing the variational formulation associated with the SCLS1 model. Building upon the FEniCS implementation of a Reissner-Mindlin plate model [44], we define a generalized function space for the SCLS1 generalized displacement degrees of freedom. More precisely, the retained discretization is based on a mesh of triangular elements with quadratic interpolation for all kinematical variables. As is the case for classical FE discretizations of Reissner-Mindlin plate models, the FE discretization of the SCLS1 model leads to shear locking in the thin-plate limit. Selective reduced integration is therefore used on the shear part of the strain [44].
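As an illustration of how such a generalized function space can be assembled in (legacy) FEniCS, the sketch below builds a mixed quadratic space holding the per-layer and per-interface kinematic fields. It is a minimal, hypothetical reconstruction of the ingredients described above, not the authors' implementation, and it omits the variational forms and the selective reduced integration of the shear terms.

```python
# Minimal sketch (legacy FEniCS/DOLFIN + UFL), not the authors' SCLS1 code.
from dolfin import (UnitSquareMesh, FiniteElement, VectorElement,
                    MixedElement, FunctionSpace)

n_layers = 3                    # hypothetical number of (mathematical) layers
mesh = UnitSquareMesh(2, 2)     # coarse initial in-plane mesh of triangles
cell = mesh.ufl_cell()

# Quadratic ("CG", degree 2) interpolation for every kinematic variable:
# per layer i: in-plane displacements U^i_alpha, deflection U^i_3, rotations Phi^i_alpha;
# per interface (j, j+1): the additional scalar variable V^{j,j+1}.
U_el = VectorElement("CG", cell, 2, dim=2)
W_el = FiniteElement("CG", cell, 2)
R_el = VectorElement("CG", cell, 2, dim=2)
V_el = FiniteElement("CG", cell, 2)

elements = []
for _ in range(n_layers):
    elements += [U_el, W_el, R_el]
elements += [V_el] * (n_layers - 1)

V = FunctionSpace(mesh, MixedElement(elements))
print("total degrees of freedom:", V.dim())
```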
Comparison with other layerwise models
In [41], the SCLS1 model has been compared with the LS1 model and reference 3D computations.This work showed that the SCLS1 model is as accurate as the LS1 model and is even closer to refined 3D solutions near free edges since it can correctly satisfy stressfree boundary conditions.Besides, an intensive comparison between the LS1 model and other layerwise models derived of the Carrera Unified Formulation (CUF) family has been performed in [45].The main conclusion of this work was that the LS1 model exhibits a similar accuracy to LM4 (mixed fourth-order) and LD3 (displacement third-order) layerwise models.This conclusion therefore also holds for the SCLS1 model considered here.Moreover, in contrast to these models, LS1 and SCLS1 exhibit much fewer degrees of freedom per node.For instance for a laminate with n = 4, LS1 has 20 (5n) dofs/node, SCLS1 has 23 (6n − 1) whereas LD3 has 39 and LM4 has 102.One important feature of LS1 and SCLS1 is that no assumption is made on the displacement variations through the thickness but rather on the stress.Therefore, obtaining a complete 3D displacement field must be performed by a post-processing procedure which we will now describe.
Mesh adaptivity based on field reconstructions
Although being much cheaper than LM4 or LD3 CUF models, the SCLS1 model is still quite expensive due to its high number of degrees of freedom per node.It can be seen as specific, mechanically-based, discretization in the z direction and can therefore be compared to a 3D discretization with a more accurate representation of the stress fields in the z direction.It becomes therefore beneficial to optimize the in-plane mesh for improved computational efficiency.The purpose of this section is to fulfil this goal by building an error indicator for mesh adaptation.
We propose to define this indicator as follows: from the finite-element computed generalized displacement fields (U^i_α, U^i_3, Φ^i_α, V^{j,j+1}) in the (x, y)-plane, we first aim at reconstructing a 3D displacement field u(x, y, z). We then derive the associated 3D strains and stresses using the local constitutive equation. The so-obtained reconstructed stress field σ is then compared to the initial 3D stress σ^3D obtained from the generalized stresses (N^i_αβ, M^i_αβ, Q^i_α, τ^{j,j+1}_α, ν^{j,j+1}, π^{j,j+1}) via Eqs. (7)-(9). See the illustration of the scheme in Fig. 1.
Field reconstructions
In this subsection, we propose to reconstruct u by considering a continuous piecewise linear variation of its components u_i along the z direction. This interpolation will have to be as close as possible to satisfying Eqs. (15)-(19). Let us mention that we tried other interpolations (in particular of higher order) and reconstruction strategies, but the retained choice gave the most satisfying results.
Let us first consider the in-plane displacement field. We first build an auxiliary in-plane displacement u^d_α, with α = 1, 2 (see the reconstruction scheme of Fig. 1). u^d_α is piecewise linear and complies with Eqs. (15), (16) but is not continuous at the interfaces. To achieve our initial goal, u_α is obtained by performing an L2-projection of u^d_α onto piecewise linear continuous functions of z. Now, we aim to find the reconstructed out-of-plane displacement u_3 as a continuous piecewise linear function of z which is compatible with the generalized displacements U^i_3 and V^{j,j+1}. Introducing a continuous piecewise linear interpolation for u_3(x, y, z), written in terms of linear shape functions φ^{j,j+1} and corresponding nodal values q^{j,j+1}, the compatibility conditions take the matrix form [B][q] = [F_3], where [q]^T = (q^{0,1}, ..., q^{n,n+1}), [B] is a matrix of dimension (2n − 1) × (n + 1) and [F_3] is a vector of dimension 2n − 1. The solution to this problem is computed in the least-squares sense and gives a direct characterization of the degrees of freedom [q] as a function of the generalized displacements U^i_3 and V^{i,i+1}. Finally, from the previously reconstructed 3D displacement field u_i, the strain field is computed using the 3D compatibility equations and the reconstructed stress tensor σ is then computed using the 3D constitutive equations.
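The least-squares step for the nodal values [q] can be written in a few lines of NumPy. The matrix [B] and right-hand side [F_3] below are random placeholders standing in for the assembled constraint system of dimensions (2n − 1) × (n + 1); only the solution step itself reflects the procedure described above.

```python
import numpy as np

n = 4                                   # hypothetical number of layers
rng = np.random.default_rng(0)

# Placeholder constraint system: (2n - 1) equations for (n + 1) nodal values q^{j,j+1}.
B = rng.standard_normal((2 * n - 1, n + 1))
F3 = rng.standard_normal(2 * n - 1)

# Least-squares solution gives the nodal values of the reconstructed u_3(z).
q, residuals, rank, _ = np.linalg.lstsq(B, F3, rcond=None)
print("nodal values q:", np.round(q, 3))
print("rank of B:", rank)
```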
Error indicator and mesh adaptation
The error indicator which will be used for mesh adaptation is then computed, for each triangular element, from the difference between σ^3D and σ in terms of elastic energy, where e(σ) = ½ σ : S : σ denotes the elastic energy density and Ω_e a given element e. The mesh elements are then ordered in a decreasing fashion based on their error indicator values, N being the total number of elements. We then tag the first K elements which contribute at least a fraction η of the total error. The tagged elements are then automatically refined by the FEniCS mesh adaptation procedures.
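The marking strategy, sorting elements by their error contribution and refining the smallest set carrying a fraction η of the total, is a Dörfler-type bulk criterion and can be sketched as follows. The per-element error values are placeholders; the FEniCS refinement machinery would then be applied to the tagged cells.

```python
import numpy as np

def mark_elements(element_errors, eta=0.7):
    """Return indices of the elements to refine.

    Elements are sorted by decreasing error contribution and the first K
    elements whose cumulative error reaches a fraction `eta` of the total
    error are tagged for refinement (bulk/Doerfler-type marking).
    """
    errors = np.asarray(element_errors, dtype=float)
    order = np.argsort(errors)[::-1]           # decreasing error
    cumulative = np.cumsum(errors[order])
    k = int(np.searchsorted(cumulative, eta * errors.sum())) + 1
    return order[:k]

# Hypothetical per-element error indicator values.
errs = [0.02, 0.30, 0.01, 0.25, 0.05, 0.15, 0.22]
print(mark_elements(errs, eta=0.7))   # indices of the dominant elements
```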
Illustrative applications
In this section, we investigate different illustrative applications assessing the quality of the stress field approximation, the error indicator and the mesh refinement strategy. The last examples consider more practical situations arising when designing composite laminates, namely stress concentrations near holes with associated free-edge singularities and interfacial stress singularities in the presence of interface delamination.
Homogeneous laminate
This first example considers a homogeneous square plate of length l = 1 and thickness h = 0.2.The constitutive material is assumed to be isotropic with E = 10 GPa and ν = 0.3.
The plate is fully clamped on its boundary and subject to a uniform loading of intensity q = 8.Calculations are performed considering a uniform discretization of n = 3 and n = 5 layers across the thickness and have been compared to finite-element computations using 3D solid elements on a very fine mesh.The initial mesh was a structured mesh with two triangular elements on each side of the square plate.First the case with n = 3 layers is considered.As expected, the multilayered plate solution is of very good quality near the plate center after only one refinement step as shown in Fig. 2.
We therefore investigate the quality of the computed stress field at a point of coordinate (x = 0.01, y = 0.5) near the left edge.In Fig. 3a, we compare the multilayered stress field σ 3D with its reconstruction as described in section at the same point near the edge.It can be observed that the reconstruction does not agree with σ 3D for the initial coarse mesh, indicating that mesh size should be refined in this region.Figure 3b, c illustrate the evolution of σ 3D and σ near the border when refining the mesh.It can be seen that mesh refinement provides a much better agreement between both stress fields.The error indicator therefore correctly identifies regions located near the clamped boundaries as the most critical regions as evidenced by the final mesh layout of Fig. 4a obtained after 6 refinement steps.
Performing the same comparison in the case when the plate thickness is discretized into n = 5 layers shows the same behaviour (Fig. 5). Although σ^3D and σ are a little closer for the initial coarse mesh, the deviation is still significant, indicating that the in-plane mesh resolution is not fine enough. The mesh refinement procedure yields a similar final mesh layout, with fine cells concentrated along the borders (see Fig. 4b), and better agreement between σ^3D and σ at the final stage. On both Figs. 3c and 5c, the reference solid FE solution is also represented, showing good agreement with the multilayered stress field after mesh refinement. The effect of mesh refinement is further illustrated when plotting the evolution of the total relative error indicator in Fig. 6. It can be seen that the relative errors decrease when refining the mesh and tend to stabilize after a few iterations only. Besides, relative errors are larger for n = 3 than for n = 5, which may indicate that the reconstruction is of higher quality for n = 5 layers than for n = 3 layers. Let us point out that the value obtained for such errors cannot be considered either as a guaranteed level of error with respect to an exact solution or as an upper bound to the true error. It is however an error indicator, as shown by the previous results, which can be used qualitatively to assess the solution accuracy.
Triple laminate
The second example considers a heterogeneous square plate of length l = 1 and total thickness h = 0.2 made of a triple laminate consisting of a central core of thickness e 2 = 0.12 and two symmetric skins of thickness e 1 = 0.04 each.The constitutive materials are assumed to be isotropic with E = 50 GPa and ν = 0.2 for the skins and E = 10 GPa and ν = 0.3 for the core.Loading and conditions are the same as for the homogeneous plate.Calculations are performed considering a discretization consisting of one mathematical layer in both skins and in the core (total of n = 3 layers) and a discretization consisting of one mathematical layer per skin and 3 layers for uniformly discretizing the core thickness (total of n = 5 layers).Again the multilayered plate model computations have been compared to reference 3D solid finite-element computations on a fine mesh.First, we considered the case with n = 3 layers.As before, the solution is of lesser quality near the supports and stress fields are therefore compared at the same (x = 0.01, y = 0.5) location as before.Figure 7 shows the comparison of the stress field σ 3D with its reconstruction across the plate thickness for various mesh refinement steps.It can be observed that σ 3D and its reconstruction do not match for the initial coarse mesh, indicating the mesh size should be refined in this region.Mesh adaptation improves the quality of the solution in such regions as evidenced by the good agreement with the reference 3D FE solution.Performing the same comparison using a more refined discretization with n = 5 layers in the thickness exhibits a similar behaviour (Fig. 8).Although σ 3D and σ are a little closer for the initial coarse mesh, the deviation is still significant indicating that in-plane mesh resolution is not fine enough.A similar refined mesh layout is obtained with fine cells concentrated along the borders (see Fig. 9), and better agreement between σ 3D and σ at the final stage.
Finally, the evolution of the total relative error indicator as a function of mesh refinement steps in Fig. 10 exhibits a similar behaviour as for the homogeneous plate.
Laminate with a circular hole
The third example considers a rectangular multilayered plate of length l = 6, width w = 1 and total thickness h = 0.01. The plate is perforated by a circular hole of radius R = 0.15 in its center (Fig. 11a). The laminate is made of a transversely isotropic material of elastic properties E_T = 14.48 GPa, E_L = 137.9 GPa, ν_T = 0.21, ν_L = 0.21, μ_T = 5.86 GPa and μ_L = 5.86 GPa, with L (resp. T) denoting the fiber longitudinal direction (resp. the perpendicular transverse direction). The laminate consists of 6 plies (one layer per ply) with fibers oriented at [0°, 90°, 45°, −45°, 90°, 0°] with respect to the horizontal direction. A tensile loading is applied to the plate through an imposed horizontal displacement U^i = ±U e_x for all plies i = 1, ..., 6.
Applying the proposed reconstruction and error estimation to this problem yields a globally more refined mesh with finer regions located near the top and bottom boundaries of the circular hole, see Fig. 11b, obtained after 4 refinement steps. More insight can also be gained by visualizing the individual layer contributions to the total error. For instance, Fig. 12 plots the contributions of the 45° (layer 3) and −45° (layer 4) layers to the total error. These two contributions are the dominant ones as regards stress concentrations near the hole. The effect of the material anisotropy on these two contributions can also be clearly observed.
Double-Cantilever Beam with delaminated interface
The final example we consider is that of a rectangular multilayered plate of the same dimensions as before (without the circular hole) and the same lamination properties.We model a portion of a delaminated interface located in the middle interface ((i, i+1) = (3, 4)) in the region x ≤ 1 by forcing the interface stresses ν 3,4 and τ 3,4 α to be zero on this region.This results in an appropriate modification of the constitutive equations of the SCLS1 model and the corresponding finite-element implementation.
The plate is clamped on its right boundary, and positive (resp. negative) vertical displacements U^i_3 = +U (resp. U^i_3 = −U) are enforced on the left part for the top layers i = 4, 5, 6 (resp. bottom layers i = 1, 2, 3), simulating a Double-Cantilever Beam (DCB) test (see Fig. 13).
As expected, the mesh adaptation procedure mainly concentrates the finer cells near the delamination front, at which the interface stresses are the most singular, see Fig. 14. The proposed procedure can therefore be coupled with a delamination propagation model, for which the stresses driving the delamination front propagation will be well resolved.
Fig. 12 Error indicator maps in layers 3 and 4 (top and middle) and total error for all layers (bottom) on the initial mesh
Fig. 13 The initial mesh for the DCB problem
Conclusions and perspectives
In this paper, a statically compatible layerwise stress model for laminated plates (SCLS1) has been considered for an accurate representation of 3D elastic fields. A mesh adaptation strategy has then been developed which relies on the reconstruction of 3D displacement fields from the model generalized displacements, the error indicator being obtained as a constitutive error between both fields. The obtained results indicate that:
• the error indicator is able to refine the mesh in regions with complex 3D stress fields;
• these critical regions indeed correspond to plate edges, notches or delamination fronts.
The proposed methodology can be further improved by pointing out that refined layerwise models such as the one considered here are appropriate in critical regions near boundaries, free edges, delaminated interfaces, etc. This point is indeed properly identified by the proposed remeshing procedure. In the bulk region away from these critical zones, it would be sufficient to adopt an equivalent single-layer plate model based on a Love-Kirchhoff kinematics for instance. Although the remeshing procedure favours coarse cells in such regions, mitigating the number of unnecessary degrees of freedom, an additional gain could then be obtained by mixing a layerwise model for critical regions with an equivalent single-layer model for the remaining part.
A second potential line of work is concerned with the fact that, although the layerwise model is built at the continuous level from a stress-based perspective complying with the balance equations, its numerical resolution is performed through a displacement-based approximation for the in-plane variations.As a consequence, the resulting generalized stress fields, and therefore, the associated 3D stress field, do not satisfy strongly the balance equations.In order to maintain the initial philosophy of a stress-based statically compatible construction, developing a stress-based finite-element discretization of the model would be an interesting approach, potentially paving the way to obtaining more rigorous error estimators than the one considered here.
• Normal constitutive equation of interface j, j + 1:
• Constitutive equation for the π generalized stress at interface j, j + 1:
Fig. 2 Energy densities across the plate thickness computed for σ^3D and σ at the plate center (n = 3)
Fig. 4 Final refined meshes for the homogeneous plate for different thickness discretization levels
Fig. 6 Total relative error evolution for 3- and 5-layer discretizations
Fig. 7 Energy densities across the plate thickness computed for σ^3D and σ at the plate edge for n = 3
Fig. 8 Energy densities across the plate thickness computed for σ^3D, σ and σ_ref at the plate edge for n = 5
Clinical, Neuroimaging, and Metabolic Footprint of the Neurodevelopmental Disorder Caused by Monoallelic HK1 Variants
Background and Objectives Hexokinase 1 (encoded by HK1) catalyzes the first step of glycolysis, the adenosine triphosphate–dependent phosphorylation of glucose to glucose-6-phosphate. Monoallelic HK1 variants causing a neurodevelopmental disorder (NDD) have been reported in 12 individuals. Methods We investigated clinical phenotypes, brain MRIs, and the CSF of 15 previously unpublished individuals with monoallelic HK1 variants and an NDD phenotype. Results All individuals had recurrent variants likely causing gain-of-function, representing mutational hot spots. Eight individuals (c.1370C>T) had a developmental and epileptic encephalopathy with infantile onset and virtually no development. Of the other 7 individuals (n = 6: c.1334C>T; n = 1: c.1240G>A), 3 adults showed a biphasic course of disease with a mild static encephalopathy since early childhood and an unanticipated progressive deterioration with, e.g., movement disorder, psychiatric disease, and stroke-like episodes, epilepsy, starting in adulthood. Individuals who clinically presented in the first months of life had (near)-normal initial neuroimaging and severe cerebral atrophy during follow-up. In older children and adults, we noted progressive involvement of basal ganglia including Leigh-like MRI patterns and cerebellar atrophy, with remarkable intraindividual variability. The CSF glucose and the CSF/blood glucose ratio were below the 5th percentile of normal in almost all CSF samples, while blood glucose was unremarkable. This biomarker profile resembles glucose transporter type 1 deficiency syndrome; however, in HK1-related NDD, CSF lactate was significantly increased in all patients resulting in a substantially different biomarker profile. Discussion Genotype-phenotype correlations appear to exist for HK1 variants and can aid in counseling. A CSF biomarker profile with low glucose, low CSF/blood glucose, and high CSF lactate may point toward monoallelic HK1 variants causing an NDD. This can help in variant interpretation and may aid in understanding the pathomechanism. We hypothesize that progressive intoxication and/or ongoing energy deficiency lead to the clinical phenotypes and progressive neuroimaging findings.
Introduction
Developments in high-throughput sequencing technologies in recent years, such as exome or genome sequencing, enable us to comprehensively and timely investigate the genetic causes of neurodevelopmental disorders (NDDs) with or without epilepsy. 1Often the clinical spectrum of NDDs is broad, with large interindividual differences; therefore, the clinical phenotypes are nondistinctive.While the identification of new disease genes is progressing rapidly, our understanding of the cellular pathways leading to pathology lags far behind.This hampers variant interpretation, which, in the absence of a biomarker or functional testing, solely relies on reverse clinical phenotyping.[3] The gene HK1 encodes hexokinase (HK)1, one of the 2 isoforms of hexokinases that are associated with mitochondria, following their interaction with the outer membrane mitochondrial porin, the voltage-dependent anion channel.Hexokinases catalyze the ATP-dependent phosphorylation of glucose to glucose-6-phosphate, the first step and rate-limiting reaction in glycolysis.By phosphorylating glucose, hexokinases effectively prevent glucose from leaving the cell and thus commits glucose to intracellular metabolism, e.g., energy metabolism.Of note, HK1 is the sole hexokinase isoform found in the cells and tissues, which rely most heavily on glucose metabolism for their function, including neurons and astrocytes, retinal cells, erythrocytes, platelets, and fibroblasts. 4thogenic variants in HK1 can cause different disorders with different types of inheritance.One specific homozygous variant in the promotor region has been attributed to hereditary motor and sensory neuropathy Russe type (HMSNR; MIM #605285), 5 while different biallelic variants (in all protein domains with exception of the interdomain helix) that result in HK1 variants with reduced stability 6 explain hereditary nonspherocytic to hemolytic anemia (MIM #235700). 7In rare cases, however, HK1 variants in hemolytic anemia were also reported to be associated with multiple malformations including NDD 8,9 or intrauterine fetal death. 10One specific monoallelic variant, which is located outside the catalytic pocket in the HK1 C-terminal subdomain, is known to underly retinitis pigmentosa 79 (RP79; MIM #617460). 11In addition, monoallelic variants, which affect a tissue-specific regulatory element have been reported in congenital hyperinsulinism. 124][15] The variants causing the latter phenotype are found in the regulatory HK1 N-terminal subdomain and at the beginning of the interdomain helix.
The structural analysis suggests that the missense variants within the N-terminal regulatory domain and interdomain alpha helix may disrupt the regulatory glucose-6-phosphate binding site. Because this binding site is responsible for product inhibition of HK1, disruption of glucose-6-phosphate binding could result in an inability to autoregulate, leading to gain of function and constitutive glucose phosphorylation. This is underlined by the presence of >20 individuals with heterozygous HK1 truncating alleles in the ExAC database, suggesting that HK1-associated dominant disease phenotypes probably result from either a gain-of-function or dominant-negative mechanism instead of haploinsufficiency. 14 Here, we investigate the clinical phenotypes, MRIs, and the CSF of 15 previously unpublished individuals with monoallelic HK1 variants and an NDD phenotype to add distinctive radiologic and laboratory features (biomarkers), contributing to a precise definition and recognition of the phenotype, and to improve our understanding of the underlying disease mechanism.
Methods
Eligible individuals with monoallelic disease-causing variants in HK1 and an NDD phenotype were identified through …
Glossary: DEE = developmental and epileptic encephalopathy; G6P = glucose-6-phosphate; HK = hexokinase; NDD = neurodevelopmental disorder.
Data not provided in the article because of space limitations may be shared (anonymized) at the request of any qualified investigator for purposes of replicating procedures and results.
Genotypes and Clinical Phenotypes
As summarized in Table 1, 15 previously unpublished cases (6 male), including 1 pair of siblings, were identified, ranging in age from infancy to the fifth decade at last follow-up. The age at onset ranged from birth/the neonatal period to infancy, when developmental issues became apparent. In all these individuals, exclusively recurrent monoallelic HK1 variants (reference sequence GenBank NM_000188.3) were identified, all of which had been reported previously. 13-15 All variants are expected to have a gain-of-function effect. For all but individual 7, segregation analysis by trio exome sequencing or targeted Sanger sequencing was performed and showed absence of the variant in both parents.
While the NDDs related to de novo HK1 variants span a spectrum, as is common in nearly all NDDs, we observed 2 main phenotypes. The 8 individuals (1-8) with the c.1370C>T (Thr457Met) variant all presented with a developmental and epileptic encephalopathy (DEE) phenotype in the first weeks or months of life, with drug-resistant infantile spasms. They showed virtually no development, and 3 of them died at the ages of 8 months, 18 months, and 8.5 years, respectively. Neither a movement disorder nor retinal changes were reported in any of these cases. Of note, 2 of our cases (Table 1, individuals 5 and 6) carrying this variant were siblings, suggesting parental germline mosaicism.
In 6 individuals with the c.1334C>T (Ser445Leu) and 1 individual with the c.1240G>A (Gly414Arg) variant, we observed a different course of disease. All these individuals showed global developmental difficulties, finally resulting in nonspecific cognitive abnormalities with a normal IQ in 1 individual, learning disabilities in one other, and mild-to-
Neuroimaging Findings
All 15 individuals underwent at least 1 MRI study, and 7 of them had multiple (up to 4) MRIs (Table 2). There was a very broad range regarding the age at imaging (from the day of birth until the 5th decade). In individual 6, MRI showed severe brain atrophy, swelling of the globus pallidus, and signal alterations in the crura cerebri (Figure 1). In individual 7, the putamen and caudate nucleus had a remarkable "mottled" appearance (Figure 2). All children in this group had global brain atrophy.
(3) MRI at school age (n = 3; aged 4-5 years at their first MRI; individuals 10, 12, and 14). The images of individual 10 showed only mild cerebellar atrophy at school age. Individual 12 (at school age) was found to have cerebellar atrophy and bilateral involvement of the putamen with a putaminal eye pattern and signal changes within the caudate heads, while globus pallidus and thalamus were not affected. Cerebellar atrophy was seen. Individual 14 had a first MRI at school age with evidence of mild signal changes in the caudate heads and a localized change in 1 putamen. At early (pre)adolescent age (Figure 3), the signal changes were very prominent in the caudate heads, with extension into the body of the caudate nucleus. In addition, the putamina were atrophic, with a putaminal eye on 1 side. Remarkably, this individual has not developed cerebellar atrophy so far. (4) MRI in adulthood (older than 18 years; n = 4; individuals 9, 11, 13, and 15). The findings among adult individuals were variable but overall relatively mild; cerebellar atrophy was noted in all 4. Importantly, MRI studies in individual 15 (in the middle of the 5th decade), performed after a subacute clinical attack with headache and right-sided hemianopsia, showed, apart from cerebellar atrophy, an area of left parieto-occipital cortical ischemia not compatible with a vascular territory (Figure 4). One year later, after another subacute attack with the hemianopsia presenting on the opposite side, the MRI also showed cortical ischemia of the right parieto-occipital area. The neuroimaging findings were found to be progressive when follow-up MRIs were available (individuals 1, 6, 7, 8, 14, and 15; Figures 1, 2, and 4).
CSF Analysis
Results of CSF analyses were available for 12 of 15 cases (not in individuals 3, 11, and 13). Two individuals underwent multiple lumbar punctures; altogether, data were available for 17 CSF samples (Table 3). This does not include blood lactate concentrations, CSF cell count, and CSF protein levels because these values were generally within reference ranges. While blood glucose concentrations were normal, CSF glucose was below the 10th percentile of age-specific reference values 16 in 15/17 (88%) samples and below the 5th percentile of age-specific reference values in 11/17 (65%) samples. The CSF/blood glucose ratio was available for 13 samples and found to be below the 10th percentile of age-specific reference values in 7 (54%) of these samples; on 4/13 (31%) occasions, the ratio was below the 5th percentile. CSF lactate was measured in 15 CSF samples and found to be far above age-specific reference values in all of them (100%).
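To make the screening logic above concrete, the following minimal sketch (Python, for illustration only) flags a single lumbar-puncture result against percentile cutoffs; the numeric thresholds are placeholders, not the age-specific reference values of Leen et al., which would have to be looked up for each patient's age.

```python
# Minimal sketch: flag one CSF sample as described in the text.
# The cutoffs below are illustrative placeholders, NOT the age-specific
# reference values of Leen et al.

def flag_csf_sample(csf_glucose, blood_glucose, csf_lactate,
                    p5_glucose=2.6, p10_glucose=2.8,   # hypothetical cutoffs (mmol/L)
                    p5_ratio=0.45, p10_ratio=0.50,
                    p95_lactate=2.0):
    """Return the CSF/blood glucose ratio and simple abnormality flags."""
    ratio = csf_glucose / blood_glucose if blood_glucose else None
    flags = {
        "csf_glucose_below_p10": csf_glucose < p10_glucose,
        "csf_glucose_below_p5": csf_glucose < p5_glucose,
        "ratio_below_p10": ratio is not None and ratio < p10_ratio,
        "ratio_below_p5": ratio is not None and ratio < p5_ratio,
        "lactate_above_p95": csf_lactate > p95_lactate,
    }
    return ratio, flags

# Example: low CSF glucose and high CSF lactate under normoglycemia,
# the combination reported here for HK1-related NDD.
ratio, flags = flag_csf_sample(csf_glucose=2.1, blood_glucose=5.0, csf_lactate=3.4)
print(round(ratio, 2), flags)
```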
Discussion
Although our study is too small to draw firm conclusions on genotype-phenotype correlations, we noticed that all individuals with the most severe clinical and radiologic phenotype (neonatal and infantile age group) harbored the same monoallelic HK1 variant, c.1370C>T (replacing threonine 457 with methionine). These individuals presented in the first months of life with a DEE and showed virtually no achievement of any developmental milestones. They have a greatly shortened life expectancy. By contrast, of the 4 individuals with the c.1334C>T variant and the (single) individual with the c.1240G>A variant who had already reached adult age, 4 of the 5 showed an attenuated disease course, starting with an apparently static encephalopathy with global developmental difficulties, which later unexpectedly turned into a progressive disorder in their early 20s. Of note, both the c.1370C>T and the c.1334C>T variants are situated in the interdomain helix, while c.1240G>A is located in the HK1 N-terminal subdomain.
Remarkably, retinitis pigmentosa was found in 3 individuals with the biphasic course. In this context, we considered it noteworthy that RP79, a disorder limited to the retina without any neurologic (or systemic) involvement, is caused by 1 specific monoallelic HK1 variant, namely c.2539G>A (p.Glu847Lys), 17 which is located outside the catalytic pocket in the HK1 C-terminal catalytic subdomain. It is unknown why this specific variant affects only the retinal tissue.
Of note, the 3 different phenotypes (RP79, congenital hyperinsulinism, and NEDVIBA) related to monoallelic HK1 variants are caused by recurrent variants, while HMSNR is caused by 1 recurrent homozygous variant. Furthermore, biallelic HK1 variants are also known in association with anemia.
HK1 is thus a human gene in which variants lead to an exceptionally large number of distinctive and very diverse clinical phenotypes.
Retrospective analyses of all available images allowed us to recognize some common MRI patterns in this series of individuals, especially after merging the results for different age groups. The number of (serial) MRIs was too small to make subgroups based on the genotype. We found that cerebral MRI did not show gross brain malformations, except in 1 individual with agenesis of the corpus callosum. Individuals who (clinically) presented in the first months of life had (near)-normal images, although they were clinically severely affected and often had a fatal disease course. We assume, given the progressive (clinical) course of the disease in these individuals and the findings in all other age groups, that these early MRI studies simply "lag behind" the clinical course. Severe brain atrophy was a very dominant finding in the MRI studies of the second age group, i.e., infants. In age groups 3 and 4 (juvenile and adult), we noted involvement of specific brain structures, namely basal ganglia and cerebellum, but we also found remarkable variability between individuals.
The putamen was affected in 5 individuals, showing a pattern highly reminiscent of the so-called putaminal eye in 4 of them (case 4 at 16 months; case 12 at 4 years; case 14 at 7 years; and case 9 at 25 years). The term "putaminal eye" was coined in the image analysis of individuals with MEGDEL syndrome (3-MEthylGlutaconic aciduria, Deafness, Encephalopathy, Leigh-like syndrome, MIM #614739) 18 and refers to a spared tissue section in an otherwise abnormal T2-hyperintense putamen. This sign is stage-associated and disappears with disease progression in MEGDEL syndrome. 19 A putaminal eye is no longer considered pathognomonic for MEGDEL syndrome because it was also observed in single cases with Aicardi-Goutières syndrome, mitochondrial complex 1 deficiency, and SLC19A3-associated disease. 20 Putaminal eyes have also been recognized in 2 other individuals with de novo HK1 variants (figure 1 in ref 14). An explanation for the temporary sparing of the central putaminal area in MEGDEL syndrome and the other disorders is lacking. We hypothesize that a distinct form of mitochondrial dysfunction is the common underlying mechanism.
Only in 1 individual (individual 6) were the globus pallidus and mesencephalic structures markedly affected; this occurred in a late stage of the disorder, in which severe supratentorial atrophy had already occurred. We postulate that this imaging pattern was not seen more often because other individuals may already have succumbed or may not have undergone MRI during the final disease stages. Involvement of the caudate nucleus is variable in our cohort. It was never seen in MRI studies in the neonatal and infantile age groups nor in adult individuals, and it never occurred as an isolated finding. In the context of brain atrophy, the thalamus may show some nonspecific volume loss and T2 hyperintensity, but otherwise, this structure is spared in all images. Atrophy of the cerebellum is a common nonspecific finding in many inborn metabolic and genetic disorders. 21 It was not seen in neonatal-onset cases and their follow-up (as far as imaging was available), but otherwise it was found in almost all MRI studies. Apart from atrophy, no other cerebellar (including deep cerebellar nuclei) abnormalities were seen.
Stroke-like episodes were encountered in the oldest individual (individual 15), the only individual with the c.1240G>A variant. Previous reports in 2 individuals with an identical variant showed similar cerebellar atrophy but without stroke-like episodes. 13,15 Whether stroke-like episodes are part of the imaging spectrum associated with de novo HK1 variants in general or with the c.1240G>A variant particularly remains to be determined.
In this study, we describe the characteristics of a series of 15 individuals with monoallelic HK1 variants. Of interest, we detected only 3 different HK1 variants in our cohort, all of which had been reported before 13-15 and therefore likely represent mutational hot spots. Of note, to date, a total of 6 monoallelic HK1 variants (c.1240G>A, c.1241G>A, c.1252A>G, c.1334C>T, c.1370C>T, and c.1969G>A) have been reported to cause an NDD phenotype; all are thought to lead to gain of function.
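The correspondence between the cDNA positions listed above and the affected residues follows from the standard rule that coding position c.N lies in codon ceil(N/3). The short sketch below (illustrative only) reproduces the residue numbers quoted in this article, e.g., c.1370 → codon 457 (Thr457Met) and c.2539 → codon 847 (Glu847Lys in RP79).

```python
import math

def codon_of(cdna_pos: int) -> int:
    """Codon number containing coding (cDNA) position c.N: ceil(N / 3)."""
    return math.ceil(cdna_pos / 3)

# cDNA positions of the monoallelic HK1 variants mentioned in the text.
for pos in (1240, 1241, 1252, 1334, 1370, 1969, 2539):
    print(f"c.{pos} -> codon {codon_of(pos)}")
# e.g., c.1370 -> 457 (Thr457Met), c.1334 -> 445 (Ser445Leu),
#       c.1240 -> 414 (Gly414Arg), c.2539 -> 847 (Glu847Lys)
```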
Some known mechanisms may, at least partially, explain these hot spots. CpG dinucleotides have, on average, a 10-fold higher mutability than non-CpG dinucleotides. 22 In addition, the trinucleotide sequence is another factor significantly determining the mutation rate: the mutation rate among the 64 possible trinucleotide sequences may differ up to 75-fold between the most and least mutable trinucleotides. 23 The c.1334C>T variant affects an ACG and the c.1370C>T variant a TCG trinucleotide. Analyzing these variants within a bigger DNA sequence context would be interesting; however, we are not aware of any prediction tools that could predict mutational hot spots in genes. We will not examine the mechanisms of evolution and selection in terms of gene mutability. From these perspectives, the c.1334C>T and c.1370C>T HK1 variants both arise in vulnerable trinucleotides. Of interest, 2 of our cases (individuals 5 and 6 in Table 1) carrying the c.1370C>T variant were siblings, and the same was true in a previously reported sibling pair. 13 We can only speculate about contributing factors.
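To make the trinucleotide argument concrete, the sketch below reports the trinucleotide context of a substituted cytosine and whether it sits in a CpG dinucleotide; the short sequence used here is a made-up stand-in, and the actual NM_000188.3 coding sequence would be needed to reproduce the ACG and TCG contexts quoted above.

```python
def context_of(coding_seq: str, pos: int) -> dict:
    """Trinucleotide context of a 1-based coding position and CpG status of a C there."""
    i = pos - 1
    trinucleotide = coding_seq[max(i - 1, 0): i + 2]
    is_cpg = coding_seq[i] == "C" and i + 1 < len(coding_seq) and coding_seq[i + 1] == "G"
    return {"base": coding_seq[i], "trinucleotide": trinucleotide, "CpG": is_cpg}

coding_seq = "ATGACGTTCGA"          # toy sequence, not the HK1 reference
print(context_of(coding_seq, 5))    # C in an ACG context -> CpG site
print(context_of(coding_seq, 9))    # C in a TCG context -> CpG site
```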
Spermatogenic cell-specific type 1 hexokinase (HK1S) is abundant in mouse sperm and located mainly in the principal piece of the sperm flagellum, where other spermatogenic cell-specific glycolytic enzymes have been found. 24 Three variant transcripts of HK1 that are expressed specifically in spermatogenic cells have different 5' untranslated regions and encode the protein HK1S, in which the porin-binding domain of HK1 is replaced by a novel N-terminal spermatogenic cell-specific region. Because HK1S seems to be important for the fitness/motility of sperm cells, an HK1 gain-of-function variant might support its own selection for the germline de novo mode of inheritance. Of note, both in a previous publication 13 and here, sibling pairs with identical de novo variants are reported, suggesting germline mosaicism, which could further strengthen the presented hypothesis.
Of interest, and for the first time, we report that CSF glucose is low in almost all CSF samples of individuals with de novo HK1 variants, often even below the 5th percentile of age-specific reference values, under normoglycemic conditions. In line with this, the CSF/blood glucose ratio is low on many occasions.
A low CSF glucose concentration and low CSF/blood glucose ratio are considered highly specific biomarkers for glucose transporter type 1 deficiency syndrome (GLUT1DS, MIM #606777), a neurologic disorder that is caused by defective transport of glucose into the CNS (i.e., GLUT1 haploinsufficiency) due to heterozygous SLC2A1 variants. 16,25 The disorder caused by de novo HK1 variants seems to share this remarkable CSF profile with GLUT1DS. Importantly, however, CSF lactate was found (far) above age-specific reference values in all individuals in this study and in 3 previously reported individuals, 14 while CSF lactate is within normal ranges or even decreased in individuals with GLUT1DS. 16 Thus, considering CSF glucose, the CSF/blood glucose ratio, and lactate concentrations together, the biomarker profiles of individuals with HK1 and SLC2A1 variants differ essentially.
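The contrast drawn here can be restated as a simple rule of thumb; the sketch below merely paraphrases the text and is not a validated diagnostic algorithm, and the qualitative inputs (low/normal/high) presuppose comparison against age-specific reference ranges.

```python
# Rule-of-thumb restatement of the biomarker contrast described in the text.
# Illustrative only; not a validated diagnostic tool.

def suggest_profile(csf_glucose: str, csf_blood_ratio: str, csf_lactate: str) -> str:
    hypoglycorrhachia = csf_glucose == "low" and csf_blood_ratio == "low"
    if hypoglycorrhachia and csf_lactate == "high":
        return "pattern described here for HK1-related NDD"
    if hypoglycorrhachia and csf_lactate in ("normal", "low"):
        return "pattern typical of GLUT1 deficiency syndrome (SLC2A1)"
    return "pattern not suggestive of either disorder"

print(suggest_profile("low", "low", "high"))    # HK1-like profile
print(suggest_profile("low", "low", "normal"))  # GLUT1DS-like profile
```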
Two questions remain: why do individuals with de novo HK1 variants have (1) increased CSF lactate concentrations and (2) low CSF glucose concentrations and a low CSF/blood glucose ratio (similar to GLUT1DS)? These CSF abnormalities may directly link to the underlying disease mechanism suggested by Poole et al. 14 In their study, the authors suggested, based on structural protein analysis, that certain missense variants (c.1241G>A, c.1252A>G, c.1334C>T, c.1370C>T) within HK1 may disrupt the regulatory glucose-6-phosphate (G6P) binding site, which is responsible for inhibition of HK1 by its own product (G6P). Protein changes within this site may therefore lead to decreased inhibition by G6P and thus gain of function of HK1. According to this hypothesis, excessive glucose phosphorylation and metabolic flux through the glycolytic pathway would play a central role in the underlying disease mechanism. We can imagine that under these conditions, cerebral glucose consumption may simply be greater than cerebral glucose supply (via GLUT1), thus leading to a decreased CSF glucose and CSF/blood glucose ratio. Furthermore, overactive glycolysis would result in the accumulation of glycolytic intermediates including its end product, pyruvate, and therefore also lactate in the brain. An increased flux through the glycolytic pathway would, at least theoretically, also lead to increased concentrations of some of its metabolites, such as dihydroxyacetone phosphate and glyceraldehyde 3-phosphate, or their byproduct methylglyoxal, which are thought to drive mitochondrial dysfunction and are linked to the development of neurodegenerative disorders. 26,27 To explain the decreased CSF glucose concentrations and low CSF/blood glucose ratio, we considered that HK1 is the rate-limiting enzyme of the glycolytic pathway and that increases of its activity may significantly affect brain glucose metabolism. Alternatively, we postulate that increased concentrations of the abovementioned glycolytic products may inhibit GLUT1 expression in the CNS (including the blood-brain barrier) to decrease glucose availability for the glycolytic pathway. The existence of such a protective feedback mechanism may be of vital importance, especially for organs such as the brain, which are not protected by insulin-regulated glucose influx, and has been demonstrated under experimental conditions in different cell cultures. 28,29 Taken together, in both HK1-related disease and GLUT1DS, hypoglycorrhachia occurs. The HK1 clinical phenotype is a progressive mitochondrial disease, presumably due to forced glycolysis with impaired autoregulation, presenting more like a Leigh syndrome with predominantly neurodevelopmental issues and seizures. By contrast, in GLUT1DS, glycolysis is impaired by too little substrate and presents with seizures, but the movement disorder is the other dominating clinical feature. In addition, lactate may contribute to this. While lactate is low in GLUT1DS, reflecting the complete shortage of all kinds of energy sources, it is high and presumably contributes to the observed damage in HK1 deficiency.
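For reference, the textbook net stoichiometry behind this argument (standard biochemistry, not data from this study) is: glucose + 2 ADP + 2 Pi + 2 NAD+ → 2 pyruvate + 2 ATP + 2 NADH + 2 H+ + 2 H2O, and pyruvate + NADH + H+ ⇌ lactate + NAD+. Any glycolytic flux that exceeds the capacity for mitochondrial pyruvate oxidation is therefore expected to appear as lactate, consistent with the elevated CSF lactate observed here.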
From a therapeutic viewpoint, it would be extremely important to know whether part of the neurologic disorder is indeed the consequence of insufficient GLUT1 transport capacity because ketogenic diet therapy is generally very effective in individuals with GLUT1DS (in whom defective glucose transport into the CNS is the leading disease mechanism).In our cohort, 5 of the individuals had been on ketogenic diet therapy.One was having more seizures when not in ketosis, and ketogenic diet therapy was therefore continued until his early death.In the other 4 patients, ketogenic diet therapy was started upon the suspicion of GLUT1 deficiency.The diet had no obvious effect on the seizures and/or the EEG, and it was stopped after several weeks when GLUT1 deficiency had been ruled out.In addition, Poole et al. 14 reported 2 individuals who had been on ketogenic diet therapy without improvement, but more data are needed to draw firm conclusions on this topic.
Taking into account the spectrum of the progressive neurologic disorder and the MRI characteristics, including Leigh-like phenotypes and stroke-like episodes, as well as the high CSF lactate concentrations, the HK1-associated NDD as reported in this study may be classified among the group of mitochondrial disorders. Categorizing this phenotype as such may facilitate early recognition and aid variant interpretation. Similarly, the severe early-onset, drug-resistant epilepsy of all cases with the c.1370C>T variant allows for classification as a DEE and, vice versa, adds HK1-associated NDD to the differential diagnoses of the clinical syndrome of DEE.
Finally, we would like to stress that this disorder seems to be only the second disorder (after GLUT1DS) with a remarkably low CSF glucose concentration and low CSF/blood glucose ratio.
Our study has several limitations: the cohort includes only a small number of individuals. MRI was performed at various ages, i.e., likely in different disease stages, using different imaging protocols, and many individuals had no follow-up studies. Likewise, lumbar punctures were performed at different ages and disease stages under different circumstances. Nevertheless, we feel that the results, as discussed earlier, may add valuable novel insights and may guide future research. A rational next step could be to find out whether these individuals indeed have a CSF profile that would fit with 1 or both proposed disease mechanisms (overactive glycolysis and deficient glucose transport into the CNS, respectively; for additional biomarkers, see also ref 30). We feel that only a deeper understanding of the underlying biochemical defect in this disorder would make it more amenable to (rational) therapeutic approaches, manipulating cerebral glucose handling in general or HK1 enzyme activity in particular.
Figure 1: MRI of Individual 6
Figure 2: MRI of Individual 7
Figure 3: MRI of Individual 14
Figure 4: MRI of Individual 15
Table 1: Clinical Features and Genotypes of 15 Individuals With Monoallelic HK1 Variants
Table 2: Results of Cerebral MRI Studies. Age ranges: neonate 0-4 wk; early infant 1-2 mo; infant 2-6 mo; late infant 6-12 mo; toddler 1-3 y; preschool 3-4 y; school 4-10 y; (pre)adolescent 10-18 y; young adult 18-30 y; adult 30-50 y.
Table 3: Results of CSF Studies. Values in green boxes are normal. In red boxes, values are abnormal, i.e., below the 5th percentile (CSF glucose and CSF/blood glucose ratio) or above the 95th percentile (CSF lactate) of age-related reference values. Asterisks mark normal values of CSF glucose and CSF/blood glucose ratio (above the 5th percentile) that are still below the 10th percentile of age-specific reference ranges. All age-specific reference ranges are from Leen et al.
Motivational interviewing interactions and the primary health care challenges presented by smokers with low motivation to stop smoking: a conversation analysis
Background: Research indicates that one third of smokers have low motivation to stop smoking. The purpose of the study was to use Conversation Analysis to enhance understanding of the process in Motivational Interviewing sessions carried out by primary care doctors and nurses to motivate their patients to quit smoking. The present study is a substudy of the Systematic Intervention on Smoking Habits in Primary Health Care Project (Spanish acronym: ISTAPS).
Methods: Motivational Interviewing sessions with a subset of nine participants in the ISTAPS study (two interview sessions were conducted with two of the nine) who were current smokers and scored fewer than 5 points on the Richmond test, which measures motivation to quit smoking, were videotaped and transcribed. A total of 11 interviews conducted by five primary health care professionals in Barcelona, Spain, were analysed. Qualitative Content Analysis was used to develop an analytical guide for coding transcriptions. Conversation Analysis allowed detailed study of the exchange of words during the interaction.
Results: Motivational Interviewing sessions had three phases: assessment, reflection on readiness to change, and summary. The interaction was constructed during an office visit, where interactional dilemmas arise and can be resolved in various ways. Some actions by professionals (use of reiterations, declarations, open-ended questions) helped to construct a framework of shared relationship; others inhibited this relationship (focusing on risks of smoking, clinging to the protocol, and prematurely emphasizing change). Some professionals tended to resolve interactional dilemmas (e.g., resistance) through a confrontational or directive style. Interactions that did not follow Motivational Interviewing principles predominated in seven of the interviews analysed.
Conclusions: Conversation Analysis showed that the complexity of the intervention increases when a health professional encounters individuals with low motivation for change, and interactional dilemmas may occur that make it difficult to follow Motivational Interviewing principles. Incorporating different forms of expression during the Motivational Interviewing could help to build patient-centred health care relationships and, for patients with low motivation to stop smoking, offer an opportunity to reflect on tobacco use during the office visit. The study findings could be included in professional training to improve the quality of motivational interviewing.
Background
Tobacco use is a preventable health problem linked to 25% of deaths among adults younger than 65 years in developed countries [1,2], making it the principal cause of premature death in these populations. In Spain, the percentage of the general population that smokes daily is declining steadily, from 32.1% in 1993 to 24% in 2012 [3]; nonetheless, health problems related to smoking are one of the most common reasons for visits to the health care system in general, and to primary health care (PHC) centres in particular [4]. The PHC setting is the most common resource for smoking cessation attempts [5]. Given that 70% of smokers annually visit a primary care professional, these centres have a strategic role in smoking cessation [6,7].
A study in Great Britain reported that one third of smokers reported low motivation to stop smoking [8].
Interventions by health professionals improve the likelihood of success. Various meta-analyses have shown that brief advice increases quit attempts by a further 1% to 3% [9,10].
Another approach used in PHC to motivate individuals who are hesitant to make changes or ambivalent about smoking cessation is Motivational Interviewing (MI), based on the work of Miller and Rollnick [11]. This method has been defined as a collaborative, person-centered style for addressing the problem of ambivalence about change. It is designed to strengthen personal motivation and commitment to a specific goal by eliciting and exploring the individual's own reasons for change, within a climate of acceptance, empathy, and mutual cooperation, ultimately respecting the individual's decisions [11]. MI has attracted considerable interest because of evidence that it produces better results than brief advice [12], which constitutes usual care in our PHC context [13]. Meta-analyses of smoking cessation interventions have reported that, compared to brief advice, MI achieves a modest but significant increase in the number of cessation attempts and in abstinence rates. However, the authors recommend caution in interpreting the results because of study limitations: variations in the quality of the study design, inadequate evidence of fidelity to MI principles (which had repercussions for motivation to change), and the possibility of publication biases [14][15][16][17].
Numerous studies have evaluated the efficacy of MI, focusing on how to measure MI counsellor fidelity in real-world settings and MI trainings [18][19][20]. These authors applied behavioural coding of MI sessions with fidelity assessment systems like the Motivational Interviewing Skills Code (MISC) [21] and the Motivational Interviewing Treatment Integrity (MITI) [22,23]. These instruments identify relational and behavioural characteristics of the therapy sessions for both the counsellor and the patient. Although this line of research is important, another approach is based on conversational analysis (CA) that identifies sequences that can offer deep insights into the interaction between the health professional and the patient. This method focuses on a turn-by-turn analysis, which allows a sequential examination of interactions and could shed greater light on the interpretations and assumptions established by the communication [24], compared to the more established MI coding schemes such as the MITI and MISC.
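As a schematic illustration of how a turn-by-turn (sequential) view differs from aggregate behaviour counts, the sketch below codes a toy exchange and tabulates which type of patient turn follows each type of professional turn; the turn labels and the toy dialogue are invented for illustration and are not MISC or MITI behaviour codes.

```python
from collections import Counter

# Toy coded transcript: (speaker, turn type). Labels are invented for
# illustration and are not MISC/MITI behaviour codes.
turns = [
    ("HP", "open_question"), ("P", "elaboration"),
    ("HP", "reflection"),    ("P", "elaboration"),
    ("HP", "advice"),        ("P", "resistance"),
    ("HP", "open_question"), ("P", "resistance"),
]

# Aggregate view: how often each behaviour occurs, regardless of position.
aggregate = Counter(turn_type for _, turn_type in turns)

# Sequential (turn-by-turn) view: what the patient does immediately after
# each type of professional turn.
sequential = Counter()
for (spk_a, type_a), (spk_b, type_b) in zip(turns, turns[1:]):
    if spk_a == "HP" and spk_b == "P":
        sequential[(type_a, type_b)] += 1

print(aggregate)
print(sequential)
```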
In the sociological discipline, CA has been used to study the health care interaction as a moment-bymoment production space for "human social life" [25]. This approach emerged from Garfinkel's ethnomethodology [26] and the ethnomethodological CA approach described by Sacks [27], both of which acknowledge talk-in-interaction as a social reality that occurs as turn-taking.
In recent years, several studies and reviews have been published that use CA to examine patient-health professional interactions, for example when delivering bad news [28] or offering advice on lifestyle changes [29], with findings that may prove key to successful professional practice.
Articles by Maynard & Heritage and Pilnick, Hindarsh & Gill showed the collaborative nature of health care interactions. When individuals are paying attention to the conversation and to the behaviour of the other, one will initiate a sequence and this will become a point of reference for the other, generating the second part of the sequence. In addition, co-constructing the interaction involves the notions held by both participants about the subjects they are discussing, as well as the social context in which the interaction takes place [24,30].
Another topic of interest in CA is the study of how the office visit is organized, the tasks that are completed, and the dilemmas that arise in the interaction. Mikesell [31] reviewed interaction studies that utilized CA, and reported the major findings that could help to build relationships of support and trust, and substantially improve patient health. The findings suggest that a dynamic, collaborative interaction is key to a positive office intervention. Among the studies reviewed was that of Barry et al. [32], which identified four types of health care interaction, defined by the shared or one-directional use of the Voice of the Lifeworld and the Voice of Medicine (using Misheler's terms) [33]: a) only the Voice of Medicine is used; b) the Voice of the patient (Lifeworld) is blocked by the Voice of Medicine; c) the Voice of the Lifeworld is ignored, and d) the point of departure is the Voice of the Lifeworld. When patients and health professionals work collaboratively, the outcomes improve; this can be measured by the presence or absence of misunderstandings, adherence to therapy, and each participant's satisfaction with the interaction [25].
Other significant contributions of the CA approach are found in studies by Pilnick & Coleman of office visits that include smoking cessation interventions. The advice to stop smoking is more effective when the health professional incorporates specific strategies to adjust the conversation to a patient's needs (negotiation of needs and personalization of the message). The patient is then more likely to adopt an attitude of consent that advances the conversation towards the target [34].
Finally, Coleman et al. analysed interactions that occurred while quit-smoking advice was being given, using an adaptation of CA as their method of analysis. They observed that health professionals had a confrontational reaction when faced with rejection of their advice. They also suggest that smoking cessation counselling aimed at patients with low motivation would have better outcomes if health professionals had more advanced conversational skills [35].
All of these aspects, analysed using CA (collaborative nature of the interaction, use of open-ended questions, negotiation), must be identified within MI sessions because they form part of that interaction style. Therefore, CA allows the analysis of specific practices that may, in our case, make motivating the patient more difficult and provide recommendations about the type of specific actions a health professional should carry out to introduce motivational elements into conversation and to improve patient satisfaction [24,36].
Only a few studies of smoking cessation interventions have examined brief advice about smoking from the CA perspective; no studies were identified that used CA to examine the MI sessions with patients having low motivation to quit smoking.
For these reasons, the present study aimed to use CA to analyse the structure of MI sessions carried out by primary care doctors and nurses in conversations with patients having low motivation to quit smoking. In addition, we examined the actions of the health professionals during the MI session and assessed the consequences in the patient response. These objectives arose from questions such as: "How is the MI session organized? What do people do to understand each other during the MI? What patterns of interaction are in line with the basic MI principles?"
The CA results concerning the encounter between a patient with low motivation to stop smoking, and a health professional conducting the MI session can provide useful knowledge to support the studies of the effectiveness of such interviews [37,38]. The results can also be used to improve the training offered to health professionals about patient communication, in an effort to advance our knowledge in this field.
Methods
The present work is a substudy of the Systematic Intervention on Smoking Habits in Primary Health Care Project (The ISTAPS study, Spanish acronym), a multicentre, cluster-randomized clinical trial in Spain [39,40]. This substudy applied a CA approach [26,41,42] to analyse the health care interaction, assessing how the conversation between the health professional and patient was structured during the MI session. This research focused on individuals with low motivation to stop smoking. All participants, both patients and health professionals, were concurrently participating in the ISTAPS study.
Four doctors (2 males, 2 females) and one female nurse, all with more than 10 years of professional experience, agreed to record their office MI sessions and to recruit smokers. Before beginning the ISTAPS study and during that study period, the health professionals in the intervention group attended 20 hours of workshop training on smoking cessation interventions. The workshops used techniques such as role-playing and included a four-hour training session on the practical aspects of the MI protocol. In addition, participants attended eight hours of reinforcement sessions [39].
Patients were recruited to the ISTAPS study if they identified themselves as smokers in response to a question from the attending health professional when they came to the PHC office for any reason. Patients who provided informed consent were invited to make another appointment at the office, when the PHC professional collected personal and smoking habit data (selection interview). At the end of the selection interview, smokers identified as being at the precontemplation or contemplation stage of change were interviewed for about 10 minutes, using the brief MI format of incorporating personalized motivating elements into the conversation, based on the Rollnick & Butler model [43]. They were also given a leaflet containing motivational information and told about the help available to them if they changed their minds and decided to quit smoking [39].
The strategy used to select the smokers included in this study was maximum variation sampling [44]. Selection criteria were sex (male-female), age (young-adult-elderly), socioeconomic status [45], low motivation to quit smoking (<5 points on the Richmond Test) [46], and being in the precontemplation or contemplation stage of the change process (precontemplation-contemplation-preparation-action) [47]. In addition, patients were selected if they agreed to their office visits being recorded for a period of six months. Nine ISTAPS participants met these inclusion criteria. Two of the nine participants came to the office for a second visit during the study period because of a health issue, and at the end of the visit the health professional took the opportunity to conduct a second MI session, for a total of 11 interviews conducted by the five participating health professionals. The characteristics of the nine smokers interviewed are presented in Table 1.
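A minimal sketch of how these selection criteria could be applied to a participant record is shown below; the field names and the example record are hypothetical, and in practice the Richmond score and stage of change came from the ISTAPS selection interview.

```python
# Hypothetical participant record and eligibility check mirroring the
# selection criteria described above (field names are invented).

def eligible(participant: dict) -> bool:
    return (
        participant["richmond_score"] < 5                                 # low motivation to quit
        and participant["stage_of_change"] in ("precontemplation", "contemplation")
        and participant["consents_to_recording"]                          # 6-month recording consent
    )

example = {
    "sex": "female",
    "age_group": "adult",
    "richmond_score": 3,
    "stage_of_change": "contemplation",
    "consents_to_recording": True,
}
print(eligible(example))  # True
```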
MI sessions were conducted during 2006 in the Barcelona metropolitan area. The recordings (4 hours, 11 minutes) were transcribed following established CA recommendations [48,49]. Data were managed using Atlas.ti 5.1. The analysis process (Table 2) was two-fold: a) Qualitative Content Analysis (QCA) of the ISTAPS study intervention protocol identified the actions that needed to be taken during a visit (Table 2, point 1). The ISTAPS research team reached consensus on the meaning of these actions and generated an analytical guide for the video recordings. This analysis allowed us to develop a framework for coding the transcriptions (Table 2, point 2). b) Conversation Analysis (CA) consisted of analysing in detail the semantic content, interactional effects, and consequences observed in the selected sequences corresponding to the study objectives [24]. In our study, nonverbal behaviours were not analysed. The procedure involved coding by topics (structure of the office visit and the actions taken). To ensure the rigour and quality of the study, we based it on the following criteria [50-54]: CA was selected as the research methodology because it is focussed on what "happens" in the interaction and "how" (criterion: epistemological and methodological appropriateness). The context for each interview was described (place, interference or interruptions, climate), taking these elements into account in the analysis. Participant selection was done intentionally, with the goal of achieving maximum variation in the sample to ensure generalizability. Recordings were repeatedly played while the transcribed text was read and reread, in order to catch nuances of the interaction. The analysis was carried out independently. All members of the research team have extensive experience in smoking cessation interventions and reviewed the findings carefully, providing feedback to ensure that the results were consistent with the study objectives. The study findings were illustrated with specific, relevant sequences that support the interpretations of study results (criterion: validity). The research team reflected on the entire process of the study, including their assumptions and the possible impact on study results, and discussed the role of the professional, various smoking cessation intervention models, and the difficulties patients face when they try to quit smoking (criterion: reflection).
The Ethics and Clinical Research Committee of the Jordi Gol Institute of Research in Primary Care approved the project. Participants were informed that the focus of the research was the doctor-patient relationship in discussions about smoking, and provided signed informed consent that included permission for audiovisual recording of the interviews. Confidentiality was ensured by the coding of participant data. The transcripts were anonymous but linked to the participant code, providing a context (age, sex, etc.) for comments selected from the verbatim transcripts.
Results
The results were classified into two categories, Organization of the MI Sessions and Professional MI Session Practices and Actions, with subcategories and descriptive examples.
Organization of the MI sessions
This category explains what takes place in the interaction between the health professional and the smoker with low motivation to stop, how it happens and why. In the 11 interviews analysed, the conversation has three phases: assessment, reflection, and summary. Assessment: The professional begins the MI session according to the intervention protocol of the ISTAPS study, summarizing the data collected in the selection interview (tobacco use, motivation and stage of change). The summary establishes a rapport based on shared understanding and verifies the patient's readiness to change and to initiate the conversation. This phase of the protocol lasts about three minutes.
Reflection: This phase is central to the MI encounter and requires the most time. In all of the conversations analysed, this phase was initiated by the health professional with a question, such as "why do you think that it is not important for you to quit smoking?" or "can you tell me why you don't think you could quit smoking?" In each case, the smoker had a chance to express his or her concerns and the health professional noted the individual's current consumption, statement of positive and negative aspects, and level of intent to change. The conversation was built around these data provided by the smoker. The health professional asked questions, offered information, and affirmed the doubts expressed by the smoker (e.g., "I see, there is a lot of smoking going on in your surroundings and that makes it more difficult for you to quit"). All patients showed some ambivalence about their smoking, with no difference between those in the precontemplative and contemplative stage of change. This phase lasts about five minutes.
Summary: The professional ends the conversation by reviewing the topics covered (e.g., "you told me that you don't feel prepared to make a quit attempt") and offering help in the event that the patient wants to quit smoking. The end of the MI encounter is always initiated by the health professional and the smoker responds. This phase lasts about two minutes, for a total average interview length of 10 minutes. During the different phases of the MI session, the interaction is constructed and, despite a standard organizational structure, different interactional dilemmas arise that are resolved in different ways in each interaction and therefore have an effect on the MI encounter.
Professional MI session practices and actions
This analysis reveals how professionals construct different types of MI encounters and identifies communication that actively centres the conversation either on the patient's or on the professional's perspective. Interpersonal skills, use of language, and application of the ISTAPS study protocol represent distinct social realities reflected in two very different types of practice: patient-centred vs. problem-centred (i.e., the professional uses resources oriented toward resolving the problem, a familiar clinical interaction for the person with a health concern). When patient-centred practice is the dominant aspect of the MI, it is possible to think and talk about tobacco use. If the professional focuses instead on smoking as a health problem, the interview will not achieve its motivational objective. Of the 11 interviews analysed, the predominant practice was problem-centred in seven interviews and patient-centred in only four.
Actions that illustrate these two types of practices are analysed below. The selected actions are patterns of interaction that show the key action that either facilitates or hinders the patient's reflection on his or her readiness to change in the patient-professional interaction.
Actions that facilitate reflection on readiness to change
Use of reiterations, declarations and open-ended questions
All of the MI sessions included actions that led to a reflection about smoking, although at different levels of intensity, and the health professionals used them in all phases of the protocol. These actions allow the patient to take an active role and build the narrative about his or her use of tobacco. In this example (the third interview in the analysis), a woman states that smoking is harmful to health but she does not have the confidence to quit. The professional uses different strategies to examine the patient's problematic situation in depth and helps her reflect on her tobacco use.
Extract 1
(1) HP3: … and you say that you aren't very confident about quitting and…
(2) P3: No, more than anything it's because my partner is a smoker.
Later:
(1) HP3: Anyway, you went 15 days without smoking, that's really good!
(2) P3: Yes
(3) HP3: And these 15 days, what happened?
(4) P3: Well, it was a special situation. He left, and I was a little depressed and didn't leave the house.
At the end of the conversation:
(1) HP3: You've said that you don't feel ready to try quitting yet.
(2) P3: Not right now, mostly because of my partner.
The example starts with a sequence in which the professional reviews an important point that the patient has stated, demonstrating modal reiteration or reflection. The reiteration (1) allows the patient to expand upon the situation and further develop her thoughts (2 and 4). Later, the professional uses a reflection followed by an affirmation, to express support and approval of a smoking cessation attempt (5) and also elicits a consent response (6).
This open-ended question obligates the patient to answer and reflect on what has happened (7 and 8). The conversation ends with another modal reiteration by the professional (9), which allows the patient to consciously explain the reasons for not quitting (10).
These actions (reiteration, declaration, and open-ended questions) allow for a patient-professional interaction that is oriented towards letting the patient reflect on her position. Together with the health professional, the patient constructs a "relationship framework" focused on her own daily life and individual concerns.
Actions that do not facilitate reflection on readiness to change
Even when professionals take actions to help a patient reflect on smoking, they also use other interaction styles to resolve interactional dilemmas that do not follow MI principles.
Focusing the conversation on the risks of smoking
In all of the MI sessions, the smokers explained their reasons for smoking and their intention to continue. In four interviews (all of them in the problem-centred professional practice group), the medical professional responded with expert medical advice warning about the risks of smoking. This stance produced an interaction that ignored or blocked the Voice of the Lifeworld. If this style of interaction persists, the intervention is not motivational.
In the second example, a woman is not motivated to quit smoking because she smokes only a few cigarettes a day.
Extract 2
(1) HP6: … it can cause cancer,
(2) P6: … you can get cancer just because! I had an athletic uncle, he didn't smoke, he didn't drink, he had a set sleep schedule, and he had the bad luck of getting cancer and died in two years.
Later:
(1) HP6: But you know that you have a higher risk of having health problems!
(2) P6: Yes, and if I get in the car and get on the highway, I have a higher risk, ha-ha. And also if I stay at home.
At the end of the conversation:
(1) HP6: Just know that even if you don't smoke a lot, you are harming yourself.
In the first declaration, the professional focuses on the health risks of smoking (1), provoking resistance in the woman, who has another way of thinking about the professional's declaration of risk (2). This creates a discord in the interaction. The professional and the patient address various meanings of the act of smoking; however, the professional does not leave space for any personal reflection on how a meaning applies to the individual patient's situation, which would help to ensure that, by communicating with the patient, a shared understanding of that meaning has been achieved (3)(4)(5). The professional's interpretations are perceived by the patient as an exercise of power over her discourse, provoking resistance.
This excerpt illustrates the lack of agreement between professional and patient on the existence of a problem. The professional implicitly interprets the patient's attitude as resulting from a lack of information and attempts to provide details of the possible problems that could arise. However, the patient rejects the arguments because this meaning has not been negotiated and is not shared. When the patient has low motivation to stop smoking, the health professional has an interactional dilemma that he or she resolves by talking about the risks associated with smoking. Focusing the conversation on risk provokes an interaction in which each participant is speaking about different concepts of risk. Misunderstandings arise that make it difficult to build a framework for a shared relationship and easy to move away from MI principles. In the MI approach, information about health risks should be used when the patient asks about them or shows interest in obtaining this information [17].
Clinging to the protocol
Clinging to the protocol is one way of resolving the interactional dilemma the professional faces when the smoker states his or her intention to continue smoking. This action occurred in seven interviews, all of them dominated by problem-centred professional practice. The professional turned to the intervention protocol to resolve the dilemma and responded by asking a question from the protocol that allows him or her to take control of the conversation.
In this example, the professional diligently follows the MI protocol with a male smoker, but when confronted with a dynamic and complex situation he resolves it rather mechanically by using the ISTAPS study protocol form as a guide.
Extract 3
(1) HP4: Yes, zero, that would be no importance, and "10" is the maximum importance. Between zero and 10, where would you place your level of importance to quit smoking right now?
(2) P4: Right now? (a pause of 6 seconds). Well, right now it's zero.
When this professional encounters a patient who expresses, from the beginning, little interest in quitting (1-2), there is an opportunity to explore why he has so little motivation. Instead of making an effort to gain an understanding of the patient's perspective, the interviewer carefully follows the structured sequence of data points, inquiring about confidence and readiness (3)(4)(5)(6) to do something the patient has expressed no interest in doing. In this interaction, the professional's opportunity to explore the patient's possible ambivalence or other potentially important factors is lost. The patient's low motivation to stop smoking causes a new interactional dilemma for the health professional. Clinging to the protocol is a strategy that blocks the "Voice of the Lifeworld"; the conversation goes forward but does not necessarily follow the principles of the MI.
Prematurely emphasizing change
Premature emphasis [55] consists of stressing a behaviour change when the patient has not expressed a clear intention to change; it occurred in five of the interviews dominated by problem-centred professional practice. This action occurred after the patient expressed some reason to stop smoking or described a previous cessation attempt. The professional grasped onto the declaration and proposed a behaviour change, without taking into account other information the smoker had provided during the conversation.
The selected example shows a man who is motivated to change, but not immediately because of anxiety that is sufficiently severe to require treatment with tranquilizers. In the conversation, the patient explains that the major obstacle to smoking cessation is tobacco dependence in the morning.
Extract 4
(1) P9: I don't know, when I tried quitting that week, I had a really hard time.
(2) HP5: Maybe it's because you tried quitting without any help, don't you think? I assure you that with a little bit of help it would be better, because you have a high level of dependence (your nicotine score is quite high), so that's why the first two or four weeks would be hard for you without any help, and there are methods.
(3) P9: The worst time was the mornings. I can't.
(4) HP5: Yes, yes, mornings are the worst for those with the highest dependence because that's when they need it the most, which is why I think it would be worth it if you tried again with some treatment.
Later:
(1) P9: But I had a really hard time.
(2) HP5: You had a really hard time, but it was a week.
(6) HP5: Sure, if we had some way to alleviate this a bit, how would that be?
(7) P9: I think I could quit.
At the end of the conversation:
(1) HP5: Do you think that works for you?
(2) P9: Well, really it's the morning.
The professional emphasizes what is needed for change too soon and does not ask why the patient's most recent attempt to quit failed (1)(2)(3)(4). The professional believes that the problem is that the patient tried to quit without help. Premature emphasis and lack of exploration into the problem prevent advancement of the reflection process. Rather than explore (beyond what the health professional knows to be true) why it is so hard for the patient to quit smoking in the morning (7)(8)(9)(10)(11), the interviewer resumes the conversation without negotiating the next step. The MI session has shifted towards the professional's goal, without using strategies such as reflective listening and further development of the issue. Furthermore, the professional does not address the patient's use of tranquilizers and the effect this might have on breaking the smoking habit (12)(13)(14)(15)(16)(17)(18)(19). The CA of this sequence shows how the health professional confronts a new interactional dilemma. In order to advance the interview, a decision is made to ignore the patient's "Lifeworld" experience.
Main findings
Our study has three main findings. The first is that, despite a similar structure in all of the MI encounters analysed (assessment-reflection-summary), we identified different professional practices used to motivate a patient to quit smoking. One of these resembles the Miller-Rollnick model [11], in which interaction is centred on ambivalence toward change. Our results also concur with other reports indicating that these strategies favour a patient-focused interaction [32,37,56,57]. The second practice is a directive interaction, without negotiation and agreement on the existence of a problem, led by the professional and producing hostile or brief answers from the patient and silences from both participants.
The second main finding is that CA shows the complexity of constructing an interaction with a patient whose motivation to stop smoking is low. In order to avoid a confrontation, in which the conversation would become a professional challenge, the health professional must adapt to the patient's declarations of reasons not to quit smoking. Studies of CA acknowledge that the patient-health professional interaction is collaborative by nature, and also recognize the difficulty in constructing a personalized and negotiated process [30,36,58].
Although all participating health professionals attended four-hour training sessions, differences were seen in implementation of the MI sessions. These could be related to the appearance of new interactional dilemmas due to low patient motivation and an accompanying lack of interest in the MI session. Some actions taken to resolve these dilemmas (such as confronting non-negotiated problems, clinging to the protocol, or prematurely emphasizing willingness to change) shift the MI session towards the professional. This often triggers a defensive patient response and/or results in lost opportunities to help the patient reflect on the smoking habit itself. Francis et al. [56] affirmed that professionals tend to enhance confrontational behaviours when the patient has a high resistance to change, making the interaction difficult. Coleman et al. [35] reported that when a patient presented smoking-related health problems, the doctor took a more directive approach. The conversation was focused on the health problem without considering the patient's point of view, producing confrontational interactions that made it difficult to advance the conversation.
As demonstrated by the different results reported in CA studies, agreement on the existence of a problem is necessary at the beginning of the interaction to avoid hostile responses. Equally important is the way in which health professionals follow up on concerns expressed by the patient; this follow-up facilitates supportive, patient-centred relationships [30,35,59,60]. According to Parry [61], these CA findings have been achieved in the academic sphere and must now be incorporated into training in patient communication offered to all health professionals.
The third main finding is that CA reveals various types of interaction that show how the "Voice of the Lifeworld" and "Voice of Medicine" are used during the MI conversation [32,62,63]. The interactional dilemmas with which health professionals are confronted are often resolved using biomedical logic, or the "fix the disease" model. The professional and the patient speak exclusively in "the Voice of Medicine"; "the Voice of the Lifeworld" is ignored or blocked out by the professional. Although health professionals take an interest in having a motivational conversation, the "fix the disease" model persists. This might be explained by adherence to the institutional roles of patient and health professional during the office visit, in which interactions are constructed around a health problem to be resolved (diagnosis, treatment, follow-up). It would be interesting to study further the impact on a normal office visit if both the health professional and the patient spoke in "the Voice of the Lifeworld".
Strengths and limitations of the study
Several strengths of this study should be highlighted. First, the methodology was an innovative approach, contributing to the CA literature an analysis of MI sessions with low-motivation individuals. These results complement and help to explain, in part, the results of the ISTAPS clinical trial, which found no significant differences between the intervention and control groups in patients who were in the precontemplation stage of change [40]. Secondly, the study demonstrates that CA is a useful approach to analysing the fidelity to MI principles [11,64] observed in the conversations studied. This is an important strength because of the limited evidence available on this topic [14].
Four potential study limitations should be considered. The present analysis included 11 interviews that illustrate different MI practices. A larger sample could help to identify a wider range of practices and develop a better understanding of how the interaction between the health professional and patient is organized during a MI session about smoking in the PHC setting. Nonetheless, non-motivating patterns of interaction predominated at different points in the conversation during seven of the MI sessions involving three health professionals.
Another possible study limitation is that the MI conversation was conducted in the clinical setting with smokers who had a low motivation to quit, during a visit focused on a health concern and not specifically or solely on smoking. This may have affected the dynamics of implementing a MI session about smoking cessation. On the other hand, the study data were collected in the typical context of the MI.
A third limitation is that classic CA insists on "taking into account" all of the details of the interaction. Although our transcripts substantially followed Atkinson & Heritage, they are somewhat less exhaustive and did not permit context-rich analysis, including intonation, body language, and other nonverbal elements.
Finally, the voluntary participation of the health professionals could have generated bias because these participants were actively interested in smoking interventions, in using the MI session, and in improving this clinical technique. Other professionals who are less interested in this technique would likely follow other MI practices.
Recommendations for clinical practice
The study findings suggest the following processes that may be advisable to implement in clinical practice:

Before beginning a MI conversation: Be aware that the least favourable situation for a MI about smoking cessation involves smokers with low motivation to change that behaviour; this increases the complexity of the intervention.
At the beginning of the interview:
Summarize the information you have about the individual's tobacco use. Adjust that information as the patient indicates, and begin the conversation with an open-ended question, such as "Do you feel OK about how much you smoke?" It is not recommended to ask a question to which the obvious correct answer is "stop smoking".
As the conversation develops: Provide continuing feedback to the patient. Ask open-ended questions and incorporate the "Lifeworld" voice into the conversation. Align the information provided as a health professional with relevant personal concerns expressed by the patient.
Conclusions
This study underlines the importance of the methods and procedures used by professionals in their patient interactions during a MI encounter. Our analysis suggests that when a health professional encounters individuals with low motivation for change, this increases the complexity of the intervention and several interactional dilemmas may occur that make it difficult to follow basic MI principles. Different forms of expression (reiterations, declarations, open-ended questions) during the MI session could be enough to build a patient-centred relationship.
Clinging to the protocol (whether a suggested interview protocol or the process involved in treating the health problem), focusing on risk, or not following up on the patient's expressed concerns makes it more difficult for health professionals and patients to construct the essential "shared understanding" that allows them to take advantage of the opportunity to reflect on tobacco use during the office visit. Although health professionals take an active interest in having a collaborative relationship, they resolve the dilemmas of interaction from a biomedical perspective. The study shows that CA is a valid approach to analysing fidelity to MI principles. Therefore, it is important to incorporate the findings of CA studies into professional preparation and practice.
Sisal Fibre Based Polymeric Composites
Natural fibres have attracted considerable attention from industry and researchers for use in polymer composites because of their 'greener' character and contribution to sustainable practice. Various industries have moved towards sustainable technology to improve the balance between environmental, social and economic concerns. Research and development have demonstrated that natural fibres can be successfully applied as reinforcements in the composites industry, for example in transportation, interior panels, building and aircraft.
Introduction
Sisal can be cultivated easily in a small area and within a short time. The plant grows naturally in the hedges of fields and along railway tracks. Research has shown that approximately 4.5 million tons of sisal fibre are extracted every year throughout the world. Sisal fibre is extracted from the leaves of the sisal plant (Agave sisalana), which is now cultivated in tropical parts of Africa and some regions of the Far East. Typically, a sisal plant bears nearly 250 leaves, and each leaf contains approximately 1000 to 1200 fibre bundles. On average, a sisal plant is composed of 4% fibre, 0.75% cuticle, 8% dry matter and 87.25% water. In general, sisal fibre is extracted by retting and scraping, or by other mechanical means [1][2][3][4][5].
Growing demand for materials and increasing ecological concerns have raised industry's interest in green composite resources. This has become the major driving force of current research on the development of eco-friendly and sustainable natural fibre-reinforced polymer composites as alternatives to synthetic ones. It is well established that synthetic fibre-reinforced polymers already have exceptional properties and applications: glass fibre-based polymeric composites, for instance, are known for their outstanding properties and have been used in railway track sleepers [1], and phenol-based glass fibre polymeric composites have excellent fire-resistant properties that meet the fire requirements of construction materials [2]. From an eco-friendly perspective, however, reinforcement using natural fibres may be a better option, as they can be obtained from plants, animals and agricultural wastes. Over the past decade, agriculture-based waste fibres have been the preferred choice of researchers because of their sustainable supply. Examples of agro-waste fibres are oil palm, bagasse, corn, stalks, coir, bamboo, pineapple, banana and rice husk. These fibres are normally extracted from parts of the plant such as the stem, leaf, seed or even its fruit [3].
Furthermore, natural fibre composites are less expensive than synthetic composites, bio-degradable, abundantly available, renewable and light in weight. Natural fibres originate from three sources: plants, animals and minerals. There are more than 2000 kinds of fibre plants in the world, and these are mainly composed of cellulose; examples are kenaf, sugar palm, bamboo, corn, cotton, flax, hay (from grass cuttings), hemp, henequen, jute, pineapple leaf, banana, ramie and sisal. The use of natural fibres in composites can also address other issues, such as moderating energy consumption during production, leaving practically no carbon footprint and reducing disposal problems.
Recent trends in the development of newer materials have led to the replacement of glass- and carbon-reinforced composites with natural fibre-reinforced composites, for instance in automotive interiors, pedestrian bridges, shipping pallets, composite roof tiles, furniture and toys. However, the primary disadvantage of natural fibres as reinforcement is that they are incompatible with thermoplastics because of their hydrophilic nature, which results in a poor interfacial bond between the fibres and the matrix and, consequently, poor mechanical properties of the composites. Hence, chemical modification of natural fibres is needed to make them less hydrophilic, and efforts have been made here to summarize the various chemical treatments applied to natural fibres.
Sisal fibre is extracted from the leaf and can be categorized into three types: mechanical fibre, ribbon fibre and xylem fibre. The first type, mechanical fibre, is extracted from the periphery of the leaf; it has a roughly horseshoe-shaped cross-section and can be separated during the extraction process. The second type, ribbon fibre, is the longest fibre and can split longitudinally during processing.
The third type, xylem fibre, is generally irregular in shape and splits up easily during processing. These fibres occur in between the vascular bundles, in contrast to the ribbon fibres [16][17][18][19].
In addition, the chemical composition of sisal fibre differs from place to place, depending on the source, the measurement technique, plant age, and other factors. Like other natural fibres, sisal fibre contains cellulose, lignin, hemicellulose and moisture: typically cellulose 65-68%, hemicellulose 10-22%, lignin 9.9-14% and moisture content 10-22%.
Sisal fibre-toughened polymer composites
Among the various natural fibre-based composites, sisal fibre-reinforced composites produce superior impact strength together with reasonable tensile and flexural properties, and can be used in applications where high impact strength is required. At present, sisal fibre is combined with various types of polymers, including thermoset, thermoplastic and bio-degradable polymeric matrices, and their properties have been reported in the literature.
Ramesh et al. [6] investigated the mechanical properties of sisal, jute and glass fibre-reinforced polyester composites and observed that the addition of glass fibre to the jute fibre composite resulted in the greatest tensile strength; the jute-sisal hybrid composite showed the greatest flexural strength, and the maximum impact strength was obtained from the sisal fibre composite. The tensile, flexural and compressive strengths of epoxy-based sisal-glass composites have been reported [8,21], and fully biodegradable sisal fibre composites have been fabricated with and without silica by incorporating the fibres into the polymer matrix. The results showed that the tensile strength and tensile modulus of the composites with silica are 1.5 and 1.08 times greater, respectively, than those of the composites without silica, and that the impact strength of the composite with silica is 1.36 and 1.8 times that of the composite without silica and of pure polyester, respectively. The effect of sisal fibre on the properties of polymers has also been reviewed [9].
Sisal and other plant fibres show great promise in vehicle applications because of their qualities, for example high stiffness at low weight per unit area, ease of recycling, being 30-40% lighter than glass fibre, reduced fuel consumption, low cost, and negligible wear of panels or parts.
Sisal fibre-based polymer composites and their applications
Tooling poses no health hazard, and the materials offer good thermal and acoustic insulating properties. Industrial demand for natural fibres has increased markedly over recent years. In 2005, natural fibres (excluding wood and cotton) were used in automotive composites for the first time [6]. Natural fibre composite materials are being used to make a large number of components in the automotive sector [21]. Sisal and jute fibres have been used in the German automotive industry for many years [8]; Mercedes first used jute-based door panels in its E-Class vehicles in 1996.
Recently, there has been increasing interest in replacing the glass fibres in reinforced plastic composites with natural plant fibres such as flax, hemp and sisal [9]. Like glass, natural fibres combine readily with a thermoplastic or thermosetting matrix to produce finished goods [10]. The automotive industry requires composite materials to meet performance criteria determined by a wide range of tests. Typical market specifications include ultimate breaking force and elongation, flexural properties, impact strength, fogging characteristics, flammability, acoustic absorption, suitability for processing (temperature and dwell time), odour, water absorption, dimensional stability and crash behaviour [11][12][13][14][22]. Plant fibres are currently used only in the interiors of passenger cars and truck cabins. Besides their use in trim parts, for example door panels or cabin linings, plant fibres are used extensively for thermo-acoustic insulation. Such fibre-reinforced plastic insulating materials, based mainly on cotton fibres recycled from textiles, have a relatively high fibre content of over 80% by weight. Trim parts in Brazilian trucks, made of a mixture of jute, coffee-bag wastes and polypropylene sacks, show that recycling can sometimes lead to advanced applications. Another well-established field of application is the use of coconut fibres bonded with natural latex for seat cushions. In this application, the ability of plant fibres to absorb large amounts of moisture leads to an increased comfort that cannot be reached with synthetic materials. Apart from these kinds of improvements, essentially new applications have not been realised recently.
Natural fibre composites with thermoplastic and thermoset matrices have been adopted by European vehicle manufacturers and suppliers for door panels, seat backs, headliners, package trays, dashboards and many other interior parts. Natural fibres such as kenaf, hemp, flax, jute and sisal offer benefits such as reductions in weight, cost and CO2 emissions, less reliance on foreign oil sources, and recyclability. Glass fibre-reinforced plastics have been shown to meet the structural and durability demands of automotive interior and exterior parts; however, they display shortcomings, for example a relatively high fibre density (40% higher than natural fibres), difficulty in machining, poor recyclability and potential health hazards. The environmental advantage of natural fibre mats over glass fibre mats offers another prospective use of natural fibre reinforcement. Flax, sisal and hemp are processed into door cladding, seatback linings, floor panels and various other automotive parts [11]. The use of plant fibre (sisal, flax, hemp, etc.)-based vehicle parts such as trim parts, various panels, shelves and brake shoes is attracting automotive industries worldwide on account of a weight reduction of about 20%, energy savings of 90% in production and a 15% reduction in component cost. Conservative estimates indicate that around 7000 TPA of plant fibre-based materials could find their way into passenger cars and multi-utility vehicles [9]. Sisal is used in door cladding, seatback linings and package shelves (the shelf behind the rear seats of vehicles).
Prospects for use of sisal fibre in automotive manufacturing
Sisal fibres can be used in door panels, cabin linings, brake liners, thermo-acoustic insulation, trim parts, and seat cushions and backs. The potential partners for the use of sisal fibre in the auto-component industry include Mercedes Benz, Freightliner, Daimler Chrysler, Chevrolet, General Motors, Mahindra and Mahindra, Tata Motors and Hero Honda. The cost-benefit analysis, techno-commercial feasibility and challenges of exploiting sisal fibre for various engineering applications are as follows: sisal is a xerophyte and grows in wastelands, which conserves soil and earns carbon credits; assured sustainable fibre production is 2.5 ton/ha for 6-8 years; and surface treatments enable sisal fibres to be used as reinforcement in a polymer matrix, giving them advantages over mineral and other conventional natural fibres [10][11][12].
Electrical application of sisal fibre
To exploit sisal fibre in electrical applications, several researchers have studied the electrical properties of sisal fibre at various temperatures and frequencies. Increasing plant age shifts the dissipation factor (tan δ) peak to higher temperatures; this phenomenon has been explained on the basis of structural changes. Water absorbed by sisal fibres contains OH anions that act as dipoles; besides OH anions, there are also some impurities and ions on the fibres. At high frequencies, the contributions of the polarization of absorbed water molecules and of space charge diminish, and electronic and atomic polarization become operative. An increase in temperature affects the mobility of ions and hence changes the ionic contributions [13,14]. The electrical properties of sisal fibre-reinforced LDPE have been studied with respect to the effects of frequency, fibre content and fibre length. The dielectric constant increases consistently with increasing fibre concentration at all frequencies in the range 1-10^7 Hz. It has also been noted that the dielectric constant decreases with increasing fibre length and frequency; the greatest dielectric constant values are obtained at low frequencies.
Sisal/LDPE composites with 1 mm fibre length and 30% fibre concentration show the highest values of dielectric constant at all frequencies. The values of volume resistivity decline with increasing frequency and fibre concentration; that is, the electrical conductivity of the composites is greater than that of neat LDPE. Compared with glass/LDPE composites, a similar trend in electrical properties is observed; however, the dependence of the dielectric constant of the latter composites on frequency and fibre concentration is weaker because of their lower interfacial polarization [2][3][4][22].
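For readers who want to reproduce such measurements, the dielectric constant is typically obtained from the parallel-plate capacitance of a disc sample. The following minimal Python sketch is an illustration only (it is not from the works cited above, and the sample geometry and measured capacitance are invented for the example):

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relative_permittivity(capacitance_f, area_m2, thickness_m):
    """Dielectric constant of a disc sample from its parallel-plate capacitance."""
    return capacitance_f * thickness_m / (EPS0 * area_m2)

# Example: a 1 mm thick, 20 mm diameter disc measuring 25 pF at a given frequency
area = math.pi * (0.010 ** 2)  # plate radius 10 mm
print(round(relative_permittivity(25e-12, area, 1e-3), 2))  # ~8.99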
Application of sisal fibre in railways
Composite materials offer significant advantages over metals in many structural applications in railways: they are lightweight, cost-effective, corrosion-resistant, energy-saving in production, and fire-retardant. Composite materials can be used in railways for gear cases, interior doors, luggage racks, and other components.
Prospects for the use of sisal fibre in construction industries
The addition of fibre reinforcement to building materials can improve many of the engineering properties of the base material, such as fracture toughness, flexural strength and resistance to fatigue, impact and thermal shock. In recent years, a great deal of attention has been paid worldwide to the possible application of natural fibre-based materials alongside other construction materials. Research has been carried out in various countries on properties such as the mechanical and physical behaviour and durability of cement-based matrices reinforced with natural fibres such as sisal, coconut, jute, bamboo and wood. Natural fibres are a good choice for the reinforcement of cement-based materials owing to their ready availability, lower prices compared with synthetic fibres and lower energy consumption. In this chapter, an effort is made to describe the properties of natural fibre-based composites [2][3][4][22].
Conclusion
The inclusion of sisal fibre reinforcement in existing technologies can improve many of the engineering properties of the base material, such as fracture toughness, flexural strength and resistance to fatigue, impact, thermal shock and spalling.
A STUDY ON THE REARING OF LAMPITO MAURITII KINBERG (ANNELIDA: OLIGOCHAETA) IN VEGETABLE KITCHEN WASTES WITH SOME NOTES ON COCOON, HATCHING PATTERN, FECUNDITY AND GROWTH
An increase in human population and rapid urbanization have led to an increased accumulation of organic wastes. It has long been established that earthworms play an important role in recycling biodegradable organic wastes and in solving problems of deteriorating soil conditions. Vermitechnology is the method of converting waste into useful products through the action of earthworms. Lampito mauritii Kinberg is considered a potential species for vermitechnology under Indian conditions (Dash and Senapati, 1986; Senapati and Julka, 1993; Bhattacharjee and Chaudhuri, 2002).
The method of reproduction in Lampito mauritii Kinberg is amphimictic, sexual and biparental (Gates, 1972). It has been observed that L. mauritii, the selected species of earthworm, successfully survives and reproduces at the municipal waste disposal site of Kolkata and is the predominant earthworm species in that area. This study was intended to determine whether this species can survive, reproduce and grow in a vegetable kitchen waste medium (common in West Bengal), which is a prerequisite for successful vermicomposting.
MATERIALS AND METHODS
A. Method of Culture: 1. Earthen pot (upper dia. 25.4 cm, lower dia. 14 cm, height 19 cm); 2. Broken brick; 3. Sand; 4. Soil; 5. Cowdung; 6. Vegetable wastes; 7. Jute cloth. The earthworms were collected from Dhapa, the municipal waste disposal site of East Kolkata, and kept for rearing in the laboratory of the Zoological Survey of India, Kolkata. First, the earthen pot was filled with broken bricks (4 cm), followed by sand (3 cm) and soil (5-6 cm). The soil used in this experiment was brought from the same site, and precautions were taken so that no foreign cocoon could enter the culture pot. Water was added to moisten the pot. Adult worms were introduced (20 per pot) and culture medium was added on top. The pot was then wrapped with jute cloth. Regular watering was carried out to maintain a moisture level of 30%-35%, with the temperature ranging from 25°C to 28°C. Maintenance of moisture in the culture medium is a key factor for obtaining good results; pH was maintained within 6.5-7.5. Old culture medium was replaced by the same amount of fresh medium at fortnightly intervals to maintain an optimum supply of food.
B. Media:
Vegetable kitchen wastes (potato, banana, green leafy vegetables, cucurbits, etc.) were collected from the authors' own and neighbouring families and stored in a plastic bucket with holes for aeration. Cowdung was added to the waste in a ratio of 20:1 (wastes : cowdung) for primary decomposition. Water was sprinkled and the waste mixed regularly to facilitate decomposition, which continued for 15 days.
C. Sorting out of Cocoons:
Another pot containing earthworms and culture medium of the above composition was maintained. Cocoons were collected from this pot to study the incubation time, hatching pattern and juveniles. Cocoons were sorted out with great care from the culture pot by wet sieving (0.5 mm mesh size) and hand sorting. The size and weight of the cocoons were measured; before weighing, the cocoons were washed gently in sterile water to remove debris and organic particles adhering to the sticky hull. Freshly laid cocoons were placed on wet blotting paper in a closed petri dish (15 cm diameter) under ambient conditions (30°C), and hatching was observed until juvenile worms emerged from the cocoons. Sterile water was added to the blotting paper periodically to keep it moist.
OBSERVATIONS

A. Rearing:
It was observed that, at regular intervals of 30 days up to 90 days, the numbers of cocoons and juveniles increased in pots 1, 2 and 3 (Table 1). Table 1 also reveals that the rate of hatching of cocoons was significantly high, from 75% to 78.78%, and that the mean hatchling production was 0.836 adult⁻¹ month⁻¹ (±SD: 0.063) (Table 2). There was a distinct increase in biomass (avg. 1.6 times) (Fig. 1) and an increase in population up to the juvenile stage (Table 1). Altogether 25 cocoons were kept under close observation, of which 18 hatched; the estimated hatching success was 72%. Only one hatchling emerged from each cocoon.
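As an aside, the bookkeeping behind these figures is simple; the short Python sketch below is our illustration, not the authors' procedure, and the 30-day pot count in it is a hypothetical number used only to show the arithmetic:

# Hatching success from the blotting-paper observation (25 cocoons, 18 hatched)
cocoons_observed = 25
cocoons_hatched = 18
print(f"hatching success: {100 * cocoons_hatched / cocoons_observed:.0f}%")  # 72%

# Hatchling production per adult per month from a pot census
adults_per_pot = 20
juveniles_in_30_days = 17  # hypothetical 30-day count for one pot
print(f"production: {juveniles_in_30_days / adults_per_pot:.3f} adult^-1 month^-1")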
DISCUSSION
The present study dealt with 60 specimens of Lampito mauritii Kinberg to study their biology under ambient laboratory conditions. From the available literature, it is evident that the shape, size, weight, incubation time, hatching success and production of cocoons differ greatly among earthworm species. Satchell (1967) reported that Aporrectodea caliginosa, A. longa and Octolasion cyaneum produced between 3 and 13 cocoons year⁻¹, Allolobophora chlorotica produced 25-27, and Lumbricus rubellus, L. castaneus and Dendrodrilus rubidus 42-106 cocoons year⁻¹. Edwards (1988) reported that Dendrobaena veneta could produce 84 cocoons year⁻¹; Eudrilus eugeniae, 188; Eisenia foetida, 198; and Perionyx excavatus, 1014 cocoons year⁻¹. Under field conditions, Dash and Senapati (1980) observed that the number of cocoons produced by Lampito mauritii Kinberg was 14.25 adult⁻¹ year⁻¹. In the present laboratory culture, this species showed an average cocoon production of 13.24 adult⁻¹ yr⁻¹ (Table 1); the slight decrease might be due to the change in microclimatic conditions in the laboratory. According to Bhattacharjee and Chaudhuri (2002), cocoon production for this species is 43 adult⁻¹ year⁻¹, which is much higher than in the present investigation; the low rate in this experiment may be due to the higher density of parent worms. Senapati and Sahu (1993) postulated that, considering both temperate and tropical species, the size of the worm bears a negative relationship with cocoon production, but worm diameter to cocoon diameter, worm biovolume to cocoon biovolume, and worm dry weight to cocoon dry weight all bear significant positive correlations. Lee (1985) correlated the higher risk of mortality in early life with a higher rate of cocoon production. According to Satchell (1967), there is a clear relationship between the number of cocoons produced and their location in the soil profile: species living near the surface and facing adverse conditions produce many more cocoons. A relationship between reproductive strategies and ecological categories in tropical earthworms was proposed by Lavelle et al. (1998) and Barois et al. (1999), who distinguished four groups of earthworms. According to their classification, Lampito mauritii falls within group 3: small, mainly polyhumic endogeic species with intermediate fecundity (10-68 cocoons adult⁻¹ yr⁻¹) and usually one hatchling per cocoon (Bhattacharjee and Chaudhuri, 2002). In the present observation, only one hatchling emerged from each cocoon (n = 25), whereas Bhattacharjee and Chaudhuri (2002) observed that 53% of cocoons produced more than one hatchling (2, rarely 3), and Dash and Senapati (1980) observed that cocoons on hatching usually give rise to one and very rarely two juveniles in this species.
The development time of cocoons varies considerably among earthworm species. Hallatt et al. (1990) observed a mean incubation period of 18.7±0.26 days in Perionyx excavatus, and Kaushal et al. (1999) observed a mean incubation period of 31.9±1.2 days in Metaphire houletti in different culture media. Edwards (1988) reported that cocoons of E. foetida took 32-73 days to hatch; E. eugeniae, 13-27 days; P. excavatus, 16-21 days; and D. veneta, 40-126 days. For Lampito mauritii, Bhattacharjee and Chaudhuri (2002) observed a 15-day incubation period in pot culture, Ismail (1997) observed an incubation period of 18 days in artificial culture, Sahu and Senapati (1991) observed 28 days under field conditions, and Dash and Senapati (1980) observed a 28-30 day incubation period during October-December. In this experiment, a mean incubation period of 17.96 (SD: ±1.754) days was observed; thus, the incubation period is shorter in laboratory culture than under field conditions. Soil moisture and temperature both have a considerable effect on cocoon incubation and the emergence pattern of juveniles: in completely hydric conditions and in very dry conditions (<5% soil moisture), cocoons never hatch (Dash and Senapati, 1980). Bhattacharjee and Chaudhuri (2002) observed 60% hatching in this species; by contrast, in the present investigation, 77.12% (SD: ±1.577) hatching was observed within the culture pot and 72% on moist blotting paper. The significantly higher hatching success may be due to the species inhabiting the topsoil environment. Kaushal et al. (1999) observed 100% hatching success in Metaphire houletti when cocoons were kept on moist filter paper, whereas Hallatt et al. (1990) observed a mean hatching success of only 63.4% for all cocoons produced from parental worms of different ages in Perionyx excavatus.
High fecundity, a short incubation period and high hatching success in the anecic (Dash and Senapati, 1980; Ismail, 1997) or topsoil endogeic (Bhattacharjee and Chaudhuri, 2002) worm Lampito mauritii are probably adaptive strategies of 'r'-selected worms (Sahu and Senapati, 1991), enabling them to survive drastic environmental changes in the topsoil.
According to Evans and Guild (1948), Satchell (1967), Lee (1985) and Edwards and Bohlen (1996), cocoon production and incubation time vary with species, population density, age structure and different environmental parameters, viz. temperature, moisture and the energy content of the available food.
The growth in biomass (Fig. 1) clearly indicates that vegetable kitchen waste serves as a good food source for the species studied. It can be concluded from the present study that Lampito mauritii may be used as a good species for vermicomposting vegetable kitchen waste.
Fig. 1: Showing the conversion rate of vegetable kitchen waste into biomass (gm) by L. mauritii with time.
ACKNOWLEDGMENTS
The authors are grateful to Dr. J.R.B. Alfred, Director, Zoological Survey of India, for providing laboratory facilities. Thanks are also due to Dr. J.M. Julka, Emeritus Scientist, Zoological Survey of India, Solan, for constructive criticism and keen interest in this study, and, last but not least, to Prof. B.K. Senapati, Sambalpur University, Orissa, for providing valuable literature to the authors.
Table 1: Showing the number of cocoons and juveniles of Lampito mauritii in vegetable kitchen waste in three culture pots in the laboratory. (P = pot, C = cocoon, J = juvenile)
Table 2: Showing different features of cocoon, hatchling and culture of L. mauritii in vegetable kitchen wastes (means ± SD).
Benefit of Extracorporeal Membrane Oxygenation before Revascularization in Patients with Acute Myocardial Infarction Complicated by Profound Cardiogenic Shock after Resuscitated Cardiac Arrest
Author's summary

Few studies have focused on acute myocardial infarction (AMI) with cardiogenic shock after resuscitated out-of-hospital cardiac arrest (OHCA), and only a small number of studies have reported on the timing of extracorporeal membrane oxygenation (ECMO) in patients with AMI with cardiogenic shock. The current study, which used the large nationwide OHCA registry, shows that ECMO treatment before revascularization can decrease 30-day mortality, compared to ECMO after revascularization, in patients with AMI complicated by profound cardiogenic shock after resuscitated cardiac arrest. The current study emphasizes the importance of early ECMO therapy before revascularization in circumstances where it is difficult to determine the optimal timing of revascularization.
INTRODUCTION
Mortality rates for patients with acute myocardial infarction (AMI) decreased dramatically from the 1980s to the 2000s, due to the widespread use of reperfusion strategies and adjuvant pharmacological therapies. 1)2) However, cardiogenic shock develops in approximately 7% of patients with AMI and is a leading cause of death. 3) The mortality rate for AMI complicated by cardiogenic shock still remains high, at over 40%. [4][5][6] Therefore, current guidelines for AMI recommend early revascularization in both ST-segment elevation myocardial infarction (STEMI) and non-STEMI (NSTEMI) complicated by cardiogenic shock. 7)8) Unfortunately, approximately 4-7% of patients with AMI experience out-of-hospital cardiac arrest (OHCA), and many patients with OHCA and AMI also present with cardiogenic shock. [9][10][11] Although immediate revascularization should be performed in patients with AMI and OHCA after successful resuscitation, many patients die without return of spontaneous circulation (ROSC). Furthermore, it is very difficult to perform coronary angiography on resuscitated patients with profound cardiogenic shock because of the risk of cardiac death during coronary intervention. Venoarterial extracorporeal membrane oxygenation (ECMO) provides both cardiac and respiratory life support in patients with cardiogenic shock. [12][13][14] ECMO has a relatively rapid cannulation time; therefore, it can be used in the AMI setting, which requires immediate, life-saving reperfusion. Several studies have shown the benefit of ECMO support in patients with AMI complicated by cardiogenic shock. However, few studies have focused on AMI with cardiogenic shock after resuscitated OHCA, [15][16][17][18] and only a small number of studies have reported on the timing of ECMO support in patients with AMI and cardiogenic shock, who require rapid revascularization.
In the present study, we investigated whether ECMO support before revascularisation is beneficial in patients with AMI complicated by profound cardiogenic shock after resuscitated OHCA using the Korean nationwide registry.
Ethical statement
The study protocols were approved by the Ethics Committee of Chonnam National University Hospital Institutional Review Board (CNUH IRB No. CNUH-2018-261) and we have complied with the latest version of the Declaration of Helsinki (2013). A waiver for informed consent was obtained from the IRB.
Study setting and data sources
The Korean emergency medical services (EMS) system is a single-tiered, government-backed system that provides basic-to-intermediate level ambulance services. Emergency medical technicians are able to provide cardiopulmonary resuscitation (CPR) with an automated external defibrillator, evaluate cardiac rhythms, manage advanced airways, and administer intravenous or intraosseous fluids. The current EMS CPR protocol calls for emergency medical technicians to perform on-scene CPR using an automated external defibrillator every 2 minutes for at least 5 minutes. Advanced cardiac life support is not available at the scene, and emergency medical technicians are not permitted to declare death at the scene unless there are signs of irreversible death. EMS providers cannot stop CPR during transport to an emergency department; consequently, all EMS-assessed patients are transported to a hospital. 19)20) Data were collected from EMS run sheets and hospital medical records using Utstein-style reporting templates, and these data were extracted by medical record reviewers of the Korea Centers for Disease Control and Prevention. 21) EMS run sheets are completed by EMS personnel and include patient information, ambulance operation information, clinical information, and treatment and transport information. The Korea Centers for Disease Control and Prevention visited all hospitals to evaluate medical records and document hospital outcomes electronically. A quality management committee composed of emergency physicians, epidemiologists, statistical experts, representatives from the fire department, and medical record review experts ensured the quality of the medical record review process. The quality management committee educated all medical record reviewers prior to their joining the project, provided a standard manual for data abstraction, gave monthly feedback to the reviewers, and offered consultation on equivocal cases as needed. 22)
Study population
This study used a cross-sectional design based on a nationwide, prospective registry involving all patients who experienced OHCA in South Korea from 2013 to 2016, were transported to a hospital by EMS, and had resuscitation attempted. A study flow chart is presented in Figure 1. Briefly, a total of 116,374 patients experiencing OHCA attended by EMS across South Korea between January 2013 and December 2016 were enrolled. Among these, 37,708 patients with obvious non-cardiac causes, 39,250 with ROSC before the emergency room (ER) visit, and 21,942 who died without ROSC were excluded. Among the remaining 17,474 patients with ROSC after the ER visit, 895 with AMI complicated by profound cardiogenic shock after ROSC, treated with percutaneous coronary intervention (PCI) or thrombolysis, were selected for this study. After the exclusion of 22 patients who received thrombolysis, 27 who did not receive successful PCI and 662 who were not treated with ECMO, a total of 184 patients who received ECMO therapy before (n=117) or after PCI (n=67) were analyzed.
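A hedged sketch of this exclusion cascade, written in Python with pandas, is shown below; the file and column names are assumptions for illustration and do not reflect the registry's actual schema:

import pandas as pd

ohca = pd.read_csv("ohca_registry_2013_2016.csv")  # hypothetical export of the registry

cardiac = ohca[ohca["cause_of_arrest"] == "cardiac"]       # exclude obvious non-cardiac causes
rosc_after_er = cardiac[cardiac["rosc_location"] == "ER"]  # ROSC achieved after the ER visit
ami_shock = rosc_after_er[rosc_after_er["ami_profound_shock"] == 1]
pci_success = ami_shock[(ami_shock["reperfusion"] == "PCI") & (ami_shock["pci_successful"] == 1)]
ecmo = pci_success[pci_success["ecmo_treated"] == 1]

# Split by ECMO timing relative to PCI; expected group sizes: 117 (before) and 67 (after)
print(ecmo["ecmo_before_pci"].value_counts())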
Study definitions and endpoints
The diagnosis of AMI was based on the criteria of the third universal definition of myocardial infarction. 23) Cardiogenic shock was defined as hypotension (<90/60 mmHg) for >30 minutes or a need for vasopressors or inotropes to maintain systolic blood pressure >90 mmHg, pulmonary congestion or elevated left-ventricular filling pressure, and evidence of end-organ hypoperfusion (cool extremities, oliguria, lactic acidosis). 12) The decision to apply ECMO was made at the physicians' discretion. The ECMO device was implanted by percutaneous or surgical cannulation, using a 14-17 Fr cannula for the femoral artery and a 21-24 Fr cannula for the femoral vein, in the ER, catheterisation room or coronary care unit. Successful PCI was defined as the achievement of thrombolysis in myocardial infarction flow grade 3 with a minimum stenosis diameter <20%, with or without coronary stenting in the culprit artery. Patients received 300 mg aspirin and 300 or 600 mg clopidogrel, 60 mg prasugrel, or 180 mg ticagrelor as a loading dose prior to PCI. After PCI, 100-300 mg aspirin and 75 mg clopidogrel daily, 5 or 10 mg prasugrel once daily, or 90 mg ticagrelor twice daily was prescribed as the maintenance dose. Therapeutic hypothermia was defined as hypothermia treatment using external, internal or mixed cooling, with a target temperature between 32 and 34°C and a target duration of 12-24 hours. 22) The anticoagulation strategy and the targets for therapeutic hypothermia in patients who underwent ECMO depended on each institution's protocol. Successful hypothermia was defined as recovery to an alert mental status after completion of target temperature management. The primary endpoint was 30-day mortality. We also analysed the incidence rates of in-hospital mortality and good neurologic function at discharge, with the latter defined as a score of 1 (no neurologic disability) or 2 (moderate disability; able to perform daily activities independently) on the Cerebral Performance Category scale, which is a 5-point scale used to evaluate neurologic functioning.
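As a compact restatement of this working definition, the hypothetical Python function below encodes the shock criteria as written above; it illustrates the logic only and is not a validated clinical tool:

def is_cardiogenic_shock(sbp, dbp, hypotension_minutes,
                         needs_pressors_for_sbp90,
                         congestion_or_high_lv_filling,
                         end_organ_hypoperfusion):
    """Profound cardiogenic shock per the study definition; inputs are clinician assessments."""
    hypotensive = (sbp < 90 or dbp < 60) and hypotension_minutes > 30
    return ((hypotensive or needs_pressors_for_sbp90)
            and congestion_or_high_lv_filling
            and end_organ_hypoperfusion)

print(is_cardiogenic_shock(82, 50, 45, False, True, True))  # True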
Statistical analysis
Continuous variables are presented as means±standard deviation and were compared using the unpaired t-test or the Mann-Whitney rank-sum test. Discrete variables are expressed as counts with percentages and were analysed by Pearson's χ 2 test or Fisher's exact test. Kaplan-Meier curves were constructed to compare primary endpoints between the ECMO before and after PCI groups; differences were assessed using the log-rank test. Cox's proportional hazards regression model (with adjustment for covariates) was used to assess clinical outcomes. Variables that were significant in the univariate analysis (p<0.1) were included in the multivariate analysis.
All analyses were 2-tailed, and a p value <0.05 was considered to reflect statistical significance. All statistical analyses were performed using SPSS for Windows (ver. 21.0; SPSS Inc., Chicago, IL, USA).
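The authors used SPSS; purely as an illustration, the same workflow could be scripted in Python with the lifelines package, as sketched below (the input file and column names are assumptions):

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ecmo_cohort.csv")  # hypothetical prepared dataset: one row per patient
pre = df[df["pre_pci_ecmo"] == 1]
post = df[df["pre_pci_ecmo"] == 0]

# Kaplan-Meier curve for 30-day mortality, compared between groups with the log-rank test
km = KaplanMeierFitter()
km.fit(pre["time_days"], pre["death"], label="ECMO before PCI")
print(logrank_test(pre["time_days"], post["time_days"], pre["death"], post["death"]).p_value)

# Multivariate Cox model with covariates that passed univariate screening (p < 0.1)
cph = CoxPHFitter()
cph.fit(df[["time_days", "death", "pre_pci_ecmo", "shockable_rhythm", "hypothermia_success"]],
        duration_col="time_days", event_col="death")
cph.print_summary()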
RESULTS
Baseline characteristics and in-hospital care according to the timing of extracorporeal membrane oxygenation

Table 1 shows the baseline characteristics and in-hospital care data according to the timing of ECMO. The mean age was similar between the 2 groups, and the proportion of male patients was also comparable. In total, 76.1% of patients suffered witnessed sudden cardiac arrest, and only 28.8% of patients received bystander CPR; there was no significant difference in the rate of bystander CPR between groups (29.1% vs. 28.4%, p=0.919). An initial shockable rhythm at the scene was seen in 53.8% of patients, almost all of whom had ventricular fibrillation (50.0% of patients). At the ER visit, a shockable rhythm was observed in 27.7% of patients, and ventricular fibrillation was documented in 25.5%. The rate of shockable rhythm at the scene or in the ER was comparable between the 2 groups. Although the total duration of CPR was similar between the 2 groups, door-to-balloon time was significantly longer in the pre-PCI ECMO group (128.5±57.3 vs. 105.5±39.1 minutes, p=0.002). Therapeutic hypothermia was attempted with similar frequency in the 2 groups, and the success rate was also comparable. Approximately half of the patients in the pre-PCI ECMO group received ECMO in the ER, whereas 71.6% of patients in the post-PCI ECMO group received ECMO in the catheterization room.
Baseline characteristics and in-hospital care according to survival or death at 30 days
Thirty-day mortality was 80.4% (148 patients). Table 2 shows the baseline characteristics and in-hospital care data according to survival or death at 30 days. Surviving patients were younger and mostly male. Although the rate of witnessed arrest was similar between the 2 groups, bystander CPR was performed more often in surviving patients (47.2% vs. 24.3%, p=0.007). A shockable rhythm, both at the scene and in the ER, was observed more often in surviving patients, most of whom had ventricular fibrillation. The total duration of CPR tended to be longer in the death group, and door-to-balloon time was comparable between the 2 groups. Therapeutic hypothermia was attempted with similar frequency in both groups; however, the success rate was higher in surviving patients. Surviving patients received pre-PCI ECMO therapy more often than the death group (83.3% vs. 58.8%, p=0.006). In total, 58.3% of surviving patients received ECMO in the ER, and 60.1% of expired patients received ECMO in the catheterization room. Although 16.2% (24 patients) of the deceased were successfully weaned from ECMO, they eventually died of cardiac or non-cardiac problems after ECMO weaning. The total durations of ECMO and hospitalization were much longer in surviving patients. Figure 2A shows the incidence of study endpoints. In-hospital mortality was 78.8% (145 patients) in the entire study population and was significantly lower in the pre-PCI ECMO group (73.5% vs. 88.1%, p=0.020). Thirty-day mortality was also lower in the pre-PCI ECMO group compared to the post-PCI ECMO group (74.4% vs. 91.0%, p=0.006), and the result was similar on Kaplan-Meier survival analysis (Figure 2B, log-rank p=0.017). The proportion of patients with better neurologic function at discharge, defined as a Cerebral Performance Category score of 1 or 2, did not differ significantly between groups (54.8% vs. 37.5%, p=0.382). In multivariate Cox regression analysis, pre-PCI ECMO significantly lowered 30-day mortality compared to post-PCI ECMO.
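As a quick sanity check, the 30-day mortality comparison can be approximately reproduced from counts reconstructed out of the reported percentages (74.4% of 117 and 91.0% of 67 patients); the Python snippet below is our reconstruction, not the authors' analysis:

from scipy.stats import chi2_contingency

table = [[87, 30],   # pre-PCI ECMO: died within 30 days, survived
         [61, 6]]    # post-PCI ECMO: died within 30 days, survived
chi2, p, dof, expected = chi2_contingency(table, correction=False)  # Pearson's chi-squared
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # close to the reported p=0.006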
DISCUSSION
In the present study, we compared 30-day mortality between ECMO before PCI and ECMO after PCI in patients with AMI complicated by profound cardiogenic shock after resuscitated cardiac arrest, using data from a nationwide prospective OHCA registry. The principal findings were as follows: 1) in-hospital and 30-day mortality were unacceptably high despite successful PCI in patients with AMI complicated by cardiogenic shock after resuscitated cardiac arrest; 2) early ECMO before PCI significantly reduced both in-hospital and 30-day mortality compared to ECMO after PCI; and 3) there was a tendency toward more favourable neurologic outcomes at discharge in patients who received ECMO before PCI than in those who received ECMO after PCI. As far as we know, there are no randomized controlled trials regarding the benefit of early ECMO therapy before PCI in patients with AMI complicated by cardiogenic shock. [15][16][17][18] Recently, one retrospective study compared short-term survival between ECMO before PCI and ECMO after PCI in 46 STEMI patients and showed that ECMO before PCI improved the chance of survival in patients with STEMI complicated by refractory cardiogenic shock. 24) However, there was limited evidence of the benefit of early ECMO therapy in patients with AMI after resuscitated cardiac arrest. Despite the lack of randomized controlled trials, a large meta-analysis showed the usefulness of early ECMO therapy in increasing survival and favourable neurologic outcomes in patients after resuscitated cardiac arrest. 18) CPR presentation in the AMI setting is associated with high short-term mortality in cases of AMI with cardiogenic shock treated by ECMO 25); however, no previous study had enrolled only AMI patients after resuscitated OHCA. Retrospective data from 253 patients who underwent ECMO indicated that a composite endpoint of in-hospital mortality, left ventricular assist device implantation, and heart transplantation was significantly lower with ECMO before PCI than with ECMO after revascularization (32.0% vs. 49.5%; odds ratio, 0.48; 95% CI, 0.24-0.98; p=0.045). 26) In the current study, in contrast to the above-mentioned study, all patients in the Korean nationwide registry were survivors of OHCA. Although the recruited population consisted of survivors of OHCA, in-hospital and 30-day mortality were unacceptably high, which could be due to the prolonged duration of CPR in the study population compared to other studies. 17) Consequently, early ECMO before PCI may be useful in AMI patients complicated by shock, whether or not they require CPR. The nationwide OHCA registry used herein did not have detailed information about procedural data and initial diagnosis, such as STEMI or NSTEMI. However, patients enrolled in the current study had thrombolysis in myocardial infarction flow grade 3 after PCI, which is associated with improved mortality after ECMO. 27) Furthermore, NSTEMI patients complicated by cardiogenic shock seem to have similar or worse clinical outcomes compared to STEMI patients with shock. 28) Therefore, the initial diagnosis may not have impacted clinical outcomes. Early ECMO may improve the chance of favourable neurologic outcomes 18); however, there was no significant difference in the frequency of good neurologic outcomes at discharge. Because of the high in-hospital mortality in the current study, the sample size was not sufficient (the total number of survivors was 39) to assess the effect of early ECMO on neurologic outcomes.
Nevertheless, there was a tendency toward more favourable neurologic outcomes at discharge in patients who received ECMO before PCI than in those who received ECMO after PCI (54.8% vs. 37.5%). However, reduced mortality with similar neurologic outcomes suggests the potential for generating more survivors with poor neurologic status.
The cannulation site differed significantly between the early and late ECMO groups: the early group more often received ECMO in the ER (50.4% vs. 6.0%), whereas the late group more often received it in the catheterization room (45.3% vs. 71.6%) or coronary care unit (4.3% vs. 22.4%). However, detailed baseline and angiographic characteristics, and data on the reason for late ECMO insertion after successful PCI, were not available in the current registry. Although there was no documented reason for late ECMO in the current study, it is possible that many patients in the late group received ECMO because of CPR or profound cardiogenic shock even after successful PCI. This difference rather strengthens our conclusion that early ECMO before PCI can be useful, compared to ECMO after PCI, in AMI patients with profound cardiogenic shock after ROSC.
In the current study, successful therapeutic hypothermia and a shockable rhythm were protective factors against 30-day mortality. Patients with poor neurologic outcomes may be prone to severe infections, such as pneumonia, urinary tract infection or pressure-sore infection, which can lead to septic shock or multi-organ failure. A shockable rhythm has also been related to favourable clinical outcomes in patients treated with ECMO for refractory cardiogenic shock after cardiac arrest. 29) The current study had several limitations. First, it used a non-randomised, observational design, despite being based on a large, prospective, nationwide OHCA registry; although we performed multivariate analysis, variables not included in our registry may have influenced the study outcomes. Second, baseline characteristics, comorbidities and laboratory findings associated with clinical outcomes, such as serum lactate and prothrombin activity, were not considered. Third, the nationwide data we used did not include echocardiographic or renal replacement therapy data, nor detailed ECMO data such as pump flow. Fourth, data on in-hospital complications, such as limb ischemia, bleeding, stroke, and sepsis, which could impact mortality, were not available. Finally, the rate of ECMO implantation in the ER was high in the current study; because the entire study population underwent CPR on arrival, there was probably a high likelihood of ECMO implantation in the ER. Because non-fluoroscopy-guided ECMO implantation is associated with a higher complication rate, such as insertion-site bleeding or catheter mal-apposition, 30) this high rate of ECMO insertion in the ER could have impacted clinical outcomes.
In conclusion, ECMO support before revascularisation was associated with improved short-term survival compared to ECMO after revascularisation in patients with AMI complicated by profound cardiogenic shock after resuscitated cardiac arrest. In the absence of randomised controlled trials, this study provides valuable information on the optimal management of these high-risk patients.
Research on Bending Performance of Three-Dimensional Deep Angle Interlock Kevlar/EP Armor Material
Three-dimensional (3D) woven composites have attracted much attention in the lightweight research of protective armor due to their high specific strength and good impact resistance. However, there are still many gaps in terms of the performance and influencing factors of three-dimensional deep-angle-interlock (3DDAI) Kevlar/EP armor materials. Therefore, in order to prepare 3DDAI Kevlar/EP armor materials with excellent ballistic resistance and mechanical properties, this paper studies the bending performance of 3DDAI Kevlar/EP armor materials and the influence of the number of stacking layers, resin content, laying method, and weft density. Finally, we compare it with the traditional two-dimensional (2D) plain laminated Kevlar/EP armor material. The results showed that when the 3DDAI Kevlar/EP armor material was subjected to bending load, the upper and bottom layers of the material had a great influence on the initial stiffness and fracture strength of the material, respectively; when the material’s warp and weft density are quite different, the utilization rate of the yarn and the strength of the material are negatively affected; the fracture energy of the 3DDAI Kevlar/EP armor material prepared by the orthogonal laying method was about 20% higher than that of the 3DDAI Kevlar/EP armor material with the unidirectional layering method; and the bending performance of the 3DDAI Kevlar/EP armor material in the weft direction was better than that of the 2D plain laminated Kevlar/EP armor material, with the 3DDAI Kevlar/EP armor material having better delamination resistance. The research results will lay the foundation for structural optimization and engineering applications of such materials.
Introduction
Fiber-reinforced composite materials are widely used in aerospace, transportation, military protection, and other fields due to their low density and high strength [1][2][3]. In the field of protection, fiber-reinforced composite materials can be directly prepared into armor for protection and can also be used as a back plate to form composite armor with a ceramic/metal panel [4,5].
Three-dimensional (3D) woven composite materials outperform the traditional twodimensional (2D) fabrics laminated to prepare laminate armor [5], in terms of impact resistance and interlayer shear resistance [6][7][8][9]. Due to the designability and structural complexity of 3D woven composites, previous studies have mainly focused on fabric structure, hybrid effects, and failure mechanisms. Warren et al. [10] combined a digital image method to study the tensile, compression, and in-plane shear performance of 3D orthogonal, shallow bend-interlock composite materials and compared them with 2D woven composites. Dai et al. [11] studied the tensile, compression, and bending properties of four 3D orthogonal and two 3D deep-angle-interlock composite materials, and found that the initial damage positions of the six composite materials with different structures
Materials and Equipment
The 3D deep-angle-interlock Kevlar fabrics (three different weft densities, as shown in Table 1) were self-made, as shown in Figure 1; Kevlar plain fabrics were supplied by Yantai Taihe New Material Co., Ltd. (Yantai, China); Epoxy resin E-51 was supplied by Nantong Xingchen Synthetic Material Co., Ltd. (Nantong, China); Polyetheramine D230 curing agent was supplied by Changzhou Runxiang Chemical Co., Ltd. (Changzhou, China); 101A-4S electric heating blast-drying oven was supplied by Nanjing Wohuan Technology Industrial Co., Ltd. (Nanjing, China); WG-1200 multifunctional ceramic tile cutting machine was supplied by Sichuan Wanguang Machinery Equipment Co., Ltd. (Guanghan, China); Instron 5969H universal material testing machine was supplied by Instron Testing Equipment Trading Co., Ltd. (Shanghai, China); LEICASAPO stereo microscope was supplied by Leica Microsystems Trading Co., Ltd. (Shanghai, China).
Sample Preparation
We utilized the 300 mm × 300 mm 3D deep-angle-interlock Kevlar fabric as the reinforcement. Epoxy resin E-51 and curing agent polyetheramine D230 were mixed uniformly in a ratio of 4:1 as the matrix, and the composite was formed by a vacuum-assisted molding process. The material was cut with the WG-1200 multifunctional ceramic tile cutting machine according to the experimental requirements, and the parameters of the 3DDAI Kevlar/EP armor material samples are listed in Table 2.
The resin content/fiber volume fraction has an important influence on the mechanical properties of the material [21][22][23], so it is necessary to control the resin content during the material preparation process. The resin content and fiber volume fraction were calculated as follows:

$$M_m = \frac{m_c - m_f}{m_c} \times 100\% \quad (1)$$

$$V_f = \frac{m_f}{\rho_f V_c} \times 100\% \quad (2)$$

where $M_m$ is the resin content, $m_f$ is the fiber mass, $m_c$ is the composite material mass, $V_f$ is the fiber volume fraction, $V_c$ is the volume of the composite, and $\rho_f$ is the density of the fiber.
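As a concrete illustration of Equations (1) and (2), the following Python sketch evaluates both quantities; the function names and the sample masses are hypothetical placeholders, not measurements from this study.

```python
def resin_content(m_f, m_c):
    """Resin mass content M_m from fiber mass m_f and composite mass m_c (Eq. 1)."""
    return (m_c - m_f) / m_c

def fiber_volume_fraction(m_f, rho_f, V_c):
    """Fiber volume fraction V_f from fiber mass, fiber density, and composite volume (Eq. 2)."""
    return m_f / (rho_f * V_c)

# Hypothetical sample: 120 g of Kevlar fiber (density ~1.44 g/cm^3)
# in a 215 g composite plate occupying 180 cm^3.
m_f, m_c, rho_f, V_c = 120.0, 215.0, 1.44, 180.0
print(f"resin content M_m        = {resin_content(m_f, m_c):.2%}")
print(f"fiber volume fraction V_f = {fiber_volume_fraction(m_f, rho_f, V_c):.2%}")
```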
Bending Test
The bending performance of the 3DDAI Kevlar/EP armor material was tested on the Instron 5969H universal material testing machine according to GB/T 1449-2005 [24]. The ratio of the bending span to the specimen thickness was 16:1. The bending test speed was 2 mm/min. The preload was set to 3-8 N, and loading continued until the specimen failed. In order to ensure the validity of the data, each data point was tested at least five times, and the average value was taken.
The bending stress was calculated using Equation (3); the bending strain was calculated using Equation (4); and the bending modulus was calculated using Equation (5):

$$\sigma = \frac{3Pl}{2bh^2} \quad (3)$$

$$\varepsilon = \frac{6Sh}{l^2} \quad (4)$$

$$E = \frac{l^3}{4bh^3} \cdot \frac{\Delta P}{\Delta S} \quad (5)$$

where $\sigma$ is the bending stress, $P$ is the bending load, $l$ is the bending span, $b$ is the sample width, $h$ is the sample thickness, $\varepsilon$ is the bending strain, $S$ is the bending deflection, and $\Delta P$ and $\Delta S$ are the load increment of the initial linear segment and the corresponding displacement increment at the mid-span point, respectively.
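The following Python sketch evaluates Equations (3)-(5) for a three-point bending specimen meeting the 16:1 span-to-thickness requirement; all numbers are illustrative, not test data from this study.

```python
def bending_stress(P, l, b, h):
    """Bending stress sigma = 3*P*l / (2*b*h^2)  (Eq. 3)."""
    return 3 * P * l / (2 * b * h**2)

def bending_strain(S, l, h):
    """Bending strain epsilon = 6*S*h / l^2  (Eq. 4)."""
    return 6 * S * h / l**2

def bending_modulus(dP, dS, l, b, h):
    """Bending modulus E = (l^3 / (4*b*h^3)) * (dP/dS)  (Eq. 5)."""
    return l**3 / (4 * b * h**3) * (dP / dS)

# Illustrative specimen: span/thickness = 16:1 as required by GB/T 1449-2005.
h, b = 4.0, 15.0    # thickness and width, mm
l = 16 * h          # bending span, mm
P, S = 300.0, 3.0   # peak load (N) and mid-span deflection (mm)
dP, dS = 100.0, 0.8 # load/deflection increments on the initial linear segment
print(f"sigma = {bending_stress(P, l, b, h):.1f} MPa")  # N/mm^2 equals MPa
print(f"eps   = {bending_strain(S, l, h):.4f}")
print(f"E     = {bending_modulus(dP, dS, l, b, h) / 1000:.2f} GPa")
```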
Effect of Stacking Layers of 3DDAI Kevlar Fabrics on Bending Properties
The 3DDAI fabric with a weft density of 50 picks/cm was used as the reinforcement, and armor materials (resin content 44% ± 1%) with different numbers of fabric layers were prepared by stacking with the unidirectional laying method, including 3DDAI Kevlar/EP armor materials made of one, two, three, and four fabric laminates. The bending performance of each material was tested along the warp and weft directions, respectively.
Bending Stress-Strain Curves
The bending stress-strain curves of 3DDAI Kevlar/EP armor materials with different fabric layers are shown in Figure 2. It can be seen from Figure 2 that 3DDAI Kevlar/EP armor materials with different stacking layers have different bending stress-strain responses. The curve characteristics of the 1- and 2-layer materials are similar: the stress first increases linearly with the strain, then increases nonlinearly, and then decreases slowly after reaching the maximum value. This is because Kevlar fibers have good toughness, and the material failure is mainly due to the decrease in stiffness caused by buckling deformation. The damage morphology of the 1- and 2-layer 3DDAI Kevlar/EP armor materials is shown in Figure 3. When there was only one layer of fabric, the yarn did not break; when there were two layers of fabric, local yarns broke. The 3- and 4-layer bending curves are similar to each other and can be described as follows: in the initial stage, the stress increases linearly with the strain; subsequently, the stress increases nonlinearly with the strain; then, the stress reaches its maximum value and the material fails; finally, as the strain increases further, the stress drops sharply. In order to further study the bending characteristics of multilayer 3DDAI armor materials, taking the 4-layer 3DDAI armor material as an example, the damage morphology was analyzed with a LEICASAPO stereo microscope. The slope of the curve was calculated with a strain of 0.002 mm/mm as the reference, and the test was stopped at the point of sudden change of the slope to obtain the failure morphology of the material at different stages, as shown in Figures 4 and 5. In these images, we noticed that points B and C arrived earlier when the material was loaded in the weft direction than when it was loaded in the warp direction. This is because the buckled warp yarn in the material has a higher failure strain than the straightened weft yarn. There was no obvious damage to the material in the linear growth phase (segment AB, where A is the curve's starting point). In the nonlinear growth stage (segment BC), the axial yarns in the upper layer of the material accumulate damage and the matrix breaks. When loaded in the warp direction, this appears as warp yarn damage and matrix fragmentation around the interweaving points on the upper surface of the material, as shown in Figure 4c.
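The stage boundaries discussed above (point B, end of the linear segment; point C, peak stress) can be located automatically from digitized stress-strain data. The sketch below is one minimal way to do this, assuming the curve is available as NumPy arrays; the 25% slope-drop criterion and the toy constitutive curve are assumptions for illustration only.

```python
import numpy as np

def stage_boundaries(strain, stress, ref_strain=0.002, drop=0.25):
    """Locate the end of the linear stage (B) and the peak stress (C).

    The reference stiffness is the secant slope up to ref_strain
    (0.002 mm/mm, as in the analysis above); B is taken as the first
    point past ref_strain where the local slope falls 'drop' below
    that reference, and C as the point of maximum stress.
    """
    k_ref = np.interp(ref_strain, strain, stress) / ref_strain
    slope = np.gradient(stress, strain)
    softened = (strain > ref_strain) & (slope < (1 - drop) * k_ref)
    i_B = int(np.argmax(softened)) if softened.any() else len(strain) - 1
    i_C = int(np.argmax(stress))
    return i_B, i_C

# Hypothetical curve: linear rise, nonlinear hardening, then failure.
strain = np.linspace(0, 0.03, 300)
stress = 12000 * strain - 250000 * strain**2  # toy constitutive response
i_B, i_C = stage_boundaries(strain, stress)
print(f"B at strain {strain[i_B]:.4f}, C at strain {strain[i_C]:.4f}")
```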
Bending Properties
The bending properties of the 3DDAI Kevlar/EP armor materials with different superimposed layers are listed in Table 3. In order to more intuitively evaluate the influence of different stacking layers on the bending properties of materials, a comparison of the bending performance for different stacking layers is shown in Figure 6. It can be seen from Table 3 that the bending performance in the weft direction was significantly better than that in the warp direction. It can be seen from Figure 6 that the bending strength of the 3DDAI Kevlar/EP armor material first increases with the number of fabric stacking layers, reaches a peak when the number of stacking layers reaches three, and then decreases. According to the bending strength formula, the bending strength of the material increases as a logarithmic function with an increase in thickness. However, in George J. Dvorak's study [25], it was found that with an increase in laminate thickness, the strength of the matrix in the material decreased. Therefore, when the number of stacked layers in the 3DDAI Kevlar/EP armor material increases beyond a certain point, the gain in bending strength is weaker than the loss in matrix strength, resulting in a decrease in the overall strength of the material. An increase in the number of stacking layers, and thus in the total amount of fiber involved in resisting deformation, means that the bending modulus of the material increases. However, the increment in bending modulus diminishes with each additional layer, because the material resists deformation through synergy between the different layers rather than through full participation of all regions of the material itself.
Effect of Epoxy Resin Content on Bending Properties
The 3DDAI fabric with a weft density of 50 picks/cm was used as the reinforcement, and armor materials with different epoxy resin contents were prepared by the symmetrical laying method; 3DDAI Kevlar/EP armor materials with resin contents of 34.27%, 36.75%, 40.39%, 44.61%, and 48.59% were tested in the warp and weft directions, respectively. Figure 7 shows the bending stress-strain curves of 3DDAI Kevlar/EP armor materials with different resin contents. In the figure, we found that the stress-strain curve of the material was more unstable when the resin content was 34.27% (fiber volume content of 54.92%) or 36.75% (52.71%), which may be because the fabric structure of 3DDAI is relatively loose, and a low resin content results in more voids inside the material and thus an unstable response during bending. When the resin content was 40.39% (48.94%), 44.61% (46.12%), or 48.59% (43.38%), the stress-strain curve of the material was relatively stable, indicating that resin contents in this range can produce 3DDAI Kevlar/EP armor material with stable performance.
Bending Properties
The bending properties of 3DDAI Kevlar/EP armor materials with different resin content are listed in Table 4. In order to evaluate the influence of different resin contents on the bending properties of the material, the bending properties comparison of the material with different resin contents is shown in Figure 8.
As can be seen from Table 4, the bending strength of the 3DDAI Kevlar/EP armor material was highest at a resin content of 44.61% (along the warp direction: 206.8 MPa; along the weft direction: 307 MPa) and lowest at a resin content of 34.27% (along the warp direction: 167 MPa; along the weft direction: 216.4 MPa).

As can be seen from Figure 8, the bending strength of the material increases with resin content as the resin content rises from 34.27% to 44.61%. From 44.61% to 48.59%, the bending strength decreases with increasing epoxy resin content, although the decrease is not very pronounced. This is because if the resin content is too low, the matrix is unable to play a useful role in load transmission, and the reduced synergy between fibers and matrix results in lower material strength. However, if the resin content is excessively high, the material strength also decreases, owing to the relative reduction of fiber as the main load-bearing body [26,27]. Therefore, in the armor material preparation process, controlling an appropriate resin content can improve the utilization rate of the material, and the optimal resin content of the 3DDAI Kevlar/EP armor material should be controlled within the range of 40% to 49%.
The Effect of Laying Method on Bending Properties
In the literature on woven and woven-composite ballistic materials, it was found that the ballistic performance of quasi-isotropic materials in the macroscopic plane was better than that of anisotropic materials in the macroscopic plane [28][29][30]. Therefore, the following 3DDAI Kevlar/EP armor materials with different laying methods were designed as far as possible to be macro-quasi-isotropic except for the unidirectional laying materials.
The 3DDAI fabric, with a weft density of 50 picks/cm, was used as the reinforcement and was combined with the resin system to prepare armor materials with different laying methods. 3DDAI Kevlar/EP armor materials (resin content 44% ± 1%) were prepared using the unidirectional laying method, the orthogonal laying method, the symmetrical laying method, and the 2/2 laying method, and bending performance tests were conducted along the warp and weft directions, respectively. Figure 9 is a schematic diagram of the different laying methods.
Figure 10 shows the bending stress-strain curves of the 3DDAI Kevlar/EP armor materials with different laying methods. From the figure, we can see that the materials with different laying methods have similar bending characteristics. However, the response at each stage was different, so the fracture energy of the material, obtained by calculating the curve area of each stage, is listed in Table 5.

From Table 5, we can observe that: (1) The nonlinear phase of the material lasts longer and can absorb more energy. (2) In general, the order of fracture energy was orthogonal laying method > symmetric laying method > unidirectional laying method > 2/2 laying method; furthermore, the fracture energy of the orthogonally laid material was about 20% higher than that of the unidirectionally laid material. (3) The fracture energy of the material whose bottom layer was in the weft direction (90°) was significantly higher than that of the material whose bottom layer was in the warp direction (0°). (4) The material whose upper layer was in the warp direction showed a higher ultimate strain at each stage than the material whose upper layer was in the weft direction.
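The stage-wise fracture energies in Table 5 correspond to areas under segments of the stress-strain curve. A minimal numerical sketch follows; the curve and stage boundaries are hypothetical placeholders, not the measured data.

```python
import numpy as np

def stage_energy(strain, stress, eps_lo, eps_hi):
    """Energy density (area under the stress-strain curve) between two
    strains, by trapezoidal integration of digitized test data."""
    mask = (strain >= eps_lo) & (strain <= eps_hi)
    return np.trapz(stress[mask], strain[mask])

# Hypothetical digitized curve and stage boundaries (A-B linear, B-C nonlinear).
strain = np.linspace(0, 0.03, 300)
stress = 11000 * strain - 220000 * strain**2
E_AB = stage_energy(strain, stress, 0.0, 0.008)
E_BC = stage_energy(strain, stress, 0.008, 0.025)
print(f"A-B: {E_AB:.2f}  B-C: {E_BC:.2f}  (MPa, i.e., MJ/m^3)")
```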
Bending Properties
The bending properties of the 3DDAI Kevlar/EP armor materials with different laying methods are listed in Table 6. It can be seen from the table that: (1) In the unidirectional and symmetrical laying methods, the bending strength and bending modulus of the armor material along the weft direction were far greater than those along the warp direction. (2) In the orthogonal and 2/2 laying methods, the bending strength along the weft direction was greater than that along the warp direction, and the bending modulus along the warp direction was slightly greater than that along the weft direction. (3) The bending properties of the 3DDAI Kevlar/EP armor materials differed between laying methods: along the weft direction, the order of bending performance was unidirectional laying method > symmetrical laying method > orthogonal laying method > 2/2 laying method; along the warp direction, the order was orthogonal laying method > 2/2 laying method > symmetrical laying method > unidirectional laying method. (4) When the 3DDAI Kevlar/EP armor material was subjected to bending load, the upper and bottom layers of the material were the main bodies that bore the load; the stiffness contribution of the upper layer was greater than that of the bottom layer, whereas the strength contribution of the bottom layer was greater than that of the upper layer. In order to make the material appear quasi-isotropic in the macroscopic plane, the mechanical properties of the material in different directions were improved by changing the laying method. However, as described above, anisotropic features remained in the material under bending load even after changing the laying method. Therefore, we used Equation (6), the Coefficient of Ascension (CA), to express the comprehensive bending performance of the material relative to the unidirectionally laid material, and Equation (7), the Coefficient of Difference (CD; the closer to 1, the smaller the difference), to express the difference between the bending properties of the material in different directions. A comparison of the bending coefficients of the 3DDAI Kevlar/EP armor materials with different laying methods is shown in Figure 11.
$$Q_{CA} = \frac{Q_1 + Q_2}{Q_{warp} + Q_{weft}} \quad (6)$$

$$Q_{CD} = \frac{Q_1}{Q_2} \quad (7)$$

where $Q_{CA}$ is the Coefficient of Ascension, $Q_{CD}$ is the Coefficient of Difference, $Q_1$ is the bending performance of the material along the warp direction, $Q_2$ is the bending performance of the material along the weft direction, $Q_{warp}$ is the bending performance of the unidirectionally laid material along the warp direction, and $Q_{weft}$ is the bending performance of the unidirectionally laid material along the weft direction.

As shown in Figure 11, after changing the laying method, the comprehensive bending performance of the 3DDAI Kevlar/EP armor material was improved. Among these changes, the improvement in comprehensive bending performance of the orthogonally laid material was the highest. The difference in bending performance between different directions was smallest for the 2/2 laid material, and the difference for the orthogonally laid material was also very small. In summary, preparing 3DDAI Kevlar/EP armor material by the orthogonal laying method is beneficial for maximizing the comprehensive bending performance of the material while reducing the difference in bending performance between different directions of the material.
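Using the forms of Equations (6) and (7) given above (reconstructed from the stated definitions, so they should be read as an assumption), the two coefficients reduce to a few lines of Python; the strength values below are illustrative, not entries from Table 6.

```python
def coefficient_of_ascension(Q1, Q2, Q_warp, Q_weft):
    """Q_CA: combined warp+weft performance relative to the unidirectionally
    laid material (reconstructed Eq. 6)."""
    return (Q1 + Q2) / (Q_warp + Q_weft)

def coefficient_of_difference(Q1, Q2):
    """Q_CD: warp/weft ratio; the closer to 1, the smaller the anisotropy
    (reconstructed Eq. 7)."""
    return Q1 / Q2

# Illustrative bending strengths (MPa): orthogonal vs. unidirectional laying.
Q1, Q2 = 230.0, 280.0          # orthogonally laid material, warp / weft
Q_warp, Q_weft = 195.0, 305.0  # unidirectionally laid material, warp / weft
print(f"Q_CA = {coefficient_of_ascension(Q1, Q2, Q_warp, Q_weft):.3f}")
print(f"Q_CD = {coefficient_of_difference(Q1, Q2):.3f}")
```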
The Effect of Fabric Weft Density on Bending Properties
As noted above, the preparation of 3DDAI Kevlar/EP armor materials in an orthogonal laying method was conducive to maximizing the potential to improve the comprehensive bending performance of the material, while reducing the difference in bending performance between different directions of the material. Therefore, armor materials were mainly prepared by the orthogonal laying method to study the effect of different fabric weft densities on the bending properties of the materials.
The 3DDAI fabrics with weft densities of 43, 46, and 50 picks/cm were used as reinforcements to prepare armor materials (resin content 46% ± 1%), and bending performance tests were carried out along the warp and weft directions of the material. The bending properties of the 3DDAI Kevlar/EP armor materials with different weft densities are listed in Table 7. In order to more intuitively evaluate the effect of different fabric weft densities on the bending performance of the material, a comparison of the bending performance at different fabric weft densities is shown in Figure 12. As can be seen from Table 7, the 3DDAI Kevlar/EP armor material has the highest bending strength at a weft density of 43 picks/cm (along the warp direction: 220.8 MPa; along the weft direction: 267.6 MPa) and the lowest bending modulus (along the warp direction: 10.42 GPa; along the weft direction: 8.81 GPa); when the weft density of the fabric was 50 picks/cm, the bending strength was the lowest (along the warp direction: 194.6 MPa; along the weft direction: 238.7 MPa) and the bending modulus was the highest (along the warp direction: 11 GPa; along the weft direction: 10.71 GPa).
As can be seen from Figure 12, when the weft density of the fabric increases from 43 picks/cm to 50 picks/cm, the bending strength of the material decreases with increasing fabric weft density. Increasing the weft density reduces the internal voids of the fabric, so in order to maintain the same overall resin content, more resin accumulates on the surface of the material to form a "resin-rich zone"; this means that the material with the higher fabric weft density actually has a lower resin content in its interior than the material with the lower fabric weft density. Firstly, the increase in weft density makes the resulting resin-lean zone inside the material more prone to cracks. Secondly, with increasing weft density, the degree of squeezing between yarns increases and the internal stress rises, resulting in a decrease in the overall bending strength of the material; similar results were found in the study of H. A. Aisyah [31]. The bending modulus of the material increased as the fabric weft density increased from 43 picks/cm to 50 picks/cm, and the increase in the bending modulus along the weft direction was greater than that along the warp direction. This is because, when the material is subjected to bending load, the upper layer bears axial compressive loading while the bottom layer bears axial tensile loading [32]. With an increase in fabric weft density, the axial fibers along the weft direction increase, as do the transverse fibers along the warp direction; on the other hand, the axial mechanical properties of the fibers far outperform the transverse mechanical properties [33]. For these reasons, the increase in the bending modulus of the material along the weft direction is greater than that along the warp direction.
The Effect of the Structure on Bending Properties
Many studies [8][9][10][34] have compared the performance of woven composites with different structures by controlling the material thickness and ensuring a similar resin content/fiber volume fraction. For bending properties, Khatkar et al. showed that the flexural strength of 3D orthogonal composites was 50.7% higher than that of 2D plain composites. Therefore, this article compares the bending properties of the 3DDAI armor material (unidirectional laying) and the 2D plain weave laminated Kevlar/EP armor material by controlling the fabric areal density and resin content to achieve a similar thickness and material density. The specifications of the 3DDAI Kevlar/EP armor material and the 2D plain laminated Kevlar/EP armor material are shown in Table 8.

Figure 13 shows the comparison of the bending properties of the 3DDAI Kevlar/EP armor material and the 2D plain laminated Kevlar/EP armor material. As can be seen from the figure, the bending strength of the 2D plain laminated Kevlar/EP armor material was 304 MPa and its flexural modulus was 15.48 GPa, which is lower than that of the 3DDAI Kevlar/EP armor material in the weft direction but much higher than that in the warp direction. This is because, in the 3DDAI structure, the buckled warp system interlaces through the thickness with the straightened weft system to form a three-dimensional structure. This structure improves the mechanics and structural integrity in the weft and thickness directions by sacrificing the mechanics in the warp direction.

For further investigation, the bending failure morphology of the 2D plain laminated Kevlar/EP armor material in Figure 14 was compared with the bending failure of the 3DDAI Kevlar/EP armor material loaded in the weft direction, shown in Figure 5d. The analysis revealed that, in contrast to the oblique fracture path presented by the axial yarn of the 3DDAI Kevlar/EP armor material loaded along the weft direction, the axial yarn of the 2D plain laminated Kevlar/EP armor material presents a horizontal fracture path through the interweaving point area, followed by fracture and delamination layer by layer. This indicates that the 3DDAI Kevlar/EP armor material has better in-plane performance in the weft direction. Additionally, since the amount of axial yarn (weft yarn) along the weft direction in the 3DDAI Kevlar/EP armor material was actually greater than in the 2D plain weave laminated Kevlar/EP armor material, the bending performance of the 3DDAI Kevlar/EP armor material was superior to that of the 2D plain laminated Kevlar/EP armor material.
Conclusions
Based on the potential of 3DDAI Kevlar/EP armor material in the field of protection, the bending response of the 3DDAI Kevlar/EP armor material was investigated in terms of the number of superimposed fabric layers, resin content, laying method, and fabric weft density, and then compared with the bending performance of 2D plain laminated Kevlar/EP armor material. The following conclusions were obtained: (1) When 3DDAI Kevlar/EP armor material is subjected to bending load, the upper and bottom layers of the material become the main load-bearing bodies, with the greatest impact on the initial stiffness and breaking strength of the material, respectively. The bending response of the 3DDAI Kevlar/EP armor material was nonlinear; damage to the axial yarns of the upper layer and to the matrix leads to bending softening, and fracture of the axial yarns in the bottom layer is the main cause of material failure. (2) Due to the particularity of the 3DDAI fabric structure, when the material's warp and weft densities differ greatly, the utilization rate of the yarn and the strength of the material decrease. Furthermore, its loose structure requires an appropriately increased resin content to prepare stable armor materials, with an appropriate resin content range of 40%-49%. In addition, preparing the 3DDAI Kevlar/EP armor by the orthogonal laying method improves the macroscopic mechanical properties of the material and effectively increases its fracture energy. (3) The 3DDAI Kevlar/EP armor material was in-plane anisotropic, and its bending performance along the weft direction was better than that of the 2D plain laminated material. Additionally, because yarns penetrate in the thickness direction in the 3DDAI structure, the material effectively resists delamination even when laminated.
A PoleP286R mouse model of endometrial cancer recapitulates high mutational burden and immunotherapy response
Cancer is instigated by mutator phenotypes, including deficient mismatch repair and p53-associated chromosomal instability. More recently, a distinct class of cancers was identified with unusually high mutational loads due to heterozygous amino acid substitutions (most commonly P286R) in the proofreading domain of DNA polymerase ε, the leading strand replicase encoded by POLE. Immunotherapy has revolutionized cancer treatment, but new model systems are needed to recapitulate high mutational burdens characterizing human cancers and permit study of mechanisms underlying clinical responses. Here, we show that activation of a conditional LSL-PoleP286R allele in endometrium is sufficient to elicit in all animals endometrial cancers closely resembling their human counterparts, including very high mutational burden. Diverse investigations uncovered potentially novel aspects of Pole-driven tumorigenesis, including secondary p53 mutations associated with tetraploidy, and cooperation with defective mismatch repair through inactivation of Msh2. Most significantly, there were robust antitumor immune responses with increased T cell infiltrates, accelerated tumor growth following T cell depletion, and unfailing clinical regression following immune checkpoint therapy. This model predicts that human POLE-driven cancers will prove consistently responsive to immune checkpoint blockade. Furthermore, this is a robust and efficient approach to recapitulate in mice the high mutational burdens and immune responses characterizing human cancers.
Introduction
DNA mutations are the fundamental drivers of cancer (1). Accordingly, a central hallmark of cancer is an incidence of mutations more numerous than can be explained on the basis of the intrinsic mutation rate of normal (nonmalignant) cells (2,3). In the last decade, systematic characterization of cancer genomes has underscored the high incidence of mutations in most cancers, especially carcinomas, and the underlying mutator mechanisms that initiate cancers and support subsequent diversification. These "mutator phenotypes" reflect the complexity of pathways that ensure high DNA replication fidelity and repair DNA damage sustained from mutagens, such as ionizing and ultraviolet radiation and environmental toxicants, as well as the mutagenic potential of normal cell-intrinsic metabolic processes (3,4). In many if not most cancers, the acquisition of a mutator phenotype is the initial instigating event driving tumorigenesis. For example, defective mismatch repair (dMMR) is common in endometrial and gastrointestinal carcinomas, and experimental evidence (genetic, genomic, mouse models, etc.) points to dMMR as the initial cancer-driving event (5)(6)(7).
The prevalence of somatic mutations (base substitution rate) varies dramatically across and within individual cancer types, ranging from less than 0.01/megabase to more than 500/megabase (Mb). Most carcinomas have base substitution rates of at least 1/Mb. Cancer types with the highest averages (5-12/Mb) include lung, colorectal, and endometrial carcinomas. A mutation rate 10/Mb or higher (hypermutation) is associated with dMMR. More recently, cancers with base substitution rates of at least 100/Mb (ultramutation) have been
attributed to somatically acquired POLE missense mutations leading to single amino acid substitutions in the proofreading (exonuclease) domain, most commonly P286R (8). POLE encodes DNA polymerase ε, which replicates the leading strand during normal DNA synthesis (9). The incidence of POLE-driven ultramutation is highest in endometrial and colorectal cancers (~5%-10%), but POLE mutations also occur in sporadic sarcomas, hematopoietic malignancies, glioblastomas, and diverse carcinomas (10). P286R interferes with DNA binding and produces a hyperactive polymerase that introduces numerous errors during DNA synthesis (11,12), leading to an error rate much higher than what results from inactivation of the exonuclease domain (13)(14)(15). POLE P286R and the rarer ultramutating amino acid substitutions (such as V411L) are genetically dominant, with retention of 1 WT allele (13). Analogous mutations leading to recurring single amino acid substitutions also occur in the lagging strand polymerase δ (encoded by POLD); however, these are less common and exceedingly rare in endometrial cancers, for unclear reasons (8,9,14,16).
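To make the per-megabase thresholds concrete, a minimal sketch of the rate calculation and classification follows; the substitution count and covered territory are hypothetical, and the cutoffs are the ones quoted above.

```python
def substitutions_per_mb(n_substitutions, covered_bases):
    """Base substitution rate per megabase of sequenced territory."""
    return n_substitutions / (covered_bases / 1e6)

def mutator_class(rate_per_mb):
    """Classify a tumor using the thresholds discussed in the text."""
    if rate_per_mb >= 100:
        return "ultramutated (e.g., POLE proofreading-mutant)"
    if rate_per_mb >= 10:
        return "hypermutated (e.g., dMMR)"
    return "non-hypermutated"

# Hypothetical tumor: 3,000 somatic substitutions over 30 Mb of covered exome.
rate = substitutions_per_mb(3000, 30e6)
print(f"{rate:.0f}/Mb -> {mutator_class(rate)}")
```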
Intriguingly, some POLE-driven ultramutant cancers also exhibit dMMR. The majority of errors POLE mutants produce are presumed to be corrected by MMR (17), leading to the prediction that dMMR and POLE mutations should cooperate. On the other hand, an excessive mutation rate might decrease cell fitness and lead to extinction (18). Children with inherited biallelic mismatch repair deficiency have very early onset of central nervous system and hematologic cancers with massive accumulation of mutations (>250/Mb), greater than all childhood and most adult cancers. All such cancers analyzed (10/10 cases) also harbored a somatically acquired mutation in POLE (7/10) or POLD (3/10) that appeared to be the initiating event (19). Some POLE-ultramutated adult colorectal and endometrial cancers are also dMMR, and specific mutational signatures have been ascribed to tumors harboring these combined defects (10,20,21). However, such cases may occur less frequently than predicted based on the incidence of POLE mutations and dMMR. Thus, whether POLE mutations cooperate with or antagonize dMMR in adult cancers is unclear, and defined genetic model systems are needed to investigate such interactions (22).

Immune checkpoint blockade (by monoclonal antibodies against programmed cell death 1/programmed cell death ligand 1 [PD1/PDL1] and cytotoxic T lymphocyte-associated protein 4 [CTLA-4]) results in long-term responses and even cures of otherwise untreatable malignancies. However, objective responses occur in a minority of patients, prompting concerted efforts to uncover mechanisms of blockade and resistance (23). There is a linear relationship between base substitutions and amino acid changes producing neoantigens that evoke immune responses by tumor-infiltrating CD8+ T lymphocytes (24,25). In 2017, the FDA approved pembrolizumab as the first "tissue-agnostic" anticancer therapy for dMMR tumors irrespective of anatomic location or other histopathologic/molecular parameters (26,27). There have not been systematic studies of immune checkpoint blockade in POLE-mutant tumors either in patients or in model systems, although there are isolated case reports of treatment responses (28,29).
With this background in mind, it is notable that genetically engineered mouse cancer models have dramatically lower average mutational frequencies than human cancers. Egfr-, Kras-, or Myc-driven models of human lung cancer exhibit fewer than 0.1 mutations per megabase, several logs lower than human lung adenocarcinoma (30), and thus do not recapitulate mutational loads defining human cancers. That such models have not proved useful for testing immune checkpoint therapies has been attributed to mutational burdens too low to model human tumor immunology (31). Alternative strategies are needed to optimize mouse models with respect to mutational load, now known to define many aspects of tumor biology, clinical behavior, and treatment responses (30,32,33). In this study, we hypothesized that a new kind of mouse model of human cancer could be developed based on ultramutation driven by conditional Pole P286R expression and that such a model would be of broad investigational utility.
Results
Conditional Pole P286R expression provokes endometrial cancers with 100% penetrance. Heterozygous mice harboring BAC-Sprr2f-Cre and the conditional LSL-Pole P286R alleles were interbred. The functionality of LSL-Pole P286R , including Cre-mediated induction of Pole P286R expression equal to that of the WT Pole allele, was validated in a systemic knockin model, which elicited malignancies across many cell lineages (22). BAC-Sprr2f-Cre is a BAC-transgenic line with Cre inserted into the Sprr2f locus, which is expressed exclusively in the endometrial epithelial cells that give rise to all endometrial carcinomas. BAC-Sprr2f-Cre induces Cre-mediated recombination only in endometrium and is naturally estrogen inducible because of estrogen response elements in Sprr2f regulatory regions (34). Cre-mediated recombination by BAC-Sprr2f-Cre begins at 5 weeks (puberty onset), and is approximately 50% efficient within endometrial epithelial cells, leading to efficient (but mosaic) recombination (35).
This single-generation breeding scheme is simpler and more efficient than for most mouse models, which can require multiple alleles and generations and are inefficient in yielding experimental animals. In this breeding scheme, one-fourth of the progeny are the desired genotype, and one-eighth are of the desired genotype and sex. Siblings not inheriting BAC-Sprr2f-Cre were used as controls ( Figure 1A).
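The expected yields quoted above follow directly from independent Mendelian segregation of the two heterozygous alleles; a minimal check (the names are placeholders for the two transmitted alleles):

```python
from fractions import Fraction

# Each parent is heterozygous, so each allele is transmitted to half of the
# offspring, independently of the other allele and of sex.
p_cre = Fraction(1, 2)     # inherits BAC-Sprr2f-Cre
p_lsl = Fraction(1, 2)     # inherits LSL-PoleP286R
p_female = Fraction(1, 2)  # desired sex for an endometrial model

p_genotype = p_cre * p_lsl                   # 1/4 of progeny
p_genotype_and_sex = p_genotype * p_female   # 1/8 of progeny
print(p_genotype, p_genotype_and_sex)        # -> 1/4 1/8
```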
First, we generated a cohort of transheterozygous BAC-Sprr2f-Cre /+ LSL-Pole P286R/+ mice (abbreviated Pole P286R/+) to study age-related cancer onset. Whereas no deaths occurred in controls up to 600 days of age, the first death in Pole P286R mice occurred at 313 days (45 weeks), and all mice were dead within a short time span, by 450 days (64 weeks) (P < 0.0001, log-rank test) (Figure 1, B-D). The cause of death was always an aggressive endometrial cancer that replaced normal uterine tissues and metastasized to adjacent organs (ovary, bladder, kidney) or more distant sites. Malignancies of nonendometrial origin were not found. Histologically, the tumors invaded through the entire myometrial (uterine smooth muscle) layer. The tumors were histologically surprisingly homogenous, with most tumors appearing as well-differentiated endometrioid adenocarcinomas forming glands resembling normal endometrium. Nuclear atypia ranged from moderate to severe, and atypical mitoses were characteristic (Figure 1E). However, some tumors were poorly differentiated (Figure 1E). About 20% of tumors exhibited squamous differentiation, seen in a similar percentage of human endometrial adenocarcinomas (36). Some cancers exhibited striking nuclear atypia (giant nuclei), implying abnormal ploidy (Figure 1E). These features (well-differentiated tumors with paradoxically high nuclear grade and giant nuclei) closely resembled human P286R endometrial cancers (see Supplemental Figure 1A for examples; supplemental material available online with this article; https://doi.org/10.1172/jci.insight.138829DS1) (see also refs. 37,38).

[Figure legend: Top panels show architecturally well-differentiated endometrioid adenocarcinomas with atypical mitoses (arrows). Middle panels show more poorly differentiated adenocarcinomas without gland formation. Bottom panels show cases with squamous differentiation (sq) or striking nuclear atypia (right panel) as described for human POLE P286R cancers. Scale bars: 50 μm.]

However, some sporadic tumors exhibited distinctive patterns or histotypes, such as being highly invasive but lacking distinct gland formation or with exophytic, clear cell, or spindle cell components consistent with carcinosarcoma (Supplemental Figure 1, B-E). This also resembles the histologic distribution of human POLE-driven endometrial cancers, which are usually endometrioid but can be of the clear cell or carcinosarcoma subtypes (39)(40)(41). The aggressive and infiltrative nature of the mouse cancers was further evidenced by spread to adjacent structures such as ureters, frequent lymphovascular invasion, and metastases to more distant abdominal organs such as the pancreas and spleen (Supplemental Figure 1, F-I). Thus, in summary, murine Pole P286R/+ endometrial cancers closely resemble their human counterparts histotypically and in clinical behavior.
dMMR induction through Msh2 inactivation accelerates Pole P286R -driven tumor progression. To study genetic interactions with dMMR, we employed an Msh2 L floxed allele used to investigate dMMR in colorectal cancer progression in vivo (5). Additional cohorts of mice were generated with BAC-Sprr2f-Cre to inactivate both alleles of Msh2 by itself or in combination with Pole P286R . Msh2 and Msh6 proteins form dimers that bind to DNA and upon detection of base-base mismatches recruit Mlh1/Pms2 dimers to excise mismatches on the newly synthesized strand (42). Inactivating mutations (point mutations or deletions) of either Msh2 or Msh6 resulting in dMMR destabilize Msh2/Msh6 dimers with degradation of both proteins (Supplemental Figure 2A). This destabilization is the basis of immunohistochemistry (IHC) as the principal assay in clinical practice to screen for dMMR defects (to identify Lynch syndrome or tumors likely to respond to pembrolizumab) (6,43). In all n = 21 Pole P286R/+ tumors examined, there was retention of Msh2, Msh6, and Mlh1 in all cells, suggesting that spontaneous dMMR does not occur frequently in Pole P286R/+ murine endometrial cancers, consistent with human data (Supplemental Figure 2B) (21). All BAC-Sprr2f-Cre /+ LSL-Pole P286R/+ Msh2 L/L mice (abbreviated Pole P286R/+ Msh2 -/-) harbored distinct multifocal clones of Msh2 and Msh6 loss, as expected, and all invasive primary or metastatic cancer cells at later time points were Msh2/Msh6 deficient (Supplemental Figure 2C).
In contrast to Pole P286R/+ mice, BAC-Sprr2f-Cre Msh2 L/L mice (abbreviated Msh2 -/-) did not harbor cancers, and no deaths occurred up to 600 days of age (Figure 1C), showing that Pole P286R is a much more potent mutator allele and effective cancer driver than dMMR/Msh2 loss, at least in endometrium. Interestingly, however, dMMR and Pole P286R showed clear genetic cooperation, with a significant leftward shift of the survival curve (P < 0.0001, log-rank test). The cause of death was more aggressive endometrial cancers (Figure 1B), as evidenced by the decreased survival and extensive tumor spread found in this cohort. At death, uterine weight (a metric for primary tumor burden) was significantly decreased in Pole P286R/+ Msh2 -/- mice, consistent with more rapid spread from the primary site resulting in earlier deaths (Figure 1D). Histologic spectra were similar in Pole P286R/+ versus Pole P286R/+ Msh2 -/- mice, except that striking nuclear enlargement/atypia was more common in Pole P286R/+ Msh2 -/- mice (12/16 versus 6/21 for Pole P286R/+ alone, P = 0.0081 per Fisher exact test) (Figure 1E). Thus, the data showed that (a) most Pole P286R tumors do not spontaneously undergo dMMR and (b) dMMR and Pole P286R cooperate in tumor progression.

Pole P286R/+ tumors exhibit a propensity for tetraploidization. Nuclear atypia and enlargement imply abnormal karyotypes, suggesting that although Pole P286R is a pure base substitution mutator, tumor progression may be associated with additional, and possibly adaptive, layers of genomic instability. To explore this hypothesis, cell lines were established from n = 3 Pole P286R/+ and n = 3 Pole P286R/+ Msh2 -/- tumors. All 6 cell lines showed some tetraploid cells, and 1 cell line for each genotype was essentially tetraploid. Spectral karyotyping (SKY) of these cell lines confirmed the presence of tetraploid cells and showed a few chromosome-level aberrations such as fusions or translocations (Figure 2, A-D). These results indicate that Pole P286R/+ cancers exhibit a tendency toward tetraploidization. Tetraploidization, which occurs in some cancers, may be an adaptive response to buffer against high mutational loads (44). To explore whether this might also occur in human tumors, tissue sections from n = 6 POLE P286R endometrial cancers were subjected to DNA fluorescence in situ hybridization (FISH) with enumeration probes for chromosomes X, 8, 13, 18, and 21. In all cases, a substantial proportion of nuclei had 4 signals consistent with tetraploidy (Figure 2E), suggesting that tetraploidy is shared by human and murine POLE cancers, in agreement with prior studies (45).
Pole P286R/+ endometrial cancers harbor very high base substitution rates, in the range of human ultramutated tumors. To define base substitution rates, n = 3 primary tumors and cell lines (total of 12 samples from the 2 mutant cohorts) were subjected to whole-genome sequencing (WGS) at an average depth of 40 times, in the general range of The Cancer Genome Atlas (TCGA) studies and permitting comparison to prior murine and human studies (22). Pole P286R/+ endometrial cancers exhibited base substitution rates of 48-105/Mb, far greater than previous genetically engineered mouse models of cancer and in the range of human ultramutant cancers. Cell lines exhibited a modest increase in base substitution rates of about 2-3 times relative to the primary tumors, perhaps in part because of clonal purification enhancing detection, although such relatively small differences could also be due to random variation. Pole P286R/+ Msh2 -/- tumors also exhibited very high base substitution rates that appeared modestly elevated as compared with Pole P286R/+ alone; cell lines exhibited similar base substitution rates as tumors (Figure 4A).

Next, trinucleotide contexts for base substitutions were evaluated. All Pole P286R/+ samples exhibited virtually superimposable signatures, with few C>G and T>A substitutions, and a preponderance of T>G substitutions, especially with a T at the third position. All Pole P286R/+ Msh2 -/- samples also exhibited virtually superimposable signatures with each other, with a shift to C>A substitutions, especially when the third position was a T. This signature closely resembled the recently described SBS14 signature for rare human cancers harboring simultaneous POLE mutations and dMMR (20).
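Trinucleotide-context tabulation of the kind used for these signatures can be sketched as follows: each substitution is reported with the mutated base normalized to the pyrimidine strand, yielding the standard 96 context/change classes. The in-memory reference dictionary and the toy variants are assumptions for illustration, not the study's pipeline.

```python
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMPLEMENT)[::-1]

def trinucleotide_context(ref_genome, chrom, pos, ref, alt):
    """Return a pyrimidine-normalized context string like 'T[T>G]T'.

    ref_genome: dict of chromosome name -> uppercase sequence; pos is
    0-based and assumed to be an interior position of the chromosome.
    """
    tri = ref_genome[chrom][pos - 1 : pos + 2]
    if ref in "AG":  # purine reference: report the opposite strand
        tri, ref, alt = revcomp(tri), revcomp(ref), revcomp(alt)
    return f"{tri[0]}[{ref}>{alt}]{tri[2]}"

# Toy example with a hypothetical mini-reference and two SNVs.
genome = {"chr1": "ACGTTGTTACGA"}
snvs = [("chr1", 4, "T", "G"), ("chr1", 8, "A", "C")]
signature = Counter(trinucleotide_context(genome, c, p, r, a) for c, p, r, a in snvs)
print(dict(signature))
```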
Next, we analyzed predicted coding impacts. As expected, Pole P286R/+ samples had a strong predominance of missense mutations, with a number of nonsense and splicing mutations, occasional readthrough mutations due to conversion of a terminal stop codon, and infrequent indels. Pole P286R/+ Msh2 -/- samples, in contrast, exhibited an elevation of the indel rate (Figure 5A and Supplemental Table 1) as a consequence of microsatellite instability, where expansion of microsatellite repeats leads to indels (20). The favored trinucleotide contexts described above resulted in highly skewed and nonrandom codon substitution tables, which were distinctive in Pole P286R/+ versus Pole P286R/+ Msh2 -/- samples (Figure 5B and Supplemental Table 1). Consistent with SKY results, read mapping to visualize genome-wide copy number variations (CNVs) revealed only modest alterations, with occasional chromosomes exhibiting copy number alterations relative to the whole genome (Figure 5C), and such CNVs were fewer than (for general comparison) in immortalized WT mouse embryo fibroblasts (Supplemental Figure 3).
Pole P286R/+ endometrial cancers recruit T cells that serve to restrict tumor progression. Amino acid substitutions create neoantigens that stimulate T cell-mediated antitumor immune responses (24). Concordantly, Pole P286R/+ endometrial cancers harbored numerous infiltrating CD3+ T cells (Figure 6A). Pole P286R could itself be immunogenic, particularly because polymerase ε is a housekeeping enzyme expressed in all cells, and P286R is believed to be the initiating tumor event shared by all tumor cells in a P286R-driven malignancy. However, no 8- to 11-mer peptides spanning P286R are predicted by NetMHC to bind mouse MHC (51), suggesting that Pole P286R does not create an effectively immunogenic neoepitope and that it is more likely the subsequent accumulation of amino acid substitutions that could invoke an antitumor response.
T cells were systemically depleted in Pole P286R/+ mice by injection of an anti-CD8 (αCD8) antibody starting at 150 days of age, with mice followed serially by MRI to aid in determination of an appropriate time point for necropsy ( Figure 6B). Flow cytometric analysis and tissue immunostains (CD8) confirmed CD8 + T cell depletion in peripheral blood ( Figure 6C) and in tissues ( Figure 6D). At necropsy, MRI and uterine weights showed significantly increased tumor burden in animals treated with αCD8 versus vehicle (P = 0.0014, unpaired t test) (Figure 6, E and F). However, while uterine weight is useful as an objective and easily measurable parameter, it likely underestimates increases in tumor burden; for example, residual normal uterine tissues make up a significant percentage of tumorous uteri. Consistent with this, histologic examination consistently revealed larger areas of tumor infiltration in the CD8 + T cell-depleted uteri ( Figure 6G). These findings suggest that Pole P286R results in immunogenic responses that limit tumor progression.
Pole P286R/+ and Pole P286R/+ Msh2 -/tumors are highly responsive to immune checkpoint blockade. First, we established an F1 hybrid syngeneic graft model. The LSL-Pole P286R allele was generated in and maintained in a pure 129S6/ SvEvTac background, whereas the BAC-Sprr2f-Cre allele was maintained (and extensively backcrossed) in an FVB background. Thus, experimental Pole P286R/+ mice were F1 hybrids comprising 50% each of the 2 backgrounds. Their tumors are thus syngeneic and should be engraftable into F1 hybrid mice generated by interbreeding WT animals of the 2 strains ( Figure 7A). Pole P286R/+ endometrial cancer cell line B3E ( Figure 2B) was engrafted into F1 hybrid mice, with tumors showing continual growth following successful engraftment ( Figure 7B). The cell line was selected at random and was subsequently determined to be tetraploid ( Figure 2B); the impact of ploidy on this experiment was not further investigated. Strikingly, tumor-engrafted animals subjected to just 3 injections of αPDL1/CTLA-4 combined therapy over 10 days showed complete regression of the tumors with lack of regrowth following cessation of treatment (P < 0.0001, Figure 7B). IHC double-labeling against pan-cytokeratin (CK) and CD8 showed that regression was accompanied by massive infiltration of CD8 + T cells ( Figure 7C). Therefore, tumors were immunogenic and highly responsive to immune checkpoint blockade.
Primary tumors expressed PDL1 and CTLA-4 in infiltrating lymphocytes (Figure 7D). To further test combined αPDL1/CTLA-4 blockade, an increasingly common therapeutic combination (52), treatment responses were measured in live Pole P286R/+ and Pole P286R/+ Msh2 -/- mice. Pole P286R/+ and Pole P286R/+ Msh2 -/- treatments were initiated at 300 days and 220 days, respectively, because of the accelerated mortality of the latter (Figure 7D). Survival analysis showed significant clinical benefit, with statistically significant survival extension in both cohorts (Figure 7E). Pre- and posttreatment MRIs showed significant responses 2 weeks after initiation of therapy (P = 0.041 per paired t test, Figure 7, F and G). Tumors showed increased numbers of infiltrating CD8+ T cells, especially within malignant gland epithelium (Figure 7H). Holistic T cell receptor (TCR) sequence analysis showed that mice syngeneically engrafted with a Pole P286R/+ cancer cell line and treated with αPDL1/CTLA-4 had significantly increased TCR clonal expansion in both peripheral blood and tumor tissues. Tumor-grafted mice treated with vehicle showed significantly increased TCR expansion in tumor tissues but not in peripheral blood (Supplemental Figure 4, A and B). The expansion of the 50 most represented TCR rearrangements was analyzed, and tumors in treated mice had the highest expansion of TCRs (Figure 7I and Supplemental Figure 4C), indicating that immune checkpoint therapy resulted in larger changes in TCR repertoires associated with tumor diminution. The frequency of 1 TCR in blood samples from mice receiving vehicle was unusually high (>0.02, Supplemental Figure 4C), suggesting that this TCR might be related to a dominant T cell-responding clone. The significantly extended survival in both Pole P286R/+ and Pole P286R/+ Msh2 -/- mice was thus likely related to functional TCR repertoire expansion suppressing tumor development. These results demonstrate that mouse cancer models with Pole P286R-driven ultramutation are robust models for further investigations into the biology of Pole-driven immunogenicity and mechanisms of responsiveness versus nonresponsiveness to immune checkpoint blockade.
Discussion
In this study, we present a potentially novel and efficient conditional, tissue-specific approach using an LSL-Pole P286R allele to generate a specific cancer mouse model with a far higher mutational burden than previously feasible in live genetically engineered animal models. Pole P286R proved genetically dominant, as has been observed in human cancers. We documented a 100% incidence of aggressive and fatal endometrial cancers, even when the allele was only heterozygous. That cancers could be generated with a single monoallelic driver and in only 1 generation stands in contrast to previous mouse models of cancer, which have typically required multiple alleles and complex breeding schema. Murine Pole P286R endometrial cancers closely resembled their human counterparts in terms of histology and clinical behavior. We demonstrated that Pole P286R-driven endometrial cancers have high mutational burdens in the range of human ultramutant cancers and were sensitive to immune checkpoint blockade, providing a model with robust responses to immunotherapy. This work provides a new approach for modeling cancer that may overcome current limitations of mouse models, namely very low mutational load and consequently limited tumor heterogeneity, which are not representative of any human tumor (30-32).
The initial TCGA study of endometrial cancer reported that POLE-mutant endometrial cancers have an exceptionally good prognosis (16). POLE mutations are present in diverse endometrial cancer histologic subtypes, including some associated with poor outcome, such as clear cell carcinoma and carcinosarcoma. This suggests that POLE testing of tumors (e.g., by cancer gene panel) could be useful to identify patients who could forego additional treatments associated with substantial morbidity, such as surgical staging/ lymph node dissection or adjuvant chemotherapy/radiotherapy (21), much as dMMR testing has become standard practice. Meta-analysis of 23 studies of dMMR and clinical outcome found no significant association between MMR status and survival in the setting of endometrial cancer (53), but dMMR testing is standard for all new endometrial cancer cases to (a) screen for Lynch syndrome and (b) identify patients who are candidates for immune checkpoint blockade (pembrolizumab) (54,55). That POLE-ultramutated cancers will also prove responsive to immunotherapy has been suggested by isolated case reports of exceptional responders (56), but large clinical trials have not yet been conducted. Such trials will be complicated by (a) the need for prospective identification of POLE-mutant cancers by cancer gene panel (not yet routine), (b) the relative rarity of such cancers, and (c) the even smaller subset with advanced disease at the time of diagnosis. Thus, our preclinical model is useful in that it provides compelling in vivo evidence that ultramutant POLE-mutant endometrial cancers (and by extension, POLE-driven malignancies at other anatomic sites) will also prove consistently sensitive to immune checkpoint blockade.
Recent studies of patient cohorts have challenged the idea that POLE endometrial cancers have an invariably good prognosis. For example, in 1 single-institution study of n = 23 POLE endometrial cancers identified by cancer gene panel (MSK-IMPACT), 17% (4/23) were of advanced stage with extrauterine disease at the time of diagnosis, including 2 cases that were stage IV (distant metastasis). After a median follow-up of 30 months, 17% (4/23) of patients developed recurrences, of which 3 were distant metastases, including 2 brain metastases, and 1 patient died after 33 months (57). A separate large, multi-institutional study of POLE cancers by the NRG Oncology/Gynecologic Oncology Group found improved outcomes for the POLE group, but the differences were not statistically significant (58). Although additional patient studies are needed to better define clinical outcomes, these later studies found that a significant proportion of POLE cancers metastasize, and such patients should benefit from targeted therapeutic approaches.
In our BAC-Sprr2f-Cre models, tumors were aggressive, with metastatic disease present in 100% of animals. This apparently more aggressive clinical course in mice relative to women likely reflects the nature of the model. In women, a single endometrial epithelial cell spontaneously acquires a POLE P286R mutation, giving rise to a single somatic clone that eventually becomes malignant. Whereas some endometrial cancers can show heterogeneity with respect to drivers such as TP53 (see below), all studies to date suggest that POLE P286R and other POLE ultramutator alleles are present throughout the tumor and thus are the initial driver. In contrast, in our models, the Pole P286R mutation is induced in hundreds, and probably thousands, of independent clones. It seems very likely that such multiclonality provides greater opportunities for tumor evolution and escape from immune surveillance or other tumor-suppressive mechanisms normally restraining ultramutation-driven carcinogenesis. While such enforced induction in many cells is likely necessary for a robust, high-penetrance animal model, it may be interesting to study tumor progression with other cell type-specific Cre drivers or with methods permitting P286R induction in fewer cells.
In systematic analyses of Pole P286R/+ endometrial cancers, we found no evidence for spontaneous dMMR. There was not even focal loss of MMR factor expression in any primary endometrial cancer, and all Pole P286R/+ samples subjected to WGS showed superimposable trinucleotide signatures readily distinguishable from the combined Pole + dMMR signature (Figure 4B). At the same time, we observed definitive cooperation with respect to overall tumor progression and survival in a defined genetic model where both defects were provoked simultaneously. These findings demonstrate that while Pole + dMMR can cooperate in tumor progression, such cooperation is not obligate. Pole-driven ultramutation is sufficient to drive tumor initiation and progression even in the context of proficient mismatch repair, although in a minority of cancers, both defects coexist and undoubtedly cooperate. POLE mutations may be secondary events that further accelerate the progression of initially dMMR cancers, as suggested by the secondary acquisition of POLE mutations in children with constitutional dMMR (19) and the observation that some endometrial cancers in Lynch syndrome patients have POLE mutations (59). Thus, our results, combined with the available literature, suggest that strong POLE mutations can occur secondarily in the context of dMMR-driven cancers, but perhaps not vice versa.
We documented definitive p53-mutant patterns in nearly half of Pole P286R/+ and the majority of Pole P286R/+ Msh2 -/- endometrial cancers. The patterns and sizes of p53-mutant clones indicated that p53 mutations were acquired late in tumor progression. These results demonstrate substantial selective pressure for p53 inactivation during Pole P286R-driven tumor progression and suggest that such selective pressure increases as a function of mutational load. This likely explains the higher incidence and larger p53-mutant clone sizes in Pole P286R/+ Msh2 -/- tumors. Similar processes may occur in human POLE cancers because most POLE-mutant, immunohistochemically p53-abnormal tumors show incomplete (i.e., subclonal) loss of p53 staining, again showing that the acquisition of p53 mutations is a late event (21). Whereas mutant p53 immunostaining patterns usually signify poor prognosis, "double-classifier" endometrial cancers harboring POLE mutations and mutant p53 immunostaining have a good prognosis similar to POLE-alone cancers (21). Our work provides further evidence that p53 and Pole mutations are functionally intertwined and should be viewed as a characteristic feature of Pole-driven carcinogenesis.
Mouse Pole P286R endometrial cancers exhibited striking nuclear atypia and giant nuclei, as described for human POLE endometrial cancers (37,39). These findings initially seemed paradoxical because POLE is well established as a single base substitution mutator (60,61), whereas nuclear atypia and giant nuclei imply aneuploidy or polyploidy. Our subsequent investigations suggest that tetraploidization is yet another distinctive feature of POLE-driven tumorigenesis shared by the mouse and human counterparts (45). We propose that tetraploidy, which occurs in diverse cancers, is particularly adaptive in ultramutant tumors because polyploidization provides additional copies of loci to permit "genetic buffering" against phenotypic variation, which is likely extreme in the context of ultramutation (62,63). Tetraploidization would also promote even further genetic diversity and could contribute to the apparent base substitution rate as determined by sequencing. Tetraploid p53 WT cells fail to propagate in culture, whereas p53-null cells can be passaged, demonstrating that p53 is a key checkpoint suppressing tetraploidization and that p53 loss favors cell survival in this context (47). Tetraploidization in turn can promote further chromosome-level instability (64). Thus, we propose that POLE tumors, though initially driven by a pure single base substitution mutator phenotype, acquire additional layers of genome instability through the acquisition of p53 mutations, polyploidization, and modest chromosome-level instability, as documented by our SKY results.
In addition to models of Pole-driven neoplasia, our results suggest that the LSL-Pole P286R allele could be useful for other studies of cancer. For example, incorporation of the allele into genetically engineered mouse models (e.g., Kras-driven lung cancers) could provide an experimental system to formally investigate the contribution of high mutational burden to diverse aspects of tumor progression, including immune surveillance and how the Pole P286R and Pole P286R Msh2 -/- models eventually become resistant to immune checkpoint blockade. Thus, this approach may facilitate the development of additional experimental models of the immune landscape of cancer.
Methods
Mouse husbandry and survival analysis. Mice were housed in a pathogen-free animal facility in microisolator cages and fed ad libitum on standard chow. Only females were used, with ages as described for each observation. All experiments used littermate controls. The LSL-Pole P286R allele was generated in and maintained in a pure 129S6/SvEvTac background; the BAC-Sprr2f-Cre allele was in an FVB background (backcrossed for 12 generations). Survival analyses were conducted on experimental and control animals selected at weaning.
Cell line derivation. Tumor fragments were excised from uteri under a dissection microscope, chopped to fine pieces with a scalpel in cold 0.25% Trypsin-EDTA solution (25200-114, Thermo Fisher Scientific), moved to 37°C for 15 minutes, and then triturated 20 times with a transfer pipette. Cells were pelleted by centrifugation and resuspended in DMEM (Thermo Fisher Scientific 10566-016) with 10% FBS and 1× penicillin-streptomycin and then grown in this medium under standard tissue culture conditions. Cells were passaged 4 times before initiation of experiments, and epithelial character was confirmed by phase-contrast microscopy. For WGS studies only, cells were subcloned by flow sorting of single cells into 96-well plates as previously described (22).
SKY of mouse cell lines and interphase FISH of human cell lines. Chromosome spreads were prepared by synchronizing cells with 100 ng/mL colcemid (KaryoMAX, Thermo Fisher Scientific) for 4 hours and harvesting by trypsinization. Cell pellets were gently resuspended in prewarmed 75 mM KCl solution and incubated at 37°C for 6 minutes. Cells were then fixed with ice-cold methanol/acetic acid (3:1) and dropped onto slides. For SKY, multicolor DNA FISH probes for mouse chromosomes (MetaSystems) were used per the manufacturer's protocol. Briefly, slides were denatured in 0.07N NaOH at room temperature for 1 minute, and FISH probes were denatured at 75°C for 5 minutes. FISH probes were then applied to chromosome spreads, sealed with a coverslip, and incubated in a humidified chamber at 37°C for 1-2 days. Following hybridization, slides were washed with 0.4× SSC at 72°C for 2 minutes and 2× SSC, with 0.05% Tween-20, at room temperature for 1 minute. Slides were gently rinsed in water, air-dried, and DAPI counterstained. Images were acquired using a Zeiss Axio Imager Z2 equipped with a Metafer Slide Scanning System and analyzed using Isis (MetaSystems) software.
Interphase FISH was performed at the UT Southwestern Molecular Cytogenetics Clinical Laboratory on 4-μm-thick tissue sections using the AneuVysion kit (Abbott) with DAPI counterstaining per the manufacturer's instructions. Cases of endometrial cancer with POLE mutations were identified by Sanger sequencing of exons 9 and 13, using DNA prepared from formalin-fixed, paraffin-embedded tissue sections.
DNA and library preparation for WGS. DNA was extracted with the QIAamp DNA Mini Kit (QIAGEN, 51306), with concentrations determined on a Qubit fluorometer (Invitrogen, Thermo Fisher Scientific). Sample integrity was confirmed by agarose gel electrophoresis. For preparation of libraries, 1.5 μg DNA was fragmented with a Covaris ultrasonicator and then analyzed by gel electrophoresis. The fragmented DNA was combined with End-Repair Mix and incubated at 20°C for 30 minutes. The end-repaired DNA was purified with the QIAquick PCR Purification Kit (QIAGEN), followed by addition of A-Tailing Mix (Illumina) and incubation at 37°C for 30 minutes. The purified, 3′-adenylated DNA was combined with adapters and ligation mix, and the ligation reaction was incubated at 20°C for 15 minutes. Adapter-ligated DNA was run on a 2% agarose gel to recover target fragments, which were gel-purified with the QIAquick Gel Extraction Kit (QIAGEN). Several rounds of PCR amplification with PCR Primer Cocktail and Master Mix (both from Illumina) were performed to enrich the adapter-ligated DNA fragments. PCR products were run on a 2% agarose gel to recover the target fragments, followed by gel purification with the QIAquick Gel Extraction Kit (QIAGEN). The average fragment length of the final library was determined on an Agilent 2100 Bioanalyzer (Agilent DNA 1000 reagents), and the library was quantified by real-time PCR (TaqMan assay). Qualified libraries were loaded onto the HiSeq X Ten sequencer (Illumina) for paired-end sequencing with read lengths of 100-150 bp.
Variant calling. Reads were mapped to the mouse reference genome (GRCm38) using BWA 0.7.17 (66). Duplicated reads were marked using Picard, and base quality score recalibration was applied using GATK 4.0 (67). SNP and indel discovery was performed using samtools (68). A mutation was considered present in a sample only if the alternative allele frequency was greater than 0.1. A mutation was considered somatic when it was not a known variant from the Mouse Genomes Project, including several FVB and 129 substrains (69), and it was identified in only 1 sample and in no others. All mutations were annotated using SnpEff (70), based on GENCODE M16 annotation (71).
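The filtering logic described above can be summarized in a few lines. The sketch below is a minimal illustration of the stated criteria (the alternative-allele-frequency cutoff of 0.1 is assumed to be applied upstream; variants must be absent from the known strain variant set and private to a single sample); the data structures are hypothetical stand-ins for parsed VCF records, not the actual pipeline code.

```python
# Minimal sketch of the somatic-mutation filters described above (hypothetical inputs).
# `calls` maps sample -> set of (chrom, pos, ref, alt); `known_variants` stands in
# for the Mouse Genomes Project variant set.

def somatic_mutations(calls, known_variants):
    """Return per-sample variants that are private to one sample and not known strain variants."""
    somatic = {}
    for sample, variants in calls.items():
        others = set().union(*(v for s, v in calls.items() if s != sample))
        somatic[sample] = {v for v in variants if v not in known_variants and v not in others}
    return somatic

calls = {
    "tumor_A": {("chr1", 1000, "T", "G"), ("chr2", 500, "C", "A")},
    "tumor_B": {("chr1", 1000, "T", "G"), ("chr3", 42, "G", "T")},
}
known = {("chr2", 500, "C", "A")}
print(somatic_mutations(calls, known))  # only variants unique to one sample survive
```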
Depth analyses. Average depth in each window (1 Mbp) was estimated using samtools. Raw depth was normalized by dividing by the median depth across the genome, followed by a log2 transformation.
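For illustration, this normalization reduces to two array operations; the window depths below are synthetic values, not data from this study.

```python
import numpy as np

# Normalize per-window depth by the genome-wide median, then log2-transform,
# as described above (synthetic 1-Mbp window means; real input comes from samtools).
window_depths = np.array([38.0, 41.0, 40.0, 79.0, 39.5])
log2_ratio = np.log2(window_depths / np.median(window_depths))
print(log2_ratio)  # the 79x window sits near +1, i.e., a copy-number doubling
```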
The pipelines for read mapping, quality control, mutation calling, and annotation were as described (22). The base substitution rate was calculated for each sample as the number of mutations identified divided by the number of genomic positions covered by at least 20 reads. The trinucleotide signature was generated using a custom script, following the mutational signatures described by PCAWG (20).
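As an illustration of these two definitions, the sketch below computes a per-sample substitution rate and tallies substitutions in pyrimidine-centered trinucleotide contexts. The input tuples and counts are hypothetical examples, and the strand-folding convention is the standard one, not necessarily the exact script used in the study.

```python
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def substitution_rate(n_mutations, n_callable_positions):
    """Mutations per Mb over positions covered by >= 20 reads, as defined above."""
    return n_mutations / (n_callable_positions / 1e6)

def trinucleotide_spectrum(mutations):
    """Count substitutions in pyrimidine-centered trinucleotide context.

    `mutations` is a hypothetical list of (trinucleotide_context, ref, alt) tuples,
    e.g. ("TTT", "T", "G"); contexts with a purine ref are reverse-complemented
    so every class is reported relative to C or T, the usual convention.
    """
    spectrum = Counter()
    for context, ref, alt in mutations:
        if ref in "AG":  # fold onto the pyrimidine strand
            context = context.translate(COMPLEMENT)[::-1]
            ref = ref.translate(COMPLEMENT)
            alt = alt.translate(COMPLEMENT)
        spectrum[f"{context[0]}[{ref}>{alt}]{context[2]}"] += 1
    return spectrum

print(substitution_rate(250_000, 2.4e9))   # ~104 mutations/Mb, the ultramutated range
print(trinucleotide_spectrum([("TTT", "T", "G"), ("ATG", "A", "C")]))
```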
T cell depletion studies. αCD8b antibody (Bio X Cell, BE0223) at 200 μg per mouse per week was administered by intraperitoneal (IP) injection. CD8 + T cell depletion in peripheral blood was confirmed by flow cytometry analysis at 6 days and 24 weeks after initial injection. Mice were euthanized after a 27-week treatment interval with final MRI performed a few days before euthanasia.
Immune checkpoint blockade therapy. F1 syngeneic mouse hosts were treated with FTY720 (20 μg/mouse) (MilliporeSigma, SML0700) at 3 days and 1 day before tumor cell grafting and at 1 day and 4 days after grafting, by IP injection. For syngeneic graft studies, 1 million Pole P286R/+ endometrial carcinoma cells were subcutaneously injected into the right flank. Immune checkpoint blockade was started 14 days after tumor cell engraftment. Two hundred micrograms each of αPDL1 (Bio X Cell, BE0101, clone 10F.9G2) and αCTLA-4 (Bio X Cell, BE0164, clone 9D9) antibodies was administered by IP injection every 3.5 days, for 3 doses. Tumor sizes were measured with calipers twice a week. For treatment of live Pole P286R/+ and Pole P286R/+ Msh2 -/- mice, combined αPDL1/CTLA-4 was also given at 200 μg each antibody per mouse, twice a week by IP injection.
MRI and data analysis. MRI was conducted with a 7-T small-animal system (Bruker BioSpin Corp.) with a 40-mm (I.D.) radio frequency (RF) coil. Animals were anesthetized with 1%-2% isoflurane (AErrane, Baxter Healthcare Corporation) mixed in 100% O 2 and placed prone with respiratory sensor, headfirst with abdomen centered with respect to the center of the RF coil. Low-resolution multislice gradient echo imaging, serving as the localizer, was first performed on the abdominal region to confirm location and orientation of the uterus. For volume measurements of tumorous uteri, axial and coronal T2-weighted multislice images encompassing an entire uterus were obtained with a fat suppression fast spin-echo sequence. Acquisition parameters for axial images were 4000-ms repetition time, 40-ms effective echo time, 32 × 32 mm field of view, 256 × 256 matrix, 1-mm slice thickness, 31 slices, gapless, 8 excitations, fat suppression, and scan time of 16 minutes and 10 seconds, and those for coronal images were 3000-ms repetition time, 40-ms effective echo time, 48 × 32 mm field of view, 384 × 256 matrix, 1-mm slice thickness, 19 slices, gapless, 8 excitations, fat suppression, and scan time of 9 minutes and 36 seconds. Volumes were calculated with image processing software (ImageJ, version 1.40g; NIH), as described previously (72).
TCR repertoire analyses. DNA from all blood and tumor tissue samples was extracted with the QIAamp DNA Mini Kit (QIAGEN, 51306). DNA samples were sent to Adaptive Biotechnologies for deep-resolution TCR-β profiling. TCR diversity and clonality were analyzed per Adaptive Biotechnologies' ImmunoSEQ assay guide (73).
Data and materials availability. Sequence data have been deposited in the NCBI Sequence Read Archive database under accession number PRJNA613918.
Statistics. Data are presented as mean ± SEM unless otherwise indicated. To determine P values, 2-tailed Student's t tests or Fisher exact tests were performed (unless otherwise indicated). P < 0.05 was considered statistically significant. For survival curves, Kaplan-Meier analysis was used, with statistical comparison among curves performed with the log-rank test. The above statistical analyses were performed with GraphPad Prism (version 8). No statistical method was used to predetermine sample sizes. For treatment studies, mice were randomly assigned to treatment/no-treatment cohorts by alternating assignment to the 2 pools. Where possible (i.e., assessment of mutant clones and their size following IHC of tumors of differing genotypes), analysis was performed by a pathologist blinded to the genotype. Some histopathologic assessments could not be randomized.
Study approval. All animal studies were approved by the UT Southwestern IACUC and in adherence to the NIH Guide for the Care and Use of Laboratory Animals (National Academies Press, 2011). Analysis of human samples was approved with a waiver of consent under a UT Southwestern Institutional Review Board protocol.
Study on the Safety Risk Prevention Technology for the Urban Gas Pipeline
Along with the continuous development of society and the economy, urbanization has advanced steadily. As an indispensable part of urban construction, the gas pipeline has brought great convenience to the life of urban residents, but it also carries potential safety hazards and poses certain risks to the lives and property of urban residents. This paper analyzes the management status of the urban gas pipeline and the corresponding safety risk prevention measures.
Introduction
Gas has become an indispensable material in the daily life of urban residents in our country, meeting their everyday cooking needs. With the continuous acceleration of urbanization, the safe construction and management of urban gas pipelines have become key issues of concern in China [1]. This paper analyzes the factors threatening the safety of urban gas pipelines and the issues in their safe construction, and proposes corresponding risk prevention measures to ensure that urban gas pipelines are built and used safely.
Pipeline design issues
Improper design in urban pipeline construction is likely to cause problems during gas pipeline construction and to compromise the later use of gas. Problems in the design of urban gas pipelines arise mainly for the following reasons. First, the gas pipeline design scheme is not adjusted according to the actual construction situation. House construction projects in our country may be modified for various unavoidable reasons, causing deviations from the original design plan; the gas pipeline design scheme should therefore be revised in step with such adjustments to meet the needs of pipeline installation. In practice, however, the design scheme is often left unadjusted, leading to various problems during construction that not only delay pipeline installation but also create safety risks for residents in later use. Second, the capacity of some gas pipeline designers is inadequate. The designer is the key worker in pipeline design and must design the house gas pipeline according to national design requirements, laying the groundwork for construction. Designers lacking adequate competence introduce mistakes into the design scheme; as a result, the constructed pipeline is inconsistent with the design, which delays the overall housing construction and leaves hidden dangers for the later use of gas. Third, the acceptance work is not done well. Acceptance covers two aspects: acceptance of the design drawings and acceptance of the actual construction. If the acceptance personnel inspect the design drawings and the construction site according to the state's requirements, they will identify unreasonable features of the design and construction, point them out, and require designers to make adjustments, thereby reducing the potential safety hazards of later pipeline use. However, some acceptance personnel fail to conduct this work impartially, so problems remain in some design drawings and in the engineering construction of gas pipelines, which undermines the overall construction quality and leaves many potential safety hazards in the later stage (Table 1) [2].
Construction material issues
The gas pipeline material is the fundamental factor affecting the later use of the pipeline. In pursuit of higher economic benefits, some gas pipeline installation companies in China fail to purchase construction materials according to national or local standards, so the overall quality of the gas pipeline falls short of the standards, creating many problems for its later use and management. At the same time, some gas pipeline construction units in China lack sound management measures, which allows internal personnel to take bribes and procure substandard products, reducing the quality of pipeline construction and endangering the later use of the pipeline. The gas pipeline construction unit should therefore strengthen its management and do a good job in material procurement, so as to ensure that the gas pipeline meets state requirements and that the lives and property of urban residents are protected.
Stray current interference
Stray current interference is one of the main causes of pipeline corrosion. Stray current enters the pipeline at one location, flows along it for a certain distance, and then discharges into the surrounding soil; corrosion concentrates at the point of outflow. This makes the safety risks of urban gas pipelines difficult to resolve and increases the difficulty of management work [3].
Pipeline management issues
Later-period management of the urban gas pipeline is an important measure for reducing the risks of pipeline use. Periodic and comprehensive inspection and detection can eliminate problems such as gas leakage and thus ensure the safety of the pipeline during use. However, in China the monitoring and management of urban gas pipelines by the relevant units is not intensive, and the importance of pipeline detection is not fully recognized. The relevant staff often maintain the pipeline without taking its actual service conditions into account and without monitoring for corrosion or excavation damage, which introduces risks into the use of the urban gas pipeline.
Third Party Construction
Third-party construction is one of the main causes of damage to the urban gas pipeline. With the continuous acceleration of urbanization, surface construction is carried out throughout the city, and such work can easily damage gas pipelines, thus increasing the workload of urban gas pipeline safety risk prevention.
Application of safety risk prevention technology for urban gas pipeline
Safety risk prevention technology is an important means of forecasting and evaluating the safety risks of gas pipelines in China. Through risk assessment it identifies the potential problems and hidden dangers in the use of gas pipelines and evaluates the consequences they may cause; on this basis, corresponding improvement measures are proposed to avoid losses of life and property caused by urban gas pipelines [5]. The relevant departments in China can apply safety risk prevention technology to the urban gas pipeline in the following respects, so as to reduce the risks of pipelines in use and protect the lives and property of urban residents.
Internal detection technology
Internal detection technology is one of the important means of inspecting pipelines from the inside. Using modern instrumentation, it detects and analyzes the internal condition of the pipeline, finds internal problems in time, and provides a repair basis for the relevant staff through the test data, thereby avoiding gas leakage, gas explosion, and other events caused by internal pipeline defects and improving the safety of gas pipeline use.
Internal detection technology is widely applied in the later-period maintenance and management of gas pipelines in China. Maintenance departments purchase internal detection equipment according to actual needs and combine it with advanced techniques to manage the gas pipeline scientifically, monitoring and testing it comprehensively and in a timely manner. This reduces accidents in pipeline use, improves the reliability of pipeline operation, and ensures the safety of gas use for urban residents. Internal detection mainly examines the pipeline from three aspects: geometric anomaly detection, metal loss detection, and crack detection.
The main internal detection items are summarized in the following table [6]; the detected anomalies include, for example, denting and compression of the pipe shell. The magnetic flux leakage (MFL) detector is the main instrument used for gas pipeline monitoring (Figure 1). Its working principle is to use magnetic flux leakage detection to identify corrosion, cracks, weld seams, defects, and construction damage of the inner and outer pipe walls, as well as pipeline features and length, so as to reduce the risks of pipeline operation and management and the occurrence of operation and production accidents. The MFL detector is an intelligent detection system that can detect and record anomalous defect information and pipeline accessories on the metal pipe in real time, and the defect information and the exact location and size of the related pipeline accessories can be determined through later data analysis and processing. Meanwhile, the MFL detector can carry a pipeline mapping system. After detection, the data collected by the detector can be intelligently analyzed and quantified with special data analysis software, allowing multi-user queries over the network. It also supports functions such as pipeline integrity management and risk assessment, testing urban gas pipelines efficiently and improving the efficiency of safety risk prevention and control work.
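As a rough illustration of the detection principle (not a vendor algorithm), anomalies in an MFL trace can be flagged by comparing the signal with its local baseline; all parameter values below are hypothetical.

```python
import numpy as np

# Illustrative sketch only: flag candidate wall-loss defects in a magnetic flux
# leakage (MFL) trace by thresholding deviations from a moving-average baseline.
def flag_defects(signal, window=50, n_sigma=4.0):
    baseline = np.convolve(signal, np.ones(window) / window, mode="same")
    residual = signal - baseline
    threshold = n_sigma * np.std(residual)
    return np.flatnonzero(np.abs(residual) > threshold)  # sample indices to inspect

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.01, 2000)
trace[1200:1210] += 0.2  # synthetic leakage peak over a metal-loss defect
print(flag_defects(trace))  # indices near 1200 are reported for location/sizing
```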
External detection technology
External detection technology is the main measure for guaranteeing the service life of the gas pipeline in China. Through related techniques it strictly inspects the external environment and service conditions of the pipeline, prevents problems caused by external factors, and extends the pipeline's service life. External detection mainly involves anti-corrosion protection of the pipeline and the use of corresponding methods to monitor the external condition of deeply buried gas pipelines, so as to prevent failures due to external corrosion, fracture, and similar problems that would endanger urban residents. External anti-corrosion coatings and cathodic protection systems are the main means of external protection, and both are widely applied in gas pipeline management in our country. Their high cost, however, increases the cost of gas pipeline management and puts pressure on urban construction projects.
Management and improvement
Managing the urban gas pipeline requires more than the management and maintenance of the pipeline itself; the surrounding environment must be taken into account as well. The urban government and the relevant responsible units should recognize how important the surrounding environment is for the normal use of the gas pipeline and strengthen the renovation of the environment and facilities around it. Illegal buildings around the pipeline should be removed by persuading the people concerned through communication. Good environmental management around the pipeline avoids harm caused by the external environment or buildings, reduces external risk factors, and improves the safety of pipeline use [7].
Enhanced supervision
In recent years, China's natural gas consumption and production have been increasing. By 2017, China's natural gas sales had reached 240.4 billion cubic meters and production had reached 149.2 billion cubic meters. With the increase in gas usage, gas pipeline construction has gradually intensified. The Chinese government should strengthen the supervision of gas pipelines to avoid problems in their construction and installation that would affect later use and management. According to the requirements for gas pipeline construction and installation, the government and relevant departments should supervise construction sites, conduct regular spot checks and return visits, and raise the awareness of pipeline construction departments. At the same time, the government and relevant departments should strengthen the monitoring of construction materials to guarantee the safety of the materials used, improve the procurement practices of construction units, and standardize their purchasing specifications according to national or local standards, making sure that construction units procure materials according to the standards so that the purchased materials are safe and compliant.
The government and relevant departments should also improve the efficiency of acceptance work, carry out the design and construction acceptance of gas pipelines properly, and complete the preliminary work well, so that problems in design and installation do not affect later use. This requires the government and the relevant acceptance departments to improve their work attitude and acceptance standards to ensure the actual effect of the acceptance work. First, the design drawings of gas pipelines should be checked and accepted according to national standards and actual housing construction needs, so that design errors do not affect later pipeline construction and the accuracy of the design is ensured [8]. Second, the acceptance of gas pipeline construction should be improved: the construction site should be accepted according to national acceptance standards and the design drawings, ensuring that site conditions conform to both and that construction and installation reach the intended state. Finally, the government and the relevant inspection departments should improve their work attitude and capability, carry out acceptance strictly in accordance with national standards and drawing requirements, and avoid improper acceptance resulting from bribery by installation units, thereby strengthening their sense of responsibility and ensuring the safety of gas use for urban residents.
Increase the inspection intensity
Inspection should be strengthened in the later stage of urban gas pipeline management, and a 24-hour on-duty system should be established to carry out detailed and comprehensive inspection of urban gas pipelines. Round-the-clock supervision ensures that problems in pipeline use are found and solved in time and that losses to the town caused by gas problems are reduced. This requires the government and relevant departments to adapt the work content to the actual situation of pipeline management and to arrange working hours according to weather, environment, and other practical factors, so that the pipeline can be inspected at any time. When these responsibilities are fulfilled, the safety of urban gas pipelines can be properly arranged, the service life of pipeline equipment extended, and the use efficiency of urban pipeline resources increased.
Enhance promotion
With the continuous acceleration of China's urbanization, the number of urban residents has gradually increased, and the coverage of gas pipelines has expanded accordingly. It is difficult for the government and relevant departments alone to manage the gas pipeline comprehensively. They should therefore strengthen the promotion of safe gas use through the Internet, television, newspapers, radio, and other media, enhancing urban residents' awareness of gas safety and enabling them to check the safety of the gas pipeline and related appliances in daily life. This not only protects the lives of urban residents but also reduces the management burden on the government and related departments, which is conducive to the safety and stable development of urban society. Fire-fighting promotion content is summarized in the following table.
Far-field approximation for hydrodynamic interactions in parallel-wall geometry
A complete analysis is presented for the far-field creeping flow produced by a multipolar force distribution in a fluid confined between two parallel planar walls. We show that at distances larger than several wall separations the flow field assumes the Hele-Shaw form, i.e., it is parallel to the walls and varies quadratically in the transverse direction. The associated pressure field is a two-dimensional harmonic function that is characterized by the same multipolar number m as the original force multipole. Using these results we derive asymptotic expressions for the Green's matrix that represents Stokes flow in the wall-bounded fluid in terms of a multipolar spherical basis. This Green's matrix plays a central role in our recently proposed algorithm [Physica A xx, {\bf xxx} (2005)] for evaluating many-body hydrodynamic interactions in a suspension of spherical particles in the parallel-wall geometry. Implementation of our asymptotic expressions in this algorithm increases its efficiency substantially because the numerically expensive evaluation of the exact matrix elements is needed only for the neighboring particles. Our asymptotic analysis will also be useful in developing hydrodynamic algorithms for wall-bounded periodic systems and implementing acceleration methods by using corresponding results for the two-dimensional scalar potential.
Introduction
Numerical and theoretical investigations of particle motion in suspensions bounded by planar walls require efficient methods for evaluating hydrodynamic interactions in these systems. Examples of phenomena where the hydrodynamic wall effects are important include collective particle motion in quasi-bidimensional colloidal suspensions [1,2,3,4,5], and conformation dynamics of a DNA molecule in a parallel-plate microchannel [6].
Several methods for evaluating hydrodynamic interactions in wall-bounded systems have been proposed. In some studies, the flow reflected from the walls was calculated numerically using either boundary-integral [7] or finite-difference [6] techniques. In a different approach [8], the exact point-force solution for the flow between the walls [9] was used. Wall effects were also included using a multiple-reflection technique [10], and several approximation methods have been proposed [11,12,13]. While all of these methods have their merits, they also have some essential disadvantages, such as a high numerical cost or an insufficient (in many cases unknown) accuracy.
Recently we have derived [14,15] a novel algorithm for evaluating the hydrodynamic friction matrix in a wall-bounded suspension of spheres under creeping-flow conditions (an algorithm based on similar ideas was also developed by Jones [16]). Our Cartesian-representation method relies on transformations between a spherical basis set of solutions of Stokes equations (this set is consistent with the particle geometry) and a Cartesian basis set (which is consistent with the wall geometry). The algorithm provides highly accurate results for multiparticle friction and mobility matrices.
Using our approach, we have obtained several interesting numerical results. In particular, we have shown that the friction matrix undergoes a crossover from the quasi-three-dimensional to quasi-two-dimensional form when the interparticle distance becomes larger than the wall separation H. We have also observed an unusually large resistance coefficient for a long rigid chain of spheres in transverse motion (with respect to the orientation of the chain) in a narrow, wall-bounded space. Since both these effects involve flow on the length scale l ≫ H, they are not captured by the usual single-wall superposition approximation which does not properly take the far-field flow into account (as demonstrated in [14]).
Large-scale studies of particle dynamics in the two-wall geometry require efficient simulation algorithms. In our approach [14,15] the most expensive part is evaluation of the Green's matrix G in the multipolar representation. This matrix is a key quantity in our algorithm-its elements correspond to the coefficients in the expansion of the hydrodynamic Green's tensor for the wall-bounded system into multipolar basis fields. The inverse of the Green's matrix combined with the one-particle reflection matrices yields the multiparticle hydrodynamic friction matrix.
In our algorithm [14,15] the matrix G is expressed in terms of lateral Fourier integrals with respect to the two-dimensional wave vector in a plane parallel to the walls. Evaluation of these integrals is especially difficult for widely separated particles due to the oscillatory character of the integrands. In the present paper we derive much simpler asymptotic formulas for the matrix G in the far-field regime. When the particle separation is sufficiently large, these formulas can be used instead of the Fourier integrals, resulting in a significant reduction of numerical cost (and in other important simplifications).
Our analysis of the asymptotic form of the matrix G relies on the observation that in the far-field regime the velocity field in the space between the walls assumes a simple Hele-Shaw (i.e. the lubrication) form. Accordingly, the flow field has only the lateral components and it varies quadratically across the space between the walls. Such a flow field is entirely determined by the corresponding pressure field, which is a two-dimensional harmonic function that depends only on the lateral coordinates. It follows that at large distances r ≫ H, the full three-dimensional hydrodynamic problem is reduced to a much simpler two-dimensional scalar problem for the pressure.
This paper is organized as follows. Our method [14,15] for evaluating manyparticle hydrodynamic interactions in the parallel-wall geometry is summarized in Secs. 2 and 3. Section 2 recalls the induced-force formulation of the problem, and Sec. 3 summarizes the force-multipole expansion method. The main theoretical results of the present analysis are given in Sec. 4, where the Hele-Shaw approximation for the far-field flow is discussed, and explicit expressions for Green's matrix G are derived. In Sec. 5 we present some results of numerical calculations. We show the dependence of Green's matrix elements on the interparticle distance, and we illustrate the role of their farfield behavior in the description of hydrodynamic interactions in rigid arrays of spheres. Concluding remarks are given in Sec. 6, and some technical details are presented in the appendices.
Hydrodynamic resistance
We consider the motion of N spherical particles of radius a, suspended in a fluid of viscosity η, under creeping-flow conditions. The system is bounded by two planar parallel walls at the positions z = 0 and z = H, where r = (x, y, z) are the Cartesian coordinates. The centers of particles i = 1 . . . N are at positions R_i = (X_i, Y_i, Z_i), and the translational and rotational particle velocities are U_i and Ω_i. The external forces and torques acting on the particles are denoted by F_i and T_i. It is assumed that the flow field satisfies the no-slip boundary conditions on the particle surfaces and the walls.
For a system of spheres undergoing translational and rotational rigid-body motion with no external flow, the particle dynamics is characterized by the resistance matrix

$$\boldsymbol{\zeta} = \begin{bmatrix} \boldsymbol{\zeta}^{\mathrm{tt}} & \boldsymbol{\zeta}^{\mathrm{tr}} \\ \boldsymbol{\zeta}^{\mathrm{rt}} & \boldsymbol{\zeta}^{\mathrm{rr}} \end{bmatrix}, \tag{1}$$

defined by the linear relation

$$\begin{bmatrix} \mathbf{F} \\ \mathbf{T} \end{bmatrix} = -\boldsymbol{\zeta} \cdot \begin{bmatrix} \mathbf{U} \\ \boldsymbol{\Omega} \end{bmatrix}, \tag{2}$$

where F, T, U, and Ω stand for the sets of forces, torques, and translational and rotational velocities of all N particles. The dot in the above equation denotes the matrix multiplication and contraction of the Cartesian tensorial components of the resistance matrix. Our goal is to calculate the resistance matrix ζ, or its inverse, the mobility matrix µ.
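The relation between the two matrices is a plain linear-algebra inversion. The toy sketch below illustrates only that step, using an arbitrary symmetric positive-definite stand-in for ζ rather than a matrix assembled by the actual method.

```python
import numpy as np

# Toy illustration of the resistance-to-mobility inversion mu = zeta^{-1}.
# For one sphere, zeta couples (U, Omega) -> (F, T) through 6x6 blocks.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
zeta = A @ A.T + 6 * np.eye(6)      # stand-in: symmetric positive definite
mu = np.linalg.inv(zeta)            # mobility matrix

U_Omega = np.zeros(6)
U_Omega[0] = 1.0                    # unit translation along x
F_T = -zeta @ U_Omega               # forces/torques resisting the motion
print(np.allclose(-mu @ F_T, U_Omega))  # True: mu recovers the velocities
```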
Our method [14,15] for evaluating these quantities is outlined below.
Induced-force formulation
The effect of the suspended particles on the surrounding fluid can be described in terms of the induced-force distributions on the particle surfaces. These distributions can be written in the form

$$\mathbf{F}_i(\mathbf{r}) = \delta_{S_a}(\mathbf{r} - \mathbf{R}_i)\,\mathbf{f}_i(\mathbf{r} - \mathbf{R}_i), \tag{3}$$

where

$$\delta_{S_a}(\mathbf{r}) = a^{-2}\,\delta(r - a). \tag{4}$$
By the definition of the induced force [17,18,19], the flow field

$$\mathbf{u}(\mathbf{r}) = \sum_{i=1}^{N} \int \mathbf{T}(\mathbf{r},\mathbf{r}') \cdot \mathbf{F}_i(\mathbf{r}')\,\mathrm{d}\mathbf{r}' \tag{5}$$

is identical to the velocity field in the presence of the moving particles. Here T(r, r′) is the Green's function for the Stokes flow in the presence of the boundaries; the Green's function T(r, r′) is decomposed,

$$\mathbf{T}(\mathbf{r},\mathbf{r}') = \mathbf{T}_0(\mathbf{r} - \mathbf{r}') + \mathbf{T}'(\mathbf{r},\mathbf{r}'), \tag{6}$$

into the Oseen tensor

$$\mathbf{T}_0(\mathbf{r}) = \frac{1}{8\pi\eta}\,\frac{\mathbf{1} + \hat{\mathbf{r}}\hat{\mathbf{r}}}{r} \tag{7}$$

and the part T′(r, r′) that describes the flow reflected from the walls. In Eq. (5) it is assumed that the particles move with given velocities, but no external flow is imposed.
The resistance relation (2) is linked to the induced-force distributions (3) through the expressions

$$\mathbf{F}_i = \int \mathbf{F}_i(\mathbf{r})\,\mathrm{d}\mathbf{r}, \qquad \mathbf{T}_i = \int (\mathbf{r} - \mathbf{R}_i) \times \mathbf{F}_i(\mathbf{r})\,\mathrm{d}\mathbf{r} \tag{8}$$

for the total force and torque, respectively. To determine the resistance matrix (1) we thus need to evaluate the induced forces (3) for given translational and angular velocities of the particles.
Boundary-integral equations for the induced forces
For a system of particles moving with the translational and angular velocities U_i and Ω_i, the induced-force distribution (3) can be obtained from the boundary-integral equation

$$\hat{Z}_i^{-1}\mathbf{f}_i(\mathbf{r}) + \int_{S_i} \mathbf{T}'(\mathbf{r},\mathbf{r}') \cdot \mathbf{f}_i(\mathbf{r}')\,\mathrm{d}S' + \sum_{j \neq i} \int_{S_j} \mathbf{T}(\mathbf{r},\mathbf{r}') \cdot \mathbf{f}_j(\mathbf{r}')\,\mathrm{d}S' = \mathbf{v}^{\mathrm{rb}}_i(\mathbf{r}), \qquad \mathbf{r} \in S_i, \tag{9}$$

where v^rb_i is the rigid-body velocity field associated with the particle motion, and S_i is the surface of particle i. In the boundary-integral equation (9), Z_i denotes the one-particle scattering operator that describes the response of an individual particle to an external flow in an unbounded space. This operator is defined by the linear relation

$$\mathbf{F}_i(\mathbf{r}) = -\hat{Z}_i\,\mathbf{v}^{\mathrm{in}}_i(\mathbf{r}), \tag{10}$$

where v^in_i is the velocity incident to particle i. For specific particle models (e.g., rigid particles or drops), explicit expressions for the operator Z_i are known [20,21,22].
Spherical basis fields
As in a standard force-multipole approach [23,24] the boundary-integral equation (9) is transformed into a linear matrix equation by projecting it onto a spherical basis of Stokes flow. To this end we use the reciprocal basis sets defined by Cichocki et al. [21]. We introduce, however, a slightly different normalization to exploit the full symmetry of the problem.
We use here a convenient normalization introduced in [15], which emphasizes various symmetries of the problem. The singular and nonsingular spherical basis fields are written as

$$\mathbf{v}^-_{lm\sigma}(\mathbf{r}) = \mathbf{V}^-_{lm\sigma}(\mathbf{r}), \tag{12a}$$
$$\mathbf{v}^+_{lm\sigma}(\mathbf{r}) = \mathbf{V}^+_{lm\sigma}(\mathbf{r}), \tag{12b}$$

with l = 1, 2, …, m = −l, …, l, and σ = 0, 1, 2; the fields (12a) decay at infinity, and the fields (12b) are regular at r = 0. Explicit expressions for the functions V^±_{lmσ} in this normalization are given in Appendix A. We note that both in our present and in the original normalization [21], the basis fields v^−_{lmσ} satisfy the identity [25]

$$\eta\,\mathbf{T}_0(\mathbf{r} - \mathbf{r}') = \sum_{lm\sigma} \mathbf{v}^-_{lm\sigma}(\mathbf{r})\,\mathbf{v}^{+*}_{lm\sigma}(\mathbf{r}'), \qquad r > r', \tag{13}$$

where T_0(r − r′) is the Oseen tensor (7). Relation (13) assures that the Lorentz reciprocal symmetry of Stokes flow is reflected in the symmetry of the resulting matrix representation of the problem [24].
Following [21] we also introduce the reciprocal basis fields w^±_{lmσ}(r), defined by the orthogonality relations of the form

$$\langle \delta_{S_a}\,\mathbf{w}^\pm_{lm\sigma} \,|\, \mathbf{v}^\pm_{l'm'\sigma'} \rangle = \delta_{ll'}\,\delta_{mm'}\,\delta_{\sigma\sigma'}. \tag{14}$$

Here

$$\langle \mathbf{A} \,|\, \mathbf{B} \rangle = \int \mathbf{A}^*(\mathbf{r}) \cdot \mathbf{B}(\mathbf{r})\,\mathrm{d}\mathbf{r} \tag{15}$$

is the inner product, the asterisk denotes the complex conjugate, and δ_{S_a} is defined in Eq. (4). The reciprocal basis fields and the bra-ket notation (15) allow us to conveniently represent expansions of Stokes flow fields into the complete sets of nonsingular and singular basis fields (12). In particular, any Stokes flow u(r) that is nonsingular in the neighborhood of a point r = R_i has an expansion

$$\mathbf{u}(\mathbf{r}) = \sum_{lm\sigma} c_i(lm\sigma)\,\mathbf{v}^+_{lm\sigma}(\mathbf{r} - \mathbf{R}_i), \tag{16}$$

where

$$c_i(lm\sigma) = \langle \delta_{S_a}\,\mathbf{w}^+_{lm\sigma}(\mathbf{r} - \mathbf{R}_i) \,|\, \mathbf{u} \rangle. \tag{17}$$
Matrix representation
The matrix representation of the boundary-integral equation (9) is obtained using the multipolar expansion of the induced-force distributions (3),

$$\mathbf{F}_i(\mathbf{r}) = \eta\,\delta_{S_a}(\mathbf{r} - \mathbf{R}_i) \sum_{lm\sigma} f_i(lm\sigma)\,\mathbf{w}^+_{lm\sigma}(\mathbf{r} - \mathbf{R}_i). \tag{18}$$

The multipolar moments in the above expression are given by the projection

$$f_i(lm\sigma) = \eta^{-1}\,\langle \mathbf{v}^+_{lm\sigma}(\mathbf{r} - \mathbf{R}_i) \,|\, \mathbf{F}_i \rangle, \tag{19}$$

according to the orthogonality condition (14). The definition (19) of the multipolar expansion is justified by the identity

$$\int \mathbf{T}_0(\mathbf{r} - \mathbf{r}') \cdot \mathbf{F}_i(\mathbf{r}')\,\mathrm{d}\mathbf{r}' = \sum_{lm\sigma} f_i(lm\sigma)\,\mathbf{v}^-_{lm\sigma}(\mathbf{r} - \mathbf{R}_i), \qquad |\mathbf{r} - \mathbf{R}_i| > a, \tag{20}$$

which follows from the representation (13) of the Oseen tensor. Equations (18) and (20) indicate that the multipolar moments f_i(lmσ) are identical (apart from the trivial factor η) to the expansion coefficients of the flow field scattered by an isolated particle in unbounded space into the singular basis fields v^−_{lmσ}.
To obtain a linear matrix equation for the set of force multipolar moments f_i(lmσ), representation (18) is inserted into the boundary-integral equation (9), and the resulting expression is expanded into the nonsingular basis fields,

$$\sum_{j=1}^{N} \sum_{l'm'\sigma'} M_{ij}(lm\sigma \,|\, l'm'\sigma')\,f_j(l'm'\sigma') = c_i(lm\sigma). \tag{21}$$

For a particle moving in a quiescent fluid, the coefficients on the right side are nonzero only for l = 1 and σ = 0, 1. The matrix M in Eq. (21) consists of three contributions corresponding to the three terms on the left side of Eq. (9),

$$M_{ij}(lm\sigma \,|\, l'm'\sigma') = \delta_{ij}\,Z^{-1}_i(lm\sigma \,|\, l'm'\sigma') + \delta_{ij}\,G'_{ii}(lm\sigma \,|\, l'm'\sigma') + (1 - \delta_{ij})\,G_{ij}(lm\sigma \,|\, l'm'\sigma'). \tag{22}$$

Using the bra-ket notation these contributions can be expressed in the form

$$Z^{-1}_i(lm\sigma \,|\, l'm'\sigma') = \eta\,\langle \delta_{S_a}\mathbf{w}^+_{lm\sigma}(\mathbf{r} - \mathbf{R}_i) \,|\, \hat{Z}^{-1}_i \,|\, \delta_{S_a}\mathbf{w}^+_{l'm'\sigma'}(\mathbf{r} - \mathbf{R}_i) \rangle, \tag{23}$$
$$G'_{ij}(lm\sigma \,|\, l'm'\sigma') = \eta\,\langle \delta_{S_a}\mathbf{w}^+_{lm\sigma}(\mathbf{r} - \mathbf{R}_i) \,|\, \hat{\mathbf{T}}' \,|\, \delta_{S_a}\mathbf{w}^+_{l'm'\sigma'}(\mathbf{r} - \mathbf{R}_j) \rangle, \tag{24}$$
$$G_{ij}(lm\sigma \,|\, l'm'\sigma') = \eta\,\langle \delta_{S_a}\mathbf{w}^+_{lm\sigma}(\mathbf{r} - \mathbf{R}_i) \,|\, \hat{\mathbf{T}} \,|\, \delta_{S_a}\mathbf{w}^+_{l'm'\sigma'}(\mathbf{r} - \mathbf{R}_j) \rangle. \tag{25}$$

The matrix Z^{-1}_i is associated with the one-particle operator Ẑ^{-1}_i in equation (9), and it relates the force multipoles f_i(l′m′σ′) induced on particle i to the coefficients in the expansion of the flow field incoming to this particle into the nonsingular spherical basis fields (12b). By the spherical symmetry, this term is diagonal in the indices l and m and is independent of m. The Green's matrices G′_ij(lmσ | l′m′σ′) and G_ij(lmσ | l′m′σ′) are associated with the integral operators that involve the kernels T′(r, r′) and T(r, r′). Using the orthogonality relations (14) one can show that the elements of these matrices correspond to the expansion of the flow produced by a force multipole centered at R_j into the nonsingular basis (12b) centered at R_i.
Explicit expressions for the single-particle reflection matrix Z^{-1}_i are well known [21,26]. Quadrature formulas for the Green's matrix G_ij have been derived in our recent publication [15], where the matrix elements G_ij(lmσ | l′m′σ′) are represented as a combination of the free-space Green's matrix [26,24] and the wall contribution G′_ij(lmσ | l′m′σ′) that is given in the form of a Hankel transform of a product of several simple matrices. The Hankel transform arises from angular integration of lateral Fourier modes of Stokes flow.
The many-particle resistance matrix (1) can be obtained by solving Eq. (21) and projecting the induced force multipoles onto the total force and torque (8). Explicit expressions for the resistance matrix in terms of the generalized friction matrix M −1 are given in [15]. In numerical applications, the system of linear equations (21) is truncated at a given multipolar order l, and the resulting approximate friction matrix is supplemented with a lubrication correction (as described in [15]).
Far-field approximation
Calculation of the exact matrix elements $G_{ij}(lm\sigma\,|\,l'm'\sigma')$ by our Cartesian-representation method [15] requires numerical evaluation of Hankel transforms that involve the Bessel functions $J_{m-m'}(k\varrho_{ij})$. Here $k$ is the magnitude of the lateral wave vector, $\varrho_{ij}=|\boldsymbol\varrho_i-\boldsymbol\varrho_j|$, and $\boldsymbol\varrho_i$ denotes the lateral position of particle $i$. For large interparticle distances $\varrho_{ij}$, the factor $J_{m-m'}(k\varrho_{ij})$ undergoes rapid oscillations as a function of $k$. Thus, evaluation of the Fourier integrals in the Hankel transforms is numerically expensive for such configurations.
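As an illustration of why these integrals become costly, the sketch below integrates a smooth, exponentially decaying stand-in for the wall kernel (not the actual kernel of [15]) against the oscillatory Bessel factor; the number of quadrature points needed to resolve the oscillations grows roughly linearly with the lateral distance.

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import trapezoid

def hankel_integral(rho, nu=0, H=1.0):
    """Integrate exp(-k H) * J_nu(k rho) over k with a resolution tied to the oscillation scale."""
    k_max = 60.0 / H                                        # the decaying factor kills larger k
    n_k = max(2000, int(40 * k_max * rho / (2 * np.pi)))    # ~40 points per Bessel oscillation
    k = np.linspace(0.0, k_max, n_k)
    integrand = np.exp(-k * H) * jv(nu, k * rho)
    return trapezoid(integrand, k), n_k

for rho in (1.0, 5.0, 20.0, 100.0):
    val, n_k = hankel_integral(rho)
    # exact value of this model integral is 1/sqrt(H**2 + rho**2), useful as a sanity check
    print(f"rho = {rho:6.1f}:  integral = {val: .4e}   k-points used = {n_k}")
```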
In the following sections we derive explicit asymptotic expressions for the matrix elements G ij (lmσ | l ′ m ′ σ ′ ) at large interparticle distances ̺ ij ≫ H. As we will see, these expressions have a very simple form, and do not require evaluation of the Fourier integrals.
Hele-Shaw form of the far-field flow
Our asymptotic analysis relies on the observation that in the far-field regime the flow between two parallel walls assumes the Hele-Shaw form. Accordingly, the asymptotic pressure field $p=p^{\rm as}$ varies only in the lateral direction, and the associated flow field has the lubrication form (27),

$\mathbf{u}(\mathbf{r}) = -\tfrac{1}{2}\eta^{-1}\,z(H-z)\,\nabla p^{\rm as}(\boldsymbol\rho),$

where $\nabla$ is the two-dimensional gradient operator with respect to the lateral position $\boldsymbol\rho=(x,y)$. By the incompressibility of the flow field (27), the pressure field $p^{\rm as}$ satisfies the two-dimensional Laplace's equation (29). The asymptotic expressions (27) and (29) can be obtained [27] by expanding the boundary-value problem for Stokes flow in the space between the walls in the small parameter $H/\rho\ll 1$, where $\rho$ is the distance from the force distribution that generates the fluid motion. Since the velocity field (27) itself satisfies the Stokes equations and boundary conditions exactly, one can show that the higher-order terms in the asymptotic expansion vanish. This property indicates that the correction terms are subdominant [28], which in turn suggests that the asymptotic behavior (27) and (29) is approached exponentially. This conclusion is consistent with the direct analysis of the asymptotic form of the Green's function in the space between the walls by Liron and Mochon [9] (see the discussion in Sec. 4.4 below).
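As a quick consistency check on the lubrication form quoted above (written here to match Eq. (C.1)), the snippet below verifies symbolically, for one particular two-dimensional harmonic pressure, that the parabolic profile is divergence free, satisfies the Stokes equations, and vanishes on both walls. The chosen pressure field is only an example, not the field produced by any specific multipole.

```python
import sympy as sp

x, y, z, H, eta = sp.symbols("x y z H eta", positive=True)

# an example two-dimensional harmonic pressure: p = Re(1/(x + i y))
p = x / (x**2 + y**2)
assert sp.simplify(sp.diff(p, x, 2) + sp.diff(p, y, 2)) == 0      # 2D Laplace equation (29)

# Hele-Shaw / lubrication velocity profile of Eq. (27)
ux = -z * (H - z) / (2 * eta) * sp.diff(p, x)
uy = -z * (H - z) / (2 * eta) * sp.diff(p, y)
uz = sp.Integer(0)

div_u = sp.simplify(sp.diff(ux, x) + sp.diff(uy, y) + sp.diff(uz, z))
stokes_x = sp.simplify(eta * (sp.diff(ux, x, 2) + sp.diff(ux, y, 2) + sp.diff(ux, z, 2)) - sp.diff(p, x))
stokes_y = sp.simplify(eta * (sp.diff(uy, x, 2) + sp.diff(uy, y, 2) + sp.diff(uy, z, 2)) - sp.diff(p, y))

print(div_u, stokes_x, stokes_y)          # all three print 0
print(ux.subs(z, 0), ux.subs(z, H))       # no-slip on both walls
```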
Asymptotic basis sets
To find the far-field form of the velocity field produced by induced-force multipoles, and to obtain the corresponding asymptotic expressions for the elements of the Green's matrix $G_{ij}(lm\sigma\,|\,l'm'\sigma')$, it is convenient to define appropriate basis sets of Hele-Shaw flow and pressure fields. The sets of singular and nonsingular pressures are defined by relation (30), with $m=0,\pm1,\pm2,\ldots$, in terms of the two-dimensional harmonic basis functions (31). The associated Hele-Shaw basis velocity fields (32) are obtained from these pressures according to Eq. (27).
Below we list several useful relations for the harmonic functions (31). First, we have the diagonal representation (33) for the Green's function, which is analogous to the representation (13) of the Oseen tensor. Next, we also have the displacement theorem (34), where $\boldsymbol\varrho_{ij}=\boldsymbol\varrho_i-\boldsymbol\varrho_j$ and the displacement matrix is given by Eq. (35). We note that, due to the presence of the Heaviside step function in Eq. (35), the scalar fields with the same sign of the indices $m$ and $m'$ do not couple in the displacement relation (34). We also note that the matrix (35) satisfies the symmetry relation (37). As a direct consequence of the displacement theorem (34) for the scalar pressure fields, we have the corresponding displacement relation (38) for the Hele-Shaw basis flows (32). The term with $m=0$ in that relation vanishes because $v^{{\rm as}+}_0\equiv 0$ according to Eqs. (31b) and (32). The prime at the summation sign in (38) has been introduced to emphasize that this term is omitted.
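The scalar displacement theorem used here has the same structure as the classical addition theorem for two-dimensional harmonics. The short numerical check below illustrates that structure in complex notation: a singular harmonic $(z-z_j)^{-m}$ centered at one point is re-expanded in nonsingular powers $(z-z_i)^k$ about a different center. The normalization differs from the basis (31), so this is an illustration of the mechanism only, not the matrix $S$ of Eq. (35).

```python
# Re-expansion of a singular 2D harmonic about a shifted center (complex notation).
m = 2
z_i, z_j = 0.0 + 0.0j, 3.0 + 1.0j           # the two expansion centers
z = z_i + (0.8 + 0.4j)                       # field point with |z - z_i| < |z_i - z_j|
z_ij = z_i - z_j

def binom_general(a, k):
    """Generalized binomial coefficient C(a, k) for possibly negative a."""
    out = 1.0
    for n in range(k):
        out *= (a - n) / (n + 1)
    return out

exact = (z - z_j) ** (-m)
series = sum(binom_general(-m, k) * z_ij ** (-m - k) * (z - z_i) ** k for k in range(40))
print(abs(series - exact))                   # ~1e-15: singular -> nonsingular re-expansion converges
```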
In the following section we will derive a diagonal representation (analogous to (13) and (33)) for the hydrodynamic Green's tensor describing the asymptotic far-field response of the fluid confined between walls to a point force.
Asymptotic Green's tensor
An explicit expression for the far-field flow produced by a point force in the space between the walls has been derived by Liron and Mochon [9] (see also [29]). According to their results, the far-field flow produced by a force $\mathbf F$ applied at the position $(0,0,z')$ can be expressed in the form (39). This relation can also be obtained by a direct expansion of the boundary-value problem in the small parameter $H/\rho$ [27].
Relation (39) indicates that the correction to the far-field O(ρ −2 ) asymptotic behavior of the fluid velocity u decays exponentially with ρ. Moreover, the vertical component of the force F does not contribute to the O(ρ −2 ) behavior.
Equation (39) can be rephrased as an expression (40) for the asymptotic form $T^{\rm as}(\mathbf r,\mathbf r')$ of the full Green's function (6), where $\mathbf r'=\boldsymbol\rho'+z'\hat{\mathbf e}_z$. One of the gradient operators in this formula has been applied to the primed coordinates to emphasize the Lorentz symmetry of the Green's tensor, where the dagger denotes the transpose of the tensor. Due to the translational invariance of the system in the lateral directions, the Green's function (40) satisfies the identity (42), where the vector $\boldsymbol\varrho$ has only lateral components.
Using Eqs. (32) and (33), and noting that the Green's function (40) is quadratic both in the primed and in the unprimed transverse variables, we find the relation (43a), which is analogous to the diagonal representation (13) of the Oseen tensor. Equation (43a), combined with the displacement theorem (38) for the Hele-Shaw basis fields and identity (42), yields the symmetric representation (44) of the asymptotic Green's tensor (40).
Asymptotic form of the two-wall Green's matrix
The asymptotic form of the matrix elements (26) can be obtained by projecting relation (44) onto the reciprocal basis fields $w^+_{lm\sigma}$ centered at the points $\mathbf R_i$ and $\mathbf R_j$. The resulting expression involves the matrix elements $C(Z_i;lm\sigma)$ defined in Eq. (45). The elements (45) are diagonal in the azimuthal number $m$ by cylindrical symmetry, they depend only on the vertical coordinate $Z_i$ of the point $\mathbf R_i=\boldsymbol\varrho_i+Z_i\hat{\mathbf e}_z$, and they are real. Using these properties, the asymptotic form (46) of the wall Green's matrix (26) is obtained. Due to the symmetric structure of the expression (46) and the symmetry property (37) of the scalar displacement matrix $S^{+-}_{\rm cyl}$, the Lorentz symmetry is manifest. We note that the presence of the Heaviside step function in relation (35) implies a corresponding restriction on the asymptotic matrix elements (46). The physical interpretation of the matrix $C$ follows from the expression (49), which results from Eqs. (16) and (45). The matrix $C(Z;lm\sigma)$ thus describes the transformation from the representation of the flow in terms of the nonsingular Hele-Shaw basis $v^{{\rm as}+}_m(\mathbf r-\boldsymbol\varrho_i)$, centered at the lateral position $\boldsymbol\varrho_i$, to the spherical representation (12b) centered at $\mathbf R_i$.
Multipolar flow fields
An alternative interpretation of the matrix $C$ is obtained by considering the far-field flow (50) produced in the space between the walls by the multipolar force density (51) centered at $\mathbf R_2$. By inserting representation (43a), specified for the shifted asymptotic Green's function (42) with $\boldsymbol\varrho=\boldsymbol\varrho_2$, into (50) and using the definition (45) of the matrix $C$, we find relation (52). Thus, the matrix element $C(Z_2;lm\sigma)$ represents the amplitude of the Hele-Shaw basis field $v^{{\rm as}-}_m$ in the far-field multipolar velocity (50). Only one term contributes to this flow according to Eq. (52), because of the cylindrical symmetry of the problem.
The asymptotic multipolar flow fields (50) can also be expressed in terms of the matrix elements (46). To this end, the right side of Eq. (50) is expanded in the spherical basis fields (12b) with the help of identity (16). The expansion yields a relation in which $G^{\rm as}_{12}$ is given by Eq. (26) with the Green's function $T$ replaced by $T^{\rm as}$. This expression relates the asymptotic flow $u^{\rm as}_{lm\sigma}$ centered at the position $\mathbf R_2$ to the spherical basis fields centered at a different position $\mathbf R_1$.
We note that for each $m$ only a few force multipoles (51) produce a nonzero far-field velocity (50). This behavior results from the properties of the matrix $C(Z_2;lm\sigma)$ that appears in relation (52); the form of this matrix is analyzed in Sec. 4.6 below. A further discussion of the multipolar fields in the space between the walls is given in Appendix B.
Explicit expressions for the transformation matrix C
The general structure of the matrix $C$ can be inferred using scaling arguments. According to Eq. (12b), the spherical basis fields $v^+_{lm\sigma}(\mathbf r-\mathbf R_i)$ are homogeneous functions of order $l+\sigma-1$ of the relative-position vector $\mathbf r_i=\mathbf r-\mathbf R_i$. Similarly, Eqs. (31b) and (32) imply that the Hele-Shaw basis fields $v^{{\rm as}+}_m(\mathbf r-\boldsymbol\varrho_i)$ are combinations of homogeneous functions of orders $|m|-1$, $|m|$, and $|m|+1$ of $\mathbf r_i$. Since the coefficients $C(Z_i;lm\sigma)$ are independent of $\mathbf r_i$, relation (49) implies that the nonzero elements of $C(Z_i;lm\sigma)$ satisfy the condition

$|m|-1 \le l+\sigma-1 \le |m|+1.$    (54)

A detailed analysis of relation (49) reveals that the nonzero elements of the matrix $C$ can be written in the form [27]

$C(Z;\,l\,{\pm}\mu\,\sigma) = B^{\pm}_{l-\mu\,\sigma}(\mu;Z),\qquad \mu=|m|\ge 1,$    (55)

where $B^{\pm}_{\lambda\sigma}(\mu;Z)$ are the elements of the $3\times3$ matrix given in Eq. (56). The range $\lambda=0,1,2$ of the index $\lambda=l-|m|$ in Eq. (56) results from the conditions $|m|\le l$ and (54). All other elements of the matrix $C$ vanish.
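The scaling condition can be made concrete with a few lines of code that enumerate, for each azimuthal order m, the multipoles (l, σ) allowed to have a nonzero coefficient C(Z; lmσ); the loop simply applies the homogeneity-matching argument stated above, and m = 0 is omitted since the corresponding Hele-Shaw field vanishes identically.

```python
# Enumerate the (l, sigma) pairs compatible with the homogeneity-matching condition:
# l + sigma - 1 must equal |m| - 1, |m|, or |m| + 1, together with |m| <= l.
l_max = 4
for m in range(1, 4):
    allowed = [
        (l, sigma)
        for l in range(abs(m), l_max + 1)
        for sigma in (0, 1, 2)
        if abs(m) - 1 <= l + sigma - 1 <= abs(m) + 1
    ]
    print(f"m = {m}:  nonzero C(Z; l m sigma) only for (l, sigma) in {allowed}")
```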
We close our theoretical considerations with a remark that the asymptotic form $G^{\rm as}_{ij}$ of the Green's matrix $G_{ij}$ is approached exponentially for $R_{ij}\to\infty$, because the Liron-Mochon formula (39) is exponentially accurate at large distances. As in relation (39), the length scale for this approach is set by the wall separation $H$. The asymptotic expression (46) should thus be very accurate when the interparticle distance $R_{ij}$ is larger than several wall separations $H$. This conclusion is supported by our numerical results discussed in the following section.
Matrix elements
A typical behavior of the Green's matrix $G_{ij}$ is illustrated in Figs. 1 and 2. The results are shown for the matrix elements (58). We present our results in the form of the rescaled elements $\tilde G_{12}$ defined by relation (59). For those values of the parameters $l$ and $\mu$ for which the matrix elements (59) do not vanish, the factor $\Phi^-$ corresponds to the far-field behavior of $G^{\rm as}_{12}(1\,{\mp}1\,0\,|\,l\,{\pm}\mu\,\sigma)$, according to Eqs. (35) and (46). In the asymptotic regime $\varrho_{12}\gg H$ the rescaled elements $\tilde G^{\rm as}_{12}(1\,{\mp}1\,0\,|\,l\,{\pm}\mu\,\sigma)$ depend only on the vertical coordinates $Z_1$ and $Z_2$. The nonzero asymptotic elements are quadratic functions of the vertical coordinate $Z_1$; they are at most quadratic in $Z_2$, but can also be linear or constant in this variable, as indicated by Eqs. (46) and (56). The far-field flow (50) is related to these elements by Eq. (B.9). Figure 1 illustrates the behavior of the matrix elements $\tilde G_{12}(1\,{-}1\,0\,|\,l\,1\,\sigma)$. All these functions approach nonzero asymptotic values $\tilde G^{\rm as}_{12}(1\,{-}1\,0\,|\,l\,1\,\sigma)\neq 0$ for large interparticle distances $\varrho_{12}\gg H$, according to Eqs. (46) and (54). The corresponding behavior of the unscaled matrix elements (58) follows from Eqs. (31a), (35), and (46).
The matrix elements (61) are directly related to the multipolar flow fields (50), as indicated by Eq. (B.7). Therefore, we find that Eq. (61) corresponds to the slowest possible far-field decay of the flow produced by a multipolar force distribution. The multipoles (60) include the horizontal Stokeslet ($l=1$, $\sigma=0$), the rotlet ($l=1$, $\sigma=1$), the stresslet ($l=2$, $\sigma=0$), and three other multipoles, one of which has the spherical-harmonics order $l=3$. The numerical results shown in Fig. 1 indicate that the approach of $\tilde G_{12}$ to the asymptotic values is exponential, which is consistent with our discussion in Sec. 4.
Applications in multiparticle hydrodynamic-interactions algorithms
The simplest numerical application of our asymptotic formulas (46), (55), and (56) is to implement them directly in the induced-force-multipole equation (21). To this end, the matrix (26) is represented as the superposition of the long-range asymptotic part and the short-range correction,

$G_{ij}(lm\sigma\,|\,l'm'\sigma') = G^{\rm as}_{ij}(lm\sigma\,|\,l'm'\sigma') + \delta G_{ij}(lm\sigma\,|\,l'm'\sigma').$    (62)

The asymptotic part $G^{\rm as}_{ij}(lm\sigma\,|\,l'm'\sigma')$ can be evaluated from our explicit formulas at a low numerical cost. To obtain the correction term $\delta G_{ij}(lm\sigma\,|\,l'm'\sigma')$, the expression $G_{ij}(lm\sigma\,|\,l'm'\sigma')$ is first calculated using the Cartesian-representation method described in [14,15], and then the asymptotic expression is subtracted from the result. Since the correction term is short ranged, the matrix $\delta G_{ij}(lm\sigma\,|\,l'm'\sigma')$ can be truncated by setting

$\delta G_{ij}(lm\sigma\,|\,l'm'\sigma') = 0$ for $\varrho_{ij}>\varrho_{\rm as}$,    (63)

where the truncation distance $\varrho_{\rm as}$ is of the order of several wall separations $H$. The results shown in Figs. 1 and 2, and other similar tests, indicate that the asymptotic approximation for the Green's matrix is very accurate for $\varrho_{\rm as}\gtrsim 3H$. Thus, the numerically expensive contribution $\delta G_{ij}$ has to be evaluated only for the neighboring particles, at an $O(N)$ cost.
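A schematic of how this splitting might be organized in code is shown below. Here green_asymptotic and green_exact are hypothetical callables standing in for the explicit formulas (46), (55), (56) and for the full Cartesian-representation evaluation; the neighbor search is left as a plain double loop for clarity, so this is a sketch of the bookkeeping, not of the production algorithm.

```python
import numpy as np

def assemble_green_matrix(positions, H, green_asymptotic, green_exact, rho_as_over_H=3.0):
    """Build pair blocks as G = G_as + (G_exact - G_as), with the correction kept only
    for pairs whose lateral separation is below the truncation distance rho_as."""
    N = len(positions)
    rho_as = rho_as_over_H * H
    blocks = [[None] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            G = green_asymptotic(positions[i], positions[j])       # cheap closed form, all pairs
            rho_ij = np.linalg.norm(positions[i][:2] - positions[j][:2])
            if rho_ij < rho_as:                                    # short-range correction only
                G = G + (green_exact(positions[i], positions[j])   # for the few nearby pairs,
                         - green_asymptotic(positions[i], positions[j]))  # hence O(N) overall
            blocks[i][j] = G
    return blocks
```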
To test our asymptotic approach and to illustrate the role of the long- and short-range contributions to the Green's matrix $G$, we consider a benchmark case of a linear rigid array of $N$ touching spheres translating in the center plane of the space between closely spaced walls. The spheres lie on a line parallel to the $x$ direction, and the array moves either in the $x$ (longitudinal) or $y$ (transverse) direction. We focus on the translational friction coefficients per particle, defined in Eq. (64), where $\zeta^{tt}_{ij,\alpha\alpha}$ is the $\alpha\alpha$ component of the translational resistance tensor $\zeta^{tt}_{ij}$ defined in Eq. (2), and $\zeta$ is the one-particle lateral translational resistance coefficient [16,14].
As illustrated in Fig. 3 (see also the discussion in [15,14]), the longitudinal and transverse friction coefficients (64) behave differently. The longitudinal coefficient $\zeta^{xx}_C$ decreases with the length of the array $N$, while the transverse coefficient $\zeta^{yy}_C$ increases with $N$. For tight configurations with small gaps between the wall and particle surfaces (the case shown in Fig. 3), the decrease of $\zeta^{xx}_C$ is moderate, because the friction force is dominated by the local resistance due to the dissipation in these gaps. In contrast, the increase of $\zeta^{yy}_C$ is large due to collective long-range effects.
The mechanism of these collective effects can be explained using the results for the pressure field around arrays of length $N=10$ and $20$ plotted in Figs. 4 and 5. The results shown in Figs. 4(a) and 5(a) indicate that for the longitudinal motion the pressure field is only weakly affected by the length of the array, and its magnitude is largest near the array ends. In contrast, the pressure shown in Figs. 4(b) and 5(b) for the transverse motion increases approximately linearly with the array length $N$, and its magnitude is maximal near the chain center. This large pressure amplitude is associated with the flow of the displaced fluid around the ends of the array in the confined space. The flow is significant over a distance that scales with the length of the array $l=2Na$ (where $a$ is the sphere radius).
In the Hele-Shaw regime the pressure gradient is proportional to the fluid velocity; hence, the pressure itself is proportional to N.
To further elucidate the effects of the short-range and far-field flow components, the exact numerical results for the resistance coefficients (64) are compared in Fig. 3 with the asymptotic approximation (62) and (63). We also show results obtained using a much cruder approximation (66), in which the whole Green's matrix is truncated at a certain distance $\rho_0$. Our numerical calculations indicate that the truncation (66) yields poor results. The far-field flow contribution is especially important for the transverse motion of the array because of a positive-feedback effect: for this motion, the dipolar Hele-Shaw flow field generated by a given particle acts as a back flow on the other particles. This back flow, in turn, produces an increase in the induced force distribution that generates the dipolar flow. This back-flow mechanism, resulting in the large transverse resistance, is consistent with our discussion of the pressure field shown in Figs. 4(b) and 5(b).
In contrast to the crude approximation (66), a truncation of the short-range part (63) of the Green's matrix yields accurate results already with moderate values of the truncation parameter ̺ as . The results shown in Fig. 3 indicate that the truncations at ̺ as /H = 1 for the longitudinal motion and at ̺ as /H = 2 for the transverse motion are sufficient. The results with ̺ as /H ≥ 3 (not shown) are essentially indistinguishable from the exact results.
Conclusions
Our paper presents a complete analysis of the far-field flow produced by an arbitrary force multipole in the space bounded by two parallel planar walls.
We have shown that a force multipole characterized by the multipolar numbers $lm\sigma$ produces, at large distances, a Hele-Shaw flow driven by a two-dimensional multipolar pressure field of the azimuthal order $m$. The amplitude of this flow has been explicitly obtained for an arbitrary order of the source force multipole.
Our asymptotic results were applied to evaluate the multipolar matrix elements $G_{ij}(lm\sigma\,|\,l'm'\sigma')$ of the Green's tensor for Stokes flow in the wall-bounded domain. This matrix is used in our recently developed algorithm [14,15] for evaluation of the multiparticle friction tensor $\zeta_{ij}$ in a suspension confined between two parallel walls. The elements of the matrix $G_{ij}$ are equivalent to the expansion coefficients in the displacement theorem for Stokes flow in the bounded domain. Such a displacement theorem relates the flow produced by a force multipole centered at a point $\mathbf R_j$ to nonsingular multipolar flows centered at a point $\mathbf R_i$. We have shown that in the far-field regime the matrix elements $G_{ij}(lm\sigma\,|\,l'm'\sigma')$ can be expressed in terms of much simpler displacement formulas for the two-dimensional scalar potential.
We have found that the matrix $G_{ij}$ achieves its asymptotic behavior when the lateral distance between the centers of the particles $i$ and $j$ exceeds several wall separations $H$. Evaluation of the exact matrix elements in terms of the lateral Fourier integrals derived in [15,14] is thus needed only for the neighboring particles. Therefore, application of the asymptotic expressions in our hydrodynamic-interaction algorithm yields an important improvement of its numerical efficiency. (The far-field contribution to the Green's matrix cannot simply be neglected; for some problems such a crude approximation leads to entirely wrong values of the friction matrix, cf. the discussion in Sec. 5.2.)
Several other important consequences stem from the fact that we have reduced a complex hydrodynamic problem to a simpler problem of a two-dimensional scalar potential. First, since for a scalar potential the multipolar flow fields in a periodic system are known [30], the results of our analysis can be used to develop an algorithm for hydrodynamic interactions in a periodic wall-bounded system. Without the asymptotic expressions, evaluation of the periodic hydrodynamic Green's matrix would be much more difficult, as discussed in [27].
Next, for scalar potentials, fast multipole and PPPM acceleration techniques are well developed [31]. Combined with our asymptotic results, such methods can be applied for fast evaluation of hydrodynamic interactions in wallbounded suspensions. Development of accelerated algorithms for suspensions will require implementation of certain techniques that are specific to multiparticle hydrodynamic systems, e.g. an appropriate preconditioning of the Green's matrix and incorporating the lubrication interactions into the calculation scheme. These techniques were used in accelerated Stokesian-dynamics algorithms for unbounded suspensions [32] and [33]. Our present asymptotic results greatly facilitate development of accelerated algorithms for wall-bounded systems, and our research is currently focused on this problem.
C Far-field pressure distribution
As discussed in Sec. 4, the flow and the pressure fields in the Hele-Shaw asymptotic regime (27) are uniquely related (up to an additive constant in the pressure). Thus, many of the asymptotic formulas, expressed here in terms of the velocity fields, can be translated into the corresponding expressions for the pressure.
This remark applies, in particular, to Eq. (52) for the asymptotic multipolar flow (50). We introduce the asymptotic multipolar pressure field $p^{\rm as}_{lm\sigma}(\mathbf r)$, defined by the relation

$u^{\rm as}_{lm\sigma}(\mathbf r-\boldsymbol\varrho_2;Z_2) = -\tfrac{1}{2}\eta^{-1}\,z(H-z)\,\nabla p^{\rm as}_{lm\sigma}(\boldsymbol\rho-\boldsymbol\varrho_2;Z_2).$    (C.1)

Using Eqs. (30) and (32), the flow-field identity (52) can be transformed into the corresponding pressure identity (C.2). Equation (C.2) can be conveniently used to evaluate the far-field disturbance pressure $p^{\rm as}$ in a many-particle system. This equation describes the asymptotic pressure produced in the far-field regime by a single force multipole (51), as indicated by Eqs. (50) and (C.1). To determine $p^{\rm as}$, the multipolar moments $f_i(lm\sigma)$ of the force distributions (18) induced on the surfaces of particles $i=1,\ldots,N$ are evaluated by solving the force-multipole equations (21). Combining the solution with (C.2) yields the superposition (C.3), with the amplitudes

$Q_i(m) = -\dfrac{6}{\pi H^3}\sum_{l\sigma} C(Z_i;lm\sigma)\,f_i(lm\sigma).$    (C.4)

The contour plots in Figs. 4 and 5 were obtained using this method.
ANCA-associated vasculitis in a case of congenital leptin deficiency
Sir, Leptin (Greek “leptos” = “thin”) is a hormone that helps to regulate metabolism by inhibiting food intake and promoting energy expenditure. In addition to being a key factor in regulating body weight, leptin plays an important role in the regulation of immune system and various other physiological responses.[1] Congenital leptin deficiency, caused by LEP gene mutation and inherited as an autosomal recessive disorder, presents with severe obesity early in life, following a normal birth weight. Affected individuals suffer from hypogonadotropic-hypogonadism, which if untreated leads to delayed puberty and infertility. This form of leptin deficiency is extremely rare with less than 30 patients reported in the literature so far.[2] Leptin deficiency is also suspected to be involved in the pathophysiology of ANCA-associated vasculitis (AAV). Kümpers et al. found that leptin levels are negatively correlated with the disease activity.[3] Further studies are needed to gauge the role of leptin as a potential therapeutic target in the treatment of autoimmune disorders.
A 10-year-old girl, a product of a second-degree consanguineous marriage, first in birth order, presented with multiple painful, red, raised lesions on the trunk and extremities of 5 days' duration. Examination revealed palpable purpura and ulcerations of variable sizes and shapes over the legs, arms, forearms, dorsa of the hands and feet, chest, and abdomen [Figures 1 and 2]. There was a predominant extensor distribution, with koebnerization appreciated at places. Mucosae were not involved. General physical examination revealed moon facies [Figure 3] and morbid obesity, with a weight of 90 kg, height of 137 cm, and a BMI of 47.95 kg/m2. She had difficulty in walking due to excessive weight. Acanthosis nigricans was also noted in the axillae and neck [Figure 4]. The rest of the systemic examination was within normal limits. Investigations revealed neutrophilic leukocytosis (white blood cell count 19,950/cumm; N = 80.5, L = 17, M = 5.9, E = 1). Liver and renal parameters and routine and 24-h urinary examination were unremarkable. Rheumatoid factor, hepatitis B surface antigen, anti-HCV antibodies, HIV ELISA, antinuclear antibody, lupus anticoagulant, and cryoglobulins were also negative. Fasting plasma glucose and insulin levels were normal. Triglycerides were marginally raised. cANCA was positive. According to her parents, the patient had an uneventful antenatal and perinatal period and was apparently normal till the age of 6 months, when they suspected excessive eating and weight gain by the child. Consultation was sought for the same, and the child was evaluated for leptin deficiency in view of the early-onset obesity. Her anthropometric measurements at that time were: weight 10.75 kg (>95th percentile), height 64.4 cm (between 25th and 50th percentile), and BMI 25.84 kg/m2. Baseline investigations, echocardiography, serum cortisol level, and thyroid function tests did not reveal any abnormality. Serum leptin levels were inappropriately low at 0.8 ng/ml (reference range 1.7-10.9 ng/ml). The patient was diagnosed as a case of leptin deficiency but was subsequently lost to follow-up until she presented to us with the purpuric rash. During the intervening period, the patient had normal developmental milestones with no mental retardation or language deficit.
There was also a history of excessive weight gain in her younger, 9-month-old male sibling [Figure 6]. He likewise was of normal weight at birth but was presently obese. His weight was 9 kg (>90th centile), length was 64 cm (10th centile), and BMI was 21.97 kg/m2. On evaluation, he too was diagnosed as having leptin deficiency (serum leptin level: 1.1 ng/ml). Due to unavailability at our center, genetic analysis could not be performed in either of the two cases.
The index case was treated with tapering doses of steroids and showed remarkable improvement of the skin lesions. Endocrinology consultation was sought. She was advised leptin replacement therapy, which could not be administered as the patient could not afford it.
Congenital leptin deficiency is one of the rare causes of early-onset obesity. It was first described in two cousins from an inbred Pakistani kindred by Montague et al.[4] Leptin levels have also been examined in AAV, albeit the difference was not statistically significant.[9] Similarly, studies performed in other forms of systemic vasculitis (Henoch-Schönlein purpura and Behcet's disease) have shown increased levels of leptin during active disease periods.[10,11] Whether leptin has a role in vasculitis, or whether its altered levels in this disorder are a mere coincidental finding, needs to be probed. Larger studies are required to consolidate this finding. Our case emphasizes the need for early detection of congenital leptin deficiency, which, if missed, carries the risk of severe obesity-associated complications (especially type II diabetes).
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
SLEEP quality in patients with psoriatic arthritis and its relationship with disease activity and comorbidities: a cross-sectional study
The assessment of psoriatic arthritis is complex and multidimensional. It is increasingly common to include the patient perspective using patient-reported outcomes. Although some research has explored sleep quality in patients with psoriatic arthritis, most studies have had small sample sizes, failed to assess sleep quality considering the inflammatory process together with the psychological well-being of patients, and have not described any use of sleep medication. Further, research to date has not provided data on the relationship of sleep quality with axial forms. In this context, the objective of this study was to assess sleep quality in patients with psoriatic arthritis and its relationship with clinical characteristics, disease activity, functioning, disease impact, fatigue and psychological status. A cross-sectional study was conducted including 247 consecutive patients with PsA recruited during 2021. Sleep quality was measured using the Pittsburgh Sleep Quality Index. We assessed correlations of Pittsburgh Sleep Quality Index score with peripheral disease activity (Disease Activity Index for PSoriatic Arthritis), axial disease activity (Ankylosing Spondylitis Disease Activity Score-C-reactive protein and Bath Ankylosing Spondylitis Disease Activity Index), functioning (Bath Ankylosing Spondylitis Functional Index and Health Assessment Questionnaire), impact (Psoriatic Arthritis Impact of Disease questionnaire), anxiety, depression (Hospital Anxiety and Depression Scale) and fatigue (Functional Assessment of Chronic Illness Therapy-Fatigue) scores. A multiple linear regression model was constructed with PSQI as the dependent variable and as independent variables those that could influence sleep quality. Nearly two-thirds (63.15%) of patients had poor sleep quality. Poorer sleep quality was associated with being female, higher joint counts, greater peripheral and axial disease activity, fatigue, anxiety and depression, functioning and disease impact (p < 0.001). Multiple linear regression analysis found that pain (β: 0.3; p < 0.007) and fatigue β: − 0.1; p < 0.001 contributed 40% to the sleep quality model. Poor sleep quality was common among patients with psoriatic arthritis. Emotional factors (fatigue, anxiety) seemed more important than inflammatory factors in sleep quality.
Psoriatic arthritis, a member of the spondyloarthritis family of diseases, is a chronic inflammatory disease associated with psoriasis. According to data from the study of the prevalence of rheumatic diseases in the Spanish adult population (EPISER 2016)1, the prevalence of this disease is 0.52% (95% CI 0.38-0.87%).
Apart from axial and peripheral joint involvement, the clinical description of PsA includes entheseal changes and dactylitis as well as extra-articular manifestations.In any of its manifestations, psoriatic disease is associated with impairment of social functioning and psychological disorders 2 .
The practice of assessing the psychological well-being of patients with PsA is becoming increasingly common and systematic.A recent study obtained, in a Spanish population, a prevalence of sleep disorders of 21.1% (CI 95% 17.38-25.01) 3.The impact of the disease on sleep quality has been well documented previously [4][5][6][7][8] .This psoriatic disease can have a significant effect on the feeling of fatigue 9 as well as impair sleep quality with respect to that in the general population 10 .In turn, a reduction in sleep quality is related to a decrease in the quality of life 11,12 and the development of comorbidities [13][14][15][16] .The persistence and severity of sleep disorders may be associated with inflammatory disease activity, chronic pain, fatigue, anxiety and depression 17 , potentially creating a vicious circle in which each problem exacerbates the others.
The assessment of PsA is complex and multidimensional.It is increasingly common for rheumatologists to include the patient perspective using patient-reported outcomes (PROs) in the evaluation of various domains of the disease and comorbidities, reflecting a paradigm shift in patient assessment.Although some research has explored sleep quality in patients with PsA, it has various limitations.Most studies have had small sample sizes, failed to assess sleep quality considering the inflammatory process together with the psychological well-being of patients, and have not described any use of hypnotics (sleep medication) to manage sleep.Further, research to date has not provided data on the relationship of sleep quality with axial forms of the disease or measures of disease activity.In this context, the objective of this study was to assess sleep quality in patients with PsA and its relationship with clinical characteristics, disease activity, functioning, impact of the disease, fatigue and psychological status.
Study population
We carried out a single-centre cross-sectional study at a tertiary hospital in Spain including consecutive patients over 18 years of age seen in rheumatology consultations between January 2021 to December 2021 who met the ClASsification for Psoriatic ARthritis criteria 18,19 .
Clinical variables
In our clinic, patients with PsA are routinely assessed by taking a detailed medical history, conducting a complete physical examination, gathering PROs and performing laboratory tests every 3 to 6 months.In this clinic, we record demographic data (age and sex) and collect and update information regarding smoking status (smoker/ former smoker/never smoker) and the number of cigarettes smoked measured in pack years 19 , use of conventional synthetic disease modifying anti-rheumatic drugs (DMARDs) (methotrexate, sulfasalazine or leflunomide), as well as targeted synthetic or biologic DMARDs, PsA and psoriasis duration, body mass index (BMI).
The level of physical exercise was assessed using the International Physical Activity Questionnaire (IPAQ)20,21. This questionnaire assesses physical activity based on three characteristics: intensity (low, moderate, vigorous), frequency (days per week) and duration (minutes per day). Activity is recorded in metabolic equivalents of task (METs). To estimate the number of METs, we multiplied the MET score for the type of activity (3.3 for low, 4 for moderate and 8 for vigorous) by the number of days per week the activity is done and by the minutes spent doing the activity per day.
Among the clinical forms of the disease, axial psoriatic arthritis was defined as inflammatory lower back pain with radiographic damage (at least grade 2 radiographic sacroiliitis as per New York criteria and/or presence of syndesmophytes) 22,23 .
We explored tender joint count (TJC), swollen joint count (SJC), entheseal involvement using the Maastricht Ankylosing Spondylitis Enthesitis Score (MASES) 24 modified for PsA to include the plantar fascia, with scores ranging from 0 to 15 (mMASES) 25 and current or history of dactylitis in patients with peripheral forms and a combination of both assessments was used for mixed forms.
The extent of the psoriasis was assessed using the Psoriasis Area Severity Index (PASI) 26 and item 3 (concerning skin problems) of the Psoriatic Arthritis Impact of Disease (PsAID) questionnaire 27 .
Among the comorbidities considered, we assessed the presence of fibromyalgia 28 .The study was approved by the Ethics Committee of the Hospital Universitario de Salamanca (EO 20/19).All research was performed in accordance with local guidelines/regulations (Castilla y León, Spain) and with the Declaration of Helsinki.All participants and/or their legal guardians gave written informed consent before inclusion in the study and consented to the publication of the results.
Indices, questionnaires and PROs
Disease activity, functioning, perceived pain and disease activity, and disease impact

Disease activity. In patients with peripheral involvement, disease activity was assessed using the Disease Activity Index for PSoriatic Arthritis (DAPSA)29, while in those with axial involvement it was assessed using the Ankylosing Spondylitis Disease Activity Score with C-reactive protein (ASDAS-CRP)30, the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) and BASDAI item 2, which relates to axial pain31.
Functioning.Functioning was measured using the Health Assessment Questionnaire (HAQ) 32 and the Bath Ankylosing Spondylitis Functional Index (BASFI) 33 .
Perceived pain and disease activity. The levels of pain and disease activity as reported by the patient were recorded using visual analogue scales (VAS).
Disease impact.The impact of the psoriatic disease was assessed using the Psoriatic Arthritis Impact of Disease (PsAID) questionnaire 27 .
Anxiety, depression, fatigue and sleep
Anxiety and depression.Anxiety and depression were assessed with the anxiety and depression subscales of the Hospital Anxiety and Depression Scale (HADS).Using this 14-item self-report instrument developed by Zigmond and Snaith 34 , patients rate their symptoms on a Likert type scale.Seven of the 14 items concern depressive symptoms (HADS-D) and the other seven symptoms of anxiety (HADS-A).
Fatigue. Fatigue was assessed with a Functional Assessment of Chronic Illness Therapy (FACIT) scale, specifically the FACIT-Fatigue scale, which has been validated for PsA35 and consists of 13 items assessing self-reported fatigue and its impact on activities of daily living and functioning. Items are rated on a 5-point Likert-type scale from 0 to 4, yielding a total score between 0 and 52, with higher scores indicating less fatigue. Permission was obtained from FACIT.org for the use of the questionnaire in this study.
Sleep.Sleep quality was assessed with a specific tool for measuring sleep quality, the Pittsburgh Sleep Quality Index (PSQI) 36 .Using this 19-item self-report instrument patients assess their quality of sleep over the previous 30 days.The PSQI explores seven domains: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleep medication, and daytime dysfunction.Each domain is scored on a range between 0 and 3 and these scores are summed to obtain a global score ranging between 0 and 21 points, 0 reflecting no difficulties at all and 21 serious difficulties in all domains assessed.
Statistical analysis
Quantitative variables are reported as means and standard deviations and categorical variables as numbers and percentages.Comparisons between groups were carried out using the Student's t test for normally distributed quantitative variables and the Mann-Whitney U test for ordinal variables or non-normally distributed quantitative variables.Comparisons between more than two groups were performed using one-factor analysis of variance for normally distributed quantitative variables and the Kruskal-Wallis H test for ordinal variables or non-normally distributed quantitative variables.Correlations between quantitative variables were assessed with Spearman's correlation coefficient.P values < 0.05 were considered statistically significant.
The multiple regression model was constructed with the PSQI as the dependent variable and, as independent variables, those that according to the literature have been related to sleep quality (pain VAS, TJC, CRP, HADS-A, HADS-D and FACIT-F), adjusted for sex and the presence of fibromyalgia 2,12,[37][38][39][40].
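A minimal sketch of such a model in Python is given below. The dataframe and its column names are assumptions standing in for the per-patient dataset; the snippet illustrates the structure of the analysis only and does not reproduce the reported coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in data; in the study each row would be one of the 247 patients.
rng = np.random.default_rng(0)
n = 247
df = pd.DataFrame({
    "pain_vas": rng.uniform(0, 10, n),
    "tjc": rng.poisson(1.5, n),
    "crp": rng.gamma(2.0, 2.0, n),
    "hads_a": rng.integers(0, 21, n),
    "hads_d": rng.integers(0, 21, n),
    "facit_f": rng.integers(0, 53, n),
    "sex": rng.choice(["female", "male"], n),
    "fibromyalgia": rng.choice([0, 1], n, p=[0.9, 0.1]),
})
df["psqi"] = (0.3 * df.pain_vas - 0.1 * df.facit_f + rng.normal(8, 2, n)).clip(0, 21)

# PSQI as dependent variable, adjusted for sex and fibromyalgia
model = smf.ols(
    "psqi ~ pain_vas + tjc + crp + hads_a + hads_d + facit_f + C(sex) + C(fibromyalgia)",
    data=df,
).fit()
print(model.summary())   # coefficients (beta), p-values and R-squared of the PSQI model
```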
Missing data was less than 3%.
Ethics approval and consent to participate
The study was approved by the Ethics Committee of the Hospital Universitario de Salamanca (EO 20/19).
Patient baseline characteristics
Table 1 summarises the characteristics of the 247 patients included in the study.Over half of them were men (55%), the overall mean age was 52.4 ± 11.7 years, and the mean duration of PsA and psoriasis were 9.7 ± 7.4 years and 19 ± 14.4 years respectively.Regarding smoking habits, 73 (30%) were non-smokers, 102 (41%) former smokers and 72 (29%) smokers with a 20.4 ± 24.5 pack year history of smoking.Patients were in the overweight range of BMI 27.2 ± 5.0 kg/m 2 and had a physical activity level of 616.4 ± 841.0 METs.Regarding the clinical form, a total of 171 (69%) patients were classified as having a peripheral PsA, 13 patients (5%) axial PsA and 63 patients (26%) mixed PsA.The mean TJC and SJC were both 1. Forty patients (16%) had extra-articular musculoskeletal manifestations, in the form of dactylitis, and regarding enthesitis, the mean mMASES score was around 2. A total of 122 (49%) patients were taking conventional synthetic DMARDs, methotrexate in most cases; 55 (22%) were taking a biologic DMARDs, a tumour necrosis factor inhibitor in most cases; and just 5 (2%) were taking a targeted synthetic DMARDs.See Table 1 for more details and other characteristics.
PSQI questionnaire
Overall, 63% of patients with PsA had poor sleep quality.The highest scores were obtained for items concerning sleep disorders (1.5), duration of sleep (1.5), subjective sleep quality (1.4), sleep latency (1.3) and sleep efficiency (1.1).The use of sleep medication (0.8) contributed less to the total score (Table 2).Notably, just 31% reported using sleep medication three or more times a week, 10% once or twice a week and 12% less than once a week, while the rest (46%) claimed not to use this type of medication at all.
Relationship of PSQI with demographic and clinical variables
Among demographic and clinical variables, sleep quality was significantly associated with female sex and the presence of enthesitis (mMASES) (Tables 3 and 4).
Relationship of PSQI with disease activity, functioning and disease impact
Sleep quality as measured by PSQI was correlated moderately with peripheral disease activity (DAPSA), its components pain VAS, and weakly with activity VAS, TJC and SJC.No relationship was observed with CRP level (Table 4).Further, sleep quality was moderately correlated with functioning (HAQ) and with the impact of the disease (PsAID) (Table 4).In patients with axial manifestations, sleep quality was correlated with disease activity (ASDAS-CRP and BASDAI and within this index, with item 2 corresponding to inflammatory axial pain) and moderately with axial functioning (BASFI) (Table 4).
Relationship between PSQI and cutaneous psoriasis
We found no relationship between sleep quality as measured by the PSQI and cutaneous manifestations of psoriasis as measured by the PASI (r: 0.12; p = 0.54).On the other hand, we observed a weak correlation between PSQI score and item 3 of the PsAID, concerning skin problems including itchiness (r: 0.39; p < 0.001) (Table 4).
Discussion
Our study confirms the poor sleep quality of patients with PsA and shows its relationship with pain, fatigue, anxiety and depression.To date, few studies have assessed sleep quality in PsA patients using validated questionnaires such as the PSQI 2,12 .In our study, we found a decline in sleep quality in almost two-thirds (63.15%) of patients, similar to the 67.6% described by Krajewska 2 and somewhat lower than the rates of 84-85% in the cohorts analysed by Gezer 12 and Wong 41 .In our cohort, the mean global PSQI score was 8.58, slightly lower than values reported by Krajewska 2 , Gezer 12 and Wong 41 (scores of 9.32, 9.70 and 9.24 respectively), though higher than scores in the controls of these cohorts (of between 4 and 5).
Lower sleep quality in our patients with PsA was associated with sleep disorders, duration, subjective quality, latency and efficiency, similar to the aforementioned studies. Also consistent with previous studies, the use of sleep medication was relatively uncommon 2,12,41. Regarding disease activity, functioning and disease impact, we observed that poorer sleep quality was associated with greater peripheral activity (DAPSA), poorer functioning (HAQ) and greater impact of PsA (PsAID). One of the main factors associated with poorer sleep quality was pain, a relationship which was also found in the cohorts of Krajewska and Gezer 2,12. Nonetheless, it is not easy to establish a causal relationship between pain and sleep quality. Some studies 42 have described apparent bidirectionality in the relationship, with pain leading to a decline in sleep quality but also sleep dysfunction resulting in a worsening of pain. The mechanisms of interaction between sleep quality and pain could be explained from three different perspectives 43. First, from a neurobiological perspective, sleep interruption, fragmentation or restriction causes hyperalgesia and may interfere with analgesic treatments involving opioidergic and serotonergic mechanisms of action 44. Secondly, from a psychological perspective, sleep disturbance reduces the pain threshold and amplifies pain signals, resulting in hyperalgesia and more negative emotions focused on pain, forming a negative feedback loop. Thirdly, we should not forget the inflammatory mechanism of axial pain in PsA patients, which again links the idea of a decline in sleep quality with inflammatory activity. In relation to this, sleep quality in axial forms was also associated with disease activity (ASDAS-CRP, BASDAI), axial inflammatory pain and functioning (BASFI). To our knowledge, no previous studies have linked sleep quality with axial manifestations of PsA. In particular, Gezer et al. 12 did not find any associations between sleep quality and axial signs and symptoms of PsA (sacroiliitis, spondylitis), though in patients with ankylosing spondylitis, sleep quality was associated with inflammatory axial pain, as well as BASDAI and BASFI scores 43.
Although in our univariate analysis sleep quality as measured by PSQI score was associated with TJC, this potential relationship was not significant in the linear regression model.In patients with psoriasis, the studies of Callis Duffin 17 and Strober 37 have previously linked the coexistence of arthritis and psoriasis with poorer sleep quality.On the other hand, neither of these studies assessed whether the worsening in sleep quality was proportional to the intensity of the inflammation, as measured by TJC or CRP levels.Findings in other series of patients with PsA have differed markedly.PSQI scores were not found to be related to TJC by either Gezer et al. 12 or Krajewska et al. 2 , while both studies found an association between these scores and blood CRP levels.Such differences might be explained by the small sample sizes in these studies.On the other hand, Wong et al. 41 did find a correlation with TJC but did not assess the correlation with CRP.Further, these authors did not specify whether the variables fatigue or anxiety were included in the multivariate analysis.
In our study, no association was found between psoriasis severity (as measured by PASI score and the patient's subjective perception) and sleep quality. Although such an association has been described in cohorts of patients with psoriasis, it has not been confirmed in other studies in patients with PsA 10,17, perhaps reflecting the less severe cutaneous involvement in these cohorts. In this study, we have confirmed sleep quality to be closely related to fatigue, anxiety and depression. Other studies carried out in patients with PsA have obtained similar results. In these patients, fatigue is more closely associated with emotional than inflammatory characteristics 38. One of the consequences of poor sleep quality is fatigue. Additionally, it has been demonstrated that the fatigue associated with anxiety and depression may affect sleep quality. Therefore, sleep, fatigue, anxiety and depression should be considered in terms of a circular relationship 39,40.
At this stage, we should recognise the strengths and limitations of our study.Concerning strengths, first, it was based on a relatively large sample, most published studies being limited by the small number of patients included.Second, the instruments we have used to measure sleep quality (PSQI), fatigue (FACIT-F), anxiety and depression (HADS) have been validated for use in Spain.Lastly, this is the first study to find a link between axial manifestations of the disease and sleep quality.
Regarding limitations, the cross-sectional nature of the study means that we are able to establish associations, but not causal relationships between the factors identified and sleep quality.Second, we did not include patients randomly, but we believe that the inclusion of consecutive patients for a year allowed us to obtain reasonable representativeness, as patients are seen routinely at 3-to 6-month intervals, and hence, all patients should have been assessed at least once during the recruitment period.
Conclusions
In conclusion, poor sleep quality is common in patients with PsA, although nearly half of patients do not take any sleep medication.In our series, while sleep quality was not associated with cutaneous involvement, it was related to axial manifestations of the disease.Emotional factors (fatigue, anxiety) seem to be more relevant than inflammatory factors in terms of sleep quality.This circular relationship between sleep quality, emotional disorders and disease activity underlines the need to take a multidisciplinary approach to PsA.The detection and management of comorbidities that may influence the activity of patients with PsA is a key aspect in the development of precision medicine to improve the social functioning of patients.
This analysis was performed with IBM SPSS Statistics for Windows, Version 23.0.
Table 1 .
Demographic, clinical and disease-related characteristics of patients with psoriatic arthritis.BMI: body mass index, TJC: Tender Joint Count, SJC: Swollen Joint Count, ASDAS-CRP: Ankylosing Spondylitis Disease Activity Score with C-reactive protein; BASDAI: Bath Ankylosing Spondylitis Disease Activity Index; BASDAI (item 2): Bath Ankylosing Spondylitis Disease Activity Index item related to axial pain.BASFI: Bath Ankylosing Spondylitis Functional Index; DAPSA: Disease Activity Index for PSoriatic Arthritis; DMARDs: disease-modifying antirheumatic drug; FACIT: Functional Assessment of Chronic Illness Therapy; HADS: Hospital Anxiety and Depression Scale; HAQ: Health Assessment Questionnaire; mMASES: modified Maastricht Ankylosing Spondylitis Enthesitis; PASI: Psoriasis Area Severity Index; PsA: psoriatic arthritis; PsAID: Psoriatic Arthritis Impact of Disease; PSQI: Pittsburgh Sleep Quality Index; TNF: tumour necrosis factor; VAS: visual analogue scale.a In peripheral and mixed forms (n = 234).b In axial and mixed forms (n = 76).
Table 2 .
Components of the Pittsburgh Sleep Quality Index in patients with psoriatic arthritis.PSQI: Pittsburgh Sleep Quality Index.Results are expressed as mean (m) and standard deviation (SD).
Table 3 .
Pittsburgh Sleep Quality Index scores as a function of demographic, clinical and treatment variables.DMARDs: disease-modifying anti-rheumatic drugs; PSQI: Pittsburgh Sleep Quality Index.Significant values are in bold.
Understanding covariate shift in model performance
Three (3) different methods (logistic regression, covariate shift and k-NN) were applied to five (5) internal datasets and one (1) external, publicly available dataset where covariate shift existed. In all cases, k-NN's performance was inferior to that of either logistic regression or covariate shift. Surprisingly, there was no obvious advantage to using covariate shift to reweight the training data in the examined datasets.
Introduction
A common prerequisite in supervised learning algorithms is that the training and prediction data arise from the same distribution and are independently and identically distributed (iid) 1 . Intuitively this is justified, as one should not expect to learn a classifier on one distribution of examples and apply it to accurately predict labels of examples drawn from a different distribution. Covariate shift is a machine learning technique that can be utilized in supervised learning when the training and prediction distributions are known to differ, but the concept being learned remains stationary. While standard machine learning classifiers are trained and then used to predict on arbitrary compounds, covariate shifted classifiers must be trained specifically for each prediction dataset. This is because covariate shifted classifiers weight the training distribution to be more similar to the prediction distribution. A recent book provides an excellent overview of the current state of the art in covariate shift methods 2 .
Covariate shift frequently occurs during the drug discovery process where learning systems are built to predict physiochemical properties of interest. Initially a chemistry team may focus on a particular chemical series, and information from this series is used to train a learning system. As the project progresses, the chemistry team may refocus their efforts on a new, structurally distinct series. The accuracy of prospective computational predictions on the new series may be compromised as these molecules originate from a distribution that is distinct from the molecular set used to train the learning tool.
For example one may wish to build a learning system to predict hERG activity (unwanted cardiovascular toxicity). Initially the computational tool is trained using series A but must now predict on series B. The concept "binding to hERG" is fixed, however the area of interest has transitioned from chemical series A to chemical series B. The feature vectors describing these two sets are likely related but potentially different; and as such, their covariates have shifted. Put more mathematically, the probability of observing a feature vector from the prediction set is different from the probability of observing a feature vector from the training set. That is, the training and prediction sets are non-iid. A well-constructed learning system will recognize that predictions on series B are outside the "domain of applicability" of the model and predict with low confidence. The covariate-shift method attempts to adjust the domain of applicability so that it is more aligned with the prediction set. It is analogous to a nearest neighbor classifier but employs distributions rather than individual examples. Covariate shifted classifiers weight examples from the training set to create a distribution that is more aligned with the prediction set. This weighted data set is then used to train the classifier, resulting in a covariate shifted classifier. As such, covariate shift is applied at the distribution level whereas nearest neighbor methods are applied at the example level. Once a training set has been shifted, it can be used by any machine learning algorithm.
Covariate shift methods typically reweight instances in the training data so that the distribution of training instances is more closely aligned with the distribution of instances in the prediction set. This is accomplished by giving more weight during model building to a training-set instance that is similar to an instance in the prediction set. It has been shown 3 that the appropriate importance weighting factor w(x) for each instance x in the training set is

w(x) = p_p(x) / p_t(x),    (1)

where p_t(x) is the probability of seeing instance x in the training set and p_p(x) is the probability of seeing x in the prediction set. It is important to note that only the feature vector values (not their labels) are used in reweighting. The importance weighting scheme is intuitively understandable: if the probability of seeing a particular training-set instance in the prediction set is very small, then this instance should carry little weight during the training process and consequently have little effect on the decision function. Figure 1 plots two Gaussian distributions and w(x). If instances from the blue distribution are used for training a classifier to predict on an instance from the green distribution, then the red curve gives the importance of each instance. Note the increased importance for instances from the training distribution overlapping with high-density regions of the prediction distribution.
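To make Equation 1 concrete, the short script below computes importance weights for a one-dimensional example with Gaussian training and prediction densities, mirroring the situation sketched in Figure 1; the particular means and standard deviations are illustrative only.

```python
import numpy as np
from scipy.stats import norm

# Illustrative training and prediction densities (cf. Figure 1); parameters are arbitrary.
p_train = norm(loc=0.0, scale=1.0)
p_pred = norm(loc=1.5, scale=0.8)

x = np.linspace(-4, 5, 10)
w = p_pred.pdf(x) / p_train.pdf(x)          # importance weight of Equation 1
for xi, wi in zip(x, w):
    print(f"x = {xi:5.2f}   w(x) = {wi:10.3f}")
```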
Methods
For our experiments, we use a logistic regression classifier in which each training instance is weighted by its importance w(x). For the calculation of w(x) we use the Kullback-Leibler Importance Estimation Procedure (KLIEP) developed by Sugiyama 4 . The KLIEP method is based on the Kullback-Leibler divergence and attempts to find weights that minimize the divergence from p_train(x) to p_predict(x). Briefly, the importance is modeled as a linear function, ŵ(x) = Σ_i α_i φ_i(x), where the α_i are the weights to be learned and the φ_i are the basis functions. The importance weight from Equation 1 can be rearranged and used to estimate the probability of observing a feature vector in the prediction set, p̂_p(x) = ŵ(x) p_t(x).
The KL divergence from p_p(x) to its estimate p̂_p(x) can then be expressed as

KL[p_p ‖ p̂_p] = ∫ p_p(x) log [ p_p(x) / ( ŵ(x) p_t(x) ) ] dx

After algebraic manipulation, removing terms independent of ŵ(x) and adding constraints to ensure proper normalization, a final objective function to be maximized can be derived (see 4 for details):

maximize over α:  Σ_{x in prediction set} log ŵ(x)
subject to:  (1/n_t) Σ_{x in training set} ŵ(x) = 1  and  α_i ≥ 0

The resulting problem is convex and can be solved using standard optimization techniques. The result is an expression for ŵ(x) that allows a weight to be calculated for each training instance x. These weights can then be incorporated when training a classifier to obtain a covariate-shifted version of the classifier.
Toy example
To demonstrate the use of covariate shift methods, we repeated a simple toy experiment as detailed in 3. Figure 2 graphically displays the results we obtained.
The red training points are drawn from two (2) two-dimensional Gaussian distributions representing class 1 and class 2. The green prediction points are drawn from a slightly rotated version of the training distributions. The red line plots the classifier obtained when training on only the training points; the green line plots the classifier trained on both the training and prediction points (the optimal classifier in this case). The blue line plots the classifier trained on the training data weighted by the importance factor as estimated by the KLIEP method. Note how the blue line is shifted towards the optimal classifier, demonstrating the effect of the KLIEP algorithm and covariate shift.
Experiments
Using the Python programming language we implemented the KLIEP method for determining weights for use in covariate shift 5 .
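The in-house implementation is available via reference 5; the sketch below is an independent, minimal re-derivation of KLIEP under the assumptions of Gaussian kernels centred on the prediction points and simple projected gradient ascent, so details will differ from the authors' code.

```python
# A minimal KLIEP sketch: Gaussian kernel basis functions centred on the
# prediction points, maximized by projected gradient ascent. Illustrative
# only; kernel width, learning rate and iteration count are assumptions.
import numpy as np

def kliep_weights(X_train, X_pred, sigma=1.0, n_iter=500, lr=1e-4):
    """Estimate importance weights w(x) = p_p(x) / p_t(x) for each training point."""
    def kernel(X, C):
        # Gaussian kernel matrix between the rows of X and the centres C
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    K_pred = kernel(X_pred, X_pred)     # basis functions evaluated on prediction points
    K_train = kernel(X_train, X_pred)   # same basis evaluated on training points
    alpha = np.ones(K_pred.shape[1])

    for _ in range(n_iter):
        # Gradient ascent on the objective: sum_j log( sum_i alpha_i * phi_i(x_j) )
        alpha += lr * (K_pred.T @ (1.0 / (K_pred @ alpha)))
        alpha = np.maximum(alpha, 0.0)            # constraint: alpha_i >= 0
        alpha /= np.mean(K_train @ alpha)         # constraint: mean training weight = 1
    return K_train @ alpha                        # w(x) for every training instance
```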
In principle, covariate shift is applicable to any classifier that allows weighting of input instances (e.g. support vector machines and random forests). For this study we wanted to isolate the effects of covariate shift and therefore selected a classifier without adjustable parameters, logistic regression (LR). Logistic regression is a classification technique analogous to linear regression and is applicable when the dependent variable is categorical 6 . We combined logistic regression with KLIEP and applied it to five different in-house ADME (absorption, distribution, metabolism and excretion) assays and one external dataset (beta secretase). The cutoff values for determining the binary categories for the compounds in each dataset are listed in Table 1. Due to inherent noise in the assays we discard data where the assay values lie between the positive and negative cutoffs listed in Table 1. We compare KLIEP+Logistic Regression (KL+LR) to Logistic Regression and a k-NN classifier (k=5, using Tanimoto similarity). For each dataset the molecules were sorted by compound registration date. The first 75% of the data comprised the master training set while the remainder formed the master prediction set. Temporal ordering of the data represents the evolving coverage of chemical space by drug discovery projects and consequently captures the natural "shifting" of the covariates. Classifier performance statistics are generated by performing twenty different runs, each on a random 80% of the master files. Performance statistics for each classification task are then obtained by averaging the results of the twenty individual runs. In all cases, OpenEye 7 path fingerprints are used as feature vectors. We experimented with different fingerprints provided by OpenEye (MACCS 166-bit structural keys and circular fingerprints) and found that they had no significant effect on the outcome.
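A sketch of the KL+LR combination follows, assuming scikit-learn's LogisticRegression (the paper does not name an LR implementation) and reusing the hypothetical kliep_weights function from the previous sketch; the toy data stand in for fingerprint feature vectors.

```python
# Covariate-shifted logistic regression: weight each training instance by its
# KLIEP importance. Assumes the kliep_weights() sketch above is in scope.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))                     # stand-in training features
y_train = (X_train.sum(axis=1) > 0).astype(int)         # stand-in binary labels
X_pred = rng.normal(loc=0.5, size=(100, 2))             # shifted prediction features

w = kliep_weights(X_train, X_pred)                      # importance weights

clf_plain = LogisticRegression().fit(X_train, y_train)                   # LR
clf_shift = LogisticRegression().fit(X_train, y_train, sample_weight=w)  # KL+LR
```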
To ensure the data was amenable to covariate shift we generated classifiers separating "training" from "prediction" data. Figure 3 shows the performance of LR on this separation task. For each dataset we are able to compute highly accurate classifiers, indicating that the training and prediction data are drawn from different distributions and hence are appropriate for covariate shift methods. This is a necessary condition for covariate shift but does not imply model improvement over unweighted data. Figure 4 compares the performance of KL+LR, LR and k-NN on the five (5) datasets. One can see from the graph that KL+LR failed to provide any statistical improvement over standard LR.
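The separation check can be sketched as below; X_train and X_pred are random stand-ins for the fingerprint matrices, and balanced accuracy is chosen here because a 75/25 temporal split is imbalanced.

```python
# Sanity check that covariate shift is present: train a classifier to separate
# "training" from "prediction" instances. High cross-validated score suggests
# the two sets come from different distributions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 16))            # stand-in for training fingerprints
X_pred = rng.normal(loc=0.8, size=(100, 16))    # stand-in for prediction fingerprints

X = np.vstack([X_train, X_pred])
y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_pred))])

# Balanced accuracy guards against the trivial "always training" classifier.
scores = cross_val_score(LogisticRegression(), X, y, cv=5,
                         scoring="balanced_accuracy")
print(f"train-vs-prediction separability: {scores.mean():.2f}")
```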
We extended the study to include an external dataset provided by ChEMBL 8,9 so that others could use their own fingerprints and independently support or refute our claims. We chose the beta secretase IC50 data as it is a well-established biochemical screen, highly accurate, and contains > 7000 publicly available data points spanning multiple orders of magnitude. Using OpenEye path fingerprints and K-Means clustering we clustered the dataset into two clusters, A and B. Under cross-validation, a logistic regression classifier was able to separate the two clusters with a high level of accuracy (90%), indicating that the clustered dataset would be appropriate for application of the covariate shift algorithm. Ten random subsets of molecules from cluster A were used to train a logistic regression classifier using covariate shift, which was then used to predict on molecules from cluster B. The performance of the shifted classifier was compared to an unshifted classifier trained and tested on the same clustered datasets and random splits. The process was repeated by training on molecules from cluster B and predicting on molecules from cluster A. Analogous to the internal datasets, as measured by overall classifier accuracy, there was no statistical advantage to applying covariate shift (shifted accuracy: 82.95% +/- 1.6%; unshifted accuracy: 82.73% +/- 1.2%).
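A rough sketch of this cluster-split protocol, with a random binary matrix standing in for the OpenEye path fingerprints:

```python
# Cluster-split protocol for the external ChEMBL beta secretase set. The paper
# used OpenEye path fingerprints; `fps` below is a placeholder assumption.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
fps = rng.integers(0, 2, size=(7000, 1024)).astype(float)   # placeholder fingerprints

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fps)
cluster_A = np.where(labels == 0)[0]
cluster_B = np.where(labels == 1)[0]

# Protocol: train (with and without KLIEP weighting) on random subsets of
# cluster A, predict on cluster B, then swap the clusters and repeat.
```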
A possible explanation for the failure of the covariate shift method to provide a boost in predictive performance could be that the calculated importance weights are all similar. This would cause each training example to exert the same influence on the decision function, and the importance weighting would thus have no effect. This was not the case. Figure 5 plots the cumulative distribution function of the importance weights for the training set compounds. The plot demonstrates that the weights are distributed across a wide range of values.
Conclusions
We have applied the KLIEP method to five (5) internal data sets and one (1) external data set where covariate shift was evident.
Although KL+LR showed an advantage over k-NN, there was no statistical advantage to reweighting the training dataset. We were surprised by this outcome and are currently exploring other datasets where application of covariate shift may improve the predictions.
Data availability
F1000Research: Dataset 1. The beta secretase IC50 data derived from the ChEMBL database, 10.5256/f1000research.8317.d117882

Author contributions
BG conceived the study, designed the experiments and carried out the research. GM wrote the manuscript, provided the beta-secretase data set and contributed to the experimental design. PW provided oversight.
Competing interests
No competing interests were disclosed.
Grant information
The author(s) declared that no grants were involved in supporting this work.

1. The description of Figure 1 is inconsistent with the figure legend. It seems that the red and blue labels in the legend of Figure 1 need to be swapped to make the figure consistent with the description in the text: the red and green curves look like pdfs whose integrals are very similar and close to 1, while the blue curve has a much larger area, inconsistent with a pdf.
2. For the ChEMBL data set the inactivity/activity cutoffs used should be mentioned.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Competing Interests: No competing interests were disclosed.

Having read the revised paper it seems better, but I'm still somewhat puzzled about a few things.
I get the feeling that the k-NN method is meant as a baseline control method, since by definition k-NN looks only at the training set compounds close to the test set compounds; there is thus an implicit selection of training set compounds, which should have a similar effect to covariate shift. This is not explicitly said in the paper.
The authors do not try sophisticated but more "standard" classification methods like random forest or SVM, and don't say why not.
Both myself and the other reviewer seem confused by Figure 1. The red line is supposed to be the importance weight. However, it implies that the highest weights are given in a region of descriptor space far away from both training and test sets.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
Competing Interests: No competing interests were disclosed.

1. Agreed that we could have added such a sentence to the paper.
2. We didn't examine more standard classification methods as we were specifically studying whether there was a benefit in using covariate shift. We were not interested in exploring RF or SVM as that was out of scope for this paper.
3. The three curves in Figure 1 were all drawn as per a Gaussian distribution and we thought it would look odd to have the red line go straight to zero when, for example, x=1. The importance weight being close to 0 (on the y axis) when x=2, for instance, is because there's much overlap between the training and the prediction set. When, for instance, there isn't much overlap (i.e., x=1.5), the importance weight goes up. I can see why one might be confused when x=1; that was merely to show that when there's minimal overlap between prediction and training, the importance weight is quite high.
Competing Interests: No competing interests were disclosed.

This means that in Figure 1 the red curve can either show the importance weight and the blue curve the training pdf, or vice versa, without compromising the accuracy of the figure. What speaks against interpreting the blue curve as a pdf is that it clearly has a much larger area than the green curve representing the prediction pdf, which should be 1 in both cases. From visual inspection the red curve has an area much closer to 1 than the blue curve, and it should thus be interpreted as the training pdf, while the blue curve represents the importance weight. Even though the figure is only used for illustration purposes, it could be improved by either relabelling or rescaling the red and blue curves accordingly.
No competing interests were disclosed.
Georgia McGaughey
The figure is really meant to be an illustration of the importance weight, and not mathematically accurate with respect to pdfs. In actuality, the green and blue curves look like pdfs but they really are not; they are not normalized on the same scale, so the area under each curve does not sum to 1. We could make the blue curve a shifted version of the green curve and then recompute the importance weight. What we were hoping to illustrate is that if an example from the training set (blue) were pulled at x = 1, it would be very important for training because it is rare and the testing (green) set has non-zero support at x = 1.
Additionally, another perhaps confusing aspect about the figure is that the y-axis represents two values on different scales: 1) the importance weight and 2) the probability (relative) of seeing a training/test example. We will redraw the figure and upload a new version.
Competing Interests: No competing interests were disclosed.
The study investigates the influence of accounting for covariate shift on classification performance using logistic regression models. Overall, this short paper is very well and clearly written; however, the methods section should be expanded (see below). Although no increase in performance could be established by accounting for covariate shift, it provides an excellent basis for further investigations.
Suggestions/Corrections:
The methods section should be expanded: I assume all models were trained as binary classifiers. This is potentially confusing, as the chosen ADME properties in the experimental data could also have been modelled using regression models. This should be stated clearly, and it should be explained how labels (good/bad) are assigned to the training instances for the different ADME properties (and how labels are assigned to the ChEMBL data given the potencies).
In Figures 3 (and 4), given the imbalance in data size between training and test set, consider reporting the balanced accuracy. E.g., a trivial classifier labelling every compound as a "training" compound would achieve an accuracy of 75% purely from the imbalance of the data set, which needs to be taken into account when interpreting Figure 3.
The authors provide a data set for download although they do not explicitly report the results for that data set. The results should be reported.
Typos:
In the formula for KL on page 3 the two vertical bars should have the same size.
In Figure 1, the labels for the red and blue line are mixed up.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.
Competing Interests: No competing interests were disclosed.
A Model of Intergovernmental Cooperation in the Free Education Program in the Province of South Sulawesi
Noting that the index of the quality of primary and secondary education in the province of South Sulawesi ranked 21st out of the 34 provinces of Indonesia (2009), the South Sulawesi provincial government was determined to raise that ranking to 10th (2013). To support this policy it issued Regional Regulation No. 4 of 2009 concerning the free education program, further supplemented by Governor Regulation No. 4 of 2011 on the implementation of the free education program. The program is run through a model of cooperation between the provincial government and the district/city governments in South Sulawesi, particularly with regard to the distribution of the budget between the two levels of government: it was agreed that the provincial government would contribute 40% of the budget and the district/municipal governments 60%. By 2016 (approximately 8 years of implementation), the program had not produced a significant improvement in the education quality index, which still ranked 19th. The objective of this study is to analyze why the model of intergovernmental cooperation used in the free education program has not been effective. To achieve this objective, a qualitative case study approach was used. The results show that the model of intergovernmental cooperation that was used rests on societal politics and inter-organizational politics, rather than on legal and administrative approaches.

Keywords: intergovernmental cooperation, societal and inter-organizational politics, legal and administrative approaches
The potential to accelerate successful development and public welfare in a region depends heavily on the quality of its human resources. Awareness of the need to develop this potential is already very high in every region, driven not only by international institutions pursuing the targets of the Millennium Development Goals (MDGs) but also by a shared awareness within every region in Indonesia, not least the province of South Sulawesi, through the free education program promoted since the beginning of the governor's term in 2008. Examined closely, the free education program does not merely open wider access so that children who cannot afford it may attend school free of charge; gradually, it is also meant to break the chain of poverty, restore the rights of children, and humanize those who have been oppressed by the power of capital. "Free education" here is the government's commitment to provide education without involving the community (parents) in financing, particularly for school operations. This understanding has the consequence that the free education policy depends heavily on the accuracy of the calculation of the unit cost at the school level. The unit cost provides an overview of the real average cost required by a school to serve its students. That unit cost should then be compared with BOS (school operational assistance); the difference is covered by local governments through budget regulations set in the budgets of the province, districts and cities. This is what is meant here by the sharing of funds between levels of government. A development program does not succeed when the government responsible for development in the region is unable to design the policy and implement it effectively. The free education policy in question is intended to overcome the low quality of education in South Sulawesi, which forms part of the human development index (HDI) indicators. Before the free education policy was implemented, South Sulawesi's HDI, including education, still ranked 21st nationally. The South Sulawesi provincial government therefore cooperated with the district/city governments to improve the quality of education through a Memorandum of Understanding (MOU), followed by Regional Regulation (Perda) No. 4 of 2009 on the Implementation of Free Education in South Sulawesi Province. Perda No. 4 of 2009 was followed up by Governor Regulation No. 9 of 2010 containing the Guidelines for Free Education in South Sulawesi Province. The budget for the free education trial in 11 regions amounted to around Rp 644 billion: Rp 405 billion sourced from school operational funds (BOS), Rp 125 billion from the provincial budget, and the rest from the state budget. The free education program in South Sulawesi is expected to reduce the illiteracy rate among children of school age. Referring to Regulation No. 4 of 2009, the goals of free education are: 1) improving equitable learning opportunities for all children of school age; 2) improving the quality of graduates; 3) improving the relevance of competency-based education to keep pace with global developments; and 4) increasing the efficiency and effectiveness of the implementation of free education to meet the quality and productivity of excellent human resources.
Chapters 2 and 3 of the regulation also set out the scope of free education: 1) it is intended for the people of South Sulawesi who send their children to primary and secondary schools in South Sulawesi; and 2) students who come from outside South Sulawesi are charged for education according to the applicable regulations. The principles used in the implementation of free education are equity, quality assurance, participation, transparency, accountability, education, and competency. In 2001 alone, the education budget increased significantly, eight-fold from Rp 42.3 billion (2001), with allocations of Rp 580,000 per primary school student and Rp 710,000 per junior high school student. Unfortunately, the surge in the education budget has not resolved the continued finding of people who are unable to enjoy education, especially those from poor households (Kopel and Tifa Foundation, 2012). The findings of Kopel and the Tifa Foundation revealed four major problems that are often found in the implementation of education funds. First, regulation is needed to strengthen the protection of citizens' rights as well as the accountability of implementation. Second, budget accountability is still low, especially in the absence of strict sanctions and legal action. Third, authority overlaps. Fourth, supervision by Parliament and the media is weak; in particular, the issue of education still receives minimal media attention. The budget allocation also shows indications of inefficiency, so that free education is not being achieved. In many areas, for example, increases in the education budget are mostly absorbed by indirect spending (especially the salary component), while direct education spending (program spending) remains relatively fixed or even declines. Therefore, quite the opposite of the surge in the budget, education in Indonesia in fact still dwells on the problems of improving quality and equity of access, let alone its governance. The available policy instruments have not been effective: regional funding schemes do not support the free education policy, the data collection system (especially school data on students from poor households) is still inaccurate, and there is not yet an effective, tiered, systemic and integrated system of supervision, so that program implementation is ineffective.
Conceptually, the issues that arise in the process of cooperation between local governments in the free education policy led by the government of South Sulawesi Province cannot be separated from the ineffectiveness of cooperation between those local governments. If the South Sulawesi provincial government, together with the district/city governments, had used a model of intergovernmental cooperation, it can be assumed that the issues described above would not have appeared, or could even have been overcome. According to Agranoff (1986) and Conlan and Posner (2008), policies of cooperative relations between local governments can be a solution to various problems of inter-regional disparity, particularly in empowering community participation and improving the efficiency and effectiveness of resource utilization, in order to create development that is harmonious and balanced, appropriate to each government's position, role and functions, with due regard to the principles of democracy and the diversity of potential of each region within an integrated management (Agranoff, 1986; Laffin, 2007; Tasmaya, 2007). For a model of cooperation among regional administrations to be well managed, policy is needed to coordinate activities between one or more local governments (Post, 2002). Cooperation among governments at the local level is also an arrangement between two or more governments to achieve common goals, provide services or resolve problems together (Patterson, 2008; Domai, 2009; Warsono, 2009; Coon, 2011). Cooperation between tiers of government below the national level is conceptually known as a form of intergovernmental relations (IGR). It is on the basis of these considerations that this study was conducted. The goal was to understand and analyze the various issues that arise in the free education program between the provincial government of South Sulawesi and the district/city governments, and ultimately to find a model of cooperation between the governments.
1.2. Formulation of the problem
1. Why is cooperation among local governments important in the free education program in South Sulawesi?
2. What factors influence the cooperation between the government of South Sulawesi Province and the district/city governments in the free education program?
3. What is the model of cooperative relations among local governments in the free education program in South Sulawesi Province?
Research purposes
To find a model of cooperative relations between regional governments in the free education program in South Sulawesi Province.
II. LITERATURE REVIEW
In Indonesia, the basis for organizing cooperative relations between local governments (intergovernmental cooperation) is a development of the concepts of intergovernmental relations and intergovernmental management that arose in the study of decentralization (local autonomy), focused on each activity or interaction between units of governance, on what allocation decisions are based, who is involved, and the consequences of those actions (Smith, 1985; Anderson, 1960; Edner, 1976; Agranoff, 1986; Conlan and Posner, 2008). The integration of these three concepts is the basic reference in drafting a model of cooperative relations between regions that has a high level of effectiveness in achieving the objectives of cooperation. Conceptually, cooperative relations between regions explain how regional administrations can be more effective and efficient in conducting collective action, effective in eliminating managerial fragmentation in governance so as to create equitable development. The process of joining through cooperation is undertaken from the beginning of the management process, acting together (Anderson, 1960; Edner, 1976; Agranoff, 1986). Thus intergovernmental management is integrated management, controlled together in the face of complexity (Agranoff, 2003). For free education to be sustainable, it needs the support of a policy of inter-regional cooperative relations. Cooperation between governments is intended to reduce regional disparities, reduce conflict, improve services, empower community participation and improve the efficiency and effectiveness of resource use, in order to realize development that is harmonious and balanced, appropriate to each government's position, roles and functions, with regard to democratic principles and the diversity of the potential of each region within an integrated management (Tasmaya, 2007). The theoretical reference used in building the model of cooperative relations between local governments in the free education policy in South Sulawesi is the model of intergovernmental cooperative relations according to Henry (2004). The models of partnership between local governments in the free education policy in South Sulawesi are: 1) joint service agreements; 2) intergovernmental service transfers; and 3) the pattern of interlocalism (Henry, 2004). The research on cooperative relations between local governments in the free education policy in South Sulawesi was designed to find a model of intergovernmental cooperative relations in the field of free education in the province. Such a model would mean that groups of students no longer have problems in getting a quality education, so that their basic needs in preparing for the future, especially in education, can be met. Learners and their families who are less well off would no longer fend for themselves to access education, but would get support from local authorities, in the form of policy, strategy and infrastructure, as an integral part of the government's function as a provider of quality and equitable services to the public. Based on this exposition of the concepts and theories of cooperative relations between regional administrations, with a focus on free education in the province of South Sulawesi, the authors describe the state of the art of the research below.
Research Roadmap
Studies on cooperative relations between governments recognize several stages: the stage of determining strategy (decision- or policy-making), the stage of public service delivery, the stage of policy implementation, and the evaluation stage (Hill, 2002). In this regard, other researchers have covered some of these stages, namely networks in defining strategy and the stages of public service delivery. Since previous researchers have conducted research on the strategy-definition and public service delivery stages, the present study addresses the policy implementation stage. For more detail, see the research roadmap figure.

[Figure 1. Research roadmap: a model of partnerships between local governments in the free education program in South Sulawesi, built by 1) identifying why cooperation between local governments is essential in the free education program; 2) identifying the factors that affect cooperation between governments in the free education program; and 3) identifying the approach used in intergovernmental cooperation in the free education program, leading to quality education. An earlier output (2013): an inter-regional cooperation model for local government affairs in regional resource utilization.]

Based on the roadmap, this study complements existing models of cooperation among governments, but focuses on cooperation between the provincial government and the district/city governments at the lower levels, especially in the free education program. Previous research contributes to the present study, especially in finding the reasons, factors and approaches used in intergovernmental cooperation in South Sulawesi.

III. RESEARCH METHODS
Research Location
The research location is the province of South Sulawesi, where free education is implemented. The sample locations are areas considered to represent the province: Makassar City and Pare-Pare City (representing urban areas), Gowa Regency (representing a district close to the provincial capital), and North Luwu Regency (representing a district far from the provincial capital).
Research Design and Strategy
This study was qualitative, with a case study research strategy. This design was used in order to unveil a model of cooperation between governments based on its context. In case study research there are two types, descriptive and explanatory (Yin, 2000).
Informants
Informants in the study were determined purposively; that is, they were deemed to have information on, or to be involved either directly or indirectly in, the free education program in South Sulawesi Province. Such determination was based on the assessment of experts (or the researchers themselves) for a particular purpose or situation (Neuman, 1997). The informants in this study were: a) the Regent and officials and employees of the Department of Education; b) the team of nine of the South Sulawesi free education program; c) members of the parliamentary commission responsible for the education sector; and d) the Provincial and District Education Councils.
Data Collection Techniques
In this study the data collection techniques used were observation, interviews, and documents. Observations were made primarily of objects such as schools and the various facilities and infrastructure supporting free education. In-depth interviews were conducted with the informants mentioned above, while documentation consisted of collecting documents such as regulations, journals, and research results related to this research.
Data Processing and Analysis
In qualitative research, data processing and analysis are inseparable activities. This can be seen in the stages of qualitative data analysis proposed by Miles and Huberman (1992), namely: data reduction, data presentation, and conclusion drawing/verification. This study uses a case study analysis strategy, in which the analysis varies from one stage to another.
IV. RESULTS AND RESEARCH FINDINGS
4.1. An Overview of the Research Object
The free education policy in the province of South Sulawesi is a policy issued by the local government in the era of the leadership of Governor Dr. Syahrul Yasin Limpo, SH., MH., as part of his political promises when running in the election of the Governor of South Sulawesi Province in 2009. After Syahrul Yasin Limpo was elected governor, his political promises were subsequently poured into Regional Regulation (Perda) No. 4 of 2009. The regulation sets out basic principles that show the collective responsibility to be carried by all levels of government in the province (province and district/city). The principles are: a. free education is a policy with a financing scheme for elementary and secondary education that is addressed jointly by the provincial government and the district/city governments, in order to free students in South Sulawesi from the cost of education; b. the allocation of education funding is based on the profile of each school as an educational unit, with the provincial government verifying each school's profile. Through Regulation No. 4 of 2009, the government of South Sulawesi Province followed up with a Memorandum of Understanding (MOU) with the district/city governments. The strong desire to make South Sulawesi Province excel in the education sector cannot be separated from the province's development vision for 2008-2013, namely: for South Sulawesi Province to be among the ten best in the fulfilment of basic rights. This vision was subsequently elaborated into the development vision of the education sector: South Sulawesi among the ten best in the fulfilment of basic rights in education. The ten indicators are measured through performance indicators, and the service of basic rights in the education sector is measured through the provision of facilities to the community based on the competencies of the provincial government, in the form of service development and regulation of the education sector. The education vision of South Sulawesi Province was subsequently poured into the educational mission, namely: 1. improving the management of educational services; 2. improving access and equity in education; 3. improving the quality and relevance of education; 4. improving literacy and the reading culture in society; and 5. developing and utilizing ICT.
The goals and objectives were set forth with the intention of realizing the vision and mission, and are also meant to give direction to the attitudes and behavior of the free education providers. The education objectives concern: 1) the affordability of educational services; 2) the opportunity to obtain a quality education; 3) the improvement and development of knowledge; and 4) institutional development. The targets of free education are: 1) an increase in the average length of schooling (RLS) to 8.3 years; 2) an increase in the literacy rate (AMH) to 95%; 3) a decrease in the dropout rate (DO) to 0.7% for Elementary School (SD) students, 1.00% for Junior High School (SMP), and 1.00% for Senior High School (SMA) and Vocational High School (SMK); 4) an increase in the transition rate (AM) to 98% for primary to junior secondary school graduates and 95% from SMP to SMA/SMK; and 5) the addition of S3 (doctoral) qualifications for 500 educators, staff, employees and education observers. The free education scheme is a cooperation between the provincial government and the district/city governments with a financing pattern in which 60% of the costs are borne by the district/city and 40% by the provincial government. So that free education would be implemented well, Perda No. 4 of 2009 was subsequently poured into South Sulawesi Governor Regulation No. 4 of 2011 on the Implementation of Free Education in South Sulawesi Province. [Figure: the implementation scheme and the organizational structure of free education in South Sulawesi Province, including the Control Team at the provincial level.]
4.2. Research results
The ultimate goal to be achieved, and the end product of this research, is a proposed model of cooperative relations among local governments in the free education program in South Sulawesi Province. To achieve this goal, the study was planned in two phases. The first phase examined the reasons why cooperation among local governments is essential, that is, its significance for the two levels of government (province and district/city) that wish to engage in intergovernmental cooperation, especially in the free education program in South Sulawesi. In addition, the study explored the factors that influence the cooperation between the provincial government of South Sulawesi and the district/city governments.

4.2.1. The significance of cooperation among local governments in the free education program in South Sulawesi Province
In expressing the importance of cooperation among local governments in the free education program in South Sulawesi Province, several main questions serve as references: what form does the partnership between the provincial government and the districts take in the field of free education; what is the legal basis of the cooperation between the province and the districts/cities; and what are the reasons for the cooperation, whether administrative, political or economic? The first matter disclosed is the form of the cooperative relations between the province and the districts/cities. The form of these cooperative relationships can be known through a series of information obtained from key informants in each district/city studied. According to information from key informants, combined with existing documents, the free education policy is a specially designed policy scheme that aims to improve the quality of education outcomes in South Sulawesi, especially at the level of primary and secondary education. As noted, when this policy was rolled out through the political promises of the then gubernatorial candidate Syahrul Yasin Limpo, the quality of primary and secondary education in South Sulawesi Province still ranked at around 20-22 among all provinces in Indonesia. This was confirmed by one informant, the Chairman of Education Commission 2 of the Pare-Pare Parliament, who stated: "... So far as I know, since the beginning, the then candidate for governor, who now serves as governor, campaigned for free education and free health care programs. Then, finally, after he was chosen, it was forwarded to the regions, became the program, and then became campaign material for candidate regents/mayors in each area, including the city of Pare-Pare. That was how the program first appeared" (IJ, 2015). To realize this scheme, the elected governor's political promise was given form in Regional Regulation No. 4 of 2009. Realizing the policy would have been difficult if the entire financing had to come from the local budget (APBD) of South Sulawesi Province alone, so the provincial government invited the district/city governments to join together to provide free education.
Through a Memorandum of Understanding (MOU) between the government of the province of South Sulawesi and the Regents/Mayors of the 23 districts and cities, the policy could be operationalized. The financing was agreed to be tackled jointly by the provincial and district/city governments. In the MOU it was agreed that the allocation of funds would be 40% for the provincial level and 60% for the district/city level. Thus, the legal basis of cooperation between the provincial government and the regents/mayors always refers to Regulation No. 4 of 2009. This was confirmed by one of the key informants, the Head of the Basic Education Division of the Pare-Pare Department of Education, who stated: "... We have a legal basis for this free education cooperation; the legal basis is the MOU between the Governor and the Mayor. The Governor Regulation regulates the procedures for the use of free education funds. Besides, it is also bound by the Provincial Regulation on the Implementation of Education" (AI, 2015).
A similar point was made by another key informant: "... Since 2005 I have served as Chairman of the Education Board; a free education program already existed in North Luwu Regency, Mr. Lutfie had championed it. In fact, Mr. Lutfie had gathered all the principals to explain this free education program" (MA, 2015). This free education program is an integrated program that includes financing, structuring, development, monitoring and control policies. Free education in this case is intended to free students from all kinds of school fees, whether direct or indirect. It is designed as a subsidy of education costs for school operations, to offset the costs that would otherwise be incurred by schools, and as scholarships for outstanding students in order to reduce the schooling costs of learners. The reasons for convening the free education policy can be parsed from the standpoint of the provincial government, as described at the outset: the reality that the quality of education in South Sulawesi remained at a rank between 20 and 22 nationally called for a specific policy to cope with that condition. The reasons why district/city governments support the provincial policy are more varied. From the standpoint of the government of Pare-Pare, for example, the considerations included lowering the school dropout rate in the region as well as social and political reasons, as stated by the Chairman of Commission 2 for Education of the Pare-Pare Council: "... In fact, at the beginning, the reasons for this cooperation that we saw were social. This means that we were planning to run a free education system, but it was not unconnected to the political element, in which the program was being 'sold' during campaigns in the region" (EN, 2015). The same point was made by the Chairman of Commission I for Education of the North Luwu Council: "The free education program is a populist program; in every direct election in each district/city it would be said to succeed, so the political nuances are very thick" (AAM, 2015). The Vice Regent of North Luwu stated: "... So, I have no problem with the partnership, as the education sector is one of the basic needs of the people, a basic service that must be met by the government; so this is a shared obligation, not only a provincial program. Its spirit is that no student should be out of school for lack of fees, or drop out because of the high cost of education, so the local government supports it" (IPI, 2015). The free education policy requires all school children to complete primary and secondary education in the context of the formation of character and noble morals, in line with the norms of decency on the basis of belief in God Almighty. If these obligations are ignored by the parents of students, the government is obliged to write to the parents. With this policy, the objectives of free education, as described in the general overview of the research object, are: 1) improving equitable learning opportunities for all children of school age; 2) improving the quality of graduates; 3) improving the relevance of competency-based education to keep pace with global developments; and 4) improving the efficiency and effectiveness of the implementation of free education to meet the quality and productivity of excellent human resources. Referring to Regulation No. 4 of 2009, the implementation comprises a number of free education programs: 1) a free education program for students whose schools obtain full financing assistance for the organization of education;
2) a subsidized-cost program for poor students whose schools receive only partial financing assistance; and 3) a scholarship program for outstanding students from poor families. The targets of free education are formal primary and secondary education, comprising: Elementary Schools (SD), both public and private; Madrasah Ibtidaiyah (MI), both public and private; special-needs elementary schools (SDLB); Junior High Schools (SMP), both public and private; Madrasah Tsanawiyah (MTs), both public and private; special-needs junior high schools (SMPLB); Senior High Schools (SMA), both public and private; Vocational High Schools (SMK), both public and private; special-needs senior high schools (SMALB); and Madrasah Aliyah (MA), both public and private. The regulation also explains that private schools and pesantren (Islamic boarding schools) may accept or reject the implementation of free education. Schools that refuse are required to guarantee the quality of the teaching and learning process; further quality standards are set by Governor Regulation. Private schools and boarding schools that are not able to meet the required quality of outcomes must be willing to be merged with a nearby private school within a certain time. Private schools and boarding schools that accept the free education policy but still have other components that must be financed beyond the subsidy may collect these from learners with the approval of parents through the School Committee/Parents' Association. The amount of any levy on learners must obtain the approval of the local government, based on the Supervisory Commission for the implementation of free education. The budget allocation for each school is based on the numbers of students, study groups, educators and other education personnel. The allocation is made by the following procedure: 1) each target school providing free education submits its school profile data at the start of the school year, in the prescribed format, in triplicate: to the District/City Free Education Control Team, to the Provincial Control Team, and to the school's own records; 2) the District/City Free Education Control Team recapitulates the data based on the profiles sent by each target school, and then sends them to the Provincial Control Team; 3) the Provincial Free Education Control Team recapitulates the data from the District/City Control Teams, based on the profiles created by each school, and then proposes the budget allocation for the implementation of free education to the Governor. So that the implementation of free education can take place as mandated in Perda No. 4 of 2009 and South Sulawesi Governor Regulation No. 4 of 2011 on the Implementation of Free Education, the government formed a Supervisory Commission for the Implementation of Free Education (Komwas Ledigra). This commission is an institution deliberately set up by the provincial government, specifically tasked with overseeing and controlling the implementation of free education in South Sulawesi Province. The Supervisory Commission is independent in performing its role, so that it can help the provincial government to streamline the use of subsidy funds and improve the quality of graduates of free education provision (Perda No. 4 of 2009).
For this Monitoring Team to function effectively, a Supervisory Team for the provision of free education can be set up in each district/city by a Decree of the Regent/Mayor (Governor Regulation No. 6 of 2011).
Perda No. 4 of 2009 also states that, in addition to the Control Team described in the general overview of the research object, a Free Education Implementation Supervisory Team is formed so that the management of free education can be effective. Implementation is deemed more effective if every beneficiary of free education also faces the threat of sanctions in the event of violations. Therefore, Perda No. 4 of 2009 also states that violations involving free education subsidies are punishable by imprisonment of up to six months or a maximum fine of Rp 50,000,000. Abuse of free education subsidies is penalized under the provisions of the legislation in force. The sanctions specified in the regulation take the form of criminal sanctions and/or administrative sanctions, namely: 1) employment sanctions as stipulated in the regulations on employment; 2) claims for compensation as stipulated in the regulations on state/regional financial management; and 3) delay and/or termination of the provision of free education assistance funds (Perda No. 4 of 2009). The conclusion from the above description, referring to the facts of the cooperative relations between the province and the districts/cities, especially in financing, is that the whole free education policy at all levels of primary and secondary education rests on shared funds (cost sharing), where the basis of the cooperation is the still-low quality of learner outcomes before the program was implemented. On that basis, the provincial and district/city governments in South Sulawesi agreed to jointly overcome the problem of education quality. The cooperation was outlined in a Memorandum of Understanding and implemented consistently. The legal basis of the cooperation is Regional Regulation (Perda) No. 4 of 2009, while the reason for implementing the free education policy through cooperation between the provincial and district/city levels of government is the desire to improve the quality of education in South Sulawesi. With the implementation of the free education policy, it is expected that no more children of school age will go uneducated. Another reason is political consideration, especially the appeal it gives to local leaders, both at the provincial and at the district/city level, particularly at the time of regional head elections.
4.2.2. Factors that influence the cooperation between the South Sulawesi Provincial Government and the District/City Governments in the free education program
The purpose of analyzing the factors that influence the cooperation between the provincial government and the district/city governments in the free education program is to elaborate why the implementation of the free education policy has yet to show results on target. To this end, the causes were analyzed through key informants supported by a range of relevant documents. Information was obtained from key informants through a series of questions such as: which factors influence the effectiveness of the cooperation between the provincial government and the district/city governments; what about the financial capability factor; what about leadership factors and regional political interests; and what about cultural factors and the structure of the local government bureaucracy? As set forth in Perda No. 4 of 2009, the budget allocation for free education activities is aimed at: a) improving equitable learning opportunities for all children of school age; b) improving the quality of graduates; c) increasing the relevance of competency-based education to keep pace with global developments; and d) improving the efficiency and effectiveness of the implementation of free education to meet the quality and productivity of superior human resources. The facts show that the objectives of free education still face various kinds of constraints. As a result, the quality of education in South Sulawesi Province, according to the Human Development Index (HDI), especially in education, still ranked 20th (2014). Tracing the causes of why the quality of education in South Sulawesi remained at the same level as when the program was initiated in 2009, and based on information obtained from key informants, several factors considered influential by the parties directly involved in the intergovernmental cooperation in free education can be described as follows. 1. Financial Capability and Other Resources. The implementation of the free education policy requires a substantial budget amid a quality of education that is still not optimal. The budget is absorbed not only by items directly related to teaching and learning, but also by budget items deemed to support the teaching and learning process. As set forth in Perda No. 4 of 2009, the budget allocation for free education is aimed at three main targets: a) a free education program for students whose schools obtain full financing assistance; b) a subsidized-cost program for poor students whose schools receive only partial financing assistance; and c) a scholarship program for outstanding students from poor families. Although the implementation of free education is based on the principles of equity, quality assurance, participation, transparency, accountability, education and competence, implementation still faces many obstacles.
One of the factors that influence the effectiveness of the cooperation between the provincial and district/city governments is finance, mainly the contribution of each party to the cooperation. As noted, the fund sharing agreed in the MOU is 40% for the provincial government and 60% for the districts/cities. In practice, every region has its own problems, primarily related to the various types of financing at the school level that the district/municipal governments are not all able to cover (EN, 2015). The Chairman of the Education Commission of North Luwu voiced a similar complaint: the use of the province's 40% share of free education funds already follows the Implementation and Technical Guidelines, but these sometimes do not fit the needs of the district government (AAM, 2015). The Vice Regent of North Luwu put it more explicitly: "... Actually, the allocation of free education funds on top of the BOS funds is no longer effective. My suggestion is: why not direct them to other parts of education? The students are already reached by these funds, but the teaching force is hardly touched, despite the vital role of the teacher" (IPI, 2015).
Beyond financial capability, factors are also associated with the still-uneven quality of human resources (educators) in each district/city. Although one of the free education programs includes scholarships for the personnel involved, limited financial capacity ultimately affects how evenly the competence of educators and education personnel is distributed to manage the quality of the teaching and learning process. Sources of funding, infrastructure, the responsibility and commitment of field workers to make the free education program succeed, and the openness of field staff to feedback and criticism from various parties in improving the program are also important matters to address. Another issue linked to financial problems is that the technical manual (Juknis) issued by the government of South Sulawesi Province is felt by implementers at the district level to leave them no creative space to anticipate the different conditions in the field, which in turn affects the process of managing free education funds in the regions. In addition, some districts do not agree with the provisions on the sharing of funds between the province and the districts/cities, because they feel it somewhat burdens the regions. This was confirmed by a key informant, the Head of the Basic Education Division at the Pare-Pare City Department of Education: "... I also have to say, when they want us to be involved in free education, we must be prepared: every region has different characteristics; Pare-Pare's problems are not necessarily the same as Toraja's. The rules used to be very strict, and I said we could not let the governor make rules that govern everything specifically. The critical spirit of free education is there: whatever the regional problems associated with education services are, those are what must actually be addressed. Since regional problems related to free education differ, how can they create uniform guidelines? Maybe we need trousers but Toraja needs shirts, and the guidelines state that everyone has to buy trousers; so we had to buy trousers, but that was not in accordance with the principle of benefit..." (AI, 2015). The same point was expressed by another key informant (AAM, 2015). Besides the issue of financial management, there is also the proportion of the allocation of funds for each type of expenditure, which local authorities deem disproportionate. According to them, the proportions should be balanced against the needs of the students. This was stated by the Regional Secretary of Pare-Pare: "... I used to protest: free education goes too much to the welfare of teachers, but that is not what it is meant for. Teacher welfare can come later, because there is already certification. But how can the money help the community? For example, buying uniforms for needy students, and then improving teacher quality through, say, training..." (AMM, 2015). The facts described above indicate that the problems associated with financial factors still need improvement, especially the proportion of fund sharing between the provincial government and the districts, 40% province and 60% district/city, which the district/city governments perceive as still very heavy, because there are still many sectors in the regions that require priority funding.
The split between types/units of financing is also important to redesign, and no less important is a management approach that is not too rigid, since the regions view the Juknis designed by the provincial government as insufficiently attuned to the diverse realities on the ground at the regional level.
Elite Leadership and Political Interests
In addition to financial factors, another influence is the leadership and political interests of the region. The seriousness of government at both the provincial and district/city levels is the deciding factor in the effective implementation of the free education policy. Many local governments feel that this policy was imposed on them by the provincial government. As a result, the districts/cities must make every effort to run the program and must keep it going despite the heavy budgetary burden; if they did not, they would have to answer to the public. Moreover, whenever leadership changes at the district/city level, the free education program becomes an obligatory selling point during the campaign, even though candidates often do not know exactly how the policy is managed or operated. This condition was confirmed by one key informant: "... In fact, the program originally belonged to the provincial government and was then handed down to the regions. I believe it has not been optimal because of the many problems it raises. From the beginning, this program was campaign material in the regions and then became a program, the vision and mission of the elected provincial government. In the end it was effectively imposed on the regions to run, so the regions also allocate budget for it, because we too must face the public if we refuse. After that, all local governments now use it as campaign material as well, so this is what the campaign material has become" (IJ, 2015). A further factor is that the commitment of leaders at the district/city level differs. Commitment varies between regions because some local parties feel that the free education program was actually initiated by the provincial government and that the province therefore ought to bear a larger proportion of the funding than the local level. This makes the real commitment of leadership differ from one district/city to another.
Organizational Structure and Management
Another factor influencing the effectiveness of the free education policy in primary and secondary education in South Sulawesi Province is the bureaucratic structure of local government. In intergovernmental cooperation, the organizational/bureaucratic structure often affects the effectiveness of cooperation because it bears not only on the coordination mechanism among organizational units but also on the conflicts that arise where the jurisdictions of public organizational units (regional apparatus units) intersect, both vertically (between levels of government) and horizontally between government and the public (state-society). This is especially so in the context of modern governance, which no longer places government as the single actor in making public policy and delivering public services. In substance, governance structures exist to control all activities and the accompanying authority of each actor, and to steer and guide the managers and staff who occupy each structure and position in the bureaucracy. The bureaucratic structure becomes effective when coupled with proper management, particularly joint management that is well controlled together. The complexity in cooperative relations between regional governments generally mirrors the complexity within each program, each of which also has different jurisdictional boundaries. Agranoff (2003) notes that every cooperative relation between governments requires integrated, jointly controlled management to face the complexity that arises in the intergovernmental cooperation being built.

A good policy should be accompanied by good planning, and good planning always contains indicators. In the case of the free education policy, local governments perceive that indicators of implementation success do not yet exist, so it is difficult to measure the extent to which the program has been successful. The absence of default indicators set by the provincial government for evaluating the program causes difficulty. This was confirmed by the Regional Secretary of Pare-Pare: ".... What is certain is that we have no indicators; from the province's side, look at how the quality of education is to be improved. Not only the graduation rate; one should also look at the teachers, whether they are developing or not, and we combine this with certification ..." (AMM, 2015).

A similar statement was delivered by the Head of Primary Education at the Parepare Department of Education: ".... I think the weakness of this program is that performance indicators do not exist; I challenge the province to produce them. For example, today roughly 9 billion is rolled out for free education in Parepare, more than 3 billion from the province and more than 5 billion from us. Then we ask: what measure is used for the achievements from these funds? So yesterday I said to the province: challenge the regions on their SPM (Minimum Service Standards) for free education. For example, set the dropout rate in the SPM at a minimum of below 1%; now, does any free education region dare guarantee that, once free education exists, it will reach the percentage needed to meet those standards? If it does not hit 1% directly, it could, for example, be 1.2% in the first year, down to 1.01% the next, always declining. Then what about the quality of student test scores: is there any movement up, or even down?
When the province points to these results, the question then is: where is the standard of judgment? That is what does not exist ..." (AI, 2015).
Other problems found in the field, besides the frequently late disbursement of funds, were the often arbitrary regulations of the Governor. This was confirmed by the Head of Primary Education at the Pare-Pare Department of Education: "... The obstacle we face is that provincial regulations are not permanent; every year the governor's regulations change. Disbursement of the free education funds is also late, even now. If a new regulation is made, it should reach the users or targets six months before it takes effect. Our budgeting system has what is called the DPA (budget implementation document), in which everything must be stated in detail; once the DPA has been explained and set in the budget, and the guidelines then turn out to differ, we have to wait for another revision. Another obstacle we now face is the need for regulations related to child labor. We find many children who, even when we are willing to give them money, transport money, book money, and clothes, do not want them, because they have become the backbone of their families in earning a living, such as child parking attendants and child laborers. This needs a governor's regulation on the prohibition of child labor, to be followed up in the regions ..." (AI, 2015).
The disbursement mechanism described above is disruptive for governments in the regions, especially when the provincial government does not meet the agreed 40% share of the funds. This occurs when the district/city government cannot meet its agreed 60% quota. As one key informant, a field staff member of the evaluation and curriculum development section at the Luwu Utara District education office, stated: ".... The funds for us were not fully disbursed. What was disbursed for junior secondary and primary schools covered only 9 to 10 months because of a limited budget. Likewise, the assistance from the province was not fully disbursed, so when we disburse the funds the 40:60 split is no longer balanced. That, in turn, violates the MOU ..." (IIM, 2015).
An example of how funds are distributed between the province and the districts/cities was given by the Head of Basic Education at the Pare-Pare City Department of Education: "... The region makes a proposal based on existing data, such as the number of students, classrooms, teachers, and so on, and we explain where the detailed figures come from. If, for example, Pare-Pare requires 1 billion, then 400 million is provided by the province and 600 million by the region; the concept is that simple ..." (AI, 2015).

One very important aspect of the effectiveness of the organizational structure is a functioning coordination mechanism among all the structures involved in the intergovernmental cooperation. According to the Regional Secretary of Pare-Pare, the province made a mistake in the process of this interregional cooperation, namely excluding the cooperation bureau, even though the free education policy is a form of cooperation between levels of government. In his words: ".... You see, a mistake was made at the provincial level; the cooperation between regions was not handled correctly, because it did not go through the cooperation bureau. It ran directly from education department to education department. That is a real error. The bureau should facilitate cooperation between regions; this mistake was the province's. When cooperative relations between regions are formed, our umbrella for cooperating is the provincial cooperation bureau, because all of this work involves the 24 districts/cities ..." (AMM, 2015).

Considering the various issues relevant to the structure and management of the program analyzed above, it can be concluded that the implementation of the free education program encountered many constraints: the technical and operational disbursement of funds; technical guidelines that fail to grasp actual conditions on the ground; achievement indicators that are not well structured, making evaluation difficult; and an organizational structure that does not involve the structures that should exist in every cooperation between levels of government, which in turn affects the coordination mechanism.
Training machine learning models on climate model output yields skillful interpretable seasonal precipitation forecasts
A barrier to utilizing machine learning in seasonal forecasting applications is the limited sample size of observational data for model training. To circumvent this issue, here we explore the feasibility of training various machine learning approaches on a large climate model ensemble, providing a long training set with physically consistent model realizations. After training on thousands of seasons of climate model simulations, the machine learning models are tested for producing seasonal forecasts across the historical observational period (1980-2020). For forecasting large-scale spatial patterns of precipitation across the western United States, here we show that these machine learning-based models are capable of competing with or outperforming existing dynamical models from the North American Multi-Model Ensemble. We further show that this approach need not be considered a ‘black box’ by utilizing machine learning interpretability methods to identify the relevant physical processes that lead to prediction skill. Seasonal forecasting skill in machine learning methods that are trained on large climate model ensembles can compete with, or out-compete, existing dynamical models, while retaining physical interpretability.
Sources of seasonal predictability. The climatology and variability of precipitation across the western United States present a unique seasonal forecasting challenge. Relatively low precipitation totals combined with high year-to-year variability are often received in the form of a relatively small number of atmospheric rivers across winter months 1 . Individually, these storms have proven challenging to forecast at lead times beyond the weather time horizon 2,3 . During the recent severe California drought (years 2012-2016), the challenges for decision-makers under forecast uncertainty were highlighted. As widely documented, the expected positive anomaly of precipitation across California and the Southwest under the major El Niño event of 2015/2016 did not eventuate as anticipated, and instead the devastating drought continued 4 . Given that the economic costs of severe drought can frequently exceed $1B annually across California 5,6 , improving the skill of seasonal precipitation forecasts remains a top priority for water resource managers.
In a seasonal forecasting context, teleconnections are best viewed as probabilistically loading the dice in favor of a certain outcome (i.e., dry versus wet conditions). The El Niño Southern Oscillation (ENSO) is known to be the primary driver of seasonal forecast skill across North America 7-10 , yet its signal-to-noise ratio is such that unexpected outcomes will occasionally occur by chance 4,11,12 . Other studies have shown that traditional indices for describing ENSO variability (i.e., Niño3.4) may not be optimal for capturing the teleconnection to western US precipitation 13 . Studies using both model simulations and observations have also shown that tropical diabatic heating anomalies in key regions across the western tropical Pacific, at times independent from ENSO, substantially increase the likelihood of ridging and subsequently drought conditions across California 14,15 .
The Indian Ocean has been suggested to play a role in mediating the ENSO precipitation teleconnection to North America in certain years 16 . While the tropospheric mid-latitude jet stream has a much shorter memory than tropical sea-surface temperatures (SSTs), certain configurations of the jet have also been shown to be predictable across subseasonal timescales, which has particular relevance for regional precipitation predictability over the western US 11 . In the stratosphere, both tropical and polar stratospheric variability can have appreciable impacts on precipitation across North America, offering a potential source of predictability on subseasonal-to-seasonal timescales 17,18 . To take full advantage of these potential sources of seasonal predictability, forecast models must capture a wide range of these teleconnections and their possible interactions.
Existing approaches to seasonal forecasting. Seasonal forecasting methods can be broadly categorized into dynamical, empirical (i.e., statistical or machine learning), or hybrid-based (i.e., dynamical models combined with empirical approaches). Seasonal forecasts using dynamical models typically involve probabilistic forecasts derived from the spread of the ensemble members, which relates to very small uncertainties in the model initial conditions that grow rapidly with time. The ensemble mean of these individual ensemble members can be used to forecast any signal that might emerge beyond the noise of individual weather events. The North American Multi-Model Ensemble (NMME) project 19,20 has provided an opportunity to estimate the skill of state-of-the-art dynamical reforecasts. Seasonal forecast skill for precipitation across the western US has generally been found to be low beyond a 2-week lead time, but with dry extremes better forecasted than wet extremes 21 . Recently, seasonal forecast skill has been evaluated across different suites of NMME models that represent upgrades across model versions 20 . While seasonal temperature forecast skill was shown to improve across successive upgrades, minimal improvements were found for precipitation, suggesting a potential saturation of skill for models run at this resolution.
Statistical seasonal forecasts of precipitation in the United States have a long history, and often implement classical canonical correlation analysis (CCA) 7,22 . This approach models linear relationships between two sets of variables, commonly between lagged spatial fields of SST and temperature or precipitation. Within the classical CCA framework, it is difficult to include multiple predictor variables and their interactions without overfitting 23 , the temporal nature of the data is not explicitly modeled beyond the use of lagged correlations, and only linear relationships through correlation are directly captured. Despite the large ongoing investments in developing dynamical model ensembles, a relatively small amount of research has explored recent advances in machine learning for improving seasonal forecast skill 24 .
Another important barrier to implementing empirical approaches for seasonal forecasting is the limited sample size of observational data needed in model training 25 , which impacts both traditional statistical approaches and machine learning-based approaches. For example, for reliable modeling of nonlinear interactions between multiple predictor variables, an extensively large number of cases per predictor variable is required to limit overfitting 26 . Since this is clearly not possible with the limited record afforded by current observational or reanalysis products (~40-100 years), one promising alternative involves a hybrid-based approach of training machine learning models on large climate model ensembles 25,27,28 . This approach can greatly increase the training dataset sample size to span several thousand seasons and has achieved skillful ENSO predictions at lead times exceeding 1 year through training convolutional neural networks (CNN) on historical climate model simulations 25 . Other studies have also achieved skillful seasonal forecasts through applying relatively simple regularized regression models to climate model simulations 27,28 . In this study, through testing a wider range of machine learning models, we build upon these prior applications of training statistical and machine learning models on long climate model simulations for the purpose of seasonal prediction (Fig. 1). We implement cluster analysis to target the more predictable large-scale spatial patterns of precipitation, contributing to the observed seasonal prediction skill. Lastly, we implement a range of interpretable machine learning approaches to identify the relevant physical processes that contribute to prediction skill.
Results and discussion
Classification accuracy. After training and calibrating each machine learning model on Community Earth System Model Large Ensemble (CESM-LENS) data (see Section "Climate model training data", and Fig. 1), the models' configurations were frozen and used to forecast seasonal precipitation clusters across the observational record. To do so, the models are driven by input data based on the observed atmospheric and oceanic conditions prior to the target season (e.g., using October and earlier conditions to predict November through January (NDJ)). This 'test set' evaluation of the models provides an estimate of future model performance since the models are making predictions on data unseen in the training phase. The classification accuracy of the four machine learning models tested is shown in Fig. 2a for NDJ seasonal predictions and in Fig. 2b for January through March (JFM) seasonal predictions. The machine learning accuracy (red bars) is presented alongside the NMME accuracy (white bars) and ensemble accuracy (blue bars), presented for the overlapping test period years. For the ensemble methods, Ens_Mode_ML is calculated as the ensemble mode prediction from the four machine learning models, Ens_Mode_NMME is calculated as the ensemble mode prediction from the seven NMME models, and Ens_Mode_Super is calculated as the ensemble mode prediction from all NMME and machine learning models.
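To make the ensemble vote concrete, the sketch below shows one minimal way an ensemble-mode prediction such as Ens_Mode_ML could be computed from individual model forecasts; the array contents are hypothetical, and ties here resolve to the lowest cluster label, which may differ from the paper's convention.

```python
import numpy as np

def ensemble_mode(predictions):
    """Most frequently predicted cluster per season across models.

    predictions: integer array (n_models, n_seasons) of cluster labels 1-4.
    Ties resolve to the lowest label (np.bincount(...).argmax() behavior).
    """
    return np.array([np.bincount(col).argmax() for col in predictions.T])

# Hypothetical forecasts from four machine learning models over five seasons
ml_preds = np.array([
    [3, 4, 1, 3, 2],
    [3, 4, 4, 3, 1],
    [4, 4, 1, 3, 1],
    [3, 1, 1, 4, 1],
])
print(ensemble_mode(ml_preds))  # -> [3 4 1 3 1]
```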
For the NDJ predictions, three of the machine learning models (LSTM, NN, RF) have accuracy in the 40-50% range. While this accuracy remains somewhat modest, it is skillful relative to both baseline methods tested: a random guess model and the most frequent cluster prediction. Furthermore, for NDJ, two of these machine learning models have accuracy above or equal to each of the NMME models tested, and the ensemble prediction accuracy from the machine learning models (Ens_Mode_ML) exceeds that from the NMME models (Ens_Mode_NMME). The classification accuracy for JFM, across the latter half of the water year, is generally found to be improved in the majority of models compared to NDJ, consistent with previous studies 29 . The Random Forest and NN models are found to be top performers in JFM, with classification accuracy >50%. A number of NMME models also show skill relative to baseline methods in JFM, most notably CanCM4 and FLORB, which are also competitive with the machine learning models.
In examining misclassifications in various models, we found a general tendency to misclassify the cluster based on a failure to predict the precise positioning of the anomaly dipole. For example, a widespread dry pattern (cluster 3, Fig. 1) may have a greater tendency to be misclassified into the wet north dry south pattern (cluster 4). Depending on the application, this forecast error is likely less critical as the sign of the anomaly is still correctly forecasted across a large proportion of the Southwest under dry conditions. On the other hand, forecasting widespread dry (cluster 3) under a widespread wet occurrence (cluster 2) can clearly be considered a more problematic forecast. To explore this further, in model post-processing, we computed the accuracy in Fig. 3 after grouping cluster 1 and cluster 2 together (wet southwest group), and clusters 3 and 4 together (dry southwest group). As expected, this additional cluster grouping results in accuracy improvements in both individual models as well as the baseline methods. In JFM, a number of individual models (both machine learning and NMME) and their ensembles display accuracy in the 70-80% range, which is skillful relative to both baselines. For NDJ, individual model accuracy can approach 60-70%, but in the case of NDJ, this level of accuracy is not skillful relative to the random guess baseline. Notably, for JFM, this highlights the increase in classification accuracy that can be achieved by reducing the spatial precision in the target prediction variable. Similar results have been documented in other studies, where aggregating over larger regions has generally been shown to significantly increase the skill by relaxing the predictand spatial requirement and avoiding direct predictions across the smaller, less predictable components 30-32 .
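As a rough illustration of this post-processing step, the snippet below maps the four clusters onto the two broader groups and recomputes accuracy; the observed and forecast labels are hypothetical.

```python
import numpy as np

# Map the four precipitation clusters onto two broader groups:
# clusters 1 and 2 -> wet Southwest (0), clusters 3 and 4 -> dry Southwest (1)
group = {1: 0, 2: 0, 3: 1, 4: 1}

def grouped_accuracy(y_true, y_pred):
    y_true_g = np.array([group[c] for c in y_true])
    y_pred_g = np.array([group[c] for c in y_pred])
    return (y_true_g == y_pred_g).mean()

# Hypothetical observed and forecast clusters for ten JFM seasons
obs = [3, 4, 1, 2, 3, 4, 1, 3, 2, 4]
fcst = [4, 4, 2, 1, 3, 3, 1, 4, 3, 4]
print(grouped_accuracy(obs, fcst))  # 0.9 here: only one cross-group miss
```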
To investigate whether certain clusters are typically better predicted than others, Fig. 4 presents a receiver operating characteristic (ROC) diagram for NDJ and JFM seasons separated by cluster. The analysis is presented for predictions from the RF model, shown to be one of the top-performing models across JFM (Fig. 2b, Supplementary Material Fig. S11). More predictable clusters in the ROC diagram are indicated by larger true positive rates and smaller false-positive rates. In both seasons, cluster 4 (wet north dry south pattern) and cluster 3 (widespread dry) are generally found to be the most predictable patterns. In contrast, the model very rarely predicted the occurrence of the widespread wet cluster 2 (0% of predicted clusters in NDJ and 2.6% in JFM), despite the occurrence of cluster 2 in both the test set (15% in NDJ and 17.5% in JFM, Supplementary Material Table S5) and in the training set (15.7% in NDJ and 16.4% in JFM). All other machine learning models also displayed a reduction in skill for predicting this cluster. Since cluster 2 was the least frequent cluster in the training dataset (Fig. 1d), we tested various approaches across the four machine learning models for increasing the predicted frequency of this cluster. However, modifying the class weights during training and stratified sampling approaches were not found to systematically increase the prediction skill for this cluster across models.
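A one-vs-rest ROC analysis of this kind can be sketched as follows, assuming the classifier exposes per-cluster probabilities (e.g., via predict_proba); all data here are synthetic stand-ins.

```python
import numpy as np
from sklearn.metrics import auc, roc_curve

def per_cluster_roc(y_true, y_proba, cluster, labels=(1, 2, 3, 4)):
    """One-vs-rest ROC curve and AUC for a single precipitation cluster."""
    col = labels.index(cluster)                      # probability column
    binary_truth = (np.asarray(y_true) == cluster).astype(int)
    fpr, tpr, _ = roc_curve(binary_truth, y_proba[:, col])
    return fpr, tpr, auc(fpr, tpr)

# Synthetic class probabilities (rows sum to 1) for 40 test seasons
rng = np.random.default_rng(0)
proba = rng.dirichlet(np.ones(4), size=40)
truth = rng.integers(1, 5, size=40)
fpr, tpr, score = per_cluster_roc(truth, proba, cluster=4)
print(f"cluster 4 AUC: {score:.2f}")
```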
We suggest that the competitive machine learning accuracy results reported here largely stem from: (1) including a large pool of candidate predictor variables and accounting for non-linear relationships; (2) predicting smoother more predictable components of seasonal precipitation through the clusters obtained from K-means clustering. The use of clusters as the predictand has allowed investigation of skill on a per-cluster basis, providing insight into forecast errors in terms of the spatial pattern and sign of the precipitation field being forecast. Furthermore, we suggest that searching for skill in these larger-scale precipitation clusters, as opposed to attempting to predict seasonal precipitation on individual grid cells, is more closely aligned with the spatial scales of the dominant sources of predictability, namely the general positioning of ridges and troughs along stationary Rossby wave trains. In the following sections, we explore the physical plausibility of the associations learned in the model training.
Interpreting the Random Forest model. An often-cited criticism of machine learning is the challenge of interpretability compared to much simpler linear models. The potential lack of interpretability has implications for the perceived credibility of the model, where machine learning models may achieve promising results for the wrong reasons 33 . In this section, we present further analysis that attempts to look inside the machine learning "black box". We focus on the RF model, which was shown to be a top performer in predicting JFM clusters, but note that a number of these approaches are model-agnostic (e.g., permutation importance, partial dependence plots, ALE plots, and LIME) and could be applied to the other machine learning models in future work.
The relative importance of individual predictor variables in the RF model is shown in Fig. 5, based on considering three variable importance measures. Overall, based on these measures, the most important predictor variables for JFM seasonal precipitation in the model are tropical Pacific SST anomalies from July through December (SST_TP_EOF1 Lag 0-5), velocity potential anomalies in the tropics from July through December (VP200_PW_EOF1 Lag 0-5 and VP200_PW_EOF2 Lag 4), and western tropical Pacific SST anomalies from September through December (SST_WP_EOF1 Lag 0-3). There is consistency between the three variable importance measures detailed in Fig. 5, providing further confidence that these predictor variables are indeed providing robust measures of importance. In particular, there is a general negative relationship between the relative mean decrease accuracy and mean minimum tree depth measures, indicating that the variables most important to classification accuracy are also generally positioned closer to the root of the decision tree, as expected. It is also noteworthy that 14 of the 15 top predictor variables correspond to EOF1. Since EOF1 by definition explains the greatest variance, the fact that the Random Forest is capable of distinguishing these components as being more important is encouraging. Furthermore, the RF typically favors the lower lags of the predictor variables (i.e., lags 0-3) as opposed to lags 11 or 12. This aligns with the intuition that conditions closer to the target in time should carry more predictive information.
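A model-agnostic analogue of the mean decrease accuracy measure can be computed with permutation importance, sketched below on synthetic stand-ins for the lagged EOF predictors; the data and parameter values are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the lagged EOF predictors and cluster labels
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
y = rng.integers(1, 5, size=2000)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Shuffle each predictor in turn on held-out data and record the drop in
# accuracy; a large drop marks a variable the model genuinely relies on
result = permutation_importance(rf, X_val, y_val, n_repeats=10,
                                scoring="accuracy", random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:+.3f}"
          f" +/- {result.importances_std[i]:.3f}")
```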
Notably, the top predictor variables highlighted in Fig. 5 are consistent with the current physical understanding of the dominant contributions to western US seasonal precipitation. A substantial body of work has highlighted the importance of ENSO as the primary driver of seasonal forecast skill over North America 4,7,8 . The fact that the first empirical orthogonal function (EOF) of tropical Pacific SST, that most closely related to ENSO variability, is found by the Random Forest model to be the most important predictor variable (Fig. 5) provides confidence in the ability of the model to distinguish the most relevant teleconnections from a large pool of candidate predictor variables. Western tropical Pacific SST variability has also been shown, in both modeling 14 and observational studies 15 , to be particularly important in driving a Rossby wave train response in the midlatitudes and subsequently placing a ridge over the coastal western US that is typically associated with widespread drought. Western Pacific SST variables and their lags are prevalent among the top predictor variables in Fig. 5. Later in this section, we further illustrate that the direction of the western tropical Pacific precipitation relationship is also consistent with past research, and how the association can be modulated by ENSO strength and variability.
Other studies have highlighted that the representation of ENSO by the Niño3.4 index is likely not optimal for capturing the teleconnection to North America 13,34 . The use of velocity potential in the model allows for the representation of not only the direct influence of ENSO in the SST anomaly field but more broadly different dipole patterns of deep convection across the Pacific, the Maritime Continent, and the Indian Ocean. These patterns of deep convection are known drivers of the Rossby wave response out of the tropics 34 , such that targeting these regions through this variable appears to be an important predictor variable in the model (Fig. 5). In contrast, North Pacific SSTs and variability in the subtropical jet do not appear to be among the more important overall predictor variables in the trained model. However, they can carry a small amount of predictive information relevant to certain clusters, as described in the next section.
Moving beyond individual variable importance measures, interactions between pairs of variables are next explored in Figs. 6-7, as well as the direction of these associations. To examine which pairs of variables were among the most important in the RF model, we assessed all pairwise variable interactions across the 5000 decision trees that make up the model. These variable interactions are assessed in terms of the mean conditional depth of variable interactions in the decision tree, where more important variable interactions tend to occur closer to the root of the decision tree. Figure 6 shows the depth and frequency for the 25 shallowest pairwise variable interactions as determined by the mean conditional depth measure. Of these 25 variable interactions, the overall shallowest interaction was found between the first EOF of velocity potential and the first EOF of tropical Pacific SST at different lead times. The most frequent interaction was found between tropical Pacific SST in December and those in October. These specific interactions are further investigated in the following section through partial dependence plots.
Partial dependence plots quantify the direction and strength of influence of each variable on the probability of an individual cluster outcome, after accounting for the mean effects of other variables. For the most frequent pairwise interaction, the partial dependence plot (Fig. 7a; see also Supplementary Material Fig. S13) shows that La Niña-like conditions in December (positive values of SST_TP_EOF1) and La Niña-like conditions in October (positive values of SST_TP_EOF1_Lag2) combine to increase the probability of occurrence of cluster 4. The direction of this association is in close agreement with observations and previous studies, whereby La Niña conditions tend to promote a dry southwestern and wet northwestern United States. The finding that La Niña conditions preceding December further increase the strength of this association is in agreement with the physical intuition that more persistent La Niña events are more likely to produce the canonical La Niña teleconnection response. Other pairwise variable interactions can also be explored in this way to further scrutinize the model's learned associations. For example, in Supplementary Material Fig. S14 we show the partial dependence plot for the interaction between tropical Pacific SST variability and western tropical Pacific SST variability. For this interaction, we observe that the sign of the western Pacific SST anomaly pattern can act to modulate the probability that La Niña conditions would result in widespread dry conditions over the western US. The importance of western tropical Pacific SST variability for driving ridging has been found in previous studies using both models and observations 14,15 .
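In scikit-learn terms, a two-way partial dependence surface for one cluster can be produced roughly as follows; the predictor matrix, feature indices, and forest size are hypothetical stand-ins, and the target argument assumes a recent scikit-learn version with multiclass support.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical stand-ins for the EOF-derived predictor matrix and clusters
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 12))
y = rng.integers(1, 5, size=2000)
feature_names = [f"pred_{k}" for k in range(12)]  # e.g. SST_TP_EOF1, ...

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Two-way partial dependence of the probability of cluster 4 on a pair of
# predictors, averaging over the mean effect of all other variables
PartialDependenceDisplay.from_estimator(
    rf, X, [(0, 2)], target=4,
    feature_names=feature_names, grid_resolution=20)
plt.savefig("pdp_cluster4.png")  # contour plot of the joint dependence
```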
One caveat to interpreting partial dependence plots is that they do not compute the probability of a cluster as a function of predictors while ignoring the effects of other predictor variables 35 . Instead, partial dependence plots account for the mean effect of other variables. This can make interpretation of partial dependence plots challenging in cases where multiple predictor variables are correlated with each other 36 . In contrast to partial dependence plots, ALE plots average and accumulate differences in the prediction across the conditional distribution to isolate the specific unbiased effects of an individual predictor of interest. Therefore, if individual variables in ALE plots show a clear relationship with the cluster outcome, one can be more confident that certain variables do indeed provide a non-spurious unique influence on that outcome. ALE plots for particular variables of interest are given for predicting cluster 3 (Supplementary Material Fig. S15) and cluster 4 (Fig. S16). These plots provide further evidence that, while ENSO plays a dominant role in western US precipitation predictability, other variables beyond ENSO can also provide smaller independent contributions. For example, positive values of SST_TP_EOF1, corresponding to La Niña conditions, most strongly increase the probability of cluster 4 occurrence, which is in agreement with the partial dependence plot from Fig. 7. The ALE plots also confirm that velocity potential at longer lead times is important (from Fig. 7), where a range of negative values for this variable slightly increases the cluster 4 occurrence. As noted earlier, a challenging component of the seasonal prediction problem is distinguishing between cluster 3 and cluster 4, related to the positioning of the anomaly dipole across the western US. Comparing the ALE plots between cluster 3 (Fig. S15) and cluster 4 (Fig. S16) elucidates what information the model has used to distinguish between these clusters when making seasonal forecasts. Since La Niña conditions favor both clusters 3 and 4, it is shown that SST variability in the western tropical Pacific and the Indian Ocean, and to a lesser extent the North Pacific, are used as additional discriminants in the model. For example, while SST variability in the North Pacific was found not to be an overall important predictor earlier (Fig. 5), this can slightly increase the likelihood of cluster 3 occurrence compared to cluster 4.
Explaining individual seasonal forecasts. In the previous section, we provided evidence that the trained model has been capable of learning physically plausible teleconnections for western United States precipitation. In this section, we extend interpretability to go beyond that of the general model structure and towards explaining the model's decision-making process for individual seasonal forecasts. In particular, we probe what combination of factors drives the model to make an incorrect or correct seasonal forecast in a given year.
Using the LIME modeling framework, we fit simpler statistical models around the decision point of the more complex Random Forest model. The LIME modeling framework is applied on a case-by-case basis (local interpretability), with one model fit to an individual season. We present two case studies corresponding to incorrect and correct forecasts. The 2005-JFM seasonal cluster was correctly predicted by the model, with a Dry North Wet South pattern (cluster 1). Estimates of individual predictor variables and thresholds that most strongly supported or contradicted this forecast for cluster 1 are presented in Fig. 8a. As shown, negative values of SST_TP_EOF1, which correspond to a warm tropical Pacific SST anomaly associated with a weak El Niño event across October to December, most strongly favored the prediction of cluster 1. Other variables, such as velocity potential at longer lead times and western tropical Pacific SST anomalies, also favored the cluster 1 forecast but these contributed less to the outcome in this case compared to ENSO.
As noted earlier, this framework can also be applied to investigate drivers of incorrect forecasts, with the forecast for 2016-JFM presented here as an example (Supplementary Material Fig. S17). In this case, the model incorrectly predicted cluster 1 to occur, whereas cluster 4 occurred. Again, weak El Niño conditions were the main driver of this incorrect prediction. However, patterns of SST variability in the western tropical Pacific and the Indian Ocean presented a tug-of-war to contradict the prediction but with overall smaller weights. This incorrect prediction was not unique to the Random Forest model, with all other machine learning models and NMME models analyzed making similar incorrect forecasts. Various other studies have concluded that this particular event was largely dominated by less predictable atmospheric variability, making it difficult to predict on seasonal timescales 4,11,12 . It is important to note that the LIME framework is only an approximate estimate of the machine learning model's more complex decision-making process at that locality. In the two cases presented, the simple model was found to provide a reasonably good approximation (R² = 0.4-0.5) of the model complexity, but less explainable cases can also be found. While acknowledging this caveat, we suggest that the ability to explain individual forecasts in this way could be very useful. In dynamical models used for seasonal forecasts, it is often difficult to formally quantify what boundary conditions have most heavily influenced an individual forecast without running further resource-expensive diagnostic experiments. Subsequently, in practice, it often takes considerable time after the event for diagnostic studies to reevaluate and quantify the physical drivers of a forecast, and interpretation from this can be inconclusive. In contrast, here we have illustrated how local interpretable machine learning can provide plausible explanations for what variables contributed most strongly to a particular forecast outcome at a negligible computational cost. In practice, the main advantage of this is that these local interpretability plots (Fig. 8) can rapidly be produced and presented alongside the seasonal forecast in real-time.
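A minimal sketch of this local explanation step, using the lime package on synthetic data, is given below; the feature names, class encoding, and forest settings are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training matrix, labels, and a single season to explain
rng = np.random.default_rng(3)
X_train = rng.normal(size=(5000, 12))
y_train = rng.integers(0, 4, size=5000)           # clusters encoded 0-3
feature_names = [f"pred_{k}" for k in range(12)]  # e.g. SST_TP_EOF1_Lag2

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["cluster1", "cluster2", "cluster3", "cluster4"],
    mode="classification")

# Fit a sparse local surrogate around one forecast and list the predictor
# thresholds that most strongly supported or contradicted the prediction
season = X_train[0]   # stand-in for one season's observed predictors
pred = int(rf.predict(season.reshape(1, -1))[0])
exp = explainer.explain_instance(season, rf.predict_proba,
                                 labels=[pred], num_features=8)
print(exp.as_list(label=pred))
```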
Implications and future directions. The machine learning approach to seasonal forecasting tested here shows promise both in terms of competitive accuracy and in terms of ability to learn physically plausible teleconnections. The proposed interpretable machine learning approach could also be applied more broadly in future work to better understand and compare teleconnections between different climate models, as well as assessing the possibility of non-stationarity in certain teleconnections due to climate change.
A number of different pathways may exist for further improvements in seasonal forecast skill. One major advantage is that training machine learning models on climate model simulations can leverage substantial existing investments in large climate model ensembles. Indeed, the number of modeling groups performing large initial condition model experiments has increased considerably in recent years 37 , providing further opportunities to train on different climate models and better understand different structural uncertainties that contribute to seasonal forecast uncertainty. This large and growing set of model simulations available for training is in contrast to the traditional approach of training on observational data, where only a single additional training sample becomes available each year. In future work, we plan to assess potential skill improvement from training on certain climate model simulations that have a reasonably well resolved quasi-biennial oscillation (QBO), which contains a large amount of memory and has been highlighted recently as an important source of predictability for North American precipitation 18 .
It is notable that the Random Forest, one of the computationally simpler machine learning models tested, ranked as one of the top-performing models. This carries a practical advantage since the Random Forest is typically more readily interpretable (as explored in Figs. 5-7) and has fewer tunable parameters with relatively little sensitivity to these parameter choices. For the LSTM model, a simple implementation (single layer LSTM followed by 20 neurons in a single layer) was found to achieve competitive results compared to more complex architectures. However, future work may explore different LSTM implementations, including testing deeper LSTM architectures and bidirectional LSTM 38 . Transfer learning may be another approach to further improve skill 25 . Transfer learning in this context involves generating the main associations and weights from training on large climate model simulations, and then updating these pretrained weights on a separate set of observations.
Summary
This study has tested a novel approach for seasonal forecasting of western US precipitation. In particular, a range of machine learning approaches have been trained on large climate model simulations, and their predictions combined in an ensemble to predict large-scale patterns of precipitation anomalies. The main findings from this study are as follows.

Classification accuracy is generally higher in JFM compared to NDJ seasons. In both seasons, the machine learning models display skillful predictions relative to baselines, and can compete with or out-compete dynamical forecast models from NMME.
The widespread wet pattern of precipitation (cluster 2) was consistently the most difficult pattern to forecast in all machine learning models. Post-processing, where different clusters are combined, can increase the accuracy further (accuracy: 70-80%) but comes at the cost of providing a less precise forecast (i.e., larger spatial smoothing).
Focusing on the Random Forest model, we have investigated both global and local interpretability. In terms of global interpretability, as expected, ENSO is the dominant source of seasonal predictability. Other variables, namely velocity potential anomalies across the Indian Ocean and Maritime Continent, and SST anomalies across the western tropical Pacific can modulate the probability that ENSO will result in a certain precipitation cluster. The interpretability results provide confidence that the model is capable of learning physically plausible teleconnections from a large pool of candidate predictor variables.
Local interpretability provides estimates of what variables have influenced an individual seasonal forecast. Examples of local interpretability are presented showing how, for specific seasons, conditions beyond ENSO have influenced a specific forecast. Presenting local interpretability plots alongside the seasonal forecast may help build trust in the predictions from machine learning.
We suggest that this approach to seasonal forecasting offers a promising path forward. Compared to the traditional approach of training statistical models on observational data, the large sample size enabled by training on large climate model simulations helps overcome sampling issues and allows for nonlinear interactions to be represented. Further skill improvements may come from training machine learning models on multiple climate models through the same framework.
Methods
Overview of the framework for machine learning with large ensemble climate simulations. This section provides an overview of the framework implemented here for machine learning-based predictions from large ensemble climate simulations. Figure 1 outlines this methodology, showing oceanic (Fig. 1a) and atmospheric (Fig. 1b) predictor variables and regions from the CESM-LENS model, described in Section "Climate model training data" below. The predictor variables are based on applying EOF analysis to different regions and variables (described in Section "Machine learning predictor variables"). Using these EOF-derived predictor variables in model training, the four machine learning models tested in this study are shown in Fig. 1c. The predictand variable (Fig. 1d) targets widespread spatial patterns of precipitation derived from applying K-means clustering to CESM-LENS precipitation data (Section "Machine learning predictand variable"). Predicting the occurrence of these larger-scale precipitation features (Fig. 1d) has the advantage that these spatial scales are well aligned with the typical area of ridges and troughs along Rossby wave trains that form an important source of seasonal predictability in this region 14,15,39,40 . After training and calibrating each of the four machine learning models on CESM-LENS data through this framework, the same models are then forced by observational and reanalysis data (described in Section "Observational and reanalysis data") and used to make out-of-sample seasonal forecasts for NDJ and JFM seasons across the observed record (1980-2020).
Climate model training data. The limited record length of observational data at the seasonal time resolution leads us to explore the use of climate model data when training various machine learning models. For this purpose, we use simulations from the CESM-LENS single-model large ensemble 41,37 , comprising 40 ensemble members spanning years 1920-2005 with historical forcing from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) design protocol. The CESM-LENS uses the Community Earth System Model v1 (CESM1), with the Community Atmosphere Model (CAM) v5, run at approximately 1° resolution with fully coupled atmosphere, ocean, land, and sea-ice components. Each of the CESM-LENS ensemble members represents a physically plausible and unique trajectory of the climate system (e.g., different phases of low-frequency variability will occur at different times across the historical record) solely due to internally generated climate variability. Data from CESM-LENS used for training machine learning models were as follows: SST, zonal and meridional wind at 200 hPa (U200, V200), velocity potential at 200 hPa (VP200), geopotential height at 500 hPa (Z500), and total precipitation.
A number of studies have evaluated the performance of CESM1 in terms of simulating low-frequency variability in the tropics (i.e., including ENSO) and related teleconnections into the North Pacific 42-45 . These considerations are relevant to training machine learning on CESM1, as systematic biases in teleconnections also have the potential to be learned during training. A primary focus of CESM1 model development was to improve ENSO variability and teleconnections as these were known to have large deficiencies across previous model versions (e.g., CCSM3) 45 . The much-improved fidelity of ENSO variability and teleconnections across successive versions of the model was largely the result of targeted changes to the atmospheric deep convection parameterization 46,47 . CESM1 has generally been shown to perform well in terms of temporal characteristics, including the asymmetry of El Niño and La Niña duration 42 . However, the amplitude and variability of ENSO events are both known to be larger than that observed across the 20th century 42 .
Machine learning predictor variables. All predictor variables first underwent dimension reduction through EOF analysis. The purpose of this was to: (1) isolate dominant spatial modes of variability and (2) reduce the amount of potentially redundant data by transferring from gridded data to a smaller number of principal components more manageable in model training. A similar approach is typically implemented as a first step in more traditional CCA for seasonal forecasting. All predictor variables were based on monthly mean values. For each predictor variable, the first four EOFs were retained, which collectively explain at least 50% of its variance. The spatial patterns of EOFs and the percent variance explained are shown in Supplementary Material Figs. S2-S8.
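This EOF-based dimension reduction can be sketched as a PCA over the flattened anomaly field, as below; the grid shape, region, and the sqrt(cos(latitude)) area weighting are illustrative assumptions rather than the paper's exact preprocessing.

```python
import numpy as np
from sklearn.decomposition import PCA

def eof_predictors(field, lats, n_eofs=4):
    """Reduce a gridded anomaly field to its leading EOF time series.

    field: array (n_months, n_lat, n_lon) of monthly anomalies
    lats:  latitude values, used for sqrt(cos(lat)) area weighting
    Returns principal-component time series, shape (n_months, n_eofs),
    and the fraction of variance each EOF explains.
    """
    weights = np.sqrt(np.cos(np.deg2rad(lats)))[None, :, None]
    flat = (field * weights).reshape(field.shape[0], -1)
    pca = PCA(n_components=n_eofs)
    pcs = pca.fit_transform(flat)   # PCA centers the data internally
    return pcs, pca.explained_variance_ratio_

# Hypothetical tropical Pacific SST anomalies: 1000 months on a 20x60 grid
rng = np.random.default_rng(4)
sst = rng.normal(size=(1000, 20, 60))
lats = np.linspace(-15, 15, 20)
pcs, var = eof_predictors(sst, lats)
print(var.round(3))  # variance explained by EOF1-EOF4
```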
As detailed in Fig. 1a, EOF-derived predictor variables for SST were chosen to target the following regions: tropical Pacific (TP), western tropical Pacific (WP), Indian Ocean (IO), and North Pacific (NP). These regions were targeted based on previous studies (see Section "Sources of seasonal predictability") indicating plausible physical teleconnections to western US precipitation. EOF-derived predictor variables for atmospheric circulation (Fig. 1b) target velocity potential anomalies at 200 hPa across the wider tropical Pacific (PW) and the Indian Ocean, zonal wind anomalies at 200 hPa across the North Pacific (NP), and geopotential height anomalies at 500 hPa across the Eastern North Pacific (ENP). The velocity potential field targets broad spatial patterns of anomalous deep convection that drive a Rossby wave response in the extratropics. North Pacific subtropical jet variability was also included because of the waveguiding influence on tropical-extratropical teleconnections 48-50 as well as the relevance to western US precipitation through more localized jet regimes over the northeast Pacific 11,51 . Other variables considered included tropical and high-latitude stratospheric variability, including the QBO and sudden stratospheric warming events. However, stratospheric variability is not well resolved in this low-top model version of CESM1 52 , and as expected, sensitivity testing found that these variables did not add additional skill and were subsequently removed.
Machine learning predictand variable. The predictand variable is derived from K-means clustering of standardized seasonal (3-monthly) precipitation anomalies over the western US for two separate seasons: NDJ and JFM. Cluster analysis was used to isolate recurrent large-scale features of precipitation variability in this region (Fig. 1d). K-means clustering requires the user to select a specific number of clusters, which we set as four clusters for the following reasons. Previous research has highlighted that the first two modes of seasonal precipitation anomalies in this region explain approximately 60% of the variance 39 . The first mode is associated with widespread wet/dry conditions across the entire region, while the second mode is associated with a north-south dipole of precipitation anomalies 53 . The first four clusters from K-means (Fig. 1d) are a very close match to these dominant modes of seasonal precipitation. Furthermore, by choosing four clusters, the clusters trained from CESM-LENS (shown in Fig. 1d) are a close match to those trained from observational data (Supplementary Material Fig. S1). While including more than four clusters can provide more regional detail in precipitation, the prediction accuracy from the machine learning methods tested was found to decrease when additional clusters were added in sensitivity testing. This suggests that the four clusters extracted from K-means broadly represent the main predictable components of precipitation in CESM-LENS on seasonal timescales.
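A minimal version of this clustering step, assuming the seasonal anomalies have already been standardized and flattened over the western US grid, might look like the following; the array sizes are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical standardized JFM precipitation anomalies: each row is one
# simulated season, flattened over the western US grid cells
rng = np.random.default_rng(5)
precip_anom = rng.normal(size=(3400, 500))   # ~3400 model seasons, 500 cells

kmeans = KMeans(n_clusters=4, n_init=20, random_state=0).fit(precip_anom)
labels = kmeans.labels_             # cluster assignment per season (predictand)
patterns = kmeans.cluster_centers_  # four large-scale anomaly patterns

# An observed season is later assigned to the nearest trained centroid
obs_season = rng.normal(size=(1, 500))
print(kmeans.predict(obs_season))
```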
Observational and reanalysis data. After training on CESM-LENS, the trained machine learning models were taken offline and tested for making out-of-sample predictions on observations (years 1980-2020). For this purpose, the same set of observed predictor variables is needed as was used in model training. Specifically, SST data were obtained from ERSSTv5 54 , all atmospheric circulation fields were from ERA5 55 , and the total precipitation was from CPC-Unified at 0.25° resolution 56 . The dimension reduction of these predictor and predictand variables was applied to both CESM-LENS and observations.

Random Forests. Random Forests (RF) 57 is a supervised machine learning algorithm consisting of an ensemble of decision trees. Different decision trees are developed by taking random subsets of predictor variables and data cases, which reduces the correlation between individual trees. The purpose of using multiple decision trees is that the variance in the prediction is reduced compared to predictions from individual trees, which are often prone to overfitting on the training data. Each tree is built using a bootstrapped sample, and records that are not used in building the decision tree are referred to as the out-of-bag sample. In a number of settings on tabular data, RFs are often shown to be capable of producing similar classification accuracy compared to more complex machine learning methods, while retaining a somewhat higher degree of interpretability. Another advantage of RFs is the relatively small number of parameters required in model tuning and the relative insensitivity to these choices 57 .
Two parameters were tuned in the RF model: the number of trees set to 5000, and the number of variables randomly sampled at each split set to 10. These parameter choices were based on tuning across the CESM-LENS training dataset, though sensitivity testing revealed stable results across a number of parameter choices provided that the number of trees was sufficiently large. For the RF training, predictor variables were lagged based on the memory of each predictor variable (Supplementary Material Figs. S9 and S10) and through sensitivity testing of the out-of-bag accuracy across the CESM-LENS training data to how each variable was lagged. The first EOF of each SST variable/region was lagged at 1-month intervals up to 12 months, and the second EOFs were lagged up to 6 months. The exception was for SST in the North Pacific, which was not lagged, as we found very little sensitivity to adding additional lags. The first EOF of U200 was also lagged up to 6 months, given the memory of this variable. All other predictor variables were not lagged, such that only October values were used to make the NDJ predictions and only December values were used to make the JFM predictions.
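Putting the lagging and the quoted parameter choices together, a schematic training loop might look like this; the table layout and data are simplified stand-ins for the CESM-LENS predictors, with each lag stepping back one row (one month in the paper's setup).

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def add_lags(df, column, max_lag):
    """Append lagged copies of one predictor (lag 1 = previous row)."""
    for lag in range(1, max_lag + 1):
        df[f"{column}_Lag{lag}"] = df[column].shift(lag)
    return df

# Hypothetical table: one row per forecast case, columns are EOF time series
rng = np.random.default_rng(6)
df = pd.DataFrame({"SST_TP_EOF1": rng.normal(size=3000),
                   "VP200_PW_EOF1": rng.normal(size=3000),
                   "cluster": rng.integers(1, 5, size=3000)})
df = add_lags(df, "SST_TP_EOF1", 12)   # SST EOF1 lagged up to 12 months
df = df.dropna()                       # drop rows with incomplete lag history

X = df.drop(columns="cluster")
y = df["cluster"]

# Parameter choices quoted in the text: 5000 trees, 10 variables per split
rf = RandomForestClassifier(n_estimators=5000, max_features=10,
                            oob_score=True, random_state=0, n_jobs=-1)
rf.fit(X, y)
print(f"out-of-bag accuracy: {rf.oob_score_:.3f}")
```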
XGBoost. Extreme gradient boosting (XGBoost) is a recent implementation of gradient boosted decision trees for supervised machine learning 58 . XGBoost relies on the concept of boosting: that multiple "weak" learners (i.e., each underfit to the data) can be more effectively combined to produce a single "strong" learner. The training proceeds by iteratively growing individual decision trees that target misclassifications from the previous weak learners, giving them additional weight across subsequent training iterations. Recently, XGBoost has been a consistent top performer in terms of classification accuracy for tabular datasets across an extensive range of applied machine learning problems 58 .
Compared to RF, XGBoost requires a larger number of parameters to be tuned in the model training. Among the most important parameters are the number of rounds for boosting (nrounds), how deep the trees can grow (max_depth), the learning rate to control how conservatively the boosting is performed (eta), and the minimum loss reduction required to make a further tree partition (gamma). These parameter values (Supplementary Material Table S1) were tuned based on multiclass classification accuracy in the CESM-LENS validation dataset from a random search across a range of values, performed separately for models predicting JFM and NDJ seasons. For tuning purposes, we split the CESM-LENS dataset as 80%/20% for training and validation, respectively, with the same variables and lags as in the RF model, described in Section "Random Forests".
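A random search over these four parameters can be sketched with the Python xgboost API, where nrounds and eta correspond to n_estimators and learning_rate; the data and search ranges below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

# Hypothetical training data; cluster labels encoded 0-3 for xgboost
rng = np.random.default_rng(7)
X = rng.normal(size=(4000, 20))
y = rng.integers(0, 4, size=4000)

# nrounds -> n_estimators, eta -> learning_rate in the Python API
search = RandomizedSearchCV(
    XGBClassifier(),
    param_distributions={
        "n_estimators": randint(100, 1000),
        "max_depth": randint(2, 10),
        "learning_rate": uniform(0.01, 0.3),
        "gamma": uniform(0, 5),
    },
    n_iter=20, scoring="accuracy", cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_)
```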
Neural networks. Neural networks (NN) approximate nonlinear functions and processes 59 through a series of feed-forward matrix operations. NNs pass predictor input variables through a series of hidden layers, to a specified output layer. Each layer is described by the number of nodal points in that layer with the initial layer being the number of input variables. Nodes from adjacent model layers are connected via model weights. The hidden nodal point values are determined by the sum of the product of associated model weights and the input values from the previous layer. Each nodal point is then 'activated' by a nonlinear function before passing the variables to the following layer. The task of training a NN is to learn the optimal nodal weights, computed iteratively through backward optimization and gradient descent. In particular, each iteration seeks to minimize the cost of a specified loss function, by determining the gradient field of the weights and taking a small step in the direction opposite this gradient. The series of multiple hidden layers, and the optimization process, gives rise to the term Deep Learning.
A Deep Feed Forward NN was implemented in which all nodes are fully connected, without enforcing sparsity. The final architecture and parameters were selected through a hyperparameter search, with the minimum error on the validation dataset (20% of the CESM-LENS dataset) used to determine the final network parameters and architecture (Supplementary Material Table S2). The NN uses an Adam optimizer 60 , the Rectified Linear Unit (ReLU) activation function, a 0.001 learning rate, a batch size of 100, dropout regularization, and a categorical cross-entropy loss. The network was trained for 50 epochs and saved whenever the validation accuracy improved. All predictor variables (Section 2.3) were lagged by 12 months and used in the final model. Class imbalances were accounted for by applying a scalar value that weights the cross-entropy loss function during training proportionally to categorical representation in the training dataset.
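A hedged Keras sketch of this setup follows; the optimizer, activation, learning rate, batch size, loss, epoch count, and checkpointing follow the text, while the hidden-layer sizes, dropout rate, and class-weighting scheme are illustrative stand-ins (the real values live in Supplementary Table S2):

```python
import numpy as np
from tensorflow import keras

num_features, num_classes = 40, 4
X_train = np.random.randn(2000, num_features)            # stand-in predictors
y_int = np.random.randint(0, num_classes, 2000)          # stand-in labels
y_train = keras.utils.to_categorical(y_int, num_classes)

model = keras.Sequential([
    keras.layers.Input(shape=(num_features,)),
    keras.layers.Dense(64, activation="relu"),           # sizes illustrative
    keras.layers.Dropout(0.3),                           # dropout regularization
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])

# One common balanced-weighting scheme, as a stand-in for the class-imbalance
# scalar described in the text.
counts = np.bincount(y_int, minlength=num_classes)
class_weight = {c: len(y_int) / (num_classes * n) for c, n in enumerate(counts)}

checkpoint = keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_accuracy", save_best_only=True)
model.fit(X_train, y_train, validation_split=0.2, epochs=50, batch_size=100,
          class_weight=class_weight, callbacks=[checkpoint])
```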
LSTM networks. Long short-term memory (LSTM) networks are a type of recurrent NN that can learn dependence in sequential data 61,62 . LSTM networks have an internal state that aims to model information about past observations (inputs) for a variable number of steps in the sequence. The advantage of this approach, compared to traditional feed-forward NNs, is that persistence and memory of past information are explicitly modeled in the network and used when making predictions. In the context of seasonal forecasting, this feature of LSTMs is valuable since a sequence of past events (e.g., the gradual development and persistence of ENSO over several months), as opposed to an individual data point/month, likely provides additional predictive information.
All predictor variables were used in the final model, but here the sequence length history (i.e., how far the model looks back at past events) was treated as a hyperparameter. Other hyperparameters were the number of LSTM neurons, training epochs, and batch size, which were tuned against the validation dataset to determine the final network values (Supplementary Material Table S3). The network architecture used for both the JFM and NDJ models was fixed, with the LSTM layer followed by a single layer of 20 neurons. The LSTM network used an Adam optimizer 60 with a categorical cross-entropy loss. Dropout regularization was implemented to reduce overfitting, whereby network nodes are probabilistically dropped out of weight updates during model training. Further details on the training data and software implementation for each machine learning model are presented in Supplementary Material Tables S4 and S5.
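A minimal sketch of this architecture follows; we read "the LSTM layer followed by 20 neurons in a single layer" as LSTM then a 20-unit dense layer, which is an assumption, and the LSTM width, sequence length, and dropout rate below are illustrative values that would themselves be tuned:

```python
import numpy as np
from tensorflow import keras

seq_len, num_features, num_classes = 12, 20, 4   # seq_len: tuned look-back
model = keras.Sequential([
    keras.layers.Input(shape=(seq_len, num_features)),
    keras.layers.LSTM(32),                        # LSTM units: illustrative
    keras.layers.Dropout(0.3),                    # dropout regularization
    keras.layers.Dense(20, activation="relu"),    # single 20-neuron layer
    keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(),
              loss="categorical_crossentropy", metrics=["accuracy"])

X = np.random.randn(500, seq_len, num_features)  # (samples, months, predictors)
y = keras.utils.to_categorical(np.random.randint(0, num_classes, 500), num_classes)
model.fit(X, y, validation_split=0.2, epochs=30, batch_size=32)
```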
Interpretable machine learning. The goal of interpretable machine learning is to quantify which predictor variables are overall most influential in the model (global interpretation), as well as estimating how an individual classification/forecast was made (local interpretation). In recent years, applied machine learning research has focused heavily on developing these techniques, which are directly relevant to applications in atmospheric science [63][64][65] . Here, we describe the implementation of three separate approaches to target global and local interpretability, specifically from the RF model.
First, to target global interpretation, we explore the most important individual predictor variables used for making seasonal predictions. Three metrics were considered: relative mean decrease accuracy, mean minimum tree depth, and the root metric. Relative mean decrease accuracy quantifies the decrease in classification accuracy from shuffling individual predictor variables. If shuffling results in a relatively large decrease in the test set accuracy (i.e., larger errors across the out-of-bag samples), then the variable is seen to be important, since shuffling has broken a previously important relationship. Mean minimum tree depth quantifies the average depth at which a particular predictor variable first appears across multiple decision trees. Since more consequential variables are positioned closer to the root of the decision tree, a lower minimum tree depth signifies larger variable importance. Similarly, the root metric measures variable importance by counting the number of times a particular predictor variable is positioned at the root of a decision tree.
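The shuffling-based metric can be sketched with scikit-learn's permutation importance (the depth-based metrics require tree-introspection utilities not shown here; the fitted model and held-out arrays are assumed to exist, e.g., from the RF sketch above):

```python
# Mean-decrease-accuracy style importance: shuffle each predictor on a
# held-out set and measure the drop in accuracy.
from sklearn.inspection import permutation_importance

# rf, X_val, y_val assumed from earlier training plus a held-out split.
result = permutation_importance(rf, X_val, y_val, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:10]:
    print(f"feature {idx}: mean accuracy decrease = "
          f"{result.importances_mean[idx]:.4f}")
```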
Second, also targeting global interpretation, we explore the most important predictor variable interactions between all possible pairs of predictor variables. To do so, across all decision trees, we compute the mean conditional depth of all pairwise interactions in terms of their frequency of interaction occurrence and their depth of interaction occurrence 66,67 . This enables analysis of variable interaction importance since variables that interact closer to the root of the tree and with a higher frequency will be more consequential overall in determining the prediction. Partial dependence plots compute the probability of an outcome as a function of two predictor variables after accounting for the average effects of all other variables 63,68 . This analysis allows us to examine the average direction and strength of influence for key predictor variables, as well as examining the potential influence of nonlinear interactions on the probability of a particular outcome. Accumulated local effects plots (ALE plots) 36 are also computed, which can be considered an extension of partial dependence plots. ALE plots are designed to limit potential issues associated with the multicollinearity of predictor variables that can make partial dependence plots challenging to interpret.
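For the pairwise effects, a minimal partial-dependence sketch follows (again assuming scikit-learn; the conditional-depth interaction statistics come from specialized tree-introspection tools, and ALE plots would require a separate package such as PyALE, neither of which is reproduced here):

```python
# Two-way partial dependence of one cluster's probability on two predictors;
# the feature indices and target class are illustrative.
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

PartialDependenceDisplay.from_estimator(rf, X_val, [(0, 1)], target=0)
plt.show()
```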
Third, we implement the local interpretable model-agnostic explanations modeling framework (LIME) 69 to explain why a particular cluster was forecasted on a particular target date (i.e., local interpretation). In the context of seasonal forecasting, LIME is used to explain and rank which predictor variables were used by the model to make a particular forecast. The LIME modeling framework aims to explain the importance of predictor variables by finding a simple linear solution that approximates the original model's decision function near to a particular case. Through perturbing input variables across the model's nonlinear decision function, this much simpler linear model is fit to explain how the more complex RF model behaves locally.
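A short sketch with the lime package follows (an assumed implementation; the feature names, class names, and the explained instance are illustrative):

```python
# Local explanation of a single forecast: fit a simple linear surrogate
# around one case and rank the locally most influential predictors.
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                                            # training predictors
    feature_names=[f"x{i}" for i in range(X_train.shape[1])],
    class_names=["cluster1", "cluster2", "cluster3", "cluster4"],
    mode="classification",
)
exp = explainer.explain_instance(X_val[0], rf.predict_proba, num_features=8)
print(exp.as_list())  # ranked local contributions for the forecasted case
```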
Comparisons to dynamical forecast models. The output of a number of dynamical models from the North American multi-model ensemble (NMME) phase 2 models 19,20 was analyzed to compare skill with the machine learning-based models (Supplementary Material Table S6). In particular, the 3-month forecasted standardized seasonal precipitation anomaly from each model's ensemble mean was projected onto the K-means clusters. This projection into cluster-space allowed direct comparisons between the machine learning-based models and the dynamical models. Forecasts for November through January (NDJ) were initialized in October, and forecasts for January through March (JFM) were initialized in December, with all available hindcast years used in each model.
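One plausible reading of this projection step (the text does not spell out the distance metric or preprocessing, so both are assumptions here) is a nearest-centroid assignment of each forecast anomaly map:

```python
# Assign a flattened, standardized seasonal anomaly field to the closest
# K-means centroid; Euclidean distance is an assumed choice.
import numpy as np

def project_to_cluster(anomaly_map: np.ndarray, centroids: np.ndarray) -> int:
    """anomaly_map: (ngrid,) anomaly field; centroids: (k, ngrid) cluster means."""
    d = np.linalg.norm(centroids - anomaly_map[None, :], axis=1)
    return int(np.argmin(d))  # index of the best-matching cluster
```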
THE MUSICALITY OF NARRATIVE FILM IN ‘THE HEADLESS WOMAN’
This paper provides an overview of the concept of musicality in fiction cinema language, building on the comparative analysis of both art forms and considering cinema as a potentially musical construction. Furthermore, the examination of polyphonic musical textures and their methodical application in the formal analysis of Lucrecia Martel's cinema (namely in her film The Headless Woman, 2008) provides a new perspective on the aesthetic values of the Argentinean filmmaker's work with sound and image, which unveils other forms of assuming fiction film narration.
Introduction
It is often assumed that cinema is a predominantly visual art, and even in the regular argot of moviegoers the verb "watch" is widely used to describe the action of experiencing a film. Enough has already been said regarding the dominance of image creation-perception and the undervalued potential of sound design, focusing the discussion on the visual capabilities of sound and on the most concrete and narrative aspects of sound in film creation. However, not much has been said about the sound, sonic or musical aspects of film-image creation, and yet the musical formality inherent in cinema is undeniable. Due to micro- and macro-structural properties of film, in both sound and image, such as rhythm, time and movement, its similarity with music composition is an area of film studies and comparative arts analysis that has been explored since the first decades of the 20th century. One of the purposes of this paper is to add some considerations to the conception of cinema as a musical form, experiencing it not only as a visual and sound arrangement, but also (paraphrasing Germaine Dulac's Visual Symphony) as some sort of music for the eyes.
These comparative studies between formalities in music and film should not be taken as determining a specific way of composing music in filmic terms, nor of structuring a film as if it were a particular musical structure, because this argument is highly debatable. I will focus mainly on Lucrecia Martel's latest film, "The Headless Woman" (2008), whose narrative departs from the intimate level of its protagonist, extends to the complexity of the family she belongs to, and, at a bolder level of analysis, to the encounter of two social statuses in the northern region of Argentina, which paradoxically also becomes a simultaneous chant of voices. However, polyphony and musicality as such will always remain the main path for discovering these different dimensions of her cinematic world from a purely formal methodology.
The first part of this paper will make a brief attempt to grasp the concept of musicality in fiction film creation, supported by previous thoughts and research from theorists and thinkers of film studies and comparative arts, trying to set a more stable ground for the understanding of this particular proposal.
The next part will focus deeper on this notion. This process of thinking about an art form resembles, to a certain extent, the thoughts of Wassily Kandinsky, whose approach to painting tried to stay as pure as possible to the visual medium as such. Many avant-garde artists from the beginning of the 20th century were in search of 'absolute' languages that could be ideal and spiritual, not placed at the service of second purposes through the representation of other realities or appearances. Kandinsky considered that one of the keys to this theoretical approach was the study of the language of music, which he considered the ultimate spiritual and purest art form, the one that can convey emotions, ideas and sensations without the need to make use of any elements that belong to the concrete world or to the other arts. His ideas about colours, rhythm, lines, shapes and the musicality in painting are an important background to consider, given that his art abandoned the intended mimesis of a reality and focused in an extremely formal way on the intrinsic qualities of the visual arts (Kandinsky, 1912).
One of the main reasons why Kandinsky was interested in the language of music was the fact that sound perception in music is not a representation of any visual, written or spoken language, and is thus liberated from any prejudice or connection with the concrete universe; when we listen to music, we do not see objects, but experience what "perceiving really is: a process -movement. This 'movement' with its own organic structure is not tied to the power of association (sunsets, funerals), nor to emotions of pity (match-selling girl, betrayed love), nor indeed to 'content' at all, but follows instead its own inevitable mechanical laws" 2 (Graf & Scheunemann, 2007, p.13).
French filmmaker Germaine Dulac developed a huge body of film work in a similar quest for a pure or absolute cinema, and she was also highly influenced by music (Graf & Scheunemann, 2007, p.128): "There is the symphony, pure music. Why wouldn't the cinema also have its own symphony?" 4 (Williams, 2014, p.141).
Nevertheless, it is important to notice her reluctance to delve into conservative narrative cinema, and how some of her films are still considered to belong more to the realm of experimental film than to conventional fiction cinema.
"Dulac viewed each shot as a 'no-tation,' having a value similar to a musical note, yet representing a specific concept or ideal, which she subsequently juxtaposed" (Williams, 2014, p.129). The notion of shots as musical notes provides us with a different perspective regarding the thinking of cinema language and montage, not that it would redefine completely the conception of editing, as we can find similar analogue ideas in Eisenstein's theories, but it does pose a strong belief towards assuming the inner and outer rhythm of the moving images with the clarity and sensibility as in a musical piece.
For instance, the basic and classical tonal form in music composition, which consists of a referential note or harmony (often called the tonic) upon which other notes form the melody, can be compared to the dramatic curve of a film story. In music, the notes that place themselves further away from the tonic develop a harmonic tension whose increasing progress seeks to be resolved once the melody comes back to the tonic. Similar structures of tension and release can be found both in the development of a scene's dramatic tension across shots and in the crescendo of a dramatic conflict that must find a climax/resolution in the overall scope of a film narrative. For Burch, the notion of musical rhythm in film is highly debatable, due to the fact that cinematic rhythm is not only the repetition of shots of a certain duration, but is affected by a huge array of other variables implicit in the film form (Burch, 1969, p.67). Kulezic-Wilson, on the same topic, considers rhythm and the duration of shots from a more optimistic point of view (Burch, 1969, p. 52). The musicality in fiction film language is undoubtedly a characteristic that can be explored.
Musicality in Lucrecia Martel's films
One of the singular aspects of Martel's films, when tackling the study of their musicality, is precisely the absence of non-diegetic music; and yet the few but very precise moments of diegetic music confirm her sensibility for a musical understanding of filmmaking. Moreover, musicality is present everywhere in her films, in both the visual and sonic realms.
One of the most musical elements of her films is the use of dialogues, which go beyond fulfilling their habitual narrative mission.

Later on, after Veronica has confessed the situation to her husband, they both come back to the same place, but this time the scene takes place at night, creating another layer of visual repetition that operates as a variation of the same theme (see Fig. 9).
Visual and sonic polyphony in The Headless Woman
Martel's musicality and narrative cinema
The polyphonic assembly of different…

Notes

5. Sergei Eisenstein had already proposed this approach in his idea of vertical montage, a form of understanding the simultaneousness of film elements in time.

6. A similar approach has often been used by editors who assemble a timeline canvas/chart of simultaneous lines, notes, and images for every scene element, using it as a tool for planning or analysing the process of montage.

7. It is interesting to consider the notion of baroque here, since the Renaissance and Baroque art periods are the ones most associated with polyphonic textures in musical composition.

8. Zambas refers to 'zamba salteña', a traditional music genre that belongs to the folklore of Salta, Argentina.
Ensemble of Regression-Type and Interpolation-Type Metamodels
Metamodels have become increasingly popular in the field of energy sources because of their significant advantages in reducing the computational cost of time-consuming tasks. Lacking the prior knowledge of actual physical systems, it may be difficult to find an appropriate metamodel in advance for a new task. A favorite way of overcoming this difficulty is to construct an ensemble metamodel by assembling two or more individual metamodels. Motivated by the existing works, a novel metamodeling approach for building the ensemble metamodels is proposed in this paper. By thoroughly exploring the characteristics of regression-type and interpolation-type metamodels, some useful information is extracted from the feedback of the regression-type metamodels to further improve the functional fitting capability of the ensemble metamodels. Four types of ensemble metamodels were constructed by choosing four individual metamodels. Common benchmark problems are chosen to compare the performance of the individual and ensemble metamodels. The results show that the proposed metamodeling approach reduces the risk of selecting the worst individual metamodel and improves the accuracy of the used individual metamodels.
Introduction
Metamodels, which are also referred to as surrogate models, are essentially approximate mathematical models of real physical systems. In the past decade, metamodels have become increasingly popular in the field of energy sources because of their significant advantages in reducing the computational cost of time-consuming tasks [1,2]. Melo et al. [3] pointed out that researchers in many countries are developing metamodels to estimate the energy performance of the building stock. Bornatico et al. [4] used a kind of metamodel to optimize energy systems, and found that the metamodel converged to the same solution at 150 times the speed of the fine model. Westermann and Evins [5] summarized and discussed recent studies on the application of metamodels in sustainable building design. Ferrero Bermejo et al. [6] reviewed and compared two typical metamodels, namely the artificial neural networks and the support vector machine, for energy forecasting and condition-based maintenance in PV plants.
Actually, a good metamodel mainly depends on its accuracy and generality for different design tasks. To enhance the performance of metamodels, researchers have carried out a lot of studies over the past few decades [7][8][9][10][11]. As a result, a large number of metamodels have been proposed, of which several types have gained wide acceptance in various applications. They are polynomial response surface (PRS) [12][13][14], support vector regression (SVR) [15][16][17], radial basis functions (RBF) [18,19], extended radial basis functions (E-RBF) [20], moving least squares (MLS) [21], artificial neural networks (ANN) [22,23], multivariate adaptive regressive splines (MARS) [24] and Kriging (KRG) [25,26]. These different metamodels give us more options for different tasks. However, lacking the prior knowledge of the actual physical systems, it is challenging to find a suitable metamodel in advance for a new task. In particular, the worst metamodel may be chosen for the task.
A simple way to overcome the difficulty is to build a series of metamodels based on a given training dataset at first, and then select the best one on the basis of some statistical techniques like the cross-validation method. Another favorite way is to construct an ensemble metamodel, which assembles two or more individual metamodels by introducing weight factors. The basic idea of such an ensemble metamodel can be traced back to 1990s [27,28], and currently it has become a research hotspot [8,29]. According to the characteristics of the weight factors, the techniques for building the ensemble metamodels can be mainly categorized into methods based on local errors, methods based on global errors, and methods based on regression.
In the first category, the weight factors (ω i = ω i (x)) are functions of design space, which are determined by the local errors of individual metamodels at the point of interest. Zerpa et al. [30] introduced a local weighted average model for the optimization of alkaline-surfactant-polymer flooding processes by using the prediction variances of three individual metamodels (PRS, KRG, and RBF). Sanchez, Pintos, and Queipo [31] proposed a general approach toward the ensemble of kernel-based models based on the local prediction variances. Acar [32] investigated the efficiency of methods based on the local errors, and developed a new approach to determine the weight factors by using the pointwise cross-validation errors instead of the prediction variances. Zhang, Chowdhury, and Messac [33] proposed a new metamodeling technique called adaptively hybrid functions, whose weight factors are determined based on the local measure of accuracy in the pertinent trust region. Lee and Choi [34] presented a new pointwise ensemble of metamodels, of which the weight factors are calculated by using the v nearest points cross-validation errors.
In the second category, the weight factors (ω i = C i , ∀x) are constant values in the entire design space, which are determined by the global errors of individual metamodels. Goel et al. [35] studied a global weight factor selection approach based on the generalized mean square cross-validation errors (GMSE). Acar and Rais-Rohani [36] developed an accurate ensemble of metamodels by solving an optimization problem that minimizes GMSE or root mean square errors (RMSE). Viana, Haftka, and Steffen [37] obtained the optimal weight factors of the optimization problem by using the Lagrange multipliers. This method was also employed by Toal and Keane [38] to construct an ensemble of ordinary, universal, non-stationary and limit KRG models. Additionally, Acar [39] performed the simultaneous optimization of the weight factors and the shape parameters in the ensemble of RBFs.
It should be noted that in the first two categories the weight factors of the individual metamodels are restricted to positive values (ω_i > 0) and must sum to one (Σ_{i=1}^{M} ω_i = 1). Unlike the first two categories, the techniques in the third category mainly use regression methods (such as least squares) to determine the weight factors. Accordingly, there is no longer any restriction on the weight factors, which may even take negative values. Polynkin and Toropov [40] introduced a novel mid-range metamodel assembly for large-scale optimization problems, which is constructed based on the linear regression method. Ferreira and Serpa [41] developed an augmented least-squares approach for creating the ensemble of metamodels, which can be extended to efficient global optimization. Zhou and Jiang [42] constructed an ensemble of four individual metamodels (PRS, KRG, SVR, and RBF) from the viewpoint of polynomial regression, and proposed a metamodel selection method based on stepwise regression to eliminate redundant ones from the set of candidate metamodels.
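To make the weighting idea concrete, a simplified sketch of a global-weight ensemble (second category) follows; the inverse-error weights used here are a stand-in for, not a reproduction of, the specific GMSE-based formulas in [35-37]:

```python
# Constant positive weights, summing to one, derived from each metamodel's
# global cross-validation error; inverse-error weighting is a simplification.
import numpy as np

def inverse_error_weights(cv_errors):
    inv = 1.0 / np.asarray(cv_errors, dtype=float)
    return inv / inv.sum()                 # omega_i > 0 and sum(omega) = 1

def ensemble_predict(models, weights, X):
    # Weighted average of the individual metamodel predictions at points X.
    preds = np.stack([m.predict(X) for m in models], axis=0)
    return np.tensordot(weights, preds, axes=1)
```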
Motivated by these existing works, this paper proposes a different method for constructing the ensemble metamodels, which combines the advantages of regression-type and interpolation-type metamodels. The regression-type metamodels have better global trend fitting capacity than the interpolation-type metamodels, while the interpolation-type metamodels perform better than the regression-type metamodels in the vicinity of the sampling locations. By thoroughly exploring the characteristics of regression-type and interpolation-type metamodels, the proposed method could extract some useful information from the feedback of the regression-type metamodels to further improve the functional fitting capability of the ensemble metamodels.
Motivation and Basic Characteristics
The existing individual metamodels can be classified into regression-type and interpolation-type metamodels. The regression-type metamodels aim to fit the global trend of the underlying functions of the real physical systems in the entire design space, while the interpolation-type metamodels aim to achieve local accuracy in the vicinity of the sampling locations. Accordingly, the regression-type metamodels build smooth surfaces that pass near, but not necessarily through, the training points, while the interpolation-type metamodels construct models that go through each training point. That is to say, for the regression-type metamodels there may be obvious deviations between the actual responses and the approximate responses at the sampling locations, while for the interpolation-type metamodels there is no deviation. These different characteristics give the two types of metamodels different advantages and limitations. For example: (i) the regression-type metamodels have better global trend fitting capacity than the interpolation-type metamodels, while (ii) the interpolation-type metamodels perform better than the regression-type metamodels in the vicinity of the sampling locations.
It should be noted that obtaining the training dataset required for constructing the metamodels may be time-consuming. Therefore, as much information as possible should be extracted from these data. However, for the regression-type metamodels, there are apparent deviations between the actual responses and the approximate responses at the sampling locations, from which some useful information may still be extracted to further improve the performance of these metamodels. Exploring the underlying knowledge of the training dataset and combining the characteristics of regression-type and interpolation-type metamodels, this paper proposes a novel metamodeling approach for the ensemble metamodels. The flowchart of the proposed metamodeling technique is shown in Figure 1, which involves four main steps, as follows.
Figure 1. Flowchart of the proposed approach for building ensembles of regression-type and interpolation-type metamodels.
Step 1: An appropriate design of experiment (DOE) should first be chosen to generate n sampling locations (x_1, x_2, ..., x_n), at which the actual responses (y_1, y_2, ..., y_n) are obtained by conducting experiments or simulations. By using the initial training dataset (x_i, y_i) (i = 1, ..., n), a regression-type metamodel ŷ1(x) in Equation (1) is subsequently constructed to approximate the actual model y(x):

y(x) ≈ ŷ1(x)    (1)
where x denotes any point of interest.
Step 2: We suppose that there is a deviation function y_d(x), obtained by subtracting the approximate model ŷ1(x) from the actual model y(x):

y_d(x) = y(x) - ŷ1(x)    (2)
Some useful information may still be extracted from the deviation function y_d(x).
To approximate the deviation function, the training dataset should be updated. In detail, this paper first uses the established regression-type metamodel in Equation (1) to predict the approximate responses (ŷ1(x_1), ŷ1(x_2), ..., ŷ1(x_n)) at the initial sampling locations. Subsequently, the deviations (y_d^1, y_d^2, ..., y_d^n) between the actual and approximate responses at these locations are calculated as the updated training dataset:

y_d^i = y_i - ŷ1(x_i),  i = 1, ..., n    (3)
Step 3: By using the updated training dataset in Equation (3), an interpolation-type metamodel ŷ2(x) in Equation (4) is constructed to approximate the deviation function y_d(x):

ŷ2(x) ≈ y_d(x)    (4)
Step 4: Finally, the ensemble metamodel ŷ_ens(x) in Equation (5) is constructed by adding the established regression-type metamodel ŷ1(x) and interpolation-type metamodel ŷ2(x) together:

ŷ_ens(x) = ŷ1(x) + ŷ2(x)    (5)

By using Equations (1), (4) and (5), the established ensemble metamodel ŷ_ens(x) can be used to predict the response at any point of interest in the entire design space.
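A minimal sketch of these four steps follows, using a second-order polynomial regression as the regression-type model and a multiquadric RBF interpolant on its residuals; the library choices are ours, since the paper's own implementation is not specified here:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

def fit_ensemble(X, y):
    # Step 1: regression-type metamodel (second-order PRS).
    poly = PolynomialFeatures(degree=2)
    prs = LinearRegression().fit(poly.fit_transform(X), y)
    # Step 2: deviations between actual and approximate responses.
    resid = y - prs.predict(poly.transform(X))
    # Step 3: interpolation-type metamodel fitted to the deviations.
    rbf = RBFInterpolator(X, resid, kernel="multiquadric", epsilon=1.0)
    # Step 4: the ensemble prediction is the sum of the two parts.
    def predict(Xnew):
        return prs.predict(poly.transform(Xnew)) + rbf(Xnew)
    return predict

# Usage on a toy function:
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
predict = fit_ensemble(X, y)
print(predict(np.array([[0.5, -0.5]])))
```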
Detailed Modeling Process
To clearly illustrate the proposed metamodeling technique, this paper selects two common regression-type metamodels (PRS and SVR) and two popular interpolation-type metamodels, namely RBFM (RBF with multiquadric-form basis function) and RBFI (RBF with inverse multiquadric-form basis function). Accordingly, four types of ensemble metamodels can be obtained, which are PrsRbfm (Ensemble Scheme 1, ensemble of PRS and RBFM), PrsRbfi (Ensemble Scheme 2, ensemble of PRS and RBFI), SvrRbfm (Ensemble Scheme 3, ensemble of SVR and RBFM) and SvrRbfi (Ensemble Scheme 4, ensemble of SVR and RBFI). The detailed modeling processes of these involved metamodels are introduced as follows.
Step 1: Construction of Regression-Type Metamodels
PRS is a general designation for a series of polynomial regression functions, of which the most popular is the second-order polynomial model. This paper adopts the second-order polynomial model ŷ1,prs(x), which can be written as

ŷ1,prs(x) = β0 + Σ_{i=1}^{k} β_i x_i + Σ_{i=1}^{k} β_ii x_i^2 + Σ_{i=1}^{k} Σ_{j>i} β_ij x_i x_j    (6)

To estimate β, the regression problem in Equation (6) is transformed by evaluating it over the initial training dataset,

y_i = ŷ1,prs(x_i) + y_d,prs^i,  i = 1, ..., n    (7)

where y_d,prs = (y_d,prs^1, y_d,prs^2, ..., y_d,prs^n)^T denotes the deviation vector. Equation (7) can also be expressed in matrix form as

y = Xβ + y_d,prs    (8)

where X denotes the design matrix of polynomial basis terms evaluated at the sampling locations. According to the least squares method, β can be calculated as

β = (X^T X)^{-1} X^T y    (9)
SVR is a regression function ŷ1,svr(x) in a high-dimensional feature space, as shown in Equation (10):

ŷ1,svr(x) = ω^T ψ(x) + b    (10)

where ω denotes the weight vector, ψ(x) denotes the mapping function, and b denotes the bias.

To estimate ω and b, the regression problem in Equation (10) can be transformed into an optimization problem (Equation (11)) by introducing the ε-insensitive loss function. To solve Equation (11), the regularization parameter C (> 0) and the slack variables ξ^{+(i)} and ξ^{-(i)} are introduced, yielding Equation (12):

min_{ω, b, ξ} (1/2)‖ω‖^2 + C Σ_{i=1}^{n} (ξ^{+(i)} + ξ^{-(i)})
s.t.  y_i - ω^T ψ(x_i) - b ≤ ε + ξ^{+(i)},  ω^T ψ(x_i) + b - y_i ≤ ε + ξ^{-(i)},  ξ^{+(i)}, ξ^{-(i)} ≥ 0    (12)

The Lagrange dual model of Equation (12) (Equation (13)) can be expressed in terms of the Lagrange multipliers α^{+(i)} and α^{-(i)}, where k(x_i, x_j) = ψ(x_i)^T ψ(x_j) denotes a kernel function, which has several different forms. This paper chooses the Gaussian kernel function, which can be expressed as

k(x_i, x_j) = exp(-γ‖x_i - x_j‖^2)    (14)

According to Equation (13), α^{+(i)} and α^{-(i)} can first be obtained. According to the KKT conditions [43], ω and b can then be calculated.
Step 3: Construction of Interpolation-Type Metamodels
The general form of RBF can be expressed as

ŷ_rbf(x) = Σ_{i=1}^{n} λ_i φ(‖x - x_i‖)    (17)

where λ_i denotes an interpolation coefficient and r = ‖x - x_i‖ denotes the Euclidean distance between the points x and x_i. φ(r) denotes a radially symmetric basis function, which has several different forms, such as the multiquadric form φ(r) = (r^2 + c^2)^{1/2} and the inverse multiquadric form φ(r) = (r^2 + c^2)^{-1/2}, where c is the shape parameter. The interpolation coefficients λ_i can be calculated from the given training dataset (x_i, y_i) (i = 1, ..., n) by solving the linear system

Φλ = y,  Φ_ij = φ(‖x_i - x_j‖)    (18)

After choosing the multiquadric-form basis function, RBFM (ŷ_rbfm(x)) can be constructed to approximate the actual model y(x) by replacing ŷ_rbf(x) and λ_i in Equation (17) with ŷ_rbfm(x) and λ_i,rbfm. The coefficients λ_i,rbfm can be calculated based on Equation (18). Similarly, after choosing the inverse multiquadric-form basis function, RBFI (ŷ_rbfi(x)) can be constructed to approximate the actual model y(x). The coefficients λ_i,rbfi of ŷ_rbfi(x) can be calculated based on Equation (18).
Additionally, by choosing the multiquadric-form basis function, a model ŷ2,rbfm1(x) can be constructed to approximate the deviation function of PRS, y_d,prs. By replacing the initial training dataset (x_i, y_i) (i = 1, ..., n) with the updated training dataset of PRS, (x_i, y_d,prs^i) (i = 1, ..., n), the coefficients λ_i,2rbfm1 of ŷ2,rbfm1(x) can be calculated on the basis of Equation (18). Similarly, by choosing the inverse multiquadric-form basis function, a model ŷ2,rbfi1(x) can be constructed to approximate the deviation function of PRS, y_d,prs.
Finally, by choosing the multiquadric-form basis function, a model ŷ2,rbfm2(x) can be constructed to approximate the deviation function of SVR, y_d,svr. By choosing the inverse multiquadric-form basis function, a model ŷ2,rbfi2(x) can be constructed to approximate the deviation function of SVR, y_d,svr.
The four ensemble metamodels can then be constructed as

ŷ_prsrbfm(x) = ŷ1,prs(x) + ŷ2,rbfm1(x)    (19)
ŷ_prsrbfi(x) = ŷ1,prs(x) + ŷ2,rbfi1(x)    (20)
ŷ_svrrbfm(x) = ŷ1,svr(x) + ŷ2,rbfm2(x)    (21)
ŷ_svrrbfi(x) = ŷ1,svr(x) + ŷ2,rbfi2(x)    (22)

The established ensemble metamodels, namely PrsRbfm, PrsRbfi, SvrRbfm, and SvrRbfi, can be used to predict the response at any point of interest in the entire design space by using Equations (19)-(22).
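As a small self-contained sketch, the direct solve of Equation (18) for the multiquadric coefficients can be written as follows (with shape parameter c = 1, the value used later in the numerical settings; variable names are ours):

```python
# Solve Phi @ lam = y for the RBF interpolation coefficients and predict
# at new points; phi(r) = sqrt(r^2 + c^2) is the multiquadric basis.
import numpy as np

def fit_rbf_multiquadric(X, y, c=1.0):
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.sqrt(r**2 + c**2)          # basis matrix, Equation (18)
    return np.linalg.solve(Phi, y)      # interpolation coefficients lambda_i

def predict_rbf(Xnew, X, lam, c=1.0):
    r = np.linalg.norm(Xnew[:, None, :] - X[None, :, :], axis=-1)
    return np.sqrt(r**2 + c**2) @ lam   # Equation (17)
```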
Numerical Setting
For all the benchmark problems, the MATLAB routine "lhsdesign" is used to generate training points and test points. Following Jin, Chen, and Simpson [44], n = 3(k+1)(k+2)/2 training points are selected for a k-dimensional problem. Moreover, as many test points as possible should be used in practice, since insufficient test points may increase the uncertainty of the results. This paper selects n_tst = 20,000 test points for each benchmark problem. Since the DOE sampling scheme may have an obvious influence on the performance of the metamodels, 100 different training and test sets are selected for each problem. The detailed numerical settings for all the benchmark problems are listed in Table 1. The shape parameters (c) of RBFM and RBFI are both set to 1, by reference to the relevant literature [34,45,46]. The parameters (ε, C, and γ) of SVR are selected by using the cross-validation method, which was introduced in detail in the authors' published paper [47].
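A Python analogue of this sampling setup follows (an assumption, since the paper itself used MATLAB's lhsdesign; the dimension and bounds are illustrative):

```python
# Latin hypercube sampling of n = 3(k+1)(k+2)/2 training points for k = 2,
# scaled from the unit hypercube to an assumed design space [-2, 2]^k.
from scipy.stats import qmc

k = 2
n = 3 * (k + 1) * (k + 2) // 2                 # = 18 training points for k = 2
sampler = qmc.LatinHypercube(d=k, seed=0)
X_unit = sampler.random(n)                     # samples in [0, 1]^k
X = qmc.scale(X_unit, l_bounds=[-2, -2], u_bounds=[2, 2])
```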
Performance Criteria
The root mean square error (RMSE) and the max absolute error (MAE) are selected as the performance criteria.
RMSE can be expressed as

RMSE = [ (1/n_tst) Σ_{i=1}^{n_tst} (y_i - ŷ_i)^2 ]^{1/2}

where n_tst denotes the number of test points. MAE can be expressed as

MAE = max_{i=1,...,n_tst} |y_i - ŷ_i|

Figure 2 shows the boxplots of RMSE of the metamodels over 100 test sets for each benchmark problem with 3(k+1)(k+2)/2 training points. It can be seen that: (1) for all the benchmark problems, the most accurate ensemble metamodels outperform the most accurate individual metamodels;
(2) without exception, the least accurate individual metamodels perform worse than the least accurate ensemble metamodels; (3) for each benchmark problem, the performance differences among the four individual metamodels are greater than those among the four ensemble metamodels.

To provide a better comparison for these metamodels, the error values are normalized with respect to the most accurate individual metamodel for each benchmark problem. Table 2 shows the normalized means of RMSE of the metamodels for each benchmark problem with 3(k+1)(k+2)/2 training points. The bold values in Table 2 are the most accurate individual/ensemble metamodels, the italic values are the least accurate individual/ensemble metamodels, the underlined values are the ensemble metamodels that perform better than all the individual metamodels, the "Best & Best" values denote the differences between the most accurate ensemble metamodels and individual metamodels, and the "Worst & Worst" values denote the differences between the least accurate ensemble metamodels and individual metamodels. From Table 2, it can be seen that: (1) compared with the most accurate individual metamodels, the means of RMSE of the most accurate ensemble metamodels are reduced, ranging from 1.1% to 22.2%; (2) compared with the least accurate individual metamodels, the means of RMSE of the least accurate ensemble metamodels are reduced, ranging from 21.1% to 52.5%; (3) except for BP3, more than two ensemble metamodels perform better than the most accurate individual metamodels; (4) for BP5, all four ensemble metamodels perform better than the most accurate individual metamodel.

Table 3 shows the frequency of the accuracy ranking (using RMSE) of the metamodels for the six benchmark problems with 3(k+1)(k+2)/2 training points. It can be seen that: (1) the frequency of the ensemble metamodels that rank 1st or 2nd is 11, yet the frequency of the individual metamodels is only one; (2) the frequency of the individual metamodels that rank 7th or 8th is 12, yet the frequency of the ensemble metamodels is zero; (3) considering the frequency of the metamodels that rank in the top/bottom two, all the ensemble metamodels have better performance than the individual metamodels; (4) PrsRbfm performs best among the four ensemble metamodels, followed by SvrRbfm, PrsRbfi, and SvrRbfi.

To clearly compare the accuracy of each ensemble metamodel with its corresponding individual metamodels, Figure 3 shows the normalized means of RMSE of each ensemble scheme for the six benchmark problems with 3(k+1)(k+2)/2 training points. It can be seen that: (1) in Scheme 1, PrsRbfm ranks 1st among PRS, RBFM, and PrsRbfm for all the benchmark problems; (2) in Scheme 2, PrsRbfi ranks 1st for all the benchmark problems; (3) in Scheme 3, SvrRbfm ranks 1st for four benchmark problems and 2nd for two benchmark problems; although RBFM ranks 1st for two benchmark problems, it is the worst performer for three benchmark problems; (4) in Scheme 4, without exception, the accuracy of SvrRbfi outperforms that of SVR and RBFI.

A corresponding comparison of the normalized standard deviations of RMSE with 3(k+1)(k+2)/2 training points shows that: (1) compared with the most accurate individual metamodels, the standard deviations of RMSE of the most accurate ensemble metamodels are reduced for BP5 and BP6, yet the standard deviations are increased for the other four benchmark problems; (2) compared with the least accurate individual metamodels, the standard deviations of RMSE of the least accurate ensemble metamodels are reduced, ranging from 8.4% to 35.5%.
According to the above experimental results, we think the proposed metamodeling approach could reduce the risk of selecting the worst individual metamodel, and the constructed ensemble metamodels perform better than the used individual metamodels in terms of accuracy. In particular, PrsRbfm performs best among the four ensemble metamodels, followed by SvrRbfm, PrsRbfi, and SvrRbfi.
To provide an explicit explanation for the better performance of the proposed approach, a low-dimensional problem (BP1) and an ensemble scheme (ensemble of SVR and RBFM) are selected as examples. Figure 4 shows the contour plot of the actual function and the approximate functions of SVR, RBFM, and SvrRbfm. It can be seen that: (1) SVR has better global trend fitting capacity than RBFM, such as in the red box area; (2) RBFM performs better in the vicinity of the sampling locations, such as in the red ellipse region; (3) SvrRbfm combines the global trend of SVR and the local accuracy of RBFM, such as in the red box area and the red ellipse region. Therefore, the reason for the better performance of the ensemble metamodels may be that the proposed metamodeling approach combines the advantages of the regression-type and interpolation-type metamodels. The actual model is regarded as the sum of a regression-type model and a deviation function. Some useful information is first extracted by the regression-type metamodel to capture the global trend of the actual model in the entire design space. Then, some other information is extracted from the deviations at the sampling locations by using the interpolation-type metamodel to achieve the local accuracy in the vicinity of sampling locations.
Effect of Performance Criteria
The choice of different performance criteria may influence the results of the metamodels. To reduce the sources of uncertainty in the results as much as possible, the max absolute error (MAE) is selected as another performance criterion. Figure 5 shows the boxplots of MAE of the metamodels over 100 test sets for each benchmark problem with 3(k+1)(k+2)/2 training points. Table 5 shows the normalized means of MAE of the metamodels for each benchmark problem with 3(k+1)(k+2)/2 training points. From Figure 5 and Table 5, it can be seen that: (1) for each benchmark problem, the performance differences among the four ensemble metamodels are less than those among the four individual metamodels; (2) except for BP6, more than two ensemble metamodels perform better than the most accurate individual metamodels; (3) compared with the most accurate individual metamodels, the means of MAE of the most accurate ensemble metamodels are reduced for five benchmark problems; (4) compared with the least accurate individual metamodels, the means of MAE of the least accurate ensemble metamodels are reduced, ranging from 14.2% to 48.9%.

Table 6 shows the frequency of the accuracy ranking (using MAE) of the metamodels for the six benchmark problems with 3(k+1)(k+2)/2 training points. It can be seen that: (1) considering the frequency of the metamodels that rank in the top/bottom two, PrsRbfm, PrsRbfi, and SvrRbfm outperform all the individual metamodels; (2) although SvrRbfi is a little worse than PRS, it still performs better than its corresponding individual metamodels (SVR and RBFI); (3) PrsRbfm is the best performer of the four ensemble metamodels, followed by SvrRbfm, PrsRbfi, and SvrRbfi. In summary, the choice of performance criteria influences the results only slightly, and the conclusions obtained by the two criteria remain unchanged.
Effect of Sampling Densities
The choice of different sampling densities may also influence the results of the metamodels. To investigate the effect of the sampling densities, this paper selects another two schemes with different sampling densities, namely n = 5(k+1)(k+2)/4 and n = 7(k+1)(k+2)/4. Table 7 shows the normalized means of RMSE of the metamodels for each benchmark problem with 7(k+1)(k+2)/4 training points. It can be seen that: (1) compared with the most accurate individual metamodels, the means of RMSE of the most accurate ensemble metamodels are reduced, ranging from 0.9% to 8.1%; (2) compared with the least accurate individual metamodels, the means of RMSE of the least accurate ensemble metamodels are reduced, ranging from 23.4% to 53.8%; (3) except for BP3, more than two ensemble metamodels perform better than the most accurate individual metamodels; (4) all the ensemble metamodels perform better than the four individual metamodels; (5) PrsRbfm is the best performer among the four metamodels, while SvrRbfi is the worst performer.

A similar comparison with 5(k+1)(k+2)/4 training points shows that: (1) compared with the most accurate individual metamodels, the means of RMSE of the most accurate ensemble metamodels are reduced for five benchmark problems, ranging from 0.9% to 16.9%; (2) compared with the least accurate individual metamodels, the means of RMSE of the least accurate ensemble metamodels are reduced, ranging from 20.9% to 51.3%; (3) all the ensemble metamodels have better performance than the four individual metamodels. In summary, the choice of different sampling densities influences the results only slightly, and the conclusions obtained by the three schemes with different sampling densities remain unchanged.
Significance of Results
The results above have proven the effectiveness of the proposed method to some extent. To further demonstrate its advantages, the proposed method is compared with some other popular ensemble metamodels, namely BPS (Best PRESS surrogate), PWS (PRESS weighted average surrogate), and OWSD (Optimal weighted surrogate using the diagonal elements). Detailed descriptions of these ensemble metamodels can be found in the relevant literature [35,37]. Additionally, Kriging with a first-order polynomial regression function (KRG1) and Kriging with a second-order polynomial regression function (KRG2) are also included in the performance comparison. It should be noted that the principle and modeling process of Kriging are different from those of the proposed metamodeling approach in this paper. Figure 6 compares the performance of PrsRbfm, SvrRbfm, KRG1, KRG2, BPS, PWS, and OWSD. It can be seen that: (1) for BP1, PrsRbfm and SvrRbfm perform better than the other five metamodels; (2) for BP2, SvrRbfm and BPS are the best two performers; (3) for BP3, the accuracy of PrsRbfm and BPS is better than that of the other metamodels; (4) for BP4, PrsRbfm and KRG2 are the best two performers; (5) for BP5, SvrRbfm and BPS are more accurate than the other metamodels; (6) for BP6, PrsRbfm and KRG2 perform better than the other metamodels.
In summary, the proposed metamodeling approach possesses some advantages when compared with KRG1, KRG2, BPS, PWS, and OWSD.
Conclusions
This paper proposed a novel metamodeling approach for building ensemble metamodels. Four types of ensemble metamodels, namely PrsRbfm, PrsRbfi, SvrRbfm, and SvrRbfi, were constructed by choosing four individual metamodels, namely PRS, SVR, RBFM, and RBFI. The performance of these metamodels was investigated through six popular benchmark problems. The effects of the performance criteria and sampling densities on the performance of the metamodels were studied. Additionally, the significance of the results was discussed by comparing the proposed method with some other popular ensemble metamodels. According to the results, some findings of this work could be concluded as follows: (1) According to the experimental results, the proposed metamodeling approach could reduce the risk of choosing the worst individual metamodel, and the constructed ensemble metamodels perform better than the selected individual metamodels in terms of accuracy. (2) The reason for the better performance of the ensemble metamodels may be that the proposed metamodeling approach combines the advantages of the regression-type and interpolation-type metamodels. The ensemble metamodels not only capture the global trend of the actual model in the entire design space, but also achieve the local accuracy in the vicinity of sampling locations. (3) The choices of different performance criteria and sampling densities influence the results slightly, but the obtained conclusions remain unchanged. (4) The proposed metamodeling approach possesses some advantages when compared with some other popular ensemble metamodels.
Long-Term Low Carbohydrate Diet Leads to Deleterious Metabolic Manifestations in Diabetic Mice
We investigated long-term effects of low carbohydrate diets on wild type mice, streptozotocin-injected and KKAy obese diabetic mice. These mice were pair-fed three different types of diets, standard chow (SC, C∶P∶F = 63∶15∶22), a low carbohydrate (LC, C∶P∶F = 38∶25∶37) diet and a severely carbohydrate restricted (SR, C∶P∶F = 18∶45∶37) diet for 16 weeks. Despite comparable body weights and serum lipid profiles, wild type and diabetic mice fed the low carbohydrate diets exhibited lower insulin sensitivity and this reduction was dependent on the amount of carbohydrate in the diet. When serum fatty acid compositions were investigated, monounsaturation capacity, i.e. C16:1/C16:0 and C18:1/C18:0, was impaired in all murine models fed the low carbohydrate diets, consistent with the decreased expression of hepatic stearoyl-CoA desaturase-1 (SCD1). Interestingly, both the hepatic expressions and serum levels of fibroblast growth factor 21 (FGF21), which might be related to longevity, were markedly decreased in both wild type and KKAy mice fed the SR diet. Taking into consideration that fat compositions did not differ between the LC and SR diets, we conclude that low carbohydrate diets have deleterious metabolic effects in both wild type and diabetic mice, which may explain the association between diets relatively low in carbohydrate and the elevated risk of cardiovascular events observed in clinical studies.
Introduction
For the past decade, the identification of appropriate dietary interventions has been a source of controversy. Low carbohydrate diets have been the focus of considerable interest. Therapeutic effects of low carbohydrate diets have been extensively investigated in several clinical states, including obesity, metabolic disorders, cardiovascular events and mortality [1,2,3,4,5,6,7]. In short-term trials of up to one year [1,2,3], obese patients consuming a Mediterranean or other low carbohydrate diet exhibited more favorable conditions in terms of obesity, dyslipidemia and glycemic control. However, the conclusions drawn were limited by study design, i.e. small numbers, short periods and poor adherence to special diets. The long term safety and efficacy of a low carbohydrate diet for managing cardiovascular disease risk has yet to be determined.
Recently, in a prospective cohort study [4], low carbohydrate-high protein diets were demonstrated to be associated with increased risk of cardiovascular disease. Moreover, a carbohydrate restricted diet was reported to increase mortality [5]. Thus, clinically, detrimental effects of low carbohydrate diets have been established in recent years. However, the scientific mechanism by which a low carbohydrate diet exerts a negative effect on vascular health remains unaddressed.
To date, most rodent studies have focused on an extremely low carbohydrate, high fat diet (HFD), the so-called ketogenic diet (KD) [8,9,10,11]. These mice exhibited weight loss, probably due to decreased food intake [8,9]. In fact, a reduction in dietary carbohydrate is accompanied by an increase in dietary fat and protein. Nevertheless, it would be impossible for patients to continue consuming a KD for a prolonged period. From a practical viewpoint, a low carbohydrate diet with a moderately high fat composition should be investigated in rodent models.
In the present study, we investigated the effects of a long-term relatively low carbohydrate diet on wild type mice and two diabetic murine models, i.e. streptozotocin-induced diabetic and KKAy obese diabetic mice. It has been hypothesized that the hyperphagia-induced obesity in KKAy mice results from reductions in hypothalamic norepinephrine and dopamine [12]. As we expected to examine the effects of diets with variable carbohydrate contents on metabolic states, these mice were fed diets with three different carbohydrate compositions (63%, 38% and 18%). Unlike a KD, the fat composition was at most 37%. As giving mice low carbohydrate diets resulted in excessive caloric intake as compared to mice given standard chow, we performed a calorie-matched pair-feeding study to assure that there would be no differences in caloric intake among the three types of diets employed in this study. After 16 weeks, despite comparable body weights, mice on both LC and SR diets exhibited less insulin sensitivity, and this reduction was dependent on the amount of carbohydrate in the diet. When searching for the cause of glucose intolerance in mice given low carbohydrate diets, we discovered several deleterious metabolic manifestations, which might be due not to a moderate increase in fat composition but rather to a low carbohydrate composition. Our results support scientific evidence pertaining to the central question of why low carbohydrate diets increase the risk for cardiovascular events and/or mortality in obese or diabetic patients.
Ethics Statement
All animal protocols were performed according to the Guide for the Care and Use of Laboratory Animals of Kyorin University. The protocol was approved by the Committee on the Ethics of Animal Experiments of Kyorin University (Approval Number: 2014-152).
Animals
Six-week-old male mice (C57black/6J and genetically obese KKAy) were purchased from Clea Japan, Inc. (Osaka, Japan). After a week of acclimatization, all mice were maintained individually under conditions of controlled temperature (23°C) on a 12:12 h light-dark cycle, fed a standard rodent chow ad libitum, and had unlimited access to water. Streptozotocin (STZ)-induced diabetic mice, which exhibit impaired insulin secretion due to β-cell destruction, were prepared by two intraperitoneal injections of 0.2 ml of 50 mM sodium citrate solution (pH 4.5) containing 150 mg/kg freshly prepared STZ (Sigma-Aldrich, Co., St. Louis, MO, USA). Five days after STZ treatment, plasma glucose levels of all mice were measured and diabetes was confirmed (glucose level >300 mg/dl). Then, wild type, STZ and KKAy mice were divided into three groups (each n = 6) and calorie-matched pair-fed three different types of diets, standard chow (SC, Carbohydrate:Protein:Fat = 63:15:22), a low carbohydrate diet (LC, C:P:F = 38:25:37) or a severely carbohydrate restricted diet (SR, C:P:F = 18:45:37), for 16 weeks. The precise nutrient compositions of these three diets, which were purchased from Oriental Yeast Co. (Tokyo, Japan), are presented in Table 1. We named the wild type groups WSC, WLC, and WSR, respectively; the corresponding SC, LC and SR groups were SSC, SLC, and SSR for STZ mice and KSC, KLC, and KSR for KKAy mice, according to the type of diet.
Intraperitoneal glucose tolerance test and insulin tolerance tests
For glucose tolerance tests, mice were fasted for 8 hr and a 10% glucose solution was administered intraperitoneally (2 mg/g body weight for wild type mice, 1.5 mg/g body weight for diabetic mice) as previously described [13]. Glucose measurements were conducted before injection, and at 30, 60, and 120 min after injection. For insulin tolerance tests, mice in the postprandial state were intraperitoneally injected with 1.0 U/kg body weight human insulin (Eli Lilly, Indianapolis, IN, USA). Glucose measurements were conducted before injection, and at 30, 60, and 90 min after injection. Plasma glucose levels were measured using a glucose analyzer (Sanwa Kagaku Kenkyusho, Co., Nagoya, Japan).
Measurements of biomedical markers
Before sacrifice, the animals were fasted for 8 hr. Blood samples were collected by cardiac puncture using heparinized syringes and centrifuged at 12,000 rpm for 5 min. Serum lipid analyses were performed at the Skylight-Biotech Analysis Center (Akita, Japan). Hormone concentrations were measured using commercially available methods, i.e., immunoreactive insulin (IRI) was measured by radioimmunoassay (Morinaga Institute of Biological Science, Inc., Yokohama, Japan), and serum fibroblast growth factor 21 (FGF21) by Quantikine ELISA Mouse/Rat FGF-21 (R&D Systems, Minneapolis, MN, USA). Hepatic triglycerides (TG) were extracted by the methods of Bligh and Dyer [14] and analyzed using a commercially available reagent for TG (Wako Pure Chemical Industries, Ltd., Osaka, Japan). Urine was analyzed for ketones, creatinine and albumin employing a standard laboratory technique.
Fatty acid analysis using HPLC
Fatty acids in biological samples were quantitatively measured using a modified liquid chromatography-tandem mass spectrometry (LC/MS/MS) procedure [15]. Standard solutions (myristic acid, palmitic acid, palmitoleic acid, stearic acid, oleic acid, linoleic acid, α-linolenic acid, γ-linolenic acid, dihomo-γ-linolenic acid, arachidonic acid, eicosapentaenoic acid (EPA), docosapentaenoic acid and docosahexaenoic acid (DHA)) (Cayman Chemical Company, Ann Arbor, MI, USA) were used to obtain calibration curves. Five microliters of a serum sample, or phosphate buffered saline (PBS) for the calibration curves, were transferred into glass tubes, each containing an internal standard solution (10 mg/mL [²H₅]-EPA and [²H₅]-DHA; Cayman Chemical). Acetonitrile/6 N HCl (90/10, v/v) was added, and the tube was then capped and incubated at 100°C for 45 min. Once the tubes had reached room temperature, 200 µl of methanol/10 N NaOH (90/10, v/v) were added, followed by capping and incubation at 100°C for 45 min. After the tubes had reached room temperature, liquid/liquid extraction was performed using ethyl acetate. The upper layer was reconstituted and injected into an optimized LC/MS/MS system. LC was performed using an ACQUITY UPLC (Waters, Milford, MA), and an API4000 triple quadrupole tandem mass spectrometer (AB Sciex, Foster City, CA, USA) was used as a detector. An analytical column, YMC-Triart C18 (2.0 mm × 100 mm, particle size 1.9 µm) (YMC Co., Ltd., Kyoto, Japan), was used for separating the fatty acids from each other. For operation of the API4000, atmospheric pressure chemical ionization in negative ionization and selected reaction monitoring mode was applied.
Quantative analysis of hepatic mRNA and proteins
Mice were fasted for 8 hr before tissue harvesting. The mice were killed by cervical dislocation, and liver and epididymal fat tissues were rapidly removed and weighed. Hepatic total RNA was isolated with Isogen (Nippon Gene, Tokyo, Japan). To quantitatively analyze mRNA for the indicated genes, we conducted real-time PCR using an ABI PRISM Model 7000 (Applied Biosystems, Foster City, CA, USA) according to the manufacturer's instructions. The primer sets and probes for mouse insulin receptor substrate 1 (IRS1), IRS2, Forkhead box protein O1 (FoxO1), phosphoenolpyruvate carboxykinase (PEPCK), glucose 6 phosphatase (G6Pase), fatty acid synthase (FAS), peroxisome proliferator-activated receptor γ (PPARγ), PPARα, signal transducer and activator of transcription 3 (STAT3), interleukin 6 receptor (IL6R), leptin receptor, uncoupling protein 2, adiponectin receptor 1 (adipoR1), sirtuin 1, FGF21, peroxisome proliferator-activated receptor γ coactivator 1α (PGC1α), stearoyl-CoA desaturase-1 (SCD1) and elongation of long chain fatty acid family member 6 (Elovl6) were purchased from Applied Biosystems. Western blot analysis was performed as previously described [16]. Briefly, liver samples from mice were homogenized in lysis buffer (1% Triton/PBS) and centrifuged at 14,000 × g for 10 min at 4°C. Supernatants including tissue protein extracts were resolved on 10% SDS-PAGE gels, followed by electrophoretic transfer to a nitrocellulose membrane. After blotting with a polyclonal antibody against mouse SCD-1 (Cell Signaling Technology, Inc., Danvers, MA, USA) or mouse PGC1α (Abcam Inc., Cambridge, MA), detection was performed using an ECL chemiluminescence kit (GE Healthcare Life Sciences, Buckinghamshire, UK) according to the manufacturer's instructions. Quantitations were performed using a Molecular Imager (Bio-Rad Laboratories, Hercules, CA, USA). As an internal control, we performed western blots using an anti-β-actin antibody (Sigma-Aldrich).
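For the western blot quantitation, normalizing each target band to its β-actin loading control is the usual approach; below is a minimal sketch with hypothetical densitometry values (this is not the Bio-Rad software's own output format, and the numbers are made up for illustration).

```python
import numpy as np

# hypothetical band intensities from densitometry, one lane per mouse
scd1 = np.array([1450.0, 1320.0, 1510.0, 980.0, 1010.0, 940.0])
actin = np.array([9800.0, 9500.0, 10100.0, 9700.0, 9900.0, 9600.0])

normalized = scd1 / actin                      # correct for loading differences
relative = normalized / normalized[:3].mean()  # express relative to the first (control) group
print(relative.round(2))
```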
Immunohistochemistry for hepatic oxidative stress markers (4-HNE and 8-OHG)
The slices of liver tissue were immersed in 4% paraformaldehyde-PBS at 4°C overnight, and then embedded in paraffin employing routine procedures. Deparaffinized sections (4 µm thick) were rehydrated and used for immunohistochemical staining as previously described [17]. Briefly, the sections were first immersed in Antigen Unmasking Solution pH 6.0 (Vector Laboratories Inc., Burlingame, CA, USA) and the temperature was kept above 95°C for 30 min by wet-autoclaving. The antigen-retrieved sections were treated with 5% donkey serum-PBS, then incubated with the primary antibodies for 1 hr at room temperature (RT): rabbit anti-4-hydroxynonenal (4-HNE) polyclonal antiserum (HNE11-S; 1:5,000 dilution; Alpha Diagnostic International, Inc., San Antonio, TX, USA) and goat anti-8-hydroxyguanosine (8-OHG) antiserum (8OHG12-S; 1:2,000 dilution; Alpha Diagnostic International, Inc.). We included a negative control (lacking primary antibody) for each immunohistochemical analysis. The sections were successively treated with ImmPRESS Reagent (Vector Laboratories) as a secondary antibody (anti-rabbit IgG and anti-goat IgG, respectively) for 30 min. Then, the sections were incubated with diaminobenzidine (DAB) solution (Wako Pure Chemical Industries, Ltd.) for 10 min at RT to detect the peroxidase enzyme activity. Finally, sections were counterstained with hematoxylin. Three optical fields of sections from each animal (n = 3) were randomly chosen and photomicrographed. The staining intensity was quantitated using ImageJ software (National Institutes of Health, Bethesda, MD, USA). Briefly, color images were first subjected to color deconvolution [18] using G. Landini's plugin to obtain separate gray-scale images for DAB and hematoxylin. We measured the mean optical density of DAB staining in the cytoplasm for 4-HNE and that in the area of nuclei for 8-OHG, applying binary mask images of nuclei created from the hematoxylin images, and evaluated the staining intensity.
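The authors quantified DAB staining with ImageJ and Landini's color deconvolution plugin; an analogous, simplified workflow can be sketched in Python with scikit-image, whose rgb2hed transform separates hematoxylin and DAB channels. The file name, the Otsu-based nuclear mask, and the use of raw channel means instead of calibrated optical densities are all simplifying assumptions rather than the authors' exact procedure.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed
from skimage.filters import threshold_otsu

img = io.imread("liver_section.png")[..., :3]   # hypothetical RGB photomicrograph
hed = rgb2hed(img)                              # channels: hematoxylin, eosin, DAB
hema, dab = hed[..., 0], hed[..., 2]

nuclei = hema > threshold_otsu(hema)            # binary nuclear mask from the hematoxylin channel
cytoplasm = ~nuclei

mean_dab_cytoplasm = dab[cytoplasm].mean()      # 4-HNE-like readout (cytoplasmic DAB)
mean_dab_nuclei = dab[nuclei].mean()            # 8-OHG-like readout (nuclear DAB)
print(mean_dab_cytoplasm, mean_dab_nuclei)
```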
Statistical analysis
Data are presented as means ± SEs. Log transformation of continuous variables was used when needed to satisfy distributional requirements for parametric tests. Comparisons between two groups were made with Student's t-test. For comparisons among three groups, one-way ANOVA was used with the Tukey test. A P value < 0.05 was considered statistically significant.
Statistical analyses were performed using Stat View software (Version 5.01; SAS Institute, Cary, NC, USA).
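The same analysis plan (Student's t-test for two groups, one-way ANOVA with Tukey's test for three, optional log transformation) can be reproduced outside StatView; the sketch below uses SciPy and statsmodels with made-up fasting glucose values, so the numbers are purely illustrative.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical fasting plasma glucose values (mg/dl), n = 6 per diet group
sc = np.array([110, 125, 118, 130, 122, 115])
lc = np.array([135, 142, 128, 150, 138, 145])
sr = np.array([160, 172, 155, 168, 180, 158])

t, p_two_groups = stats.ttest_ind(sc, sr)        # two-group comparison
f, p_anova = stats.f_oneway(sc, lc, sr)          # three-group omnibus test

values = np.concatenate([np.log(sc), np.log(lc), np.log(sr)])  # log-transform if needed
groups = ["SC"] * 6 + ["LC"] * 6 + ["SR"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))           # Tukey post hoc comparisons
```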
Glucose tolerance and insulin sensitivity depend on the dietary carbohydrate component
In a preliminary experiment, ad libitum consumption was monitored for 1 week and no significant differences in food intakes were observed among mice fed the 3 diets. As our goal was to investigate the effects of these 3 diets with similar caloric intakes, we restricted the food intakes of mice fed the LC and SR diets, which have higher caloric densities than the SC diet ( Table 1). The diets were provided daily at 6 p.m. and the amounts of the LC and SR diets were determined to be the same calorically as the mean of the calories consumed by the mice fed the SC diet on the previous day. Under these conditions, mice were pair-fed the 3 different diets for 16 weeks. The mean of total calories during the 16 weeks of this study, i.e. that consumed by each mouse, is shown at the bottom of Table 1. Fig. 1A shows the body weight changes in all groups throughout the study. Despite the food intake restrictions in WSR, these mice exhibited significantly increased weights as compared with WSC. On the other hand, no significant differences in body weights were seen in the STZ and KKAy diabetic mice. KKAy mice have such abundant subcutaneous and mesenteric adipose tissues that we could not weigh them accurately. Thus, we weighed epididymal fat, which likely represents the fat accumulation in WAT (Fig. 1B). In WSR, epididymal fat accumulation was significantly increased, suggesting weight gain to be attributable to overall fat accumulation. In STZ and KKAy mice, there were no significant differences in fat weights among the three diets. In these mice, the downregulation of insulin signaling due to insulin deficiency or insulin resistance may blunt fatty acid synthesis, which may explain why the body or fat weight changes in these diabetic mice did not reach statistical significance. As shown in Fig. 1C and 1D, the analysis of hepatic TG contents revealed increased ectopic fat accumulations in mice fed the LC diet, but these increases did not result in significant hepatic weight gains.
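The pair-feeding rule described above (LC and SR rations matched to the previous day's mean SC caloric intake) reduces to a simple conversion between caloric densities; the densities used below are placeholders, not the values from Table 1.

```python
# placeholder caloric densities (kcal per gram of diet); Table 1 holds the actual values
KCAL_PER_G = {"SC": 3.6, "LC": 4.3, "SR": 4.4}

def grams_to_feed(mean_sc_intake_g: float, diet: str) -> float:
    """Grams of LC or SR diet matching yesterday's mean caloric intake on the SC diet."""
    kcal_target = mean_sc_intake_g * KCAL_PER_G["SC"]
    return kcal_target / KCAL_PER_G[diet]

# If SC-fed mice ate 3.5 g on average yesterday, SR-fed mice receive about 2.9 g today.
print(round(grams_to_feed(3.5, "SR"), 2))
```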
As shown in Table 2, basic metabolic biomarkers were analyzed. The fasting plasma glucose and fasting IRI levels were significantly higher in WSR and KSR than in WSC and KSC, respectively, indicating deterioration of insulin sensitivity in these mice. In contrast, no significant differences were observed in postprandial plasma glucose levels, though these levels tended to be higher in mice receiving the SC diet than in those on the SR diet. Total cholesterol (T-chol), low density lipoprotein-chol, high density lipoprotein-chol and TG levels differed minimally among the three diets. Urinary ketone/creatinine concentrations (mmol/gCre) were also analyzed. While only diabetic mice given the SR diet showed a significantly higher ketone level, the ketogenic effects of the SR diet, i.e., moderately increased fat and severely restricted carbohydrate, were very small as compared with those seen in STZ mice, i.e., in a state of insulin deficiency. Urinary albumin/creatinine ratios were significantly elevated in diabetic mice fed the LC diet. When dietary carbohydrate is reduced, the protein composition inevitably increases, resulting in elevated urinary albumin.
In an intraperitoneal glucose tolerance test, WSR and KSR exhibited marked glucose intolerance as compared with WSC and KSC, respectively, mice ( Fig. 2A, 2B). In STZ mice, there were no significant differences among the three diets, suggesting the deterioration of glucose tolerance due to insulin deficiency to be so striking that dietary effects on glucose intolerance might be negligible. In the insulin tolerance test (Fig. 2C, 2D), insulin sensitivity was decreased in mice fed the SR diet as compared to those given the SC diet, and this reduction was dependent on the amount of carbohydrate in the diet. These results suggest that a low carbohydrate diet exerts detrimental effects on insulin sensitivity in both diabetic and nondiabetic mice.
Screening for the cause of glucose intolerance by examining hepatic transcriptional levels of various genes in mice fed the SC and SR diets

Next, in order to screen for the causes of glucose intolerance in these mice, the hepatic expressions of genes related to glucose and lipid metabolism in mice fed the SC and SR diets were analyzed using real-time PCR. The results for wild type mice (WSC vs. WSR) are presented in Fig. 3A. As to insulin signaling molecules, the decreases in IRS-1 and -2 expressions and the increase in FoxO1 expression were significant. The upregulations of the gluconeogenesis-related enzymes PEPCK and G6Pase might account for the higher fasting plasma glucose in WSR mice. In contrast, hepatic STAT3, which plays a role in the inhibition of gluconeogenesis-related enzymes, was increased. As IL6R and IL6 expressions were both increased, the activation of STAT3 signaling might not be due to activation of the leptin signal, but rather to activation of IL6 signaling. These results suggest that the STAT3 signal compensates for reduced insulin signaling in wild type mice. As to lipid metabolism, FAS was inhibited, which might be attributable to the negative feedback that follows TG accumulation, as previously reported [19]. While results similar to those of wild type mice were obtained for some hepatic gene expressions, others in diabetic mice did not differ significantly between mice fed the SC and SR diets (Fig. 3B, 3C), in contrast to those in wild type mice. In particular, the significant elevations of gluconeogenesis-related enzymes observed in WSR as compared with WSC were minimal in diabetic mice. These results suggest that the upregulation of hepatic gluconeogenesis observed in WSR may not simply be a phenomenon secondary to hepatic insulin resistance, but instead a required compensatory mechanism in response to glucose deprivation. In fact, the up-regulation of hepatic gluconeogenesis was blunted in diabetic mice, which have no need to prevent hypoglycemia. We searched for further possible explanations of the glucose intolerance and, interestingly, found the expression of hepatic FGF21, which has a role in ameliorating insulin resistance [20], to be markedly decreased in both wild type and KKAy mice.
Both the hepatic expression and serum levels of FGF21 were markedly decreased in WSR and KSR mice
We investigated the hepatic mRNA levels of FGF21 in all groups of mice in detail. As shown in Fig. 4A-C, in wild type and KKAy mice, hepatic FGF21 expressions were decreased, depending on the amount of carbohydrate in the diet, while no statistically significant difference was observed in STZ mice. To confirm this result, we further examined the serum levels of FGF21 by ELISA and obtained results consistent with those of hepatic transcriptional expression (Fig. 4D). These findings might explain, at least in part, the glucose intolerance in WSR and KSR in comparison with WSC and KSC mice, respectively. Next, we investigated the downstream target of FGF21 and found that PGC1α, which has crucial roles in energy expenditure in adipose tissues, was markedly reduced in both wild type and KKAy mice fed the SR diet at both the transcriptional (Fig. 5A) and the translational level (Fig. 5B). These results might explain the greater weight gain in WSR than WSC.

Figure 6. SCD1 (C) and Elovl6 (D) mRNA levels of all murine models were analyzed by quantitative real time PCR (white bar: mice fed the SC diet, gray bar: mice fed the LC diet, black bar: mice fed the SR diet). Hepatic SCD1 protein levels of wild type (E), STZ (F) and KKAy (G) mice were analyzed by western blotting. In the middle panels, representative data (two samples for each group) are presented. In the lower panels, each column shows the mean ± S.E. obtained from 6 mice (white bar: mice fed the SC diet, gray bar: mice fed the LC diet, black bar: mice fed the SR diet). Upper panels show the internal control using anti-β-actin antibody. *p<0.05 (vs corresponding mice fed the SC diet). **p<0.05 (vs corresponding mice fed the SR diet). doi:10.1371/journal.pone.0104948.g006
Decreased expression of stearoyl CoA desaturase 1 (SCD-1) in mice fed low carbohydrate diets

We next analyzed serum fatty acid compositions by using LC/MS/MS. As shown in Table 3, monounsaturated fatty acids, i.e., palmitoleic acid (C16:1) and oleic acid (C18:1), were strikingly decreased in mice fed the SR diet as compared to those in mice receiving the SC diet. Thus, we calculated C16:1/C16:0 and C18:1/C18:0 ratios to clarify monounsaturation activity. As shown in Fig. 6A, monounsaturation reactions were impaired in mice fed low carbohydrate diets, while the elongation reaction (C18:0/C16:0) was conversely increased (Fig. 6B). Then, transcriptional and translational expressions of hepatic SCD1, an enzyme that catalyzes the synthesis of monounsaturated fatty acids, were analyzed (Fig. 6C, Fig. 6E-G). Consistent with the decreased C16:1/C16:0 and C18:1/C18:0 ratios, the expression of hepatic SCD1 in both wild type and diabetic mice also decreased, and this reduction was dependent on the amount of carbohydrate in the diet. In contrast, there were no differences in transcriptional levels of hepatic Elovl6 (Fig. 6D), a rate-limiting enzyme catalyzing the elongation of saturated and monounsaturated fatty acids, among mice fed the 3 diets. Considering that oleic acid has a crucial role in reducing oxidative stress [21], we examined the hepatic accumulation of oxidative stress in a histochemical study. We quantitatively evaluated immunostaining for 4-HNE in the cytoplasm and 8-OHG in nuclei as oxidative stress markers. As shown in Fig. 7, while neither 4-HNE nor 8-OHG accumulations were found to differ between WSC and WSR, there were significant differences between KSC and KSR, which partially explains the marked increase in insulin resistance in KSR mice. Histologically, KKAy mice had fatty livers without inflammatory cellular infiltrations. Though the reason for KKAy mice showing these differences in reactive oxygen species (ROS) accumulation remains unknown, the fatty liver observed in KKAy mice, which are vulnerable to oxidative stress, may have accelerated the hepatic ROS accumulation only in KSR mice.
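The desaturation and elongation indices used above are plain concentration ratios; a minimal sketch with hypothetical serum values makes the computation explicit.

```python
# hypothetical serum fatty acid concentrations (same units for all species)
fa = {"C16:0": 250.0, "C16:1": 12.0, "C18:0": 110.0, "C18:1": 180.0}

desaturation_16 = fa["C16:1"] / fa["C16:0"]   # SCD1-related index (palmitoleic/palmitic)
desaturation_18 = fa["C18:1"] / fa["C18:0"]   # SCD1-related index (oleic/stearic)
elongation = fa["C18:0"] / fa["C16:0"]        # Elovl6-related index (stearic/palmitic)

print(round(desaturation_16, 3), round(desaturation_18, 3), round(elongation, 3))
```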
Discussion
In this study, we prepared 3 types of diets, i.e. a standard chow, a relatively low carbohydrate diet and a severely carbohydrate restricted diet. The SC diet, containing 63% carbohydrate, corresponds to a high carbohydrate/low fat diet, which has an established record of safety and efficacy. Thus, an evidence-based recommendation for high carbohydrate (≥55%) nutrition therapy for individuals with diabetes exists in most countries, including Japan [22]. The C:P:F composition of the LC diet, i.e. a moderately reduced carbohydrate content (35–40%) with a moderately high fat content (35–40%), is a Mediterranean-like diet, which is among the most popular low carbohydrate diets in western societies [23]. In addition, we prepared an SR diet to investigate how such a diet influences health. In previous studies, mice fed a ketogenic diet (KD) or a high fat diet (HFD) exhibited marked reductions in body weight [8,9]. In a human study as well, low carbohydrate diets were effective for weight loss [1], which might be attributable to the decreased intake and appetite loss associated with low carbohydrate diets [6]. However, these effects lasted only for 6 months [1], probably because patients adapted to the low carbohydrate diets and their appetites were restored. Therefore, in such rodent experiments and human intervention studies, it is not possible to distinguish the various effects of the low carbohydrate diet from those of the associated decreased calorie intakes. In the present study, we devised the SR diet to have a high protein instead of a high fat content in order to avoid appetite reduction and weight loss. In addition, we employed pair-feeding, in which each mouse had the same caloric intake as its counterpart. Thus, our experiments precisely reflect the actual effects of a low carbohydrate diet.
As the aim of this study was simply to investigate the effects of different macronutrients on biological phenotypes in wild type and diabetic mice, we could not prepare corresponding control mice, i.e. STZ mice versus PBS injected mice, or KKAy versus KK mice. Thus, our observations do not allow us to draw conclusions about the differences between animal models. In a preliminary experiment, we prepared STZ mice by a conventional method [24]. However, STZ mice fed the SR diet (SSR mice) did not survive for more than a week after injection, which might be attributable to severe ketoacidosis. Thus, we changed the protocol and injected a smaller amount of STZ as mentioned in the Methods. With this modification, the STZ mice used in our study exhibited less plasma glucose elevation and higher fasting insulin levels than the STZ mice previously described [24], indicating that these mice are not like type 1 diabetes models, instead resembling type 2 diabetes models with b cell exhaustion.
We unexpectedly discovered that both hepatic expressions and serum levels of FGF21 were suppressed in wild type and KKAy mice fed the low carbohydrate diets, and that these reductions were dependent on the amount of carbohydrate in the diet. As FGF21 positively regulates the expressions of thermogenesis-related genes, i.e., UCP-1 and PGC1α, in white adipose tissues [25], these decreases in FGF21 levels are likely to be the main cause of weight gain in mice fed low carbohydrate diets. In very recent studies [11,20,26], FGF21 was demonstrated to have a crucial role in enhancing insulin sensitivity. FGF21 administration to obese mice resulted in improvements in hepatic insulin sensitivity, which may be attributable, at least in part, to increased energy expenditure in the liver and white adipose tissue. Thus, markedly decreased serum FGF21 levels likely explain the reduced insulin sensitivity in these mice. However, in these studies, HFD-fed mice exhibited higher serum levels of FGF21 [27], which would appear to be incompatible with our present results. We can explain this contradiction as follows: both hepatic expression and circulating levels of FGF21 are known to be strongly up-regulated by fasting or by ketogenesis induced by HFD [10]. In our study, wild type and KKAy mice fed the LC or SR diet did not manifest ketogenesis, because the fat compositions (37%) of these diets are not as high as in HFD. Thus, FGF21 expression might not be stimulated under these non-ketotic conditions. In contrast, the results obtained in STZ mice were rather ambiguous, i.e. no significant difference in FGF21 expressions among mice fed the 3 diets, possibly reflecting moderate ketogenesis in STZ mice. Though hepatic FGF21 was previously reported to be induced by acute exercise [28] or endoplasmic reticulum stress [29], we could not identify the mechanism by which FGF21 was down-regulated by long-term feeding of a low carbohydrate diet. In a very recent study [30], βKlotho, known as a longevity-promoting gene, was demonstrated to be essential for the beneficial metabolic actions of FGF21. These two proteins were revealed to work cooperatively. Moreover, transgenic overexpression of FGF21 markedly extends lifespans in mice without reducing food intake [31]. These findings raise the possibility that FGF21 extends lifespan. Therefore, the evidence obtained in this study strongly supports the findings of the clinical study [5], in which a low carbohydrate diet increased mortality.

Figure 7. Immunohistochemistry for 4-HNE and 8-OHG. Three optical fields of sections from each animal (n = 3) were randomly chosen and photomicrographed. Original magnification is ×50. The staining intensity was quantitated using ImageJ. Color images were first subjected to color deconvolution using G. Landini's plugin to obtain separate gray-scale images of DAB and hematoxylin. We measured the mean optical density of DAB staining in the cytoplasm for 4-HNE (A) and that in the area of nuclei for 8-OHG (B), applying binary mask images of nuclei created from the hematoxylin images, and evaluated the staining intensity. Upper panels are negative controls, lacking primary antibody. One representative image of wild type (middle panels) and KKAy (lower panels) mice, fed the SC diet (left panels: without hematoxylin staining, middle panels: with hematoxylin staining) and the SR diet (right panels: with hematoxylin staining) is presented. Right bar graphs show staining intensity averages. *p<0.05 (KSC vs. KSR mice). doi:10.1371/journal.pone.0104948.g007
We further assessed the serum compositions of fatty acids, because fatty acids were reported to be associated with insulin sensitivity [32]. In this analysis, the conversion from saturated to monounsaturated fatty acids, corresponding to hepatic SCD1 activity, was decreased in mice fed the low carbohydrate diets, and this reduction was dependent on the amount of carbohydrate in the diet. The present study is, to our knowledge, the first to directly confirm fatty acid composition changes in rodents, while decreased SCD1 expression was previously reported in obese humans [33]. Though the precise mechanism of hepatic SCD1 suppression remains unknown, several possibilities have been raised. For example, the transcriptional downregulation of SCD1 was suggested to be associated with fatty liver [34]. A previous noteworthy study [35] showed murine hepatic SCD1 expression to be induced by consuming a fat-free, high carbohydrate diet, which is consistent with our results. Taken together, our results support the idea that dietary carbohydrates regulate the expression of hepatic SCD1. Intake of monounsaturated fatty acids has been reported to reduce oxidative stress and insulin resistance [36]. In a very recent in vitro study, incubation with saturated fatty acids induced marked ROS accumulation in rat hepatocytes, while incubation with oleic acid did not [37]. Thus, the decreased SCD1 expression, i.e., the lower C16:1/C16:0 ratio, observed in mice fed the low carbohydrate diets may partially explain the hepatic accumulation of oxidative stress observed in our immunohistochemical analysis. Numerous clinical studies have shown that a Mediterranean diet, which involves the use of olive oil as the principal fat component, protects against cardiovascular diseases [38]. Taking into consideration that the C:P:F composition of the Mediterranean diet is similar to that of the LC diet, the addition of olive oil may exert a beneficial effect by offsetting the major disadvantage of this popular diet, which may disturb the production of monounsaturated fatty acids in the liver.
One of the limitations of our study is that we cannot rule out the possibility that our observations may have been affected by the high-protein diets provided, since protein and carbohydrate proportions are inversely related. In a very recent rodent study, the carbohydrate/protein ratio, not calorie intake, was found to determine cardiometabolic health, aging and longevity in mice [39]. Though our findings do not allow us to identify which macronutrient condition, i.e., low carbohydrate or high protein, is detrimental to health, it is noteworthy that our findings shed light on the cardiometabolic phenotypes affected by the carbohydrate/protein ratio.
We investigated the long-term effects of a low carbohydrate diet in diabetic murine models. These mice, when fed the low carbohydrate diet, exhibited glucose intolerance, decreased serum levels of FGF21 and also decreased expression of hepatic SCD1. All of these reductions were dependent on the amount of carbohydrate in the diet. Notably, these manifestations were unrelated to weight regulation in diabetic mice. To our knowledge, this is the first report indicating that a low carbohydrate diet leads to deleterious metabolic manifestations in diabetic mice, which may explain the close link between such diets and the high risks for cardiovascular events and morbidity observed in clinical settings.

Table 3. Fatty acid composition of each group (mg/mL).
Comparison of the effect of acceptance and commitment therapy and cognitive behavioral therapy on pain tolerance and intensity perception in patients with dental anxiety: A randomized trial
ABSTRACT Background: Dental anxiety has negative effects on dentists' pain management. Patients have different levels of pain tolerance. Therefore, providing psychological interventions can reduce treatment avoidance and promote oral health. This study compared the effect of acceptance and commitment therapy (ACT) and cognitive behavioral therapy (CBT) on pain coping strategies and pain perception intensity in patients with dental anxiety. Materials and Methods: This clinical trial with a pretest-posttest control group design and a 3-month follow-up period was performed on 45 patients with dental anxiety. They were selected by convenience sampling and randomly assigned to two experimental groups and one control group. The first experimental group underwent 10 sessions of ACT, the second experimental group underwent 10 sessions of CBT, and the control group underwent oral care training. Data were collected by Rosenstiel and Keefe's Coping Strategies Questionnaire and the McGill Pain Questionnaire and analyzed by SPSS (version 24) software. The considered significance level was 0.05. Results: The results showed no significant difference between ACT and CBT in pain coping strategies and pain perception intensity (P > 0.05) but indicated a significant difference between the treatment groups and the control group. Moreover, the results showed a significant difference between posttest and follow-up and pretest in pain coping strategies and pain perception intensity (P < 0.01) but indicated no significant difference between posttest and follow-up (P > 0.05). Conclusion: ACT and CBT can play an important role in the sustainable improvement of pain coping strategies and pain perception intensity in patients with dental anxiety.
INTRODUCTION
Oral health is one of the most important aspects of health, but visiting a dentist is not an easy task for most people [1] because there are obstacles in this regard, one of the most important of which is dental anxiety. [2] Dental anxiety is one of the major reasons for panic, avoidance, and nonreferral of patients to dental care centers, which consequently increases oral health deterioration. [3] Dental anxiety is a reaction to an unknown risk and is defined as a psychological reaction to the fear of dental interventions. [4] This problem is ranked fifth among common anxiety-inducing situations and can even lead to social disability and reduced quality of life. [5] The prevalence of dental anxiety in young adults has been reported to be 14.9% in Australia, 12.5% in Canada, and 12.6% in Russia. [6] Morovati et al. [7] surveyed 400 patients in 20 dental offices in Mashhad and reported that 16.8% had mild dental anxiety, 58.5% had moderate dental anxiety, and 24.8% had severe dental anxiety. Yaghouti and Sistani [8] also reported that 333 participants (about 83%) were afraid of dental treatment and 161 (about 40%) had dental anxiety.
Since dental anxiety stems from a pervasive sense of fear of dental situations with a concern originating from a recurring thought, [9] it seems that the use of ineffective pain coping strategies plays an essential role in the emergence of this form of anxiety. Coping refers to a person's mental, emotional, and behavioral efforts while encountering stress to overcome, tolerate, or minimize complications. [10] Pain coping strategies are also defined as specific thoughts and behaviors that people use to manage their pain or emotional reactions to pain. These behaviors are observed as verbal and nonverbal messages in the person in pain. Nonverbal messages such as voice behaviors, facial expressions, body movements, fisting, and body pulling, in addition to completing verbal messages, better represent patients' true thoughts and feelings. [11] Patients' differences in the use of pain coping strategies explain the differences between them in the range of adaptation to the situation and can anticipate the pain perception intensity. [12] Pain perception intensity is one of the extreme forms of maladaptive response. Most anxious people assume dentistry to be accompanied by pain, which is one of the factors affecting the increase of psychological reactions to the sensation of pain and its transmission. [13] Some pieces of evidence indicate a relationship between dental anxiety and invasive treatment and painful experiences. [14] De Jong et al. [15] found that patients with high dental anxiety reported about five times more pain than others. This anxiety is closely related to painful stimuli that lead to greater pain perception in people. The high pain perception intensity in people with dental anxiety marks the exaggerated memory of pain experience.
It seems that acceptance and commitment therapy (ACT) can be successful in improving pain variables related to dental anxiety, such as pain coping strategies and pain perception intensity. This treatment addresses ineffective control and avoidance strategies by developing techniques that promote psychological flexibility. [16] Further, it helps people accept pain (the desire to experience pain or unpleasant events without trying to control them) or thoughts related to pain, promote the valuable aspects of life, and increase valuable activities. It also encourages patients with pain to accept pain and its consequences and to perform valuable activities to improve their psychological well-being instead of making a vain attempt against pain. [17] Research in this field has shown that ACT improves pain indices such as pain perception intensity [18,19] and pain acceptance. [19] Cognitive behavior therapy is also one of the practical therapies in this field owing to its strong empirical support in the improvement of anxiety disorders through regular desensitization. In this type of treatment, the patient is assisted to recognize distorted thinking patterns and dysfunctional behaviors. To be able to change these distorted and dysfunctional thoughts, regular discussions and organized behavioral tasks are used, which can have positive effects on pain variables. [20] In other words, the cognitive behavioral approach to the formation of pain variables is based on the basic assumption that people involved with pain variables enter the treatment process believing that many of their problems are uncontrollable. Therefore, the goals of cognitive behavioral therapy (CBT) are to create this expectation in patients that they can control their problems effectively and to teach them skills to effectively deal with their current problems and respond to new problems that occur following the treatment. The cognitive behavioral approach to modifying pain variables explicitly seeks to assist individuals in identifying and altering beliefs, recognition, and nonconforming or unhelpful coping strategies, which, based on existing research, cause some of the problems observed among patients with dental anxiety. [21] Research shows that CBT improves pain coping strategies, [22] decreases the pain perception intensity, [23] and reduces psychosomatic problems caused by dental situations. [3,24] Therefore, considering the role of pain coping strategies and pain perception intensity in reducing the pain tolerance threshold of patients with dental anxiety, and the negative consequences of pain indices in exacerbating dental anxiety and avoiding treatment, this study was conducted to compare the effect of ACT and CBT on pain coping strategies and pain perception intensity in patients with dental anxiety to select appropriate treatments to help these people.
MATERIALS AND METHODS
The present study was an applied single-blind clinical trial with a pretest-posttest control group design and a 3-month follow-up period. The research project was approved by Isfahan University of Medical Sciences, with research code 298221 and ethics code IR.MUI.MED.REC.1398.626, and registered in Iranian Registry of Clinical Trials, with registration code IRCT20190505043473N2. The statistical population included patients with dental anxiety in Isfahan in the second half of 2020. Using the formula for research sample calculation with unknown population size, the sample size in this study was estimated to be 48 patients who were selected by convenience sampling and based on inclusion and exclusion criteria. They were randomly divided equally into two experimental groups and one control group.
The inclusion criteria comprised age range 19-50 years, education higher than junior high school, not studying dentistry and psychology, diagnosis of dental anxiety using Southard Dental Anxiety Questionnaire and patients suffering from dental anxiety, with a score of 130.5 ± 23.6, [25] and supplementary confirmation of dental anxiety diagnosis by a pulse oximeter (number of heart rates), as the patient rested in the waiting room for 5 min, and then his/her heart rate was measured and averaged after being placed on a dental unit twice. If this number was reported with an increase of 7.3% beats/min, [26] the patient was included in the study. This was done by a dentist. Other inclusion criteria were willingness to participate in intervention sessions, completion of informed written consent, additional review of psychiatric criteria using the Symptom Checklist-90-Revised (SCL-90-R), lack of systemic diseases and congenital syndromes, absence of psychiatric disorders except a spectrum of anxiety disorders, minimum physical and cognitive ability to participate in psychological interventions through psychiatric interview according to DSM-5 criteria, having at least 20 natural teeth and at least one treated tooth, no need for emergency dental treatment via dental examination, lack of psychological interventions, and nonuse of psychiatric drugs since the past 6 months. The exclusion criteria included the use of various drugs and alcohol, lack of cooperation or unwillingness to continue the research, failure to complete the assignments presented in sessions, and absence of more than two sessions in treatment sessions. Ethical principles of confidentiality, use of data only in line with the research objectives, freedom and full authority of the participants to withdraw from the research, providing accurate results upon the request of the participants, and training the control group after the intervention were also taken into account.
Pretest was performed through the Coping Strategies Questionnaire (CSQ), McGill Pain Questionnaire, and Dental Anxiety Inventory. After the pretest, the first experimental group underwent CBT in 10 weekly sessions for 90 min for two and a half months [ Table 1], and the second experimental group underwent ACT in 10 weekly sessions for 90 min for two and a half months [ Table 2] in Pardis Clinic of Isfahan. The control group received oral health training during this treatment period. At the end of the treatment sessions, all three groups completed the research questionnaires again. Three months after the posttest, the follow-up was performed. The research tools included the following:
Coping strategies questionnaire
The CSQ was used to measure pain coping strategies. This questionnaire was designed by Rosenstiel and Keefe in 1983 [27] and has 42 items that measure pain coping strategies. The coping strategies include six cognitive strategies (diverting attention, reinterpretation of pain, coping self-statements (self-talk), ignoring pain sensations, catastrophizing, and praying/hoping) and a behavioral strategy of increasing behavioral activity. Each coping strategy consists of 6 items, and the respondent is asked to use a seven-point scale from 0 to 6 to indicate how much they use each of the strategies when faced with pain. The scores of the six items are added up and a combined score is obtained for each strategy, which can vary from 0 to 36. The overall score of the coping strategies ranges from 0 to 252. This questionnaire was first normalized by Rosenstiel and Keefe [27] in patients with chronic back pain, and its validity and reliability have been confirmed by various studies. In Iran, the psychometric properties of the questionnaire have been studied by Asghari Moghadam and Golk, [28] with a Cronbach's alpha coefficient of 0.80 for the whole questionnaire at the 0.05 level. Nasimi Far et al. [29] also reported a Cronbach's alpha coefficient of 0.85 for this questionnaire.

Summary of the acceptance and commitment therapy (ACT) sessions:

Session One: Welcoming, introduction, instructions for group work and clarifying the type of therapy; overall assessment and talking about the negative thoughts, feelings, and concerns of treatment seekers; expressing the nature and features of normal dental fear and anxiety; focusing on the therapeutic objective and the commitment of the therapist; practicing concentration and introducing mindfulness; and practicing conscious breathing.

Session Two: Practicing concentration; performance assessment of the past week; reviewing dental therapy avoidance models and the efficacy and costs of this avoidance; and observing dental anxiety instead of reacting to it through practicing acceptance of thoughts and emotions.

Session Three: Practicing concentration; performance assessment; reviewing the reactions of the treatment seekers to former sessions; re-practicing the acceptance of thoughts and emotions; introducing control as a problem and explaining whether the main problem is control or whether abandoning control is an alternative solution; metaphor: challenging the dental anxiety monster; and assigning homework.

Session Four: Practicing concentration; reviewing the acceptance of thoughts and emotions; practicing anxiety acceptance through expressing the nature of acceptance and awareness; accepting anxiety and that acceptance is not a quick solution to anxiety; talking about controlling external events versus controlling internal issues; and homework: life promotion tasks.

Session Five: Practicing concentration; performance assessment; review of reactions to former sessions; introducing oneself as context versus oneself as content; metaphors "playing volleyball with thoughts and stressful emotions," "chess board," and "radio of anxiety news"; the life compass as the final cause for exposure; analyzing the valuable paths sheet; and assigning homework.

Session Six: Practicing concentration; performance assessment; reviewing the reactions to former sessions; discussing emotional desires through attempts or actions along with the pencil practice; parable: thermostat of desire; exposure to thoughts and intense emotions along with the metaphor "bus driver"; and assigning homework.

Sessions Seven to Nine: Practicing concentration; performance assessment; reviewing the reactions to former sessions; value-oriented behavioral activation via behavioral activation, defusion, and mindfulness techniques; knowledge of mental and verbal traps; empirical practice of life promotion, including practicing anxiety acceptance, life sensing exercises (internal and/or visualization exercises) or activities related to valuable life objectives; monitoring the experiences related to anxiety and fear; and assigning homework.

Session Ten: Practicing concentration; performance assessment; reviewing the reactions to former sessions; continuing the introduction of values; enhancing concentration on behavioral commitment; preparing the treatment seekers for the end of treatment; presenting a summary of treatment procedures; preparing for the recurrence of the problem and possible failures; identifying high-risk situations; asking the treatment seekers to implement these principles in their life; giving the treatment seekers a brochure summarizing the metaphors used; and end of treatment.

Summary of the cognitive behavioral therapy (CBT) sessions:

Session One: Introducing the therapist and group members; creating a secure and reliable environment for the members; and providing a ground for group coherence and relationship (techniques: establishing rapport or a therapeutic relationship, familiarity with the general rules of treatment, pretest components, familiarity with dental anxiety, assessment of therapeutic expectations, and assigning homework).

Session Two: Reviewing the homework of the former session; explaining the dental anxiety vicious cycle; extensive analysis of the negative psychological, cognitive, and physiologic effects associated with dental anxiety; assessment of dental anxiety in the members; and assigning homework.

Session Three: Reviewing the homework of the former session; presenting the importance of thoughts and their role in inducing emotions; identifying thoughts and the negative spontaneous thoughts of patients; analyzing common cognitive distortions during the occurrence of dental anxiety and distinguishing the difference between thoughts and reality; presenting the three-component model of dentistry; presenting the therapy rationale; and assigning homework.

Session Four: Reviewing the homework of the former session; finding the implication of thoughts; validating the negative thoughts and beliefs related to dental anxiety; presenting strategies for coping with negative thoughts related to dental anxiety; and assigning homework.

Session Five: Reviewing the homework of the former session; evaluating the quality of evidence; creating adaptable thoughts and beliefs; evaluating the adaptable thoughts; introducing exposure; investigating the instructions of exposure and its practice; and assigning homework.

Session Six: Reviewing the homework of the former session; teaching tension-free relaxation; practicing confrontation and imaginal exposure; and homework.

Session Seven: Reviewing the homework of the former session; expressing anxiety changes in imaginal exposure; testing the indicators and analyzing the progress of patients; reviewing the negative memories related to dental situations; focusing on behavior rather than on emotions; and assigning homework.

Session Eight: Reviewing the homework of the former session; presenting the experiences of group members about their imaginal exposure; testing the remaining indicators; practicing imaginal exposure in group meetings; and assigning homework.

Session Nine: Reviewing the homework of the former session; sharing the achievements and failures in imaginal exposure; emphasizing the common topics and issues; in vivo exposure; and assigning homework.

Session Ten: Reviewing the homework of the former session; reviewing the progress of group members through a ranking form; expressing the thoughts and emotions about the end of sessions; and determining the probable future barriers and problems to prevent their recurrence.
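As an illustration of the CSQ scoring described at the start of this subsection (42 items rated 0-6, six items per strategy, subscale range 0-36, total range 0-252), a minimal scoring sketch follows; the assumption that each subscale corresponds to a consecutive block of six items is for illustration only and does not reflect the actual item keying of the questionnaire.

```python
from typing import Dict, List

SUBSCALES = ["diverting attention", "reinterpretation of pain", "coping self-statements",
             "ignoring pain sensations", "catastrophizing", "praying/hoping",
             "increasing behavioral activity"]

def score_csq(responses: List[int]) -> Dict[str, int]:
    """Score 42 CSQ items rated 0-6, assuming consecutive blocks of 6 items per subscale."""
    assert len(responses) == 42 and all(0 <= r <= 6 for r in responses)
    scores = {name: sum(responses[i * 6:(i + 1) * 6])   # each subscale ranges 0-36
              for i, name in enumerate(SUBSCALES)}
    scores["total"] = sum(responses)                    # overall range 0-252
    return scores
```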
McGill pain questionnaire
This questionnaire was developed by Melzack [30] and has 20 sets of phrases to assess people's perception of pain. [30] If the respondent does not consider any of the phrases to be consistent with his/her pain description, that set will be assigned a score of 0. To obtain the total score of the questionnaire, the sum of scores of all questions is calculated. A higher score indicates a higher degree of pain perception in the respondent and vice versa. Dworkin et al. [31] confirmed the validity of this questionnaire. Its reliability was also calculated using Cronbach's alpha. The alpha coefficient for all dimensions was between 0.83 and 0.87. Naseri [32] reported a Cronbach's alpha of 0.722 for the sensory perception of pain, 0.837 for the emotional perception of pain, 0.211 for the pain perception assessment, 0.648 for various pains, and 0.838 for the whole questionnaire.
Dental anxiety inventory
This inventory, which was developed by Stouthard et al., was used to measure dental anxiety. [25] It is a self-report questionnaire that consists of 36 items in the form of scary statements about dental situations. The items are answered on a five-point Likert scale (including completely false = score 1 to completely true = score 5). It takes 5-10 min to complete the questionnaire, and none of the items has a reverse score. The minimum score in this questionnaire is 36 and the maximum score is 180; a higher score indicates higher dental anxiety. This questionnaire was translated into Persian by Yousefi and Piri [33] after obtaining permission from its developers, and the final version was prepared after performing the relevant review and evaluation.
According to the study of Stouthard et al., [25] people with an anxiety score of 130 ± 23.6 were considered anxious. Regarding the evaluation of psychometric properties, the main constructors of the Dental Anxiety Questionnaire showed that the internal consistency of the questionnaire through Cronbach's alpha ranged from 0.96 to 0.98, and the test-retest reliability of the questionnaire in different groups ranged from 0.84 to 0.87. [34] Further, the structure of the Dental Anxiety Questionnaire in the Iranian population has been confirmed through confirmatory factor analysis. Moreover, the internal consistency of this questionnaire was evaluated by Cronbach's alpha (α = 0.94) and split-half method (r = 0.94), which indicated the high internal consistency of the questionnaire. The reliability coefficient of the instrument obtained by the test-retest method was equal to 0.71, which indicated the optimal reliability of the questionnaire.
Visual analog scale
This scale is used to determine the severity of pain in patients. The Visual Analog Scale uses a graded 10-cm line, with a score of 10 for the most severe pain and a score of 0 for no pain. [34] The Visual Analog Scale is the most widely used instrument for pain measurement in the world. In addition to its confirmed validity and reliability, the most important feature of this instrument is its ease of use. A score of 1-3 indicates mild pain, 4-7 indicates moderate pain, and 8-10 indicates severe pain. Numerous studies have confirmed the validity and reliability of this tool. [35] The reliability of this scale, with a correlation coefficient of 0.88, has also been confirmed in Iran. [36]
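The severity bands described for the Visual Analog Scale translate directly into a small classification helper; the function name is a hypothetical convenience, while the cut-offs are exactly those stated above.

```python
def vas_category(score: float) -> str:
    """Map a 0-10 VAS rating onto the severity bands described above."""
    if not 0 <= score <= 10:
        raise ValueError("VAS scores range from 0 to 10")
    if score == 0:
        return "no pain"
    if score <= 3:
        return "mild"
    if score <= 7:
        return "moderate"
    return "severe"

print(vas_category(5))  # -> "moderate"
```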
Symptom checklist-90-revised
This questionnaire was first developed by Derogatis, Lipman, and Covi, and was then revised. This scale is a psychiatric self-assessment checklist in which respondents answer 90 questions on a five-point Likert scale. The score of each subscale is obtained by summing the scores of the items in that subscale divided by the number of items in that subscale. The scores obtained are interpreted as follows: a mean score of ≥1 indicates morbidity and a mean score of >3 shows psychosis. In the depression subscale, a score >3 usually indicates severe depression and psychosis. If a person leaves more than 20% of the questionnaire items, or more than 40% of the items of a subscale, unanswered, the score of the questionnaire or subscale will not be valid. This scale includes 9 dimensions: somatization (12 items), obsessive-compulsive (10 items), interpersonal sensitivity (9 items), depression (13 items), anxiety (10 items), hostility (6 items), phobic anxiety (7 items), paranoid ideation (6 items), and psychoticism (10 items), as well as 7 additional items that are not part of any of the nine dimensions, some of which measure sleep disorders and sexual desire. The SCL-90-R has been used in many studies as a brief indicator of mental health. [37] Confirming its internal consistency, Derogatis et al.
reported the test-retest reliability of 0.77-0.90 for this scale. [37]

Pulse oximeter

A pulse oximeter is a device used to measure the percentage of oxygen saturation in human arterial blood. Pulse oximetry is a noninvasive method that measures the proportion of hemoglobin molecules that are bound to oxygen and expresses it as a percentage. Its normal rate is from 95% to 97%. If this rate is <90% in patients, an alarm will sound. The device also displays the heart rate. [38] Abbasi et al. [38] reported that the accuracy and validity of the pulse oximeter device in measuring the heart rate of patients were directly confirmed by using electrodes on the patient's skin and measuring the electrical activity of contracted heart muscles (electrocardiogram). In addition, the heart rate was indirectly confirmed by listening to the heartbeat. The accuracy and validity of the pulse oximeter were also confirmed by a medical earphone and by palpating the wrist pulse and counting the heart rate. [38]

Data were analyzed by SPSS software (version 24).
RESULTS
The results of demographic studies showed that the mean age of participants was 32.00 ± 6.11 in the ACT group, 33.34 ± 7.98 in the CBT group, and 32.43 ± 6.06 in the control group. Further, most of the participants were male, and more than 60% of them were married. Table 3 presents the descriptive statistics for pain coping strategies and pain perception intensity for each study group in three stages of research. As indicated, the scores of pain coping strategies increased in the posttest and follow-up compared to the pretest in the experimental groups, and the scores of pain perception intensity decreased in the posttest and follow-up compared to the pretest.
Before performing repeated measures analysis of variance, the assumptions of this type of analysis were examined. The Shapiro-Wilk test showed that data were normally distributed in the three stages of pretest, posttest, and follow-up (P > 0.05). Levene's test showed the equality of error variances among the three research groups (P > 0.05). Box's M test also confirmed the equality of the variance-covariance matrices (P > 0.05).
Mauchly's test confirmed the sphericity assumption for all scores (P > 0.05). Table 4 presents the results of multivariate tests for the test factor and the test × group interaction (ACT, CBT, and control groups) for pain coping strategies and pain perception intensity. The results of this table show significant differences between pretest, posttest, and follow-up in pain coping strategies and pain perception intensity. There are also significant differences in pain coping strategies and pain perception intensity in terms of group membership between the pretest, posttest, and follow-up. Table 5 presents the results of repeated measures analysis of variance for the test factor and the test × group interaction for the research variables.
The results of this table show a significant difference between pretest, posttest, and follow-up in pain coping strategies and pain perception intensity (P < 0.01). In addition, there is a significant difference between pretest, posttest, and follow-up in the two experimental groups and the control group (P < 0.01). Table 6 presents the results of the Bonferroni post hoc test for pairwise comparisons between the experimental and control groups in coping strategies and pain perception intensity. As shown, there is no significant difference between the ACT and CBT groups in coping strategies and pain perception intensity (P > 0.05), but there is a significant difference between the two experimental groups and the control group. Furthermore, there is a significant difference between posttest and follow-up and pretest in pain coping strategies and pain perception intensity (P < 0.01), but there is no significant difference between posttest and follow-up (P > 0.05).
DISCUSSION
The main purpose of this study was to compare the effect of ACT and CBT on pain coping strategies and pain perception intensity in patients with dental anxiety. The results showed that ACT and CBT had a similar and positive effect on improving coping strategies and reducing the pain perception intensity in patients with dental anxiety.
Although the effects of ACT and CBT on dental pain and anxiety have been confirmed so far, the comparison of these two approaches has received less attention, which indicates the innovative aspect of the present study. In line with the results of the present study, previous studies have confirmed the effect of CBT on dental anxiety, [3,24] which can be cited indirectly. Moreover, it can be argued that this part of the results is consistent with the findings of the study of Dehestani et al. [22] on the effect of CBT on coping strategies in patients with chronic pain. Saedi et al. [23] also reported that CBT could reduce pain severity due to its positive effect on coping strategies. Vowles and McCracken [17] also reported that ACT was effective in reducing pain perception. Fatemi and Manshei [18] believed that ACT affected pain perception intensity in patients with rheumatoid arthritis. Sabour and Kakaberi [19] also emphasized the positive effects of ACT on pain perception.
The positive effect of CBT on pain coping strategies and reducing pain perception severity in patients with dental anxiety is associated with enhanced ability to manage dental anxiety, possibly by reducing avoidance and inducing the ability to diagnose fear of dental interventions and increasing group self-efficacy. During CBT, the vicious circle of dental anxiety is broken by increasing the awareness of dental anxiety exacerbation and removing one of the components of this cycle, which consequently reduces the pain perception intensity. [39] Expressing the role of thoughts in the type of emotions, identifying the negative thoughts and common cognitive distortions, and inducing the ability to distinguish thoughts from reality enabled the patients to improve dysfunctional coping strategies by correcting dysfunctional thoughts and cognitive distortions, thereby reducing their pain perception severity.
During the CBT, emotions and their relationship with preconceived thoughts were examined, and other facts were called upon to decrease the negative thoughts associated with dental anxiety. Replacing adaptive, realistic, positive, and flexible thoughts and beliefs could help patients to transform their old and inefficient principles and assumptions into new and effective ones and ultimately refine their ineffective coping strategies. [40] Performing the virtual reality exposure technique, practicing it, and generalizing it to a real situation individually helped patients to gradually face the annoying anxiety-inducing stimuli and to gradually deal with those stimuli for a longer period. It also gave patients a chance to analyze the anxiety-inducing stimuli mentally, which played an important role in reducing the pain perception intensity. The implementation of this technique along with the stress-free relaxation technique reduced the patients' willingness to use inefficient and avoidant methods to deal with anxiety-inducing stimuli. [23] In general, it can be argued that CBT made clients aware of the impact of negative thoughts and emotions on the use of ineffective coping strategies and intensification of pain perception in dental settings. It also assisted them to replace adaptive thoughts to reduce negative emotions by identifying common cognitive distortions from dental situations and challenging them, identifying destructive or disturbing thought patterns (rumination) that have negative effects on behavior, and finding the implication of thoughts and their relationship with emotions. Moreover, due to virtual reality exposure techniques, effective coping strategies were practiced and repeated, and as a result, the estimation of perceived pain intensity was corrected.
The effect of ACT on modulating pain management strategies and reducing pain perception severity in patients with dental anxiety is probably linked with the basic principle of this treatment, which is to achieve psychological flexibility. During the treatment sessions, patients were taught the concept of acceptance through metaphors, allegories, and exercises. Acceptance of the problem led to the development and reinforcement of self-confidence and ultimately psychological flexibility, which was effective in applying coping strategies and reducing pain perception intensity. During this treatment, the patients explained the high cost of dysfunctional values in their lives and were asked to identify the efficient values of their lives, to determine appropriate goals to achieve them, and to promise to make an attempt to achieve those goals based on the value set. [16] This helped patients to get rid of their past dysfunctional beliefs and values and increase their involvement in the present, thereby reducing conflict with dysfunctional thoughts and pain perception intensity. Cognitive fault and its practices made the thoughts less intrusive and made the individuals less involved with negative thoughts. [33] Given the important role of dysfunctional thoughts in using wrong coping strategies, this treatment helped the patients to find themselves free from dental anxiety and not to identify themselves with its associated thoughts and feelings. Further, to explain this finding, it can be argued that teaching acceptance and commitment rather than ignoring inner feelings and experiences helped the patients to become aware of their feelings and inner and emotional experiences, to accept them, and to use them properly and appropriately, making it possible for them to relate well to their situations and interactions and experience them with a new perspective, [25] which led to the improvement of pain coping strategies and reduction of pain perception intensity.
Thus, it should be noted that in ACT no direct attempt is made to improve pain coping strategies or to reduce pain perception intensity; rather, these changes are side effects of the treatment. Hence, by teaching acceptance to clients, ACT could help patients change their interpretation of the situation and offer an alternative to experiential avoidance, making them accept their inner experiences, such as thoughts, desires, feelings, and physical symptoms, in dental situations without defending against them. That is, the clients learned to shift their focus from reducing anxiety to having a rich and fruitful life in accordance with their values, and by teaching cognitive defusion, they were encouraged to change their relationship with thoughts and other inner experiences and to see them as mental events that come and go one after another. During ACT, the clients learned to see thoughts only as thoughts, emotions only as emotions, and memories only as memories. Therefore, in areas where experiential avoidance occurs, such as dental situations, the processes of cognitive defusion and acceptance help the individual to break the dysfunctional coping pattern and perceive less pain.
CONCLUSION
Based on the results of the study, both ACT and CBT can be used to improve pain coping strategies and reduce pain perception intensity in patients with dental anxiety. It should be noted that the present study, like previous studies, had some limitations that should be considered when generalizing the results. One limitation was the multidisciplinary (psychological-medical) nature of the study, which made it impossible to control the medical treatment. The short, three-month follow-up was another limitation of the present study. Finally, it is suggested that experts use these two treatments to increase preventive oral health measures.
Financial support and sponsorship
Nil.
NON SYMMETRIC RANDOM WALK ON INFINITE GRAPH
We investigate properties of a non-symmetric Markov chain on an infinite graph. We show the connection with matrix-valued random walk polynomials which satisfy an orthogonality formula with respect to a non-symmetric matrix-valued measure.
INTRODUCTION
In the Book of Genesis (cf. Gen 28,12) the biblical patriarch Jacob dreams about a ladder, set up on earth, with its top reaching heaven. He also sees the angels of God ascending and descending on it.
Let us now assume that an angel standing on earth begins to climb the ladder in a very special way: he tosses a coin, then steps one step forward in the direction he is currently aiming in case of heads, or reverses his direction in case of tails. The question is to investigate properties of this "random walk", i.e. what the probability is that he eventually returns, how much time his return takes, and how high he climbs. The corresponding Markov chain can be considered as a random walk on an infinite graph as in Figure 1.
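To make the dynamics concrete, here is a minimal Monte Carlo sketch of one plausible reading of the walk (the state is the current rung plus the direction of travel; heads moves one rung in the current direction, tails reverses the direction without moving; the ground acts as a reflecting barrier). The function name and the exact treatment of the ground state are our own assumptions, not taken from the paper.

```python
import random

def first_return_time(p_heads=0.5, max_steps=100_000):
    """One trajectory of the ladder walk described above (a sketch).

    Heads: step one rung in the current direction.
    Tails: reverse the direction without moving (an assumption).
    Returns the step at which the walker is back on the ground after
    having left it, or None if this does not happen within max_steps.
    """
    rung, direction = 0, +1         # start on the ground, aiming upwards
    left_ground = False
    for step in range(1, max_steps + 1):
        if random.random() < p_heads:
            rung += direction       # heads: move forward
        else:
            direction = -direction  # tails: turn around
        if rung < 0:                # cannot step below the ground: reflect
            rung, direction = 0, +1
        left_ground = left_ground or rung > 0
        if left_ground and rung == 0:
            return step
    return None

trials = 10_000
returned = sum(first_return_time() is not None for _ in range(trials))
print(f"empirical return frequency: {returned / trials:.3f}")  # close to 1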
RANDOM WALK MATRIX POLYNOMIALS
We present a characterization of this specific random walk by properties of the blocks of its transition matrix. Hence we are going to investigate properties of matrix polynomials satisfying a recurrence of the form
$$x P_n(x) = A\, P_{n+1}(x) + B\, P_n(x) + C\, P_{n-1}(x), \qquad n \ge 1, \tag{2.1}$$
where the coefficient blocks $A$, $B$, $C$ come from the transition matrix. The classical case, i.e. random walk polynomials on the real line, has been studied extensively in the literature (see [5, 7, 8] among many others). But in our case the infinite Jacobi block matrix $J$ is not self-adjoint, or even symmetric, as an operator on the Hilbert space $\ell^2(\mathbb{N})$. Moreover, the matrix coefficient $A$ is not invertible. Hence methods from the theory of matrix orthogonal polynomials (cf. [2, 4, 6, 9]) cannot be used directly; we need a new approach.

Note first that polynomials built from the Tchebyshev polynomials of the first kind $u_n(x) = \cos(n\theta)$, with $x = \cos\theta$, satisfy the recurrence (2.1). We recall that
$$x\, u_n(x) = \tfrac{1}{2} u_{n+1}(x) + \tfrac{1}{2} u_{n-1}(x) \qquad \text{for } n \ge 1.$$
In [10] it was shown that the polynomials $P^{o,\varepsilon}_n$ satisfying the corresponding recurrence formula are also orthogonal to any matrix polynomial $P$ of degree lower than $n$ with respect to the matrix measure $W^{o,\varepsilon}(x)\,dx$. This measure is given by the inverse Stieltjes–Perron formula, where $F^{o,\varepsilon}$ is the Stieltjes transform of $W^{o,\varepsilon}$ and can be obtained from the equality in Lemma 2.4 of [10]; the corollary of Theorem 2.7 in [10] gives the corresponding expression for $\operatorname{Im} z > 0$. The polynomials $P^o_n$ are the limit case of $P^{o,\varepsilon}_n$ as $\varepsilon$ tends to $0$. It is not difficult to see that equations (2.2), (2.3) and (2.4) still hold in the limit, which yields the analogous identity for $\operatorname{Im} z > 0$. The function $F^o$ is thus given by the resulting equality; the question reduces to solving an equation for $X(z)$ with the additional condition $\lim_{z\to\infty} X(z) = 0$, which admits an exact solution. The matrix of functions $W^o(x)$ can then be uniquely determined by formula (2.5). The polynomials $P^o_n$ are not matrix orthogonal polynomials in the classical sense, but they are orthogonal to any polynomial $P$ of degree lower than the degree of $P^o_n$; thus $P^o_n$ can be considered orthogonal, but with respect to a non-positive-definite (in fact non-symmetric) matrix of measures.

Now we can return to the random walk on "Jacob's ladder". The probability $f_{00}$ that the angel eventually returns to the ground equals $1$, which shows that the random walk considered in this section is recurrent. The quantity $p_{00}(1)$ is equal to the average number of visits at the starting point (we refer the reader to [1] in the case of a random walk on graphs, or to [7] in the general case).
CASE OF AN UNFAIR COIN
What happens if the coin the angel tosses is unfair, i.e. heads and tails occur with probabilities $p$ and $1-p$ respectively, with $0 < p < 1$ and $p \neq \tfrac{1}{2}$? In that case we should consider the relation
$$x P_{p,n}(x) = A_p P_{p,n+1}(x) + B_p P_{p,n}(x) + C_p P_{p,n-1}(x) \tag{3.1}$$
for $n \ge 1$, with coefficient blocks $A_p$, $B_p$, $C_p$ depending on $p$. The corresponding function $F_p$ satisfies the analogous equation, whose solution shows that the corresponding random walk is still recurrent.
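Reusing the `first_return_time` sketch from the introduction, the biased case can be probed empirically by varying `p_heads`; under our assumed reading of the walk, the observed return frequencies stay near 1, consistent with the recurrence claimed above.

```python
# Empirical check of recurrence for several coin biases p (an illustration,
# reusing first_return_time from the earlier sketch).
for p in (0.3, 0.5, 0.7):
    trials = 5_000
    returned = sum(first_return_time(p_heads=p) is not None for _ in range(trials))
    print(f"p = {p}: return frequency {returned / trials:.3f}")
```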
Comparison of clinicopathological characteristics of endogenous and exogenous cervical cancer with its clinical significance
ABSTRACT

Cancer is a fatal disease and a leading cause of death worldwide, affecting more than 17 million people. Survey reports have shown that around 8.5 million cancer-related deaths occur per year, with 9.5 million deaths in 2018, and deaths due to cancer now exceed those from other leading disorders, including human immunodeficiency virus, malaria, and tuberculosis combined. Globally, the number of cancer patients is increasing dramatically, with more than 23.6 million new cases estimated by 2030. Other reports have determined that more than 50% of all cancer cases occur in low- and middle-income countries, and that ageing populations and the adoption of Western lifestyles are potentially increasing case prevalence.
INTRODUCTION
Cancer is a fatal disease and a leading cause of death worldwide, affecting more than 17 million people. 1 Survey reports have shown that around 8.5 million cancer-related deaths occur per year, with 9.5 million deaths in 2018, and deaths due to cancer now exceed those from other leading disorders, including human immunodeficiency virus, malaria and tuberculosis combined. 2 Globally, the number of cancer patients is increasing dramatically, with more than 23.6 million new cases estimated by 2030. 3 Other reports have determined that more than 50% of all cancer cases occur in low- and middle-income countries, and that ageing populations and the adoption of Western lifestyles are potentially increasing case prevalence. 4,5 Cervical cancer is the fourth most commonly diagnosed female malignancy affecting the reproductive system and the third leading cause of death among women in developing countries. 6 Previous reports have shown that cervical cancer cases are increasing rapidly, with a 50% death rate. 7 Statistical surveys indicate that the morbidity of cervical cancer in China is high, accounting for one-third of worldwide cervical cancer cases. 8 According to epidemiological reports, cases of cervical cancer in young women are now increasing rapidly compared with older women. [9][10][11][12] A few reports have shown that young women with cervical carcinoma have a poorer prognosis and survival rate than older women. 13,14 Conversely, other researchers have found no relationship between cervical cancer and age. 15,16 Based on tumor growth, cervical cancer is divided into four types: exogenous, endogenous, ulcerative and cervical canal type. Exogenous and endogenous cervical cancers are the most common types, with typical symptoms and clinicopathological features. Exogenous cervical cancer presents friable papillary or cauliflower-like growths and predominantly affects the vaginal part of the reproductive tract. Endogenous cervical carcinoma infiltrates deep into the cervix with no obvious abnormalities on the surface. In the ulcerative type, exogenous or endogenous cervical cancer continues to develop and causes infection and necrosis, forming a crater-like ulcer or cavity in the cervix. The cervical canal type affects the lower segment of the cervix and the uterus.
Persistent infection with high-risk human papillomavirus (HPV) is the leading cause of cervical oncogenesis. 17,18 HPV is the most common sexually transmitted infection worldwide, and more than 80% of women and men are affected by it at some stage of life. The availability of reliable screening tests and diagnostic techniques for cervical cancer, including HPV detection, liquid-based cytology, and biopsy, makes it feasible to detect precancerous lesions and intervene during the early stage of disease. 19 Recently, cervical cytology-based screening has proved an effective tool to improve cervical cancer diagnosis. 20 Although cervical cancer is curable at an early stage through different medical therapies, morbidity and mortality remain high due to poor recognition of the clinicopathological features of cervical cancer. Hence, there is an urgent need for systematic, retrospective analyses to diagnose and differentiate the types of cervical cancer based on laboratory examination combined with relevant clinical data.
In this study we performed a detailed comparative systematic analysis of 663 patients with endogenous or exogenous cervical cancer using different diagnostic techniques, including the ThinPrep liquid-based cytology test (TCT), HPV-DNA testing, CT examination, and ultrasonography. We observed that the combined examination of TCT and HPV-DNA has greater diagnostic efficacy for cervical cancer patients than either test performed separately. Based on tumor growth pattern and high-risk HPV-DNA examination, no difference was observed between endogenous and exogenous cervical cancer. However, endogenous cervical cancer showed higher rates of pathological features, including lymph node metastasis, deep interstitial infiltration, and lymphovascular infiltration, compared to exogenous cervical cancer.
Cervical cytology examination
Cervical cytology was performed by TCT. Samples were taken according to the instructions of the US New Berthner TCT detector. Following these instructions, cervical cells were collected from the cervical canal of all participants with plastic brushes and placed into vials of ThinPrep® PreservCyt® solution for cytology. Thin-layer slides 2 cm in diameter were prepared with the ThinPrep 2000 system, then fixed in 95% ethanol and Pap stained to examine atypia in cervical epithelial cells. Cytological diagnosis was performed using the 2001 TBS classification system (the Bethesda system).
The TBS diagnosis report is as follows: negative for intraepithelial lesion or malignancy, including normal and inflammation; squamous cell abnormalities, including atypical squamous cells (ASC), squamous intraepithelial lesions (SIL) and squamous cell carcinoma (SCC); and glandular cell abnormalities, including atypical glandular cells (AGC) and adenocarcinoma. ASC includes atypical squamous cells of undetermined significance (ASC-US) and atypical squamous cells, cannot exclude high-grade squamous intraepithelial lesion (ASC-H). SIL includes low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL).
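The reporting hierarchy just described can be summarised as a small lookup structure; the sketch below is purely illustrative, with abbreviated labels.

```python
# The 2001 Bethesda (TBS) categories described above, as a nested taxonomy.
TBS_CATEGORIES = {
    "negative for intraepithelial lesion or malignancy": ["normal", "inflammation"],
    "squamous cell abnormalities": {
        "ASC": ["ASC-US", "ASC-H"],   # atypical squamous cells
        "SIL": ["LSIL", "HSIL"],      # squamous intraepithelial lesions
        "SCC": [],                    # squamous cell carcinoma
    },
    "glandular cell abnormalities": ["AGC", "adenocarcinoma"],
}
```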
HR-HPV detection
Hybrid capture 2 (HC2) technology provided by Digene was used to detect the HPV-DNA content in cervical secretion samples. The 96-well plate method was adopted for HC2, which can detect 13 high-risk HPV types at one time (HPV 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, 58, 59, and 68). An HPV sampler (Digene) was inserted into the outer cervix, turned 5 times clockwise or counterclockwise, slowly withdrawn, and then placed into a vial containing the preservative solution. Samples were stored at −20 °C before delivery to the laboratory for examination.
The test results were analyzed using the ratio (RLU/CO) obtained by dividing the RLU value of the tested cervical secretion sample by the positive control value (CO). RLU denotes relative light units, the unit in which the emitted light is measured, and CO is the positive control value (i.e., the RLU value at an hr-HPV DNA concentration of 1.0 pg/ml in the solution). Specimens with an RLU/CO value ≥1.0 were considered positive; specimens with a value less than 1 were negative.
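The positivity rule reduces to a one-line ratio check; a minimal sketch follows (the function name and example values are ours, not from the paper).

```python
def hc2_result(rlu_sample: float, co_positive_control: float) -> str:
    """Classify an HC2 specimen by its RLU/CO ratio (cutoff 1.0, as above)."""
    ratio = rlu_sample / co_positive_control
    return "positive" if ratio >= 1.0 else "negative"

print(hc2_result(rlu_sample=250.0, co_positive_control=100.0))  # positive (ratio 2.5)
print(hc2_result(rlu_sample=80.0, co_positive_control=100.0))   # negative (ratio 0.8)
```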
Statistical analysis
SPSS 20 software was used for the analysis of composition ratios and for non-parametric tests. The chi-squared test and R×C chi-squared test served to compare composition ratios, the rank sum test was adopted as the non-parametric test, and p<0.05 was considered statistically significant. Binary logistic regression analysis was conducted with SPSS 17 software.

These results suggest that the probabilities of the different pathological risk factors and of HPV infection are the same in exogenous and endogenous cervical cancer (Table 3). Abnormalities in endogenous and exogenous cervical cancer were also estimated by imaging-based analysis. Ultrasonography and CT examination of the pelvic region showed no significant difference in cervical enlargement between endogenous and exogenous cervical cancer (106/171 = 62% vs 243/415 = 58.6% by ultrasonography; 58/67 = 86.6% vs 165/180 = 91.7% by CT) (Table 4). In addition, cervical enlargement and uterine cavity abnormalities were also determined by ultrasound imaging and pelvic CT, and the data revealed a higher percentage of uterine cavity abnormalities in endogenous cervical cancer compared with exogenous cervical cancer (2.3% vs 1.2% by ultrasonography; 1.5% vs 0.6% by pelvic CT) (Table 4).
Comparison of staged lymph node metastasis of endogenous and exogenous cervical cancer
Cases were staged from earlier to later stages, including IB1, IB2, IIA1, and IIA2. We performed a detailed comparison of lymph node metastasis between endogenous and exogenous cervical cancer at these different stages.
Comparison of cervical interstitial infiltration and lymphatic vascular infiltration in endogenous and exogenous cervical cancer
Next, we observed and compared the depth of cervical interstitial infiltration between endogenous and exogenous cervical cancer patients. The results showed that all stages, including IB1, IB2, IIA1 and IIA2, revealed a higher positive rate of cervical interstitial infiltration in endogenous than in exogenous cervical cancer; in particular, the IB1 stage showed a significant difference (p<0.001), with 82 of 108 (75.9%) positive cases of cervical interstitial infiltration in endogenous cervical cancer compared to 126 of 253 (49.8%) in exogenous cervical cancer (Table 6). A higher interstitial infiltration ratio was also detected in the IIA1 and IIA2 stages of endogenous cervical cancer compared with exogenous cervical cancer (Table 6). Taken together, across all stages the percentage of cervical interstitial infiltration was significantly higher (p<0.001) in endogenous cervical cancer, 153 of 185 (82.7%), than in exogenous cervical cancer, 279 of 447 (62.4%) (Table 6). These results suggest that cervical interstitial infiltration at the IB1 stage might be a biomarker for endogenous cervical cancer that could be valuable for screening. We also determined the lymphatic vascular infiltration in both types of cervical cancer at the different stages.
The results revealed that endogenous cervical cancer showed higher lymphatic vascular infiltration than exogenous cervical cancer at all stages (IB1, IB2, IIA1 and IIA2), with a statistically significant difference (p<0.001) in positive cases (Table 7). Collectively, higher lymphatic vascular infiltration was detected in endogenous cervical cancer, 62 of 188 (33%) cases, compared to exogenous cervical cancer, 87 of 475 (18.3%) (Table 7). Taken together, these results suggest that lymph node metastasis, cervical interstitial infiltration and lymphatic vascular infiltration might serve as screening tools to diagnose and differentiate endogenous from exogenous cervical cancer.
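As an illustration of the chi-squared methodology described in the statistical analysis section, the sketch below re-runs the all-stage interstitial infiltration comparison using the counts reported above (153/185 endogenous vs 279/447 exogenous positive cases); this is our reconstruction, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: endogenous, exogenous; columns: infiltration positive, negative.
table = np.array([
    [153, 185 - 153],  # endogenous: 153/185 positive
    [279, 447 - 279],  # exogenous:  279/447 positive
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p < 0.001, as reported
```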
DISCUSSION
Previous studies have shown that the rate of cervical cancer is decreasing in different regions of the world; however, the incidence of cervical cancer is still high in Asian countries. 21,22 According to epidemiological survey reports, more than 78% of cervical cancer cases occur in developing countries, where cervical cancer is the second most common cause of death in women. 23 In China, the morbidity of cervical cancer is high, estimated at one-third of worldwide cases, and is considered a major health issue. 16 The main reason for the high prevalence in that region is the lack of proper screening and diagnosis of patients with cervical cancer.
In our study, we have shown that the combined examination of liquid-based TCT and high-risk HPV-DNA is an effective tool to diagnose and differentiate endogenous and exogenous cervical cancer on the basis of growth pattern. The prevalence of endogenous and exogenous cervical cancer was similar by TCT and high-risk HPV-DNA detection analysis. HPV infection is considered the major cause of cervical cancer in women older than 35 years. In some regions of China the prevalence of HPV was estimated at 15-22%, which is comparable with our study. [24][25][26] Conversely, one report found the prevalence of HPV infection in the Shandong region to be 11% among women enrolled in the hospital. 27 We also examined cervical enlargement and uterine cavity abnormalities in both types of cervical cancer using imaging methods, and the data revealed a higher percentage of uterine cavity abnormalities in endogenous cervical cancer. In addition, our data indicated that the percentage of lymph node metastasis was higher in endogenous than in exogenous cervical cancer. In less developed countries, initial evaluation of cervical cancer based on clinical staging, rather than surgical staging, is essential for prognosis and treatment. 28 A previous report also indicated higher lymph node metastasis in cervical cancer patients and identified it as a significant prognostic factor. 16 It has been determined that instability in HPV caused by transforming growth factor influences the accumulation of lymph node metastases, which is regarded as a cervical cancer predictor. 29 A recent report indicated a 25.8% rate of lymph node metastasis involvement in cervical cancer, which is comparable with our results. 30 Another study has shown the significant relevance of lymph node metastasis to the clinicopathological features of cervical cancer. 31 Finally, our study has also shown greater deep cervical interstitial infiltration and lymphatic vascular infiltration in endogenous cervical cancer, which is an essential determinant of cervical cancer prognosis and also serves to differentiate endogenous from exogenous cervical cancer. Cervical interstitial infiltration refers to cervical tissue involvement by interstitial infiltration, measured in depth and width. According to a recent report, the depth of interstitial infiltration at stage IA1 should not exceed 3 mm. In our data, the depth of interstitial infiltration in women with endogenous cervical cancer was greater, with an increased percentage of positive cases at all stages compared with exogenous cervical cancer. In addition, lymphatic vascular infiltration, a high-risk factor for cervical cancer, was found at a higher percentage during all stages of endogenous cervical cancer and might be an effective biomarker for diagnosis.
CONCLUSION
In conclusion, our study provides distinct pathological features to diagnose and differentiate endogenous cervical cancer patients from exogenous cervical cancer patients based on uterine abnormalities, lymph node metastasis, cervical interstitial infiltration, and lymphatic vascular infiltration.
Reconciling Evaluations of the Millennium Villages Project
Abstract The Millennium Villages Project was an integrated rural development program carried out for a decade in 10 clusters of villages in sub-Saharan Africa starting in 2005, and in a few other sites for shorter durations. An evaluation of the 10 main sites compared to retrospectively chosen control sites estimated positive effects on a range of economic, social, and health outcomes (Mitchell et al. 2018). More recently, an outside group performed a prospective controlled (but also nonrandomized) evaluation of one of the shorter-duration sites and reported smaller or null results (Masset et al. 2020). Although these two conclusions seem contradictory, the differences can be explained by the fact that Mitchell et al. studied 10 sites where the project was implemented for 10 years, and Masset et al. studied one site with a program lasting less than 5 years, as well as differences in inference and framing. Insights from both evaluations should be valuable in considering future development efforts of this sort. Both studies are consistent with a larger picture of positive average impacts (compared to untreated villages) across a broad range of outcomes, but with effects varying across sites or requiring an adequate duration for impacts to be manifested.
Background
In 2000, the United Nations set "Millennium Development Goals" (MDGs) for reducing extreme poverty in the world. The Millennium Villages Project (MVP) was launched in 2005 by Columbia University's Earth Institute with the aim of demonstrating the feasibility of achieving the MDGs using an integrated rural development strategy based on proven economic, social, health, and infrastructure interventions that could ultimately be sustained globally within the promised aid budget of 0.7 percent of GDP of the world's donor countries (Sachs and McArthur 2005). The MVP was applied in clusters of villages in 10 countries of sub-Saharan Africa from 2005 through 2015, and in a few other sites for shorter durations.
The MVP has been controversial, both in its conception and in the evaluation of its effects. The starting point for the controversy was the project's approach of economic and social development catalyzed by foreign aid, which has been criticized as a doomed-to-fail relic of a bygone paternalistic era (see, for example, Easterly 2014). In addition, the MVP was criticized for not being designed as a randomized controlled trial. Clemens and Demombynes (2011) review the difficulty of estimating the impacts of the MVP given its lack of a prospective control group. As discussed by de Souza Leão and Eyal (2019), recent decades have seen a resurgence of enthusiasm for randomized controlled trials to study the effect of interventions in international development, as underscored by the 2019 Nobel Prize in economics.
The MVP stands out as a high-profile project organized by an academic economist that did not include such a control group.
At the inception of the MVP, two reasons were given for not designing the MVP as a randomized controlled trial. First, the MVP used a basket of many interventions that had already been shown to work, often through previous controlled trials. The main focus of the MVP was on the feasibility of implementing the package of proven interventions within the specified budget and timeline, a concern for which a control group is not relevant. Second, the MVP did not have an adequate project budget to engage systematically with control sites, especially to be able to offer those other sites the package of interventions at a later date. From a pragmatic, political, and ethical point of view, the MVP was therefore wary of identifying and engaging actively with non-project sites.
A related debate is over cost-effectiveness: To the extent that the MVP has been shown to demonstrate an effective low-cost intervention, this provides encouragement for larger-scale programs of this sort; conversely, if any positive effects of these innovations could be achieved using more efficient, inexpensive, and scalable approaches, this would point policymakers to alternative strategies for poverty reduction.
The Earth Institute conducted a retroactive impact evaluation of the MVP's first five years, reporting positive effects on some indicators and not others. The paper made an erroneous claim regarding progress on under-5 mortality relative to the national rural average that was pointed out by Bump et al. (2012) and acknowledged by Pronyk (2012). A few years later, the Earth Institute performed an entirely new evaluation of the full ten-year project (Mitchell et al. 2018), reporting positive impacts on a wide range of poverty and health outcomes, compared to retrospectively chosen control villages.
More recently, Masset, Hombrados, and Acharya (2020) performed a separate analysis at a single MVP site in operation for 4 years 7 months, in the Savannah Accelerated Development Authority (SADA) region of northern Ghana, and reported mostly small or null results. The Masset et al. study is based on the results of an independent evaluation of the SADA project managed by Itad (Barnett et al. 2018), funded by the UK Department for International Development (DFID).
The purpose of the present article is to assess the apparent discrepancy between Mitchell et al., who report consistent positive effects, and Masset et al., who are more pessimistic in their conclusions.
The present authors were involved in the Millennium Villages Project in different ways: Jeffrey Sachs, an economist and former director of the Columbia Earth Institute, was the coordinator and leader of the MVP; Mitchell, a statistician, was brought into the project in 2014 to design and conduct a quantitative evaluation of the program; Gelman, a statistician at Columbia who is also affiliated with the Earth Institute, provided guidance in this effort; and Sonia Sachs, an MD and MPH, oversaw the public health interventions. All of us were among the authors of Mitchell et al. We do our best to assess the evidence and claims of the two papers impartially, while recognizing our involvements in the MVP and its evaluation.
Comparison of two evaluations of the Millennium Villages Project

Mitchell et al. (2018) summarize: Averaged across the 10 project sites, we found that impact estimates for 30 of 40 outcomes were significant (95% uncertainty intervals [UIs] for these outcomes excluded zero) and favoured the project villages. . . . The MVP had favorable impacts on outcomes in all MDG areas, consistent with an integrated rural development approach. The greatest effects were in agriculture and health, suggesting support for the project's emphasis on agriculture and health systems strengthening.
In contrast, Masset et al. (2020) conclude: Our study finds that the impact of MVP on the MDGs was limited, and that core welfare indicators such as monetary poverty, child mortality and under-nutrition were not affected. . . . despite some positive impacts, we found mostly null results, suggesting that the intervention was ineffective.
Both of these were serious studies conducted by comparing outcomes in Millennium Villages to matched control villages, attempting to adjust for pre-treatment differences between treated and control groups. So how can we understand the starkly different conclusions? In this article, we consider several differences between the studies. First, we summarize the methods in both papers.

Methods in Mitchell et al. (2018). Mitchell et al. aimed to estimate the MVP's impact in the 10 main sites, where the project was applied from 2005 to 2015. These sites are clusters of 3 to 28 villages, and were chosen non-randomly, without random assignment into treatment versus control. In 2015, they retrospectively selected comparison villages that in 2005 best matched project sites on possible confounding variables. They chose 5 comparison villages per project site, balancing statistical power and budget.
They collected cross-sectional survey data on 40 outcomes of interest from both the project and the comparison villages. They randomly sampled 300 households in each site and comparison group. They captured household-level data by a household survey. Within these sampled households, they captured personlevel data by a sex-specific adult survey, malaria and anemia testing, and anthropometric measurements.
They report raw differences between project and comparison for each outcome and site. They also took standardized averages across related outcomes to create 8 outcome indices. They fit a Bayesian hierarchical model to obtain site-specific and outcome-index-specific estimates based on information from all sites and outcomes. This model includes parameters that vary by country and village, accounting for random shocks at these levels, with weak priors on the hyperparameters so that the amount of partial pooling was determined by the data; for details see the appendix of Mitchell et al. (2018).

Methods in Masset et al. (2020). Masset et al. aimed to estimate the MVP's impact in the Northern Ghana site, a cluster of 35 villages, where the project was applied from 2012 to 2016. This site was also chosen non-randomly, without random assignment into treatment versus control. In 2012, they prospectively selected comparison villages based on village-level characteristics from the 2000 and 2010 censuses, along with additional field data. They chose two comparison villages per project village, with one near the project and the other far from the project. They then did further matching at the household level.
In 2012, they collected baseline data from sample of 755 project households and 1496 comparison households. They collected follow-up rounds each year from 2013-2016, with less than 5% attrition. Similarly to Mitchell et al. they captured household-level data by a household survey, and person-level data by a sex-specific adult survey, malaria testing, and anthropometric measurements.
They estimate impacts using a difference-in-difference regression within subclasses of the propensity score (see, e.g., Angrist and Pischke 2009).

Different lengths of treatment. Comparing time periods is a challenge without further data analysis (for example, one might want to look at outcomes after just the first five years of the main MVP study), but we might expect much larger impacts on some metrics from a 10-year program than from one that ran for less than 5 years. The first two to three years of the MVP involved the construction of schools, clinics, roads, and other basic infrastructure, and recruitment and training of personnel in health, education, agriculture, and infrastructure management. Since the MVP was based on implementing and operating public systems in many sectors for which the basic infrastructure is a necessary starting point, it is natural that these systems take several years to bring into operation and even longer to refine those operations in line with experience. Future work could attempt to compare metrics that are more linked to infrastructure demands versus those that are not.
When the SADA MVP was launched, none of the major participants (including DFID, the MVP, and the government of Ghana) expected that 5 years would be sufficient to achieve the MDGs. But all parties agreed to move forward, as it was felt that even the shorter project would benefit the SADA region in light of its impoverishment.
Different numbers of sites. In 2005-2006, the Millennium Villages Project was initiated at 14 different sites in Africa. Mitchell et al. analyzed results from 10 of these sites; the other four were not scaled up or were discontinued because of funding constraints or regional conflict.
Masset et al. analyzed the final (15th) Millennium Village site added to the project, located in northern Ghana (not the same location as the Ghana villages which were one of the 10 locations analyzed by Mitchell et al.). To get a handle on the effect of considering just one location compared to 10 locations, we start with Figure 1, which displays separate estimates for each site, from Mitchell et al. (2018). We see substantial site-to-site variability in treatment effect estimates across outcomes.
In general, distributions of outcomes differ by geography, regardless of treatment. To account for this, the model in Mitchell et al. includes varying coefficients for villages and countries in a multilevel regression. Masset et al. account for the hierarchical structure of the data by computing standard errors that are clustered by village.
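To make the contrast concrete, the kind of multilevel specification described for Mitchell et al. can be sketched as follows; the notation here is ours and is meant only to illustrate the varying-coefficient structure, not to reproduce the exact model of the paper:

$$
y_{ivc} = \alpha + \theta\, T_{vc} + u_c + w_{vc} + \varepsilon_{ivc},
\qquad u_c \sim \mathrm{N}(0,\sigma_u^2), \quad w_{vc} \sim \mathrm{N}(0,\sigma_w^2), \quad \varepsilon_{ivc} \sim \mathrm{N}(0,\sigma_\varepsilon^2),
$$

where $y_{ivc}$ is an outcome for household $i$ in village $v$ of country $c$ and $T_{vc}$ indicates treatment; weak priors on the variance parameters let the data determine the amount of partial pooling.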
In both studies, villages are either entirely treated or not, and treatment villages are matched to control villages within the same country. So while country effects are shared across treatment and control groups, village effects are not. As Imbens (2014) points out, with only one treatment village and one control village, the treatment effect cannot be separated from the difference in village effects. Luckily, in both studies, there are more than one village per treatment group. In Mitchell et al. there are 3 to 28 villages per country-treatment-group and 10 countries, and in Masset et al. there are 35 to 68 villages per country-treatment-group and one country.
One could be concerned that in both studies, the treatment villages are relatively close to each other. The project describes these as a "cluster." Neither study takes into account spatial correlations beyond village (and country) effects. This could mean that both studies underestimate statistical uncertainty, and that their discrepancies could be attributed to statistical imprecision. If there were a cluster-level effect, the 10-site study in Mitchell et al. would be better equipped to estimate the overall average treatment effect, while the one-site study in Masset et al. would be stuck with the lack of identification discussed by Imbens (2014). Without such cluster-level effects, the site-to-site variation seen in Mitchell et al. could be attributed to treatment effect variation (to the extent we believe unconfoundedness). Thus, differences between the two studies could arise both from site-to-site variation in treatment effects and from cluster-level effects. In that context, the apparent null findings from Masset et al. can be attributed to their using less data.
Prospective or retrospective design. Both studies assigned treatment non-randomly, but a key strength of the study conducted by Masset et al. (2020) is that it is prospective: control villages were chosen at the start. In contrast, Mitchell et al. (2018) conducted a retrospective study, imitating as best as possible a prospective design by matching treated and control villages only based on information that could have been available in 2005 at the start of the intervention or which could not have been affected by the intervention. Masset et al.'s prospective approach enabled them to collect more baseline data to adjust for possible confounding. Therefore, confounding may account for some of the difference in results between the two studies.
Another advantage of Masset et al.'s prospective design is that they were able to collect data at each site in each year. Even if we have disagreements about how they analyzed these data, it is a strength of that study that yearly estimates of outcomes in treated and control villages are available, including for additional analyses. It is a tradeoff that this prospective study was only performed at one location covering a short time period, making it difficult to detect effects that are variable.
Choices in modeling and inferential summaries. We have concerns with the difference-in-difference regressions of Masset et al., which specify a treatment effect that does not vary over time (see their equations (3.1)-(3.2)); hence, if the program has cumulative effects that vary over time, as would be expected, the result would be to underestimate the effect over the full period. Furthermore, the difference-in-difference assumptions may be less attractive than assuming unconfoundedness given the baseline outcome (Imbens and Wooldridge 2009, p. 70). As mentioned above, Mitchell et al. (2018) did not have adequate baseline data with which to use either difference-in-differences or unconfoundedness given the baseline outcome. Instead they selected comparison villages that best matched project sites on available baseline data from Demographic and Health Survey (DHS) and geographic information system (GIS) databases. These possible confounding variables were only available at the area level, limiting the number of data points available to estimate propensity scores. Instead, they matched on indices of related variables. In contrast, Masset et al. (2020) used their richer baseline data to estimate propensity scores. They then used these propensity scores for subclassification, a type of matching method (Stuart 2010).
Thus, both Mitchell et al. (2018) and Masset et al. (2020) combine matching with regression, using the data they have available. As mentioned above, Masset et al. (2020) have richer data to adjust for possible confounding. However, we think their difference-in-differences regression could be improved by allowing treatment effects to vary over time, including the baseline outcome as a covariate, and using hierarchical modeling to better describe statistical uncertainty.
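A minimal sketch of the improvement suggested here, time-varying treatment effects with village-clustered standard errors, is shown below using statsmodels; the file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical household-by-year panel with columns:
#   outcome, treated (0/1), year (2012..2016), village (cluster id).
df = pd.read_csv("sada_panel.csv")

# Difference-in-differences with a separate effect per follow-up year
# (the treated:year interactions) rather than one time-constant effect;
# standard errors are clustered by village.
model = smf.ols("outcome ~ treated * C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["village"]}
)
print(model.summary())
```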
Given the inherent noisiness in estimates for a single site over a short time period, we feel it was a mistake for Masset et al. to summarize their findings in terms of statistical significance (for example, "the count of statistically significant impacts is low") or to report non-significant comparisons as if they were zero (for example, "we found mostly null results, suggesting that the intervention was ineffective"); this latter is a statistical fallacy, as discussed by Gelman, Carlin, and Nallamothu (2019). These concerns do not invalidate the study as a whole, just the interpretations of some of the results.
Masset et al. use a statistical procedure to control the false discovery rate at the 10% level. As they say, this results in fewer reports of statistical significance. They then interpret nonsignificant comparisons as if they are zero, a statistical fallacy. Mitchell et al. address the issue of multiple comparisons as recommended in Gelman et al. (2012), considering countries and outcomes jointly to reduce statistical uncertainty through combining data. They fit a Bayesian hierarchical model to obtain site-specific and outcome-index-specific estimates based on information from all sites and outcomes, and then report estimates and uncertainty intervals rather than using a significance threshold.
Framing the interpretation of results. Much of the difference in the conclusions of the two reports can be explained by differences in framing. On one hand, the report from the Millennium Villages team found improvements in 40 different outcome measures, even if those improvements did not always reach the MDG target; on the other hand, the outside group reported that impacts were limited. Is it a plus that "the project conclusively met one third of its [MDG] targets" (Mitchell et al. 2018) or a minus that "the impact of MVP on the MDGs was limited" (Masset et al. 2020)?
Much depends on expectations. If we consider the MVP as "a plan for meeting the Millennium Development Goals" (Sachs and McArthur 2005), then it is indeed a shortfall that after ten years it only met one third of its targets, justifying Masset et al.'s description of the project as "aiming high and falling low." If we consider the MVP as a study of the feasibility of implementing a realistic integrated approach to aiding low-income rural areas, then consistently positive average effects across multiple sectors are encouraging, even if the outcomes are variable enough that improved outcomes do not appear in all locations for all measures.
Look again at Figure 1, which shows estimates of effects on the Millennium Villages, compared to retrospective control villages, on several different indexes. The overall positivity of the comparisons can be taken as a sign of the success of the program, but the positive outcomes fell short of the ambitious MDG targets. In any case, the variation across sites on particular outcomes also suggests the importance of local context. Masset et al. suggest that the MVP is a test of the "big push" solution for Africa recommended by Sachs et al. (2004). Yet they acknowledge that the MVP "was not meant to address all potential sources of the poverty trap," especially those arising at the "macro level," such as national infrastructure required for villages to be connected to the national economy. In fact, the MVP was not designed as a test of the big push hypothesis, but of something much more limited: the feasibility of integrated rural development, in the face of long-standing skepticism by some that integrated development projects are too complex to implement. This is perhaps the main achievement of the MVP: the successful implementation of a multi-sector strategy at low cost. It is notable that such a multi-sector strategy could be implemented at a very local scale even when the country as a whole was unable to mobilize the resources for national-level infrastructure (roads, power, water, health, education and other areas) needed for national success in meeting the MDGs. Masset et al. report the broad scope of activities carried out by the project across health, education, infrastructure, and agriculture, and the background evaluation (Barnett et al. 2018) presents data on the high level of community engagement in the project. Masset et al. criticize the program for using "a parallel structure [to government] to manage its activities," but this can be viewed in a positive light given that the aim was to demonstrate to the SADA government (and governments across Africa in the full project) how to undertake such a village-based program, in close consultation with local and regional officials. It was a demonstration project and training ground for governments to implement such projects through their own structures.
An important difference in interpretation arises from claims about poverty reduction. Both papers report a nonsignificant impact on household consumption (consumer expenditures). Yet Masset et al. (2020) also report a significant positive impact on income (see Figures 2(a) and 2(b)). While Mitchell et al. (2018) did not have high-quality income data and so did not report on incomes, they instead reported on asset ownership data, finding a positive impact on assets. Masset et al. do not report on assets, though these were measured in their evaluation, as Barnett et al. (2018) reports: "The analysis gives some credence to the notion that income gains were spent on durable goods, saved in cash or invested in livestock and assets." The implication of both studies, therefore, is that the project achieved gains in income that were translated into saving in consumer durables and other assets. The evaluation in Barnett et al. (2018) notes clear reductions in multi-dimensional poverty (that is, a measure of deprivation across several dimensions beyond income): "MVP produced a considerable reduction in the multidimensional poverty index, and by implication, on multidimensional poverty."

Cost comparisons. Masset et al. suggest that the MVP was not cost effective because of the relatively high spending per impact. They acknowledge, however, that they only have spotty evidence on cost comparisons. We believe their cost analysis does not support their conclusions. The MVP spending of $88 per person per year in the SADA site covered interventions across multiple sectors (health, education, roads, power, water and sanitation, agriculture, community engagement, and others). In the ten sites, MVP spending per person per year averaged $66 in the first five years and $25 in the second five years. We are not aware of other projects that have delivered this package of core services at lower cost. Assessments of the cost-effectiveness of this spending will depend on estimates of effectiveness in the medium and long term, which returns us to the general point that impacts do not show up consistently in a single site during a short time period, and therefore do not provide the basis for assessing cost-effectiveness.
Conclusions
In this article, we considered several differences between two evaluations of the Millennium Villages Project. Without more data, we cannot identify exactly which study differences explain how the two studies arrive at disparate conclusions. Nevertheless, we think it is useful to clarify researcher degrees of freedom to aid in interpretation and inform future study design.
The two apparently contradictory evaluations of the Millennium Villages Project are both consistent with a larger picture in which the MVP has positive average effects (compared to untreated villages) across a broad range of outcomes, but with effects that are variable across sites and that require several years to take effect, given that the first few years are focused on infrastructure building, and recruitment and training of staff, before systems implementation.
Different policy implications can be derived from evidence for effects that are positive but small on average but variable in particular instances.
First, expectations should be realistic regarding effect sizes and variability over time and across sites. A program should be highly attuned to local contexts, provide the needed time for implementation, and not be expected to provide a one-shot solution to long-term problems.
Second, analysts should be aware of the potential for learning from multiple sites when performing experimental or quasi-experimental evaluations of interventions and policy choices (Mitchell et al. 2018; Meager 2019).
The enduring controversy about the evaluation of the Millennium Villages Project suggests that it was a shortcoming of the project not to include a control group in the design from the beginning. Barnett et al. (2018) and Masset et al. (2020) demonstrate how a prospective control group can be built in from the start in future studies, acknowledging the political, practical, and ethical complexities of including control sites in such intervention projects and the need to receive from project donors an adequate program budget for control groups and program evaluation.
Disclaimer
The authors of this article are affiliated with the Columbia University Earth Institute and are coauthors of one of the studies being evaluated in this article.
Feature-Adaptive and Hierarchical Subdivision Gradient Meshes
Gradient meshes, an advanced vector graphics primitive, are widely used by designers for creating scalable vector graphics. Traditional variants require a regular rectangular topology, which is a severe design restriction. The more advanced subdivision gradient mesh allows for an arbitrary manifold topology and is based on subdivision techniques to define the resulting colour surface. This also allows the artists to manipulate the geometry and colours at various levels of subdivision. Recent advances allow for the interpolation of both geometry and colour, local detail following edits at coarser subdivision levels and sharp colour transitions. A shortcoming of all existing methods is their dependence on global refinement, which makes them unsuitable for real-time (commercial) design applications. We present a novel method that incorporates the idea of feature-adaptive subdivision and uses approximating patches suitable for hardware tessellation with real-time performance. Further novel features include multiple interaction mechanisms and self-intersection prevention during interactive design/editing.
Introduction
The gradient mesh is a powerful vector graphics primitive that allows for the creation and manipulation of scalable vector graphics; see, e.g. [SLWS07, BLHK18]. The traditional gradient mesh is available in several commercial design applications such as Adobe Illustrator as well as in open source alternatives such as Inkscape. The existing interfaces allow positional and colour data to be assigned to mesh vertices, where gradient handles define the curved geometry and colour transitions. The meshes require regular rectangular topology and are represented as a grid of bicubic patches. The regularity requirement is a severe restriction on the artist and consequently the resulting colour surface: adding local detail requires global mesh refinement.
The subdivision gradient mesh primitive improves upon the (traditional) gradient mesh primitive by allowing an arbitrary manifold topology, provided that the faces are convex [LKSD17, SL17]. The resulting colour surface is obtained by applying a single ternary subdivision step to the input mesh, followed by Catmull-Clark subdivision [CC78] to the limit. This ensures an almost everywhere $C^2$ continuous colour surface. Hierarchical editing is possible by allowing the artist to manipulate the geometry and colours at various levels of subdivision. Recent advances allow for the interpolation of both geometry and colour, local detail following edits at coarser subdivision levels and support for sharp colour transitions [VK18].
Methods currently incorporating subdivision gradient meshes require multiple steps of global Catmull-Clark subdivision both to represent the finest edits and to obtain a smooth surface that is close to the actual limit surface.As the number of faces grows exponentially with the number of subdivision steps and current hardware performance is mostly memory restricted, such methods are not suitable for real-time (commercial) design applications.
Our novel subdivision gradient mesh representation and rendering method addresses the above limitations; see Figure 1. We

1. modify and compare approximating patches for Catmull-Clark subdivision surfaces that are suitable for hardware tessellation to the setting of subdivision gradient meshes;
2. present a novel and real-time method that uses the idea of local feature-adaptive subdivision (FAS) in combination with hierarchical editing;
3. design an index mapping between vertices, (half-)edges and faces across subdivision levels for efficient detail tracking;
4. investigate and discuss trade-offs between visual quality and performance;
5. provide multiple user interface improvements, among which an indication of the region of influence of edits, and real-time self-intersection prevention while editing geometry.

[Figure 1 caption: The edited mesh at level 0 after geometry and colour editing. Level 0 is the mesh after an initial ternary step, here shown tessellated. (c) The corresponding rendering. (d) The edited model at level 4. (e) Back at level 0, we bend the stalk to the right by editing only 6 points; the finer edits at level 4 follow the overall shape change. (f) The locally edited model at level 5. We add some texture at the top of the stalk and adjust the shape of the bottom of the stalk. (g) The underlying mesh required for traditional mesh subdivision [VK18] has 78,336 polygons. (h) The mesh generated using our method after the same amount of editing has only 1371 polygons. Our method supports true hierarchical yet fully local editing, which allows the artist to manipulate the geometry and colour at any level of subdivision.]
We start by reviewing relevant related work in Section 2. A technical overview of the required building blocks is presented in Section 3. A summary of how these building blocks are adapted and incorporated in the design of our novel method is provided in Section 4. Improvements to the user interface are presented in Section 5. We then showcase the results in terms of performance and visual quality in Section 6. Several trade-offs and choices that were made, limitations and future work are discussed in Section 7. Finally, we conclude the paper in Section 8.
Related Work
The basis of our approach is the gradient mesh [BB13]. This vector graphics primitive smoothly interpolates colour through the use of bicubic patches that are logically aligned in a rectangular structure. The first appearance of this traditional type of gradient mesh was in Adobe Illustrator [Sys98]. Modified versions are now available also in Inkscape and CorelDRAW [BLHK18], and [BHEK21] introduced a version based on mesh colours. However, the regular topology requirement and lack of support for local refinement limit its usability and expressiveness, mostly due to the extensive number of patches that are generated. The regular topology restriction has been addressed and alleviated by using either generalised barycentric coordinates [LJH13, HBK19], Loop subdivision surfaces [Loo87] or Catmull-Clark subdivision surfaces [LKSD17]. The latter approach, often called the subdivision gradient mesh, is especially useful as it has basically the same functionality as traditional gradient meshes, but with the added advantage of unstructured topology and increased smoothness.
A recent extension [VK18] supports exact geometry interpolation, hierarchical edits and sharp colour transitions, and is therefore more versatile than the earlier mentioned techniques. However, although local edits are possible in their method, they are evaluated and rendered using global subdivision. In contrast, we introduce a truly local approach not only for editing, but also for evaluation and rendering.
As is well known from the context of 3D modelling and animation [NLMD12], naive use of subdivision surfaces drastically influences memory and rendering performance. This naturally applies also in our context of subdivision gradient meshes, where interactive edits are indispensable. We borrow several techniques and ideas for real-time approximation of Catmull-Clark surfaces using hardware tessellation. Patches used in these techniques are called approximate Catmull-Clark (ACC) patches. Prime examples of such approximation schemes are ACC1 [LS08] using bicubic patches and ACC2 [LSNC09] using Gregory patches [Gre74, Lon87]. Generalised Gregory patches [HK18] can be used to create a generalisation of ACC2 for arbitrary valency faces, including their GPU treatment [HBK18]. The idea of FAS [NLMD12] allows for local subdivision near features. The OpenSubdiv library [Pix21] uses some of these elements to render subdivision surfaces efficiently using both the CPU and the GPU.
Our contribution focuses on real-time rendering of subdivision gradient meshes so that designers can interactively edit and manipulate them. Additionally, we also allow the possibility for hierarchical geometry and colour edits following Verstraaten and Kosinka [VK18], whilst guaranteeing interactive rates of performance. Further novel features include user interface improvements and self-intersection prevention.
Preliminaries
We now detail the main building blocks of our method and its implementation, and define the terminology we use.
Mesh data structure
A mesh M = (V, E, F) is defined by a set of vertices V, a set of edges E, and a set of faces F. A vertex v ∈ V generally carries several attributes, such as coordinates and colour. An edge e ∈ E is directional and connects unique vertices v_i and v_j. A face f ∈ F is, more formally, a minimal closed loop of edges and vertices where each subsequent pair of vertices is connected by an edge. In addition, a gradient mesh built on top of M includes a set of gradient vectors G. At each vertex, a gradient vector is assigned per incident edge. We assume that the mesh is manifold; see [LKSD17, VK18] for more details.
Ternary subdivision step
Prior to Catmull-Clark subdivision, a ternary subdivision operator T is applied to M to create the mesh M_0 [LKSD17]. This operator T logically trisects each edge, and creates a smaller polygonal face for each original face, surrounded by a layer of quads; see Figure 2. This structure is useful because the colour assigned to a vertex in M is interpolated in the Catmull-Clark limit when the one-ring neighbourhood of vertices in M_0 corresponding to vertices in M is assigned the same colour.
The geometry of M_0 is obtained from M as follows. For an edge from V_i to V_j, its two new edge points V_ij and V_ji simulate the role of gradient handles in the traditional gradient mesh; they are initialised by V_ij = (2V_i + V_j)/3 and similarly for V_ji, and can then be adjusted by the user. Then, for each vertex V_i and each of its incident faces F_j, the two edge-connected neighbours of V_i in F_j, the centroid of F_j and V_i itself are bilinearly interpolated to initialise the new face points (see Lieng et al. [LKSD17] for details), which can optionally also be adjusted by the user.
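The initialisation can be sketched compactly; a minimal NumPy sketch, in which the bilinear parameter pair (1/3, 1/3) for the face points is our assumption, with the exact construction deferred to Lieng et al. [LKSD17]:

```python
import numpy as np

def ternary_edge_points(v_i, v_j):
    """Initial positions of the two edge points trisecting edge (v_i, v_j);
    they play the role of gradient handles and may be moved by the user."""
    v_ij = (2.0 * v_i + v_j) / 3.0  # one third along the edge, near v_i
    v_ji = (v_i + 2.0 * v_j) / 3.0  # one third along the edge, near v_j
    return v_ij, v_ji

def ternary_face_point(v, n1, n2, centroid, s=1.0 / 3.0, t=1.0 / 3.0):
    """Bilinear blend of corner vertex v, its two edge-connected neighbours
    n1 and n2 in the face, and the face centroid; (s, t) = (1/3, 1/3) is an
    assumed parameter choice."""
    return ((1 - s) * (1 - t) * v + s * (1 - t) * n1
            + (1 - s) * t * n2 + s * t * centroid)
```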
Subdivision gradient meshes
Arbitrary manifold topology gradient meshes can be defined by a modified Catmull-Clark subdivision scheme in both geometry and colour [LKSD17]. This method allows artists to manipulate the mesh and associated colour gradients at various levels of subdivision. Additional features such as interpolation of both geometry and colour, local detail following edits at coarser subdivision levels, and support for sharp colour transitions were added as well.
The subdivision gradient mesh at subdivision level l is obtained by a single application of the ternary subdivision operator T followed by l applications of the Catmull-Clark subdivision operator C:

M_l = C^l T M. (1)

The colour component is simply stored as the colour chosen by the user. The geometry of the initial mesh M is stored in global coordinates. Geometry edits at any level of subdivision are stored using one of the following approaches; see Figure 3. The first approach, for a vertex V edited towards a sector that corresponds to a face, stores an edited vertex Ṽ as a displacement ΔV from the original vertex V. This displacement is locally expressed using the vectors e_1 and e_2 along the edges of the quadrant in which Ṽ is located as ΔV = a e_1 + b e_2 for some a and b, which are then stored.
The second approach, for vertices V edited beyond the mesh boundary, stores the displacement ΔV as a relative angle and length with respect to the vectors e_1 and e_2 as

(Δφ, Δρ) = ( φ/α, ‖ΔV‖_2 / √(‖e_1‖_2 ‖e_2‖_2) ). (2)

Geometry edits at any level of subdivision are stored using these two approaches for boundary and non-boundary vertices as in Verstraaten and Kosinka [VK18, Section 4.2].
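The first approach amounts to solving a small linear system per edited vertex; a minimal 2D sketch, assuming the frame vectors e1 and e2 are linearly independent:

```python
import numpy as np

def encode_interior_edit(delta_v, e1, e2):
    """Express the displacement ΔV in the local frame of the quadrant,
    ΔV = a*e1 + b*e2; the coefficients (a, b) are what gets stored."""
    A = np.column_stack((e1, e2))       # 2x2 matrix [e1 | e2]
    a, b = np.linalg.solve(A, delta_v)  # assumes e1 and e2 are not parallel
    return a, b

def decode_interior_edit(a, b, e1, e2):
    """Reconstruct ΔV from the stored coefficients and the *current* frame,
    so the edit follows coarser-level shape changes of the frame."""
    return a * e1 + b * e2
```

Because decoding uses the current frame vectors, fine-level edits automatically follow the overall shape after coarser-level edits, as in Figure 1(e).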
The meshes C^l T M and T C^l M are topologically equivalent, and therefore a natural mapping between the vertices of C^l M and M_l exists. As colour edits should only affect disjoint one-ring neighbourhoods, the user is allowed to edit the colour of all vertices (or vertex sectors when using sharp colour transitions) in M_l that are topologically associated to C^l M.
Conventional gradient handles for a subdivision gradient mesh allow geometry edits of a subset of the vertices in M_l only. This unnecessarily limits the expressive freedom of the user, and therefore we allow the user to edit the geometry of all vertices in M_l.
Feature adaptive subdivision
Our method is inspired by a computationally efficient method to evaluate the Catmull-Clark limit surface, including boundaries, up to machine precision [NLMD12], which adopts the idea of FAS. The mesh is iteratively subdivided only in the affected vicinity of irregular features, while the regular patches are always directly rendered as bicubic patches. Special transition patches are required to avoid cracks between adjacent patches of different subdivision levels. FAS can be summarised in three stages [SRK*15].
Figure 4: Five possible constellations for transition patches [NLMD12]. Patches belonging to the current subdivision level, next subdivision level and transition patches are coloured in green, red and yellow, respectively.
CPU preprocessing aims to produce the mesh connectivity and identify features to apply adaptive subdivision to. The input to FAS is a base control mesh, which is composed of vertices, faces and optional data containing hierarchical details and semi-sharp crease edge tags. This preprocessing yields the patches to be tessellated and the computation of the relevant control points, which are stored in index buffers. For each level of subdivision, a subdivision table is set up to store all the mesh data required for the FAS process. Index buffers store patch data, which describe the patch type and the indices of all the relevant control points. The base control mesh, patch index buffers and generated subdivision tables are then sent to the GPU for further processing.
Regular patches at each subdivision level are categorised as either a full patch, which only shares edges with patches of the same subdivision level, or a transition patch, which is adjacent to a patch that is further subdivided. Crack-free renderings can be obtained by evaluating adjacent patches at corresponding domain locations. One approach that ensures this splits each transition patch into several subpatches using a simple case analysis. There are five possible constellations for transition patches [NLMD12, Section 4]; see Figure 4.
During FAS, the base mesh is subdivided iteratively by running a number of GPU kernels. At each level, the control points and the subdivision tables of the current level are used for the computation of the control points at the next subdivision level. These control point data are stored in a control point buffer, which is generated at the preprocessing stage. This process repeats for each subdivision level until it reaches the pre-defined maximum level.
Finally, the patch tessellation stage sends the patches to the GPU tessellator unit using three patch types: regular, transition and irregular. The tessellator tessellates all the patches into triangles, which are then rasterised.
The FAS subdivision depth depends on the given tessellation factor t: it performs log_2 t subdivision steps. For each subdivision level l, the factor is set to t_l = max(1, t/2^l). When t_l = 1, the patch is evaluated only at its corners using the limit stencils of Catmull-Clark subdivision.
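In code, the per-level factors read as follows; a small sketch assuming t is a power of two:

```python
import math

def per_level_tess_factors(t):
    """Tessellation factor per FAS level for an initial factor t:
    level l gets max(1, t / 2**l), with log2(t) subdivision steps."""
    steps = int(math.log2(t))
    return [max(1, t // 2 ** l) for l in range(steps + 1)]

# per_level_tess_factors(16) -> [16, 8, 4, 2, 1]
```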
Feature-Adaptive and Hierarchical Subdivision Gradient Meshes
This section presents our novel method, which adapts the idea of FAS and approximate patches to the setting of subdivision gradient meshes. Our algorithm takes a coarse base control mesh with gradient vectors (Section 3.1), including edges explicitly tagged as sharp for sharp colour transitions, and hierarchical editing data. The output is an adaptively refined mesh. The mesh refinement algorithm stops when there are no more faces requiring further subdivision. Afterwards, the faces of the refined mesh are sent to the GPU to be rendered using bicubic Bézier patches or Gregory patches.
Approximate feature adaptive rendering
We developed an OpenGL/Qt-based experimental tool ourselves instead of using FAS in OpenSubdiv for several reasons. Although our method is inspired by FAS, it differs from FAS in the strategy for terminating the subdivision. Furthermore, FAS in OpenSubdiv does not directly support some of the features needed in our context. Finally, our own implementation gives us full control over hierarchical editing, local updates, sharp transitions and self-intersection prevention.
The surface M_∞ obtained after an infinite number of Catmull-Clark subdivision steps is called the limit surface. This surface is almost everywhere C^2 continuous [Hav02], and over regular regions equivalent to C^2 tensor product bicubic B-splines [DS78, LSNC09]. Boundary loops converge to uniform cubic B-spline curves [DKT98]. Irregular regions containing extraordinary vertices are composed of an infinite set of polynomial patches, and are therefore expensive to evaluate. Similar to FAS, at each level we determine which faces to subdivide further. Faces that do not require further subdivision are rendered using bicubic Bézier patches or Gregory patches at their terminating subdivision level. Other faces are subdivided further. This choice optimises performance while obtaining the best visual quality. See the example in Figure 5: the mesh generated by our method is less dense than the one obtained by traditionally, globally subdividing the whole mesh to the highest used subdivision level.
The memory requirements for global subdivision to level k are proportional to 4^k |F|, where |F| is the number of faces in the control mesh, and can therefore be computationally prohibitive. In our method, the memory used is proportional to the number of patches, which can be at different subdivision levels. This greatly reduces the computations and memory required; see Section 6. Our algorithm also differs from FAS in the subdivision termination condition. FAS keeps subdividing the mesh in the vicinity of irregular faces at each subdivision level until tessellation of the newly generated patches only amounts to evaluating the corner positions. In our algorithm, we introduce the concept of affected faces to determine at which subdivision level to terminate subdivision, as explained next.
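The growth of global subdivision is easy to quantify; a one-line sketch under the assumption of an all-quad control mesh:

```python
def global_subdivision_faces(f0, k):
    """Faces after k global Catmull-Clark steps on an all-quad mesh: 4^k * f0.
    For example, 100 quads at k = 6 already yield 409,600 faces."""
    return f0 * 4 ** k
```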
Affected faces
A face is labelled as affected when it needs to be further subdivided. This happens for three reasons, as follows. Edited faces. At each subdivision level, we determine which faces are affected by edits corresponding to the same or finer levels. At a specific subdivision level, a geometry edit indirectly affects a two-ring neighbourhood of faces around the edited vertex (this corresponds to the two-ring support of Catmull-Clark subdivision blending functions). Colour edits directly affect the one-ring neighbourhood (due to the colour spread step ensuring colour interpolation in the limit), and therefore indirectly affect the three-ring neighbourhood of faces around the edited vertex. Sharp colour edits produce the same effect, but only in the sector influenced by the edit.
A demonstration for both geometry and colour edits is shown in Figure 6. Note the reduced face count with respect to what global subdivision would produce. The transition patches, connecting faces at different subdivision levels as employed in FAS [NLMD12], ensure that patch edges are evaluated at identical parametric locations. The regular topology in this example ensures exact reproduction of the limit surface, which is simply a finite collection of bicubic patches. Faces affected by finer level edits are further subdivided locally. Taking a submesh of only the affected area and subdividing it produces incorrect results. This is due to the introduced boundaries and the size of Catmull-Clark subdivision stencils. This problem is solved by padding the affected area with a one-ring neighbourhood of faces before taking a submesh and subdividing it.
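A sketch of the ring expansion used to pad and grow the affected region; the adjacency tables vertex_faces and face_vertices are hypothetical stand-ins for the half-edge queries of the actual implementation:

```python
def expand_affected(faces, vertex_faces, face_vertices, rings):
    """Grow a set of affected faces by `rings` vertex-adjacency rings.

    vertex_faces: dict mapping a vertex id to its incident face ids
    face_vertices: dict mapping a face id to its corner vertex ids
    Starting from the faces incident to an edited vertex, rings=1 covers
    the two-ring support of a geometry edit and rings=2 the three-ring
    support of a colour edit, per the discussion above.
    """
    region = set(faces)
    for _ in range(rings):
        verts = {v for f in region for v in face_vertices[f]}
        region |= {g for v in verts for g in vertex_faces[v]}
    return region
```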
When taking a submesh, we cannot retain the assignment of its original half-edge indices without a loss in performance. As the edits are indexed via half-edge indices, we either require explicit bookkeeping from global mesh indices to submesh indices, or we require the edits to be mapped to the submesh. Our index mapping strategy, detailed in Appendix A, allows the latter to be accomplished in constant time.
In the hierarchical setting, a face is considered affected when either the face itself is affected by an edit at its subdivision level, or the face topologically contains (i.e. is an ancestor of) an affected face of a finer level edit.
Irregular faces. Due to our mesh structure given by M_l = C^l T M, newly generated non-quadrilateral faces are only contained in M_0. These non-quadrilateral faces and their one-ring neighbourhoods are both considered affected (again owing to the Catmull-Clark subdivision rules applied there). In Figure 7, left, the mesh is M_0 with one irregular face (bottom right), which has six direct neighbours. Thus, all seven faces are considered affected and then subdivided; see Figure 7, right.
Affected face cascading. The approximating (bicubic and Gregory) patches we use tend to create minor artefacts along edges incident with extraordinary vertices; see Figure 8. FAS of one of these sectors may create discontinuity artefacts along the edges with faces at different levels. After obtaining the affected faces, we therefore iteratively cascade the affected faces around such irregular vertices. A demonstration of affected face cascading in the vicinity of irregular vertices is given in Figure 8. There are five sectors around the extraordinary vertex in the middle. We move the red control point close to the bottom left, which affects one sector around the extraordinary vertex. This one sector is refined more, while the other sectors are not refined. To prevent the artefacts this level mismatch may cause, we refine all the sectors around the extraordinary vertex to the same level of subdivision. As Figure 8, far right, shows, the sharp artefacts along edges incident with the extraordinary vertex are then greatly reduced.
Patch rendering
We use the ACC2 scheme [LSNC09] to approximately render the Catmull-Clark limit surface given by the control (gradient) mesh using hardware tessellation. Our choice optimises performance while obtaining high visual quality. For each regular face in the mesh, a bicubic geometry/colour patch is constructed. These patches reproduce the Catmull-Clark limit surface exactly. As these patches are parametric, they are efficient to evaluate on GPU architectures with a programmable tessellation unit. For each irregular face in the mesh, a Gregory patch is constructed. These patches meet with G^1 continuity along edges incident with extraordinary vertices (in contrast to the C^0 continuity that ACC1 provides there). This improves surface quality (Section 6) near extraordinary vertices at the expense of being slightly more computationally expensive than ACC1 [LSNC09]. Transition patches are handled as in Niessner et al. [NLMD12, Section 4]; see Figure 4.
Sharp transitions
All vertices along sharp transitions (often specified along chains of edges) are subdivided using the boundary rules, except their end-vertices (if any), often called darts, which are subdivided using the smooth rules [DKT98]. The boundary rules are only applied to the colour components of the control mesh, whereas the geometry is smoothly subdivided. The approximating patches of ACC1/ACC2 were never intended to support such (colour) transitions. Although the associated artefacts turn out to be virtually invisible, a slightly improved approximation is achieved by treating the boundary/sharp edges adjacent to a dart as a complete boundary when updating its inner control points in a subdivision step. Figure 9 shows an example of sharp colour editing and the tiny artefacts resulting from using approximate patches to render them.
User Interface
Hierarchical editing in the context of gradient meshes was proposed in Lieng et al. [LKSD17] and further developed in Verstraaten and Kosinka [VK18]. It is worth noting that OpenSubdiv also supports hierarchical editing for subdivision meshes [Pix21], but for the reasons mentioned in Section 4.1, we rely on our own implementation. Improving upon Verstraaten and Kosinka [VK18], we have designed various user interface improvements that allow the artist to intuitively edit the gradient meshes in a hierarchical manner.
Editing
Handles. Existing gradient handles for a subdivision gradient mesh allow geometry edits of a subset of the vertices in M_l only. This unnecessarily limits the expressive freedom of the user, and therefore we allow the user to edit the geometry of all vertices in M_l. To this end, we require visual cues for vertices that can be edited in different ways; see Figure 10. Green bullets denote the vertices for which only geometry can be edited. Dark blue bullets denote the vertices via/at which both geometry and colour can be edited. To facilitate sharp colour edits per sector, we introduce slightly smaller light blue bullets around the colour handles, which are offset in the direction of each incident sector. In our implementation, we only allow the smaller colour handles to be actually colour edited, which works well with our brushes as explained below.
The ternary subdivision step introduces an implicit three-ring separation between colour edit points, which remains the case after further Catmull-Clark subdivision. A naive approach is to use the mesh data structure and subdivision functions to explicitly keep track of these colour edit points. We present a better alternative that directly extracts the colour handles from a mesh after an arbitrary number of subdivision steps. We exploit the above-mentioned colour separation and the fact that existing vertices retain their indices during subdivision. We perform a depth-first search that is initiated with the indices of the vertices of M_0, to account for potentially disconnected components. Only steps of three consecutive (half-)edges in a single direction along the edges incident with the vertices are allowed. As a result, the points visited by the algorithm are exactly the colour handles.
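A sketch of this extraction; mesh.directions(v) and mesh.step3(v, d) are hypothetical helpers standing in for "enumerate the outgoing half-edges at v" and "walk three consecutive half-edges in direction d":

```python
def collect_colour_handles(mesh, m0_vertex_ids):
    """Depth-first search visiting exactly the colour handles: vertices
    three edge steps apart, seeded with the vertex indices of M_0 (one
    seed set per potentially disconnected component)."""
    handles = set()
    stack = list(m0_vertex_ids)
    while stack:
        v = stack.pop()
        if v in handles:
            continue
        handles.add(v)                      # every vertex reached is a handle
        for d in mesh.directions(v):        # outgoing directions at v
            stack.append(mesh.step3(v, d))  # three consecutive edge steps
    return handles
```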
Brushes. As an alternative to individual editing of handles by direct selection, we introduce a more advanced brush; see Figure 10, left. All handles within a specified radius (adjustable by the user) from the cursor are automatically selected. A colour (that can be set interactively for the brush) is assigned to all brushed/selected colour handles while editing. Geometry edits are applied to the geometry handle closest to the cursor in the centre of the brush.
Region of influence. To provide the artist with an indication of the potentially edited region, we outline the region that would be influenced by editing the currently selected geometry (in green) and colour handles (in blue), as shown in Figure 10, right. All currently brushed colour handles and the closest geometry handle contribute to this affected area. As intended, sharp colour edits affect only specific sectors.
Interpolation and self-intersection prevention
A conventional user interface allows the user to directly edit handles at specific subdivision levels [LKSD17]. However, this approach affects the resulting limit surface in a counter-intuitive way, which is unlikely to fulfil the intentions of the artist. The reason is that, in general, control points are not interpolated in Catmull-Clark subdivision, and naive attempts to force interpolation lead to a global system of linear equations. However, as already utilised in Verstraaten and Kosinka [VK18], the initial ternary step allows for separation of these equations, which can then be solved completely locally. Our improved user interface allows the user to directly edit the corresponding limit positions of the control points. Owing to unfortunate mistakes in the limit stencil inversion process in Verstraaten and Kosinka [VK18], we derive the correct formulas in Appendix B. This (desirable) interpolation property and/or careless geometry editing may lead to self-intersections (fold-overs) in the limit surface, thereby compromising the validity of the intended design. We prevent self-intersections by employing the sufficient injectivity test of Gain and Dodgson [GD01], applied to the bicubic patches in the mesh. A demonstration of the self-intersection prevention feature is presented in Figure 11. Although this feature works well, slight overlaps may occur at extraordinary vertices where ACC2 (Gregory) patches are rendered, as these are approximated by ACC1 (bicubic) patches for the injectivity tests.
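As a simplified stand-in for the fold-over check, the sketch below samples the Jacobian determinant of a 2D bicubic Bézier patch on a grid and rejects the edit if it vanishes or flips sign; note that this grid test is only necessary, whereas the implementation uses the sufficient injectivity test of Gain and Dodgson [GD01]:

```python
import numpy as np
from itertools import product

def bernstein3(t):
    s = 1.0 - t
    return np.array([s**3, 3*s*s*t, 3*s*t*t, t**3])

def dbernstein3(t):
    s = 1.0 - t
    return np.array([-3*s*s, 3*s*s - 6*s*t, 6*s*t - 3*t*t, 3*t*t])

def passes_foldover_check(P, samples=9):
    """P is a 4x4x2 array of Bézier control points of a planar patch."""
    ts = np.linspace(0.0, 1.0, samples)
    for u, v in product(ts, ts):
        su = np.einsum('i,j,ijk->k', dbernstein3(u), bernstein3(v), P)  # dS/du
        sv = np.einsum('i,j,ijk->k', bernstein3(u), dbernstein3(v), P)  # dS/dv
        if su[0] * sv[1] - su[1] * sv[0] <= 0.0:  # Jacobian determinant
            return False
    return True
```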
Results
We have designed several models using our subdivision gradient meshes, presented throughout the paper and the supplementary video. Some of them are simple and academic in nature to reveal the workings and features of our representation, and some are realistic examples to showcase the capabilities of our method. To evaluate the quality of our method, we compare our results to the corresponding Catmull-Clark limit surface both visually and statistically. The pixel-level differences are computed in the unit RGB cube, scaled up by 200 for visualisation purposes, and shown using the plasma colour map, as in Figure 8. When reporting the maximum difference as a percentage, it is with respect to the theoretical maximum of √3.
The cherry model presented in Figure 1 showcases the main feature of our method: hierarchical editing. The stalk can be bent by moving only a few control points, and a detail (a water drop) can be easily added without having to refine the model globally. Along with the tulip model in Figure 5, it also shows the significantly reduced face count our method offers compared to previous techniques that rely on global subdivision. Further examples include the leek in Figure 12, the bowling pin in Figure 13, the beach ball in Figure 14, the butterfly in Figure 15 and the sunglasses in Figure 16.
Table 1 shows the generated face counts for the different models featured throughout this paper. Although this gives a good indication of the expected performance of our method, performance does not depend on the number of faces alone. We make a distinction between bicubic and Gregory patches, and also list the number of transition patches (in brackets). To compare our method with previous methods [LKSD17, VK18], we also list the number of faces these global methods would require to represent the same models. We show simple triangular, quadrilateral and pentagonal meshes in Figure 18. On purpose, to demonstrate how our method smoothly blends colours, we assigned very distinctive colours to the colour handles (such as red, green, blue, cyan, magenta and yellow).
Sharp colour transitions and dart vertices were discussed in Section 4.4 and demonstrated in Figure 9. In cases where multiple sharp transitions meet, the difference with respect to the Catmull-Clark limit surface may become somewhat larger, as shown in Figure 19. However, as these occur in very specific cases only, the difference is still relatively small, and the resulting colour surface is still smooth around the sharp transitions, we do not consider these problematic.
Performance. We list performance statistics in Figure 20 on two models: a simple hexagon model with no edits, and the complex and hierarchically edited cherry model of Figure 1. We compare our method against global subdivision. The measurements were taken on a machine with an NVIDIA TITAN V GPU, 64 GB of RAM and an Intel XEON E5-2630 CPU.
Further performance statistics, showing both geometry and colour editing on the butterfly model (Figure 15) and the pear model (Figure 17), are shown in Figure 21. The figure shows that our algorithm's advantage increases with the subdivision levels used for editing.
Discussion
The design of our method mainly focused on reducing the needed face count and number of generated control points to achieve real-time performance, while preserving high visual quality. The achieved visual quality, which takes Catmull-Clark limit surfaces as ground truth, can be further improved at the expense of more rendered faces by marking all irregular faces as affected up to some desired level. In our experience, and based on the examples presented throughout the paper, this is not required.
The triangular version [LS08] and the multisided version of ACC2 [HK18] are useful to fill the holes in the M_0 meshes, but after a single subdivision step, no multisided holes remain, and thus these patches have limited use in our setting. As they are also more computationally intensive, we have decided not to use them.
A comparison between bicubic and Gregory patches at extraordinary vertices is shown in Figure 22. While bicubic patches constructed by ACC1 may provide a better approximation of the Catmull-Clark limit surface at vertices of valency 3, Gregory patches (of ACC2) provide a better approximation at valencies 5 and higher. More importantly, as discussed in Section 4.3, Gregory patches provide smoother colour surfaces with real-time performance, which justifies our choice to use them for all extraordinary vertex valencies.
The user interface shortcuts we introduced in our implementation and the brush editing functionality have allowed for a significantly quicker design process, as whole swathes of control points can be edited at once. The visualisation of the affected area, explicit sector colour edit points and brushing improve the user experience.
Limitations
One inconvenience of our method is creating the initial mesh of a model, which is also a necessary step in creating gradient meshes with other tools, such as Adobe Illustrator. The difference is that our approach allows for arbitrary manifold mesh connectivity, not restricted to regular rectangular arrays. This might initially be seen as a disadvantage, in that users may need to adjust their approach to (subdivision) gradient mesh design. At the same time, our approach opens the door for flexible (and locally adaptive) gradient mesh designs and, more importantly, image vectorisation techniques, potentially using only a single mesh.
As is the case with most, if not all, patch-based approaches, ours too may lead to incorrect geometries when the input mesh faces are not convex. One solution is to split such faces into convex ones; this is perfectly fine in our method since it supports arbitrary manifold topologies. Furthermore, our automatic fold-over detection system, when enabled, does not allow the user to create such meshes.
Future work
Concerning performance, the following topics require further attention. Our implementation recomputes the meshes and fully updates the graphics buffers after each edit, whereas a better alternative would update these structures only locally. The shaders could be optimised using, for example, table-driven approaches or the recent half-edge approach of Dupuy and Vanhoey [DV21]. Computationally demanding functionality like subdivision may significantly benefit from multicore processing. As these optimisations tend to significantly affect code complexity and maintainability, we have chosen to leave this as future work.
For geometry edits, a bulk editing feature may be convenient and is theoretically possible [GD01]. One would have to solve a local system of equations such that the limit projections of the vertices in the original mesh are the desired vertices in the limit mesh. One potential drawback of such approaches is that the original mesh could be deformed quite severely in order to conform to the constraints.
Our work can be used to address the image vectorisation problem, since our meshes exhibit a lot of (topological) flexibility, similar to that of meshes based on curved triangles [HEK21]. It would be a breakthrough if we could generate (feature-adaptive and hierarchical) subdivision gradient meshes automatically from raster images.
Conclusion
We have developed feature-adaptive and hierarchical subdivision gradient meshes, which support interactive editing and real-time rendering using hardware tessellation. Our method drastically reduces face counts while offering a nearly indistinguishable approximation with respect to global subdivision.
As a faster alternative to global subdivision, we implemented and adapted the ACC1 and ACC2 methods for the setting of subdivision gradient meshes, and borrowed ideas from FAS. A key aspect here was the design of a convenient and consistent index mapping between subdivision levels, allowing edits to be mapped to submeshes, which are subdivided only locally using our concept of affected faces.
In order to make sure that the colour surface is always valid, we have integrated a real-time self-intersection prevention mechanism into the geometry editing process. The improved user interface allows the designer to edit groups of control points at once and gives useful feedback on the effects of editing at different subdivision levels. All combined, our subdivision gradient meshes provide a significantly improved user experience over existing methods.
Figure 1: Different subdivision levels and hierarchical edits of our feature-adaptive and hierarchical subdivision gradient mesh, showcased on a cherry model. (a) The initial mesh consists of ten polygons. (b) The edited mesh at level 0 after geometry and colour editing. Level 0 is the mesh after an initial ternary step, here shown tessellated. (c) The corresponding rendering. (d) The edited model at level 4. (e) Back at level 0, we bend the stalk to the right by editing only 6 points; the finer edits at level 4 follow the overall shape change. (f) The locally edited model at level 5. We add some texture at the top of the stalk and adjust the shape of the bottom of the stalk. (g) The underlying mesh required for traditional mesh subdivision [VK18] has 78,336 polygons. (h) The mesh generated using our method after the same amount of editing has only 1371 polygons. Our method supports true hierarchical yet fully local editing, which allows the artist to manipulate the geometry and colour at any level of subdivision.
Figure 2: An illustration of the ternary subdivision operator T.

Figure 3: Alternatives for expressing the refinement of geometry in a local frame.
Figure 5: A tulip model. Far left: A mesh at level 0 (after the ternary subdivision step). Left: The edited mesh at level 3 using global refinement leads to 5472 patches. Right: In contrast, using our feature-adaptive and hierarchical method, the mesh at level 3 requires only 699 patches. Far right: The rendered result (using our method).

Figure 6: Meshes resulting from feature adaptive rendering, after either a geometry edit (left) or a magenta colour edit (right) of the top-left vertex in M_1. Geometry edits affect their two-ring neighbourhood, and colour edits affect their three-ring neighbourhood. Quads are by default tessellated as two triangles. The bottom row shows the results at a higher level of tessellation.
Figure 7: Left: The mesh M_0 obtained from an input mesh M with one triangle with a quad next to it, after the initial ternary subdivision step. Right: M_1, i.e., M_0 after one subdivision step. Note that only faces incident with the vertices of the extraordinary face are affected and thus subdivided.

Figure 8: Far left: The result of feature adaptive subdivision without irregular face cascading and minimal tessellation; the red control point close to the bottom left in M_2 has been edited. Left: The resulting transition artefacts around the extraordinary vertex in the centre, shown via the plasma colour map [HDF*20]. The per-pixel difference in [0, 1]^3 RGB space between our method and the Catmull-Clark limit surface is magnified 200 times to make it visible. Right: The mesh with irregular face cascading and an increased tessellation factor. Far right: Artefacts are greatly reduced by irregular face cascading.
Figure 9: Left: A colour surface with a sharp colour transition edit. Middle: A zoomed version of the edited area. Right: Artefacts corresponding to dart vertices are visible around the colour edit points in the top-left and bottom-right corners. The maximum difference in this example is 0.067% (wrt. the theoretical maximum of √3); the colour map and difference scaling are the same as in Figure 8.

Figure 10: Left: A visualisation of our brush editing functionality. The brush is indicated by the grey circle. The selected geometry and colour handles are shown as red bullets. Right: A visualisation of the region of influence of the currently selected handles: in green for geometry and in blue for colour.
Figure 11: A demonstration of the self-intersection prevention feature at tessellation level 10. A geometry handle is dragged in one direction as far as allowed by the method. For the quadrilateral model (left), the faces are just not overlapping at this point. For the triangular model (right), a very slight overlap might occur due to the ACC2 surface approximation. The bottom row shows insets.

Figure 12: An example of our method used to model a leek. Top left: A mesh at level 0 (after the initial ternary step). Top right: The mesh after 2 levels of hierarchical editing. Bottom: The final rendering of the leek model.
Figure 13: A bowling pin model. Left: M_0 after a ternary subdivision step on the initial mesh M. Middle: The edited result up to level 3. Right: A visual representation of the colour difference between our method and the Catmull-Clark limit surface; the maximum is 0.31%.

Figure 14: A beach ball model. Left: M_0 after a ternary subdivision on the initial mesh M. Middle: The edited result up to level 3. Right: Difference visualisation with a maximum of 0.53%.
Figure 15: A butterfly model. Top left: M_0 after a ternary subdivision on the initial mesh M. Top right: The edited result up to level 5. Bottom left: The mesh using our method. Bottom right: The mesh of global subdivision.

Figure 16: A sunglasses model. Top left: The edited result up to level 3 using global subdivision. Top right: The edited result up to level 3 using our method. Bottom left: The zoomed detail of the result using global subdivision. Bottom right: The zoomed detail of the result using our method.
Figure 17: A pear model. Top left: The final mesh up to level 6 using global subdivision. Top right: The final mesh using our method. Bottom left: The edited result up to level 6 using our method. Bottom right: The zoomed detail of the result using our method.

Figure 18: From left to right: the control meshes and colour surfaces corresponding to a triangular, quadrilateral and pentagonal input mesh M, respectively. The top two rows present the limit projections of the control meshes M_0 and M_1, respectively. The bottom row presents the colour surfaces, including the M_0 handles for visual guidance.
Figure 19: Our result in the vicinity of multiple sharp colour edits. The magenta and yellow sharp colour edits create boundaries within the colour component of the surface that meet in the centre, creating an irregular vertex there. The maximum difference is 1.50%.

Figure 20: Average frame time (in milliseconds) comparison between global subdivision and our method. The first six measurements correspond to a simple hexagon, for which the number of generated triangles is exactly the same for both methods. The last two measurements correspond to the cherry model of Figure 1(f) with approximately the same number of triangles (we set the tessellation levels in our method so that the numbers of generated triangles just exceed those for global subdivision at levels 5 and 6).
Figure 21: Average frame time (in milliseconds) comparison between global subdivision (blue) and our method (red) on the butterfly model (left; subdivision level 5, more than 188k triangles) and the pear model (right; subdivision level 6, more than 626k triangles). Both geometry and colour editing timings are shown.

Figure 22: A visual comparison between bicubic (top row) and Gregory patches (bottom row) at extraordinary vertices. The meshes come from Figure 18. The values below the difference images wrt. the Catmull-Clark limit surfaces report the maximum differences as percentages of √3.
Figure B.1: A schematic for a vertex v of valency n [LSNC09]. Edge midpoints are denoted by m_i and face centroids are denoted by c_i; the modified quantities are m̄_i = m_i − v/2 and c̄_i = c_i − v/n_i, with n_i the corresponding face valency.
Table 1: Generated patch counts for our models, broken down by the type of rendered patches. The numbers in brackets report the counts of transition patches. The last column gives the number of patches needed in case global/uniform subdivision is used.
Optimal Multi-Server Allocation to Parallel Queues With Independent Random Queue-Server Connectivity
We investigate an optimal scheduling problem in a discrete-time system of L parallel queues that are served by K identical, randomly connected servers. Each queue may be connected to a subset of the K servers during any given time slot. This model has been widely used in studies of emerging 3G/4G wireless systems. We introduce the class of Most Balancing (MB) policies and provide their mathematical characterization. We prove that MB policies are optimal; we define optimality as minimization, in stochastic ordering sense, of a range of cost functions of the queue lengths, including the process of total number of packets in the system. We use stochastic coupling arguments for our proof. We introduce the Least Connected Server First/Longest Connected Queue (LCSF/LCQ) policy as an easy-to-implement approximation of MB policies. We conduct a simulation study to compare the performance of several policies. The simulation results show that: (a) in all cases, LCSF/LCQ approximations to the MB policies outperform the other policies, (b) randomized policies perform fairly close to the optimal one, and, (c) the performance advantage of the optimal policy over the other simulated policies increases as the channel connectivity probability decreases and as the number of servers in the system increases.
Introduction, Model Description and Prior Research
Emerging 3G/4G wireless networks can be categorized as high-speed, IP-based, packet access networks. They utilize the channel variability, using data rate adaptation, and user diversity to increase their channel capacity. These systems usually employ a mixture of Time and Code Division Multiple Access (TDMA/CDMA) schemes. Time is divided into equal size slots, each of which can be allocated to one or more users. To optimize the use of the enhanced data rate, these systems allow several users to share the wireless channel simultaneously using CDMA. This will minimize the wasted capacity resulting from the allocation of the whole channel capacity to one user at a time even when that user is unable to utilize all of that capacity. Another reason for sharing system capacity between several users, at the same time slot, is that some of the user equipments at the receiving side might have design limitations on the amount of data they can receive and process at a given time.
The connectivity of users to the base station in any wireless system is varying with time and can be best modeled as a random process. The application of stochastic modeling and queuing theory to model wireless systems is well vetted in the literature. Modeling wireless systems using parallel queues with random queue/server connectivity was used by Tassiulas and Ephremides [3], Ganti, Modiano and Tsitsiklis [6] and many others to study scheduler optimization in wireless systems. In the following subsection, we provide a more formal model description and motivation for the problem at hand.
Model Description
In this work, we assume that time is slotted into equal length deterministic intervals. We model the wireless system under investigation as a set of L parallel queues with infinite capacity (see Figure 1); the queues correspond to the different users in the system. We define X_i(n) to represent the number of packets in the i-th queue at the beginning of time slot n. The queues share a set of K identical servers, each server representing a network resource, e.g., a transmission channel. We make no assumption regarding the number of servers relative to the number of queues, i.e., K can be less than, equal to or greater than L. The packets in this system are assumed to have constant length, and require one time slot to complete service. A server can serve only one packet during any given time slot. A server can only serve connected, non-empty queues. Therefore, the system can serve up to K packets during each time slot. Those packets may belong to one or several queues.
The channel connectivity between a queue and any server is random. The state of the channel connecting the i th queue to the j th server during the n th time slot is denoted by G i,j (n) and can be either connected (G i,j (n) = 1) or not connected (G i,j (n) = 0). Therefore, in a real system G i,j (n) will determine if transmission channel j can be used by user i or not. We assume that G i,j (n), for all i = 1, 2, . . . , L, j = 1, 2, . . . , K and n, are independent, Bernoulli random variables with parameter p.
The number of arrivals to the i-th queue during time slot n is denoted by Z_i(n). The random variables Z_i(n), ∀i, n, are assumed to have a Bernoulli distribution. We require that arrival processes to different queues be independent of each other; we further require that the random processes {Z_i(n)} be independent of the processes {G_i,j(n)} for i = 1, 2, . . . , L, j = 1, 2, . . . , K. The symmetry and independence assumptions are necessary for the coupling arguments we use in our optimality proofs. The rest are simplifying assumptions that can be relaxed at the price of a more complex and maybe less intuitive proof.
A scheduling policy (or server allocation policy, or scheduler) decides, at the beginning of each time slot, which servers will be assigned to which queues during that time slot. The objective of this work is to identify and analyze the optimal scheduling policy that minimizes, in a stochastic ordering sense, a range of cost functions of the system queue sizes, including the total number of queued packets, in the aforementioned system. The choice of the class of cost functions and the minimization process are discussed in detail in Section 5.
Previous Work and Our Contributions
In the literature, there is substantial research effort focusing on the subject of optimal scheduling in wireless networks with random connectivity. Tassiulas and Ephremides [3] studied the problem of allocating a single, randomly connected server to a set of parallel queues. They proved, using stochastic coupling arguments, that a LCQ (Longest Connected Queue) policy is optimal. In our work we investigate a more general model that studies the G L,K (n) G L,1 (n) G 1,K (n) G 2,K (n) X 1 (n) G 2,1 (n) Z 2 (n) Z 1 (n) Z L (n) X L (n) X 2 (n) 1 K G 1,1 (n) Scheduler K Y(n) Figure 1: Abstraction of downlink scheduler in a multi-server wireless network.
optimal allocation of K > 1 randomly connected servers to parallel queues. We show that LCQ is not always optimal in a multi-server system where multiple servers can be allocated to each queue at any given time slot. Bambos and Michailidis [4] worked on a similar model (a continuous time version of [3] with finite buffer capacity) and proved that under stationary ergodic input job flow and modulation processes, both 'Maximum Connected Workload' and LCQ dynamic allocation policies maximize the stability region for this system. Furthermore, they proved that a policy that allocates the server to the connected queue with the fewest empty spaces, stochastically minimizes the loss flow and maximizes the throughput [5].
Another relevant result is that reported by Ganti, Modiano and Tsitsiklis [6]. They presented a model for a satellite node that has K transmitters. The system was modeled by a set of parallel queues with symmetrical statistics competing for K identical, randomly connected servers. At each time slot, no more than one server is allocated to each scheduled queue. They proved, using stochastic coupling arguments, that a policy that allocates the K servers to the K longest connected queues at each time slot, is optimal. This model is similar to the one we consider in this work, except that in our model one or more servers can be allocated to each queue in the system. A further, stronger difference between the two models is that we consider the case where each queue has independent connectivities to different servers. We make these assumptions for a more suitable representation of the 3G/4G wireless systems described earlier. These differences make it substantially harder to identify (and even describe) the optimal policy (see Section 3). A more recent result that has relevance to our work is the one reported by Kittipiyakul and Javidi in [7]. They proved, using dynamic programming, that a 'maximum-throughput and load-balancing' policy minimizes the expected average cost for a two-queue, multi-server system with random connectivity. In our research work, we prove optimality of the most balancing policies in the more general problem of a multi-queue (more than two queues) and multi-server system with random channel connectivity. A stronger distinction of our work is that we proved the optimality in a stochastic ordering sense which is a stronger notion of optimality compared to the expected average cost criterion that was used in [7]. Lott and Teneketzis [8] investigated a multi-class system of N weighted cost parallel queues and M servers with random connectivity. They also used the same restriction of one server per queue used in [6]. They showed that an index rule is optimal and provided conditions sufficient, but not necessary, to guarantee its optimality.
Koole et al [9] studied a model similar to that of [3] and [5]. They found that the 'Best User' policy maximizes the expected discounted number of successful transmissions. Liu et al [10], [11] studied the optimality of opportunistic schedulers (e.g., Proportional Fair (PF) scheduler). They presented the characteristics and optimality conditions for such schedulers. However, Andrews [12] showed that there are six different implementation algorithms of a PF scheduler, none of which is stable. For more information on resource allocation and optimization in wireless networks the reader may consult [13], [14], [15], [16], [17], and [18].
The model we present in this work can be applied to many of the previous work described above. In section 9 we discuss this applicability for three key publications, namely [3], [6] and [7], that are strongly related to our own. We also show how our model can be reduced to their models and used to describe the problems they investigated.
In summary, the main contributions of our work are the following: 1. We introduce and show the existence of the class of Most Balancing (MB) scheduling policies in the model of Figure 1 (see Equations (7) and (8)). Intuitively, an MB policy attempts to balance all queue sizes at every time slot, so that the total sum of queue size differences will be minimized.
2. We prove the optimality of MB policies for minimizing, in stochastic ordering sense, a set of functionals of the queue lengths (see Theorem 1).
3. We provide low-overhead, heuristic approximations for an MB policy.
At any time slot, such policies allocate the "least connected servers first" to their "longest connected queues" (LCSF/LCQ). These policies have O(L × K) complexity and thus can be easily implemented. We evaluate the performance of these approximations via simulations.
The rest of the article is organized as follows. In Section 2, we introduce notation and define the scheduling policies. In Section 3, we introduce and provide a detailed description of the MB policies. In Section 4, we introduce and characterize balancing interchanges, which we will use in the proof of MB optimality. In Section 5, we present the main result, i.e., the optimality of MB policies. In Section 6, we present the Least Balancing (LB) policies, and show that these policies perform the worst among all work conserving policies. MB and LB policies provide upper and lower performance bounds. In Section 7, we introduce practical, low-overhead approximations for such policies, namely the LCSF/LCQ policy and the MCSF/SCQ policy, with their implementation algorithms. In Section 8, we present simulation results for different scheduling policies. In Section 9, we give some final remarks that show the applicability of our model to problems studied in previous work. We present proofs for some of our results in the Appendix.
Scheduling Policies
Recall that L and K denote the number of queues and servers respectively in the model introduced in Figure 1. We will use bold face, UPPER CASE and lower case letters to represent vector/matrix quantities, random variables and sample values respectively. In order to represent the policy action that corresponds to "idling" a server, we introduce a special, "dummy" queue which is denoted as queue 0. Allocating a server to this queue is equivalent to idling that server. By default, queue 0 is permanently connected to all servers and contains only "dummy" packets. Let 1_{A} denote the indicator function for condition A. Throughout this article, we will use the following notation:
• G(n) is an (L + 1) × K matrix, where G_i,j(n) for i > 0 is the channel connectivity random variable as defined in Section 1. By assumption, G_0,j(n) = 1 for all j, n.
• Y(n) = (Y_0(n), Y_1(n), . . . , Y_L(n))^T is the withdrawal control vector, where Y_i(n) ∈ {0, 1, . . . , K} denotes the number of packets withdrawn from queue i (and assigned to servers) during time slot n.
• Z(n) = (Z_0(n), Z_1(n), Z_2(n), . . . , Z_L(n))^T is the vector of the numbers of exogenous arrivals during time slot n. Arrivals to queues i ≥ 1 are as defined in Section 1.
• For ease of reference, we call the tuple (X(n), G(n)) the "state" of the system at the beginning of time slot n.
For any (feasible) control Y(n), the system described previously evolves according to

X_i(n + 1) = X_i(n) − Y_i(n) + Z_i(n), i = 0, 1, . . . , L. (1)

We assume that arrivals during time slot n are added after removing served packets. Therefore, packets that arrive during time slot n have no effect on the controller decision at that time slot and may only be withdrawn during slot t = n + 1 or later. For convenience, and in order to ensure that X_0(n) = 0 for all n, we define Z_0(n) = Y_0(n). We define controller policies more formally next.
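A minimal simulation sketch of this evolution; the arrival parameter q is an assumption, since the text only requires the Z_i(n) to be Bernoulli:

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_slot(L, K, p, q):
    """One slot of randomness: connectivities G (Bernoulli(p) per
    queue/server pair) and arrivals Z (Bernoulli(q) per queue)."""
    G = (rng.random((L, K)) < p).astype(int)
    Z = (rng.random(L) < q).astype(int)
    return G, Z

def evolve(x, y, z):
    """Equation (1): served packets are removed before arrivals are added;
    the intermediate x - y is the "updated queue size" used later."""
    return x - y + z
```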
Feasible Scheduling and Withdrawal Controls
The withdrawal control defined earlier does not provide any information regarding server allocation. Such information is necessary for our optimality proof. To capture such information, we define the vector Q(n), where Q j (n) ∈ {0, 1, . . . , L} denotes the index of the queue that is selected (according to some rule) to be served by server j during time slot n. Note that serving the "dummy" queue, i.e., setting Q j (n) = 0 indicates that server j is idling during time slot n. For future reference, we will call Q(n) the scheduling (or server allocation) control.
Using the previous notation, and given a scheduling control vector Q(n), we can compute the withdrawal control vector as

Y_i(n) = Σ_{j=1}^{K} 1_{Q_j(n) = i}, i = 0, 1, . . . , L. (2)

We say that a given vector Q(n) ∈ {0, 1, . . . , L}^K is a feasible scheduling control (during time slot n) if: (a) a server is allocated only to a connected queue, and (b) the number of servers allocated to a queue (dummy queue excluded) does not exceed the size of the queue at time n. Similarly, we say that a vector Y(n) ∈ {0, 1, . . . , K}^{L+1} is a feasible withdrawal control (during time slot n) if there exists a feasible scheduling control Q(n) that satisfies Equation (2).
Conditions (a) and (b) above are also necessary for feasibility of a scheduling control vector Q(n). From Equation (2), a feasible withdrawal control Y(n) satisfies the following necessary conditions:

Y_i(n) ≤ min( X_i(n), Σ_{j=1}^{K} G_{i,j}(n) ), i = 1, . . . , L, and Σ_{i=0}^{L} Y_i(n) = K. (3)

For the rest of this article, we will refer to Q(n) as an implementation of the given feasible control Y(n). We denote the set of all feasible withdrawal controls while in state (x, g) by Y(x, g).
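In code, Equation (2) and the feasibility conditions read as follows; a direct sketch with Q[j] = 0 encoding an idling server j:

```python
def withdrawals_from_schedule(Q, L):
    """Equation (2): Y_i counts the servers j with Q_j = i, for i = 0..L."""
    Y = [0] * (L + 1)
    for i in Q:
        Y[i] += 1
    return Y

def is_feasible_schedule(Q, x, g):
    """Conditions (a) and (b): serve only connected, sufficiently long
    queues. x[0] is the dummy queue; g[i][j] is the queue-i/server-j
    connectivity for i >= 1."""
    L = len(x) - 1
    if any(i != 0 and not g[i][j] for j, i in enumerate(Q)):
        return False                                   # condition (a)
    Y = withdrawals_from_schedule(Q, L)
    return all(Y[i] <= x[i] for i in range(1, L + 1))  # condition (b)
```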
Note from Equation (2) that, given a feasible scheduling control Q(n), a feasible withdrawal control Y(n) can be readily constructed. Note, however, that for any feasible Y(n), the feasible scheduling control Q(n) may not be unique. Furthermore, given a feasible Y(n), the construction of the scheduling control Q(n) may not be straightforward and will not be examined in this article.
Definition of Scheduling Policies
A scheduling policy π (or policy π for simplicity) is a rule that determines feasible withdrawal vectors Y(n) for all n, as a function of the past history and current state of the system H(n). The state history is given by the sequence of random variables H(1) = (X(1)), and H(n) = (X(1), G(1), Z(1), . . . , G(n−1), Z(n−1), G(n)), n = 2, 3, . . .
Let H_n be the set of all state histories up to time slot n. Then a policy π can be formally defined as a sequence of measurable functions

u_n : H_n → Z_+^{L+1} with u_n(H(n)) ∈ Y(X(n), G(n)), n = 1, 2, . . . , (6)

where Z_+ is the set of non-negative integers and Z_+^{L+1} = Z_+ × · · · × Z_+, with the Cartesian product taken L + 1 times.
At each time slot, the following sequence of events happens: first, the connectivities G(n) and the queue lengths X(n) are observed. Second, the packet withdrawal vector Y(n) is determined according to a given policy. Finally, the new arrivals Z(n) are added to determine the next queue length vector X(n + 1).
We denote the set of all scheduling policies described by Equation (6) by Π. We introduce next a subset of Π, namely the class of Most Balancing (MB) policies. The goal of this work is to prove that MB policies are optimal (in a stochastic ordering sense).
The Class of MB Policies
In this section, we provide a description and mathematical characterization of the class of MB policies. Intuitively, the MB policies "attempt to minimize the queue length differences in the system at every time slot n". For a more formal characterization of MB policies, we first define the following. Given a state (x(n), g(n)) and a policy π that chooses the feasible control y(n) ∈ Y(x, g) at time slot n, define the "updated queue size" x̃_i(n) = x_i(n) − y_i(n) as the size of queue i, i = 0, 1, . . . , L, after applying the control y_i(n) and just before adding the arrivals during time slot n. Note that because we let z_0(n) = y_0(n), we have x̃_0(n) ∈ Z, where Z is the set of all integers, i.e., we allow x̃_0(n) to be negative.
We define κ_n(π), the "imbalance index" of policy π at time slot n, as the following sum of differences:

κ_n(π) = Σ_{k=1}^{L+1} Σ_{m=k+1}^{L+1} ( x̃_[k](n) − x̃_[m](n) ), (7)

where [k] denotes the index of the k-th longest queue after applying the control y(n) and before adding the arrivals at time slot n. By convention, queue '0' (the "dummy" queue) always has order L + 1 (i.e., it is treated as the queue with the minimum length). This definition ensures that the differences are non-negative and that a pair of queues is accounted for in the summation only once; moreover, as we shall see in Lemma 2.1 in the Appendix, this definition allows for a straightforward calculation and comparison of various policies. It follows from Equation (7) that the minimum possible value of the imbalance index is equal to L · x̃_[L] (i.e., all L queues have the same length, which is equal to the shortest queue length). It also follows that the maximum value is attained when the L − 1 longest queues have the same size.
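A direct computation of the imbalance index; x_updated[0] is the dummy queue, which the convention places last:

```python
def imbalance_index(x_updated):
    """Equation (7): sum of pairwise differences of the updated queue
    sizes, ordered longest first, with the dummy queue ranked last."""
    ordered = sorted(x_updated[1:], reverse=True) + [x_updated[0]]
    return sum(ordered[k] - ordered[m]
               for k in range(len(ordered))
               for m in range(k + 1, len(ordered)))

# The two-queue example below: imbalance_index([0, 4, 5]) returns 10.
```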
Let Π_MB denote the set of all MB policies; then we define the elements of this set as follows. Definition: A Most Balancing (MB) policy is a policy π ∈ Π_MB that, at every n = 1, 2, . . ., chooses a feasible withdrawal vector y(n) ∈ Y(x, g) such that the imbalance index at that time slot is minimized, i.e.,

Π_MB = { π ∈ Π : κ_n(π) = min_{y'(n) ∈ Y(x(n), g(n))} κ_n(y'(n)), n = 1, 2, . . . }.    (8)

The set Π_MB in Equation (8) is well-defined and non-empty, since the minimization is over a finite set. Note that the set of MB policies may have more than one element. This could happen, for example, when at a given time slot n, a server k is connected to two or more queues of equal size, which happen to be the longest queues connected to this server after allocating all the other servers. To illustrate this case, consider a two-queue system with a single, fully-connected server at time slot n. Let x(n) = (5, 5). Assume that policy π_1 (respectively π_2) chooses a withdrawal vector y(n) = (1, 0) (respectively y*(n) = (0, 1)). Then both policies minimize the imbalance index, and κ_n(π_1) = κ_n(π_2) = 10.
Given X(t) and G(t), one can construct an MB policy using a direct search over all possible server allocations. For large L and K, this can be a challenging computational task and is not the focus of this work. In Section 7, we provide a low-complexity heuristic algorithm (LCSF/LCQ) to approximate MB policies.
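As a concrete, if brute-force, illustration of such a direct search, the sketch below (our own; it reuses the imbalance_index helper defined above) enumerates every assignment of each server to one of its connected queues or to idleness, and returns an allocation with the smallest imbalance index.

from itertools import product

def mb_withdrawal(x, g):
    # x: list of L queue sizes; g: L x K 0/1 connectivity matrix.
    # Returns (y, q): y is the withdrawal vector, q[k] the queue served by
    # server k (None = idle).  Exponential in K; for illustration only.
    L, K = len(x), len(g[0])
    options = [[i for i in range(L) if g[i][k]] + [None] for k in range(K)]
    best = (None, None, float("inf"))
    for q in product(*options):
        y = [0] * L
        feasible = True
        for k, i in enumerate(q):
            if i is None:
                continue
            if y[i] + 1 > x[i]:          # never withdraw more packets than a queue holds
                feasible = False
                break
            y[i] += 1
        if not feasible:
            continue
        idle = sum(1 for i in q if i is None)
        cost = imbalance_index([x[i] - y[i] for i in range(L)], idled_servers=idle)
        if cost < best[2]:
            best = (y, list(q), cost)
    return best[0], best[1]

For L = K = 16 this search is clearly impractical, which is what motivates the LCSF/LCQ heuristic of Section 7.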
Remark 1. Note that the LCQ policy in [3] is a most balancing (MB) policy for K = 1 (i.e., the one server system presented in [3]). Extension of LCQ to K > 1 (i.e., allocating all the servers to the longest queue in the multiserver model) may not result in a MB policy, as the following example demonstrates.
Consider a system of three queues with three fully-connected servers during time slot n. Let x(n) = (6, 5, 4). An LCQ policy in the spirit of [3] that allocates all servers to the longest connected queue results in the updated queue size vector x̃(n) = (3, 5, 4). Moreover, an LCQ policy in the spirit of [6] that allocates the three servers to the three longest connected queues results in the updated queue size vector x̃(n) = (5, 4, 3). Both policies have κ_n(π) = 16. An MB policy results in the updated queue size vector x̃(n) = (4, 4, 4) and κ_n(π) = 12.
Comparing arbitrary policies to an MB policy
When comparing various policies to an MB policy, the definition in Equation (8) is cumbersome since it involves all time instants n. The subsets Π n we introduce next define policies that are related to MB policies and allow us to perform comparisons one single instant at a time.
Consider any fixed n ≥ 1; we say that a policy π ∈ Π "has the MB property" at time n, if π achieves the minimum value of the index κ n (π).
Definition: For any given time n ≥ 0, Π n denotes the set of policies that have the MB property at all time slots t ≤ n (and are arbitrary for t > n).
We have that Π = Π_0. Note that the set Π_n is not empty, since MB policies are elements of it. We can easily see that these sets form a monotone (decreasing) sequence, with

Π_n ⊆ Π_{n−1} ⊆ · · · ⊆ Π_1 ⊆ Π_0 = Π.

Then the set Π_MB in Equation (8) can be defined as

Π_MB = ∩_{n≥1} Π_n.

The vector D defined in Equation (10) is a measure of how much an arbitrary policy π differs from a given MB policy during a given time slot n.
Definition: Consider a given state (x(n), g(n)) and a policy π that chooses the feasible withdrawal vector y(n) during time slot n. Let y M B (n) be a withdrawal vector chosen by an MB policy during the same time slot n.
We define the (L + 1) × 1-dimensional vector D ∈ Z^{L+1} as

D = y^{MB}(n) − y(n).    (10)

Note that, for notational simplicity, we omit the dependence of D on the policies and the time index n. Intuitively, a negative element D_i of the vector D indicates that more packets than necessary (compared to a policy that has the MB property) have been removed from queue i under policy π.
The following lemma quantifies the difference between an arbitrary policy and an MB policy (at time n). Its proof is given in Appendix Appendix .1.
Lemma 1. Consider a given state (x(n), g(n)) and a policy π ∈ Π. Then, (a) if D = 0, the policy π has the MB property at time n, and, (b) if π has the MB property at time n, the vector D has components that are 0, +1, or −1 only.
In view of Lemma 1, the quantity h_π = (1/2) Σ_{i=0}^{L} |D_i| can be seen as a measure of "how close" the policy π is to having the MB property at time n.
Definition: For any given time n and integer h, where 0 ≤ h ≤ K, define the set Π^h_n as the set that contains all policies π ∈ Π_{n−1} such that h_π ≤ h. From Lemma 1, we can see that Π^0_n = Π_n. We can easily check that Π^K_n = Π_{n−1}, so π ∈ Π^K_n by default. The sets {Π^h_n}_{h=0}^{K} form a monotone sequence, with

Π_n = Π^0_n ⊆ Π^1_n ⊆ · · · ⊆ Π^K_n = Π_{n−1}.

We exploit the monotonicity property of the Π^h_n sets in the next section, when we show how balancing interchanges reduce the imbalance index of a given policy.
Note that the set Π of all policies can be denoted as It follows from the last two equations that an arbitrary policy π ∈ Π will also belong to a set Π n−1 , for some n ≥ 1. The proof of optimality in Section 5 is based on comparisons of π to a series of policies that belong to the subsets Π h n (see Lemma 5).
Balancing Interchanges
In this section, we introduce the notion of "balancing interchanges". Intuitively, an interchange I(f, t) between two queues, f and t, describes the action of withdrawing a packet from queue f instead of queue t (see Equations (15) and (16)). Such interchanges are used to relate the imbalance indices of various policies (see Equation (23)); balancing interchanges are special in two ways: (a) they do not increase the imbalance index (see Lemma 2) and thus provide a means to describe how a policy can be modified to obtain the MB property at time n, and, (b) they preserve the queue size ordering we define in the next section (see relations R1-R3 in Section 5.1). This ordering is crucial in proving optimality.
Interchanges can be implemented via server reallocation. Since there are K servers, it is intuitive that at most K interchanges suffice to convert any arbitrary policy to a policy that has the MB property at time n. The crux of Lemma 4, the main result of this section, is that such interchanges are balancing.
Interchanges between two queues
Let f ∈ {0, 1, . . . , L} and t ∈ {0, 1, . . . , L} represent the indices of two queues that we refer to as the 'from' and 'to' queues. Define the (L + 1) × 1-dimensional vector I(f, t), whose j-th element equals +1 if j = f, −1 if j = t, and 0 otherwise. Fix an initial state (x(n), g(n)) at time slot n; consider a policy π with a (feasible) withdrawal vector y(n). Let

y*(n) = y(n) + I(f, t)    (14)

be another withdrawal vector. The two vectors y(n), y*(n) differ only in the two components t, f; under the withdrawal vector y*(n), an additional packet is removed from queue f, while one packet less is removed from queue t. Note that either t or f can be the dummy queue. In other words,

y*_f(n) = y_f(n) + 1,    (15)
y*_t(n) = y_t(n) − 1.    (16)

In the sequel, we will call I(f, t) an interchange between queues f and t. We will call I(f, t) a feasible interchange if it results in a feasible withdrawal vector y*(n). It follows immediately from Equations (1) and (14) that the I(f, t) interchange will result in a new vector, x̃*(n), of updated queue sizes, such that x̃*_f(n) = x̃_f(n) − 1, x̃*_t(n) = x̃_t(n) + 1, and x̃*_j(n) = x̃_j(n) for all j ≠ f, t. We are interested next in describing sufficient conditions for ensuring feasible interchanges.
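Before turning to feasibility, the effect of an interchange on the withdrawal and updated-queue-size vectors can be written out directly; the short sketch below (ours, with queue index 0 standing for the dummy queue) applies I(f, t) as described above.

def apply_interchange(y, x_tilde, f, t):
    # y: withdrawal vector over queues 0..L (index 0 is the dummy queue)
    # x_tilde: updated queue sizes over queues 0..L
    y_new, x_new = list(y), list(x_tilde)
    y_new[f] += 1      # one additional packet is removed from queue f
    y_new[t] -= 1      # one packet less is removed from queue t
    x_new[f] -= 1      # the updated size of queue f drops by one
    x_new[t] += 1      # the updated size of queue t grows by one
    return y_new, x_new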
Feasible Single-Server Reallocation
Given the state (x(n), g(n)), let y(n) be any feasible withdrawal vector at time slot n that is implemented via q(n). We define a "feasible, single-server reallocation" (from queue t to queue f) as the reallocation of a single server k from queue t to queue f, such that the new scheduling control q*(n) is also feasible. The conditions g_{f,k}(n) · g_{t,k}(n) · 1_{q_k(n)=t} = 1 and x̃_f(n) ≥ 1 are sufficient for the reallocation of server k (from queue t to queue f) to be feasible.
A feasible, single-server reallocation from queue t to queue f results into a feasible interchange I(f, t). However, the reverse may not be true, as we detail in the following section.
Sufficient conditions for a feasible interchange
Consider again the state (x(n), g(n)) and feasible scheduling control q(n). The feasible interchange I(f, t) in Equation (14) may result from a sequence of m feasible, single-server reallocations among several queues, as demonstrated in Figure 2, where 1 ≤ m ≤ K.
Let r ∈ {0, 1, . . . , L}^{m+1} denote a sequence of queue indices, where r_1 = f and r_{m+1} = t. Let k_i ∈ {1, 2, . . . , K} denote the server reallocated from queue r_{i+1} to queue r_i. Then the following are sufficient conditions for the feasibility of the interchange operation of Equation (14):

g_{r_i, k_i}(n) · g_{r_{i+1}, k_i}(n) · 1_{q_{k_i}(n) = r_{i+1}} = 1, for i = 1, . . . , m,    (19)
x̃_{r_1}(n) = x̃_f(n) ≥ 1,    (20)

for some integer 1 ≤ m ≤ K and r ∈ {0, 1, . . . , L}^{m+1}. Constraint (19) ensures that connectivity conditions allow for the feasibility of all m intermediate single-server reallocations. The sequence of server reallocations starts by reallocating server k_1 to queue f = r_1. In this case, queue r_1 is reduced by one packet (i.e., an extra packet is withdrawn from queue f) and queue r_2 is increased by one packet. Constraint (20) ensures that a packet can be withdrawn from queue f. The reallocation of server k_1 ensures that queue r_2 contains at least one packet, so that the second intermediate single-server reallocation is feasible even when x̃_{r_2}(n) = 0. The same is true for any queue r_i, i ∈ {2, 3, . . . , m}. Therefore, constraints (19) and (20) are also sufficient for the feasibility of the interchange I(f, t).
"Balancing" interchanges
We call an interchange I(f, t) a balancing interchange if it withdraws a packet from a longer queue instead of a shorter one, i.e., if x̃_f(n) ≥ x̃_t(n) + 1. Balancing interchanges result in policies that may reduce the imbalance index, as the following lemma states.
Lemma 2. Consider two policies π* and π, related via the balancing interchange y*(n) = y(n) + I(f, t) at time slot n. Then the imbalance indices for the two policies are related via

κ_n(π*) = κ_n(π) − 2(s − l),  if x̃_f(n) ≥ x̃_t(n) + 2,
κ_n(π*) = κ_n(π),             if x̃_f(n) = x̃_t(n) + 1,    (23)

where l (respectively s) is the order of queue f (respectively t) in x̃(n) when ordered in descending order, such that s > l, x̃_[l](n) > x̃_[a](n) for all a > l, and x̃_[s](n) < x̃_[b](n) for all b < s. The proof is a direct consequence of Lemma Appendix .2.1 in Appendix Appendix .2 and the fact that, by definition of the balancing interchange, we have s > l.
In words, Equation (23) states that an interchange I(f, t), when balancing, results in either a cost reduction of 2(s − l) (when x̃_f(n) = x̃_[l](n) ≥ x̃_[s](n) + 2 = x̃_t(n) + 2) or an unchanged cost (when x̃_f(n) = x̃_t(n) + 1). The latter case agrees with intuition, since the balancing interchange in this case will result in simply permuting the lengths of queues f and t; this permutation does not change the total sum of differences (and hence the imbalance index) in the resulting queue length vector.
We determine next conditions that characterize what interchanges are balancing. We also describe how balancing interchanges transform an arbitrary policy to an MB policy.
How to determine balancing interchanges
Lemma 3 provides a selection criterion to systematically select balancing (and hence improving) interchanges. Lemma 4 provides a bound on the number of interchanges needed to convert any policy into one that has the MB property at time n. The proofs of the two lemmas are given in Appendix Appendix .3 and Appendix .4 respectively.
Lemma 3. Consider a given state (x(n), g(n)) and a feasible withdrawal vector y(n). Any feasible interchange I(f, t) with indices f and t such that D_f ≥ +1 and D_t ≤ −1 is a balancing interchange. We denote by π* the policy that chooses the withdrawal vector y*(n); in other words, π* denotes the policy that results from applying this sequence of interchanges.
Lemma 4. For any policy π ∈ Π n−1 , h π balancing interchanges suffice to determine a policy π * such that π * ∈ Π n . Lemma 3 can be used to identify queues f i and t i during time slot n such that the interchange I(f i , t i ) is balancing. Lemma 4 shows that performing a sequence of such interchanges, determines a policy that has the MB property for one more time slot. Both lemmas are crucial for the proof of our main result, since they indicate how a given policy can be improved using one balancing interchange at a time.
Optimality of MB Policies
In this section, we present the main result of this article, that is, the optimality of the Most Balancing (MB) policies. We will establish optimality for a range of performance criteria, including the minimization of the total number of packets in the system. We introduce the following definition.
Definition of Preferred Order
Let us first define the relation ⊑ on Z^{(L+1)}_+; we say x̃ ⊑ x if one of the following holds:
R1 - x̃ ≤ x: the two vectors are component-wise ordered;
R2 - x̃ is obtained from x by permuting two of its components; the two vectors differ only in two components i and j, such that x̃_i = x_j and x̃_j = x_i;
R3 - x̃ is obtained from x by performing a "balancing interchange", in the sense of Equation (21); i.e., the two vectors differ in two components i > 0 and j ≥ 0 only, where x_i ≥ x_j + 1, such that x̃_i = x_i − 1 and x̃_j = x_j + 1.
To prove the optimality of MB policies, we will need a methodology that enables comparison of the queue lengths under different policies. Towards this end, we define a "preferred order" as follows: Definition: (Preferred Order). The transitive closure of the relation ⊑ defines a partial order (which we call preferred order and use the symbol ≺_p to represent) on the set Z^{(L+1)}_+. For example, if x̃ = (3, 4, 5) and x = (4, 5, 3) then x̃ ≺_p x, since x̃ can be obtained from x by performing the following two consecutive two-component permutations: first swap the second and third components of x, yielding x_1 = (4, 3, 5); then swap the first and second components of x_1, yielding x_2 = (3, 4, 5) = x̃. Suppose that x̃, x represent queue size vectors for our model. Statement R3 in this case describes moving a packet from one real, large queue i to another smaller one j (note that the queue with index j = 0 is not excluded, since a balancing interchange may represent the allocation of an idled server). We say that x̃ is more balanced than x when R3 is satisfied. For example, if L = 2 and x = (0, 5, 2) then a balancing interchange (where i = 1 and j = 2) will result in x̃ = (0, 4, 3).
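To make the relation concrete, the small predicate below (our own sketch; it checks a single application of R1, R2 or R3, not the transitive closure that defines ≺_p) tests whether one queue-size vector is preferred over another in one step; index 0 again stands for the dummy queue.

def single_step_preferred(x_new, x):
    # True if x_new is related to x by one application of R1, R2 or R3.
    n = len(x)
    if all(a <= b for a, b in zip(x_new, x)):                      # R1: componentwise order
        return True
    diff = [i for i in range(n) if x_new[i] != x[i]]
    if len(diff) == 2:
        i, j = diff
        if x_new[i] == x[j] and x_new[j] == x[i]:                  # R2: two-component permutation
            return True
        if (x_new[i] == x[i] - 1 and x_new[j] == x[j] + 1          # R3: balancing interchange,
                and x[i] >= x[j] + 1 and i > 0):                   # packet moved from queue i to j
            return True
        if (x_new[j] == x[j] - 1 and x_new[i] == x[i] + 1
                and x[j] >= x[i] + 1 and j > 0):
            return True
    return False

# The example from the text: (0, 5, 2) -> (0, 4, 3) is a single balancing interchange.
assert single_step_preferred((0, 4, 3), (0, 5, 2))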
The class F of cost functions
Let x̃, x ∈ Z^{(L+1)}_+ be two vectors representing queue lengths. Then we denote by F the class of real-valued functions on Z^{(L+1)}_+ that are monotone, nondecreasing with respect to the partial order ≺_p; that is, f ∈ F if and only if

x̃ ≺_p x  ⇒  f(x̃) ≤ f(x).    (24)

From (24) and the definition of preferred order, it can be easily seen that the function f(x) = x_1 + x_2 + · · · + x_L belongs to F. This function corresponds to the total number of queued packets in the system.
For two real-valued random variables A and B, A ≤_st B denotes the usual stochastic ordering [2]. In the remainder of this paper, we say that a policy σ dominates another policy π if

f(X^σ(t)) ≤_st f(X^π(t)), ∀ t = 1, 2, . . .    (25)

for all cost functions f ∈ F. We will need the following lemma to complete the proof of our main result presented in Theorem 1.
Lemma 5. Consider an arbitrary policy π ∈ Π^h_τ for some time slot τ ≥ 1 and some integer 0 < h ≤ K. Then there exists a policy π̃ ∈ Π^{h−1}_τ that dominates π.
The full details of the proof of Lemma 5 are given in Appendix Appendix .5. The proof involves two parts. First, we construct a policy π̃ by applying a balancing interchange to π; using Lemmas 3 and 4, we show that π̃ ∈ Π^{h−1}_τ. Second, we prove that π̃ dominates policy π (see Equation (25)); this part employs coupling arguments.
The main result
In the following, X^{MB} and X^π represent the queue sizes under an MB policy and an arbitrary policy π, respectively.
Theorem 1. Consider a system of L queues served by K identical servers, as shown in Figure 1, with the assumptions of Section 1. A Most Balancing (MB) policy dominates any arbitrary policy when applied to this system, i.e.,

f(X^{MB}(t)) ≤_st f(X^π(t)), ∀ t = 1, 2, . . .

for all π ∈ Π and all cost functions f ∈ F.
Proof. From (24) and the definition of stochastic dominance, it is sufficient to show that X M B (t) ≺ p X π (t) for all t and all sample paths in a suitable sample space. The sample space is the standard one used in stochastic coupling methods [1]; see Appendix Appendix .5 for more details.
To prove the optimality of an MB policy, π M B , we start with an arbitrary policy π and apply a series of modifications that result in a sequence of policies (π 1 , π 2 , . . .). The modified policies have the following properties: (a) π 1 dominates the given policy π, (b) π i ∈ Π i , i.e., policy π i has the MB property at time slots t = 1, 2, . . . , i, and, (c) π j dominates π i for j > i (i.e., π j has the MB property for a longer period of time than π i ).
Denote the limiting policy as n −→ ∞ by π * . One can see that π * is an MB policy. Furthermore, π * dominates π i , for all i < ∞, as well as the original policy π.
Remark 2. The optimal policy may not be unique. Our main objective is to prove the optimality of the MB policy not its uniqueness. The optimality of MB policies makes intuitive sense; any such policy will tend to reduce the chance that any server idles. This is because an MB policy distributes the servers among the connected queues in the system such that it keeps packets spread among all the queues in a "uniform" manner.
The Least Balancing Policies
The Least Balancing (LB) policies are the scheduling policies, among all work-conserving (non-idling) policies, that at every time slot (n = 1, 2, . . .) choose a packet withdrawal vector y(n) ∈ Y(x, g) that "maximizes the differences" between queue lengths in the system (i.e., maximizes κ_n(π) in Equation (7)). In other words, if Π_LB is the set of all LB policies and Π_WC is the set of all work conserving policies, then

Π_LB = { π ∈ Π_WC : κ_n(π) = max_{σ ∈ Π_WC} κ_n(σ), n = 1, 2, . . . }.

Maximizing the imbalance among the queues in the system will result in maximizing the number of empty queues at any time slot, thus maximizing the chance that servers are forced to idle in future time slots. This intuitively suggests that LB policies will be outperformed by any work conserving policy. The next theorem states this fact. Its proof is analogous to that of Theorem 1 and will not be given here.
Remark 3.
A non-work-conserving policy can be constructed such that it performs worse than LB policies, e.g., a policy that idles all servers.
Theorem 2. Consider a system of L queues served by K identical servers, under the assumptions described in Section 1. A Least Balancing (LB) policy is dominated by any arbitrary work conserving policy when applied to this system, i.e.,

f(X^π(t)) ≤_st f(X^{LB}(t)), ∀ t = 1, 2, . . .
for all π ∈ Π W C and all cost functions f ∈ F.
An LB policy has no practical significance, since it maximizes the cost functions presented earlier. Intuitively, it should also worsen the system stability region and hence the system throughput. However, it is interesting to study the worst possible policy behavior and to measure its performance. The LB and MB policies provide lower and upper limits to the performance of any work conserving policy. The performance of any policy can be measured by the deviation of its behavior from that of the MB and LB policies.
Heuristic Implementation Algorithms For MB and LB Policies
In this section, we present two heuristic policies that approximate the behavior of the MB and LB policies respectively. We present an implementation algorithm for each one of them.
Approximate Implementation of MB Policies
We introduce the Least Connected Server First/Longest Connected Queue (LCSF/LCQ) policy, a low-overhead approximation of an MB policy, with O(L × K) computational complexity. The policy is stationary and depends only on the current state (X(n), G(n)) during time slot n. The LCSF/LCQ implementation during a given time slot is described as follows: the least connected server is identified and is allocated to its longest connected queue; the queue length is updated (i.e., decremented); we proceed accordingly to the next least connected server until all servers are assigned. In algorithmic terms, the LCSF/LCQ policy (Algorithm 1) can be described as follows. Let Q_j = {i : i = 1, 2, . . . , L; g_{i,j}(t) = 1} denote the set of queues that are connected to server j during time slot t; we omit the dependence on t to simplify notation. Let Q_[j] denote the connected-queue set of the j-th least connected server. The servers are considered in increasing order of |Q_j|; the j-th order server is allocated to the longest non-empty queue in Q_[j], i.e., to argmax_{k ∈ Q_[j]} (X_k | X_k > 0), and the chosen queue's length is decremented before the next server is considered.
Note that in line 5 of Algorithm 1, if the set Q_[j] is empty, then the argmax returns the empty set. In this case, the j-th order server will not be allocated (i.e., it will be idle during time slot t). Algorithm 1 produces two outputs when it is run at t = n: y(n) and q(n), as shown in line 9 of the algorithm. In accordance with the definition of a policy in Equation (6), the LCSF/LCQ policy can be formally defined as the sequence of time-independent mappings u(x(n), g(n)) that produce the withdrawal vector y(n) described in line 9 above.
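Since the listing of Algorithm 1 does not reproduce well here, the following Python sketch (our own rendering of the verbal description above, with variable names of our choosing) implements the LCSF/LCQ allocation for one time slot.

def lcsf_lcq(x, g):
    # x: list of L queue sizes; g: L x K 0/1 connectivity matrix.
    # Returns (y, q): y[i] packets withdrawn from queue i, q[k] the queue
    # assigned to server k (None = idle).
    L, K = len(x), len(g[0])
    remaining = list(x)                 # queue sizes net of packets already scheduled
    y, q = [0] * L, [None] * K
    # Least Connected Server First: consider servers in increasing order of connectivity.
    order = sorted(range(K), key=lambda k: sum(g[i][k] for i in range(L)))
    for k in order:
        # Longest Connected Queue among the non-empty queues connected to server k.
        candidates = [i for i in range(L) if g[i][k] and remaining[i] > 0]
        if not candidates:
            continue                    # server k idles during this slot
        j = max(candidates, key=lambda i: remaining[i])
        q[k] = j
        y[j] += 1
        remaining[j] -= 1
    return y, q

Apart from the initial sort of the servers, the sketch performs a single O(L) scan per server, in line with the O(L × K) complexity stated above.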
Lemma 6. LCSF/LCQ is not an MB policy.
To prove Lemma 6 we present the following counterexample. Consider a system with L = 4 and K = 7. At time slot n the system has the following configuration: the queue state is x(n) = (5, 5, 5, 4); servers 1 to 6 are connected to queues 1, 2 and 3, and server 7 is connected to queues 1 and 4 only. LCSF/LCQ allocates server 7 (the least connected server) to queue 1, its longest connected queue, and then allocates servers 1 to 6 among queues 1, 2 and 3, resulting in updated queue sizes of {2, 3, 3} for queues 1 to 3 and 4 for queue 4, with an imbalance index of 18. An MB policy instead allocates server 7 to queue 4, resulting in x̃(n) = (3, 3, 3, 3) and an imbalance index of 12. Hence LCSF/LCQ is not an MB policy.
The LCSF/LCQ policy is of particular interest for the following reasons: (a) it follows a particular server allocation ordering (LCSF) to their longest connected queues (LCQ) and thus can be implemented using simple sequential server allocation with low computational complexity; (b) the selected server ordering (LCSF) and allocation rule (LCQ) intuitively attempt to reduce the size of the longest connected queue, thus reducing the imbalance among queues; and (c) as we will see in Section 8, the LCSF/LCQ performance is statistically indistinguishable from that of an MB policy (implying that counterexamples similar to the one in the proof of Lemma 6 have a low probability of occurrence under LCSF/LCQ operation). Therefore, LCSF/LCQ can be proposed as an approximate heuristic for the implementation of MB policies.
Approximate Implementation of LB Policies
In this section, we present the MCSF/SCQ policy as a low complexity approximation of LB policies. We also provide an implementation algorithm for MCSF/SCQ using the same sequential server allocation principle that we used in Algorithm 1 above.
The Most Connected Server First/Shortest Connected Queue (MCSF/SCQ) policy is the server allocation policy that allocates each one of the K servers to its shortest connected queue (not counting the packets already scheduled for service), starting with the most connected server first. The MCSF/SCQ implementation algorithm (Algorithm 2) is analogous to Algorithm 1 except for lines 4 and 5: the servers are considered in decreasing order of connectivity, and each is allocated to its shortest connected non-empty queue, i.e., the argmax in line 5 is replaced by argmin_{k ∈ Q_[j]} (X_k | X_k > 0).
Comments analogous to the ones valid for Algorithm 1 are also valid for Algorithm 2.
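For completeness, the corresponding MCSF/SCQ sketch (again our own) only reverses the server ordering and replaces the longest-queue selection by a shortest-queue selection.

def mcsf_scq(x, g):
    # Most Connected Server First / Shortest Connected Queue heuristic.
    L, K = len(x), len(g[0])
    remaining = list(x)
    y, q = [0] * L, [None] * K
    order = sorted(range(K), key=lambda k: -sum(g[i][k] for i in range(L)))  # MCSF
    for k in order:
        candidates = [i for i in range(L) if g[i][k] and remaining[i] > 0]
        if not candidates:
            continue
        j = min(candidates, key=lambda i: remaining[i])                      # SCQ
        q[k] = j
        y[j] += 1
        remaining[j] -= 1
    return y, q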
Performance Evaluation and Simulation Results
We used simulation to study the performance of the system under the MB/LB policies and to compare it against the system performance under several other policies. The metric we used in this study is EQ = E(Σ_{i=1}^{L} X_i), the average total number of packets in the system.
We focused on two groups of simulations. In the first, we evaluate the system performance with respect to number of queues (L) and servers (K) as well as channel connectivity (Figures 3 to 7). Arrivals are assumed to be i.i.d. Bernoulli. In the second group (Figures 8(a) to 8(c)) we consider batch arrivals with random (uniformly distributed) burst size.
The policies used in this simulation are: LCSF/LCQ, as an approximation of an MB policy, and MCSF/SCQ, as an approximation of an LB policy. An MB policy was implemented using full search for the cases specified in this section, and its performance was indistinguishable from that of LCSF/LCQ. Therefore, in the simulation graphs the MB and LCSF/LCQ policies are represented by the same curves. The same is true for the LB and MCSF/SCQ policies. Other policies that were simulated include the randomized, Most Connected Server First/Longest Connected Queue (MCSF/LCQ), and Least Connected Server First/Shortest Connected Queue (LCSF/SCQ) policies. The randomized policy is the one that, at each time slot, allocates each server randomly and with equal probability to one of its connected queues. The MCSF/LCQ policy differs from the LCSF/LCQ policy in the order in which it allocates the servers: it uses the exact reverse order, starting the allocation with the most connected server and ending with the least connected one. However, it resembles LCSF/LCQ in that it allocates each server to its longest connected queue. The LCSF/SCQ policy allocates each server, starting from the one with the least number of connected queues, to its shortest connected queue. The difference from an LCSF/LCQ policy is obviously the allocation to the shortest connected queue. This policy will result in greatly unbalanced queues and hence a performance that is closer to the LB policies. Figure 3 shows the average total queue occupancy versus arrival rate under the five different policies. The system in this simulation is a symmetrical system with 16 parallel queues (L = 16), 16 identical servers (K = 16) and i.i.d. Bernoulli queue-to-server (channel) connectivity with parameter p = 0.2. The curves in Figure 3 follow a shape that is initially almost flat and ends with a rapid increase. This abrupt increase happens at the point where the system becomes unstable; beyond it the queue lengths grow quickly. The graph shows that LCSF/LCQ, the MB policy approximation, outperforms all other policies. It minimizes EQ and hence the queuing delay. We also noticed that it maximizes the system stability region and hence the system throughput as well. The MCSF/SCQ policy performed the worst. As expected, the performance of the other three policies lies between the performance of the MB and LB policies.
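A minimal simulation loop in the spirit of this study could look as follows (our own sketch; the horizon, seed and default parameters are illustrative, and the policy arguments are the sketches defined earlier).

import numpy as np

def simulate(policy, L=16, K=16, p=0.2, arrival_rate=0.15, horizon=100000, seed=0):
    # Estimates EQ = E(sum_i X_i) under the given per-slot policy.
    rng = np.random.default_rng(seed)
    x = [0] * L
    total = 0
    for _ in range(horizon):
        g = (rng.random((L, K)) < p).astype(int).tolist()   # i.i.d. Bernoulli connectivity
        y, _ = policy(x, g)
        z = rng.binomial(1, arrival_rate, size=L)           # i.i.d. Bernoulli arrivals
        x = [x[i] - y[i] + int(z[i]) for i in range(L)]
        total += sum(x)
    return total / horizon

# Example: compare the two heuristics, e.g. simulate(lcsf_lcq) vs. simulate(mcsf_scq).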
The MCSF/LCQ and LCSF/SCQ policies are variations of the MB and LB policies, respectively. The performance of the MCSF/LCQ policy is close to that of the MB policy; the difference in performance is due to the order of server allocation. On the other hand, the LCSF/SCQ policy shows a large performance improvement over that of the LB policy. This improvement is a result of the reordering of server allocations. Figure 3 also shows that the randomized policy performs reasonably well. Moreover, its performance improves as the number of servers in the system decreases, as the next set of experiments shows.
The Effect of The Number of Servers
In this section, we study the effect of the number of servers on policy performance. Figure 4 (K = 8) and Figure 5 (K = 4) show EQ versus arrival rate per queue under the five policies, in a symmetrical system with L = 16 and p = 0.2. Comparing these two graphs to the one in Figure 3, we notice the following: First, the performance advantage of LCSF/LCQ (and hence of an MB policy) over the other policies increases as the number of servers in the system increases. The presence of more servers implies that the server allocation action space is larger; selecting the optimal (i.e., MB) allocation out of a larger number of options yields a bigger performance gain over an arbitrary policy than when the number of server allocation options is small.
Second, the stability region of the system becomes narrower when fewer servers are used. This is true because fewer resources (servers) are available to be allocated by the working policy in this case.
Finally, we notice that the MCSF/LCQ performs very close to the LCSF/LCQ policy in the case of K = 4. Apparently, when K is small, the order of server allocation does not have a big impact on the policy performance.
The Effect of Channel Connectivity
In this section we investigate the effect of channel connectivity on the performance of the previously considered policies. Figures 6 and 7 show this effect for two choices of L and K. We observe the following: First, for larger channel connection probabilities (p ≥ 0.9), the effect of the policy behavior on the system performance becomes less significant, and the performance difference among the various policies gets smaller. The LCSF/LCQ policy still has a small advantage over the rest of the policies, even though it is statistically difficult to distinguish. MCSF/SCQ continues to have the worst performance. As p increases, the probability that a server will end up connected only to a group of empty queues becomes very small regardless of the policy in effect. In fact, when the servers have full connectivity to all queues (i.e., p = 1.0), any work-conserving policy minimizes the total number of packets in a symmetrical, homogeneous system of queues.
Second, from all graphs we observe that there is a maximum input load that results in stable system operation (maximum stable throughput). An upper bound (for stable system operation) on the arrival rate per queue α is given by

α L < K (1 − (1 − p)^L),    (29)

i.e., the average number of packets entering the system (αL) must be less than the rate at which they can be served. When p = 1.0, the stability condition in Inequality (29) reduces to αL < K, which makes intuitive sense in such a system. Finally, we observe that the MCSF/LCQ policy performance is very close to that of LCSF/LCQ. However, its performance deteriorates in systems with a higher number of servers and lower probabilities of queue-server connectivity. It is intuitive that with more servers available, the effect of the order of server allocations on the policy performance will increase. Since MCSF/LCQ differs from LCSF/LCQ only in the order of server allocation, more servers implies a larger performance difference. Also, the lower the connectivity probability, the higher the probability that a server will end up with no connectivity to any non-empty queue, and hence be forced to idle.
Batch Arrivals With Random Batch Sizes
We studied the performance of the presented policies in the case of batch arrivals with uniformly distributed batch size in the range {1, . . . , U}. Figure 8 shows EQ versus load for three cases with U = 2, 5, 10, and hence average batch sizes 1.5, 3, and 5.5. The LCSF/LCQ policy clearly dominates all the other policies. However, the performance of the other policies, including MCSF/SCQ (the LB approximation), approaches that of the LCSF/LCQ policy as the average batch size increases. The performance of all the policies deteriorates when the arrivals become burstier, i.e., when the batch size increases.
Final Remarks
The model and the results presented in this article can be regarded as a generalization (with the obvious added complexity as well as utility of our model) of the models and results reported by [3], [6], and [7]. In [3], the authors investigated the optimal scheduling policy for a model of L parallel queues and one randomly connected server. This model is a special case of the model we presented in this article, i.e., when K = 1. Using stochastic dominance techniques, they proved that LCQ is optimal in that it minimizes the total number of packets in the system. In our work, we also use stochastic dominance techniques to prove the optimality of MB policies for a wide range of cost functions (cost functions that are monotone, nondecreasing with respect to the partial order ≺ p ) including the total number of packets in the system. It can be easily shown that for the case of a single server (i.e., K = 1) the LCQ policy minimizes the imbalance index and therefore, LCQ belongs to the set of MB policies. In [6], the authors investigated the optimal policy for a model of L parallel queues with a stack of K servers. Each queue is randomly connected to the entire server stack. Only one server can be allocated to a queue at any time slot. In contrast, our model assumes independent queue-server connectivity, i.e., a queue can be connected to a subset of the K servers and not connected to the rest at any given time slot. We also allow for multiple servers to be allocated (when connected) to any queue. Therefore, the model in [6] can also be considered as a special case of our model, i.e., by letting g i,j (t) = g i (t), ∀i, j, t and by adding the feasibility constraint y i (t) ≤ 1. They proved that a policy that allocates the K servers to the K longest connected queues (LCQ) is optimal. Under the constraints above, this policy would also minimize the imbalance index among all feasible policies, i.e., this policy belongs to the set of MB policies. In [7] the authors proved that, in a model of two parallel queues (L = 2) and multiple randomly connected servers, a MTLB (maximum throughput/load balancing) policy minimizes the expected total cost. They defined the cost as a class of functions of the queue lengths for the two queues in the system. In our work, we generalize the model in [7] as follows: (a) we extend the model to L > 2, (b) we optimize the cost function in the stochastic order sense which implies the expected total cost used in [7], and (c) we relax the supermodularity and convexity constraints that they enforced on the cost function, i.e., we prove our results for a larger set of cost functions that includes theirs.
The authors of [7] defined the MTLB policy as the one that minimizes the lexicographic order of the queue length vector while maximizing the instantaneous throughput. We can show that the MTLB policy belongs to the set of MB policies. To do that, we have to show that a policy which minimizes the lexicographic order: (a) also minimizes the imbalance index, i.e., it belongs to the set of MB policies, and (b) is a work-conserving policy. A work-conserving policy minimizes the number of idling servers and hence maximizes instantaneous throughput (by the definition of instantaneous throughput). Lemma 7 states these results formally.
Lemma 7. Consider the state (x(n), g(n)) during time slot n. Let λ* be the leftover (updated) queue length vector resulting from a feasible withdrawal vector y*(n) ∈ Y(x(n), g(n)). Suppose that λ* ≤_lex λ for all feasible leftover vectors λ. Then: (a) the vector λ* achieves the minimum imbalance index among all feasible vectors, and (b) a policy that selects y*(n) is a work-conserving policy.
Proof. (a) Assume to the contrary that λ* does not minimize the imbalance index. Then there must exist a y'(n) ∈ Y(x(n), g(n)), y'(n) ≠ y*(n), such that the imbalance index of the resulting vector λ' is strictly less than that of λ*. This implies that a policy π* that results in the withdrawal vector y*(n), and therefore the vector λ*, belongs to the set Π^h_n \ Π_n for some h > 0, i.e., it does not have the MB property during time slot n. For any given state, a policy that minimizes the imbalance index must exist (the minimization is over a finite set). According to Lemma 4, h_π* balancing interchanges (which are feasible interchanges) are required to make any policy in Π^h_n belong to Π_n. Lemma D-1 shows that such balancing interchanges are feasible. Therefore, there exists a balancing interchange that is both feasible and enhancing (it reduces the imbalance index): a feasible server reallocation from the s-th longest queue to the l-th longest queue in the system during time slot n, for some l, s ∈ {0, 1, . . . , L} with l < s and λ*_s < λ*_l − 1. The resulting leftover vector λ' differs from λ* only in that its l-th longest entry is reduced by one and its s-th longest entry is increased by one. Since l < s by definition, it is clear that λ' ≤_lex λ* with λ' ≠ λ*. This contradicts the initial assumption. Therefore, λ* must have the minimum imbalance index.
(b) A feasible interchange y'(n) = y(n) + I(f, 0) is a balancing one, since by the definition of queue 0 and the interchange feasibility conditions we have x̃_f(n) > x̃_0(n) + 1. Queue 0 is permanently connected to all servers by assumption. According to Lemma B-1, this interchange will definitely reduce the imbalance index. Therefore, any policy that intentionally idles servers can always be improved (i.e., its imbalance index reduced) by using the balancing interchange I(f, 0) for some queue f ∈ {1, . . . , L}.
From part (a) of this lemma, we showed that a policy that minimizes the lexicographic order also minimizes the imbalance index. We also showed that a policy that idles servers intentionally cannot achieve the minimum imbalance index. Therefore, only a work-conserving policy can minimize the lexicographic order.
From the above, we conclude that the MTLB belongs to the class of MB policies.
Conclusion
In this work, we presented a model for dynamic packet scheduling in multiserver systems with random connectivity. This model can be used to study packet scheduling in emerging wireless systems. We modeled such systems via symmetric queues with random server connectivities and Bernoulli arrivals. We introduced the class of Most Balancing (MB) policies. These policies distribute the service capacity among the connected queues in the system in an effort to "equalize" the queue occupancies. A theoretical proof of the optimality of MB policies using stochastic coupling arguments was presented. Optimality was defined as minimization, in the stochastic ordering sense, of a range of cost functions of the queue lengths. The LCSF/LCQ policy was proposed as a good, low-complexity approximation of MB policies.
A simulation study was conducted to study the performance of five different policies. The results verified that the MB approximation outperformed all other policies, even when the arrivals became bursty. However, the performance of all policies deteriorates as the mean burst size increases, and we observed (through simulation) that the performance gain of the optimal policy over the other policies is greatly reduced in this case. Finally, we observed that a randomized policy can perform very close to the optimal one in several cases.
Appendix .1 Proof of Lemma 1
Proof. To prove part (a), assume that D = 0; then, using Equation (10), we have

y(n) = y^{MB}(n) and hence x̃(n) = x̃^{MB}(n).    (A-1)

From Equations (A-1) and (7), we have that κ_n(π) = κ_n(π^{MB}) and thus π has the MB property during time slot n.
To prove part (b), assume that π has the MB property at time slot n. Therefore, κ_n(π) = κ_n(π^{MB}). From Lemma Appendix .2.1 this is only possible if either: (i) x̃(n) = x̃^{MB}(n), or (ii) x̃(n) is obtained by performing a balancing interchange between the pair of the l-th and the s-th longest queues (l < s) in x̃^{MB}(n) such that x̃_[l](n) = x̃_[s](n) + 1; note that there may be multiple such queue pairs. The balancing interchange in case (ii) will affect the length of two queues only (call them i and j), with i = [l] and j = [s] (for each given pair). Therefore, y_i(n) = y_i^{MB}(n) ± 1 and y_j(n) = y_j^{MB}(n) ∓ 1, while withdrawals from the remaining queues will be the same, i.e., y_m(n) = y_m^{MB}(n) for all m ≠ i, j. From these relations (Equations (A-2) through (A-4)), we conclude that the vector D has components that are 0, +1, or −1 only.
Appendix .2 Balancing Interchanges and the Imbalance Index
In this section, we present a lemma that quantifies the effect of performing a balancing interchange on the imbalance index κ n (π).
Lemma Appendix .2.1. Let x and x * be two L + 1-dimensional ordered vectors (in descending order); suppose that x * is obtained from x by performing a balancing interchange I(l, s) between two components, l and s, of x, where x l > x s , such that, s > l; x l > x a , ∀a > l and x s < x b , ∀b < s. Then Proof. We generate the vector x * by performing a balancing interchange of two components, l and s (i.e., the l th and the s th largest components), in the vector x and reorder the resulted vector in descending manner. The resulted vector x * is characterized by the following: where l (respectively s ) is the new index (i.e., the order in the new vector x * ) of component l (respectively s) in the original vector x. From Equation (A-6) we can identify L − 2 elements that have the same magnitude in the two vectors x and x * . Therefore, the sum of differences between these L − 2 elements in both vectors will also be the same, i.e., We calculate the sums for the remaining terms (i.e., when at least one of the indices i, j belongs to {l, s} and/or i , j belongs to {l , s }) next. We first assume that x l ≥ x s + 2; in this case, we can easily show that l ≤ s . Then, we have the following five, mutually exclusive, cases to consider: 1. When i = l , i = l, j = s and j = s. This case occurs only once, i.e., when decomposing the double sum in Equation (A-5) we can find only one term that satisfies this case. From Equation (A-6) we have 2. When i = l , i = l, j = s and j = s. There are L − l terms that satisfy this case. Analogous to case 1) we determined that 3. When i = l , i = l, j = s and j = s. There are s − 2 terms that satisfy this case. In this case we can show that 4. When i = l , s , i = l, s, j = l and j = l. There are l − 1 terms that satisfy this case. In this case we can show that 5. When i = s , i = s, j = l , s and j = l, s. There are L − s + 1 terms that satisfy this case. In this case we have The above cases (i.e., Equations (A-7)-(A-12)) cover all the terms in Equation (A-5) when x l ≥ x s + 2. Combining all these terms yields: Furthermore, if x l = x s + 1, then from Equation (A-6) it is clear that x * l = x s and x * s = x l , i.e., the resulted vector is a permutation of the original one. Therefore, the sum of differences will be the same in both vectors and Equation (A-5) will be reduced to Equation
Appendix .3 Proof of Lemma 3
We first introduce a few intermediate lemmas that describe properties of I(f, t) and D.
Lemma Appendix .3.2. For a given policy π ∈ Π_{n−1} and a time slot n, Σ_{i=0}^{L} D_i = 0, i.e., the sum of all positive elements of D equals, in absolute value, the sum of all negative elements of D. Moreover, Σ_{i=0}^{L} |D_i|/2 is an integer between 0 and K. Proof. For any feasible withdrawal vector y(n), we have from Equation (4) that Σ_{i=0}^{L} y_i(n) = K; hence Σ_{i=0}^{L} D_i = Σ_{i=0}^{L} y_i^{MB}(n) − Σ_{i=0}^{L} y_i(n) = 0.

Lemma Appendix .3.3. Consider a given state (x(n), g(n)) during time slot n. Let f, t ∈ {0, 1, . . . , L} be any two queues such that I(f, t) is feasible. A policy π ∈ Π that results in x̃_t(n) ≤ x̃_f(n) − 2 does not have the MB property at time n.
Proof. The interchange I(f, t) is a balancing interchange by definition. Since x̃_t(n) ≤ x̃_f(n) − 2, the balancing interchange I(f, t) reduces the imbalance index (by 2(s − l) > 0) according to Equation (23). Therefore, π does not achieve the minimum imbalance index during time slot n, q.e.d.
Lemma Appendix .3.4. Given the state (x(n), g(n)) and a feasible withdrawal vector y(n) then a withdrawal vector y (n) that results from performing any sequence of feasible, single-server reallocations on y(n) is feasible.
The proof of Lemma Appendix .3.4 is straightforward and therefore it is not included here.
Lemma Appendix .3.5. Consider the state (x(n), g(n)) and any two feasible withdrawal vectors y(n) and y (n). Then, starting from y(n), the vector y (n) can be obtained by performing a sequence of feasible, single-server reallocations.
Proof. To prove this lemma we construct one such sequence next.
Let q(n), q (n) denote two server allocations for the implementation of y(n), y (n) respectively. Then we can relate y(n) and y (n) as follows: since we assume both y(n) and y (n) to be feasible, server k must be connected to both queues q k (n) and q k (n). Therefore, each interchange I(q k (n), q k (n)) is equivalent to a feasible, single-server reallocation. Note that q k (n) = q k (n) is possible, for some k, in which case I(q k (n), q k (n)) = 0. By construction, all the interchanges in the right hand side of Equation (A-16) are feasible.
We are now ready to prove Lemma 3 of Section 4.5.
Proof (Lemma 3). We consider the following three cases assuming f = t: Case 1: f = 0. This case is not possible by contradiction. By assumption, D 0 ≥ +1, which means that y M B 0 (n) ≥ y 0 (n)+1. This case states that an MB policy idled at least one more server than π. Therefore,x M B 0 (n) ≤ −1. This makes queue 0 the shortest queue. Allocating the idled server to queue t, i.e., the interchange I(t, 0), is both feasible (since y(n) is feasible by assumption) and balancing (by Lemma Appendix .3.1). The interchange I(t, 0) will result in a withdrawal vector y (n) = y M B (n) + I(t, 0). Let s be the order of queue f = 0 when ordering the vectorx M B (n) in a descending manner. Therefore, s = L + 1. Furthermore, in order for I(t, 0) to be feasible queue t must not be empty (according to feasibility constraint (20)) which implies thatx M B t (n) ≥ 1 and the order of queue t is l < s. Therefore,x M B f (n) ≤x M B t (n) − 2 and the interchange I(t, 0) will reduce the imbalance index by 2(s − l) according to Equation (23). This implies that the new policy has a smaller imbalance index than an MB policy. This contradicts the fact that any MB policy minimizes the imbalance index.
Case 2: t = 0. When t = 0 then the interchange I(f, t) is the process of allocating an idled server to queue f > 0. This, according to Lemma Appendix .3.1, is a balancing interchange. Case 3: t, f > 0. We will show that this case will also result in a balancing interchange. Let y(n) be the original withdrawal vector. Let y * (n) be the withdrawal vector resulted from the feasible interchange I(f, t), i.e., y * (n) = y(n) + I(f, t) Using the assumption D t ≤ −1 and Equation (16), we arrive at the following: Similarly, using the assumption D f ≥ +1 and Equation (15), we have Proof. Let π * ∈ Π M B be an MB policy that selects the withdrawal vector y * (n) during time slot n. Let D = y * (n) − y(n). Furthermore, let q * (n) and q(n) be two implementations of y * (n) and y(n) respectively. From Lemma Appendix .3.5 we have: The summation on the right-hand side of Equation (A-23) is composed of K terms, each of which represents a reallocation of a server k from queue q k (n) to queue q * k (n). Such server reallocation can be formulated as an interchange I(q * k (n), q k (n)). In the following, we will selectively use i out of the K terms of the summation in Equation (A-23) to construct a feasible interchange I(r 1 , r i+1 ) = I(r 1 , r 2 )+I(r 2 , r 3 )+· · ·+I(r i , r i+1 ) for some i ≤ K, with r 1 ∈ F and r i+1 ∈ T . We will show that such a queue r i+1 , i ≤ K that belongs to T does exist and the interchange I(r 1 , r i+1 ) is feasible.
In words, there is at least one more server allocated to queue r 1 under π * than the servers allocated to queue r 1 under π. Let k 1 be one such server. From (A-26) we conclude that one of the K terms in Equation (A-23) must be I(q * k 1 (n), q k 1 (n)) such that q * k 1 (n) = r 1 , q k 1 (n) = r 2 , k 1 ∈ {1, 2, . . . K}. In other words, a server k 1 and two queues r 1 = q * k 1 (n) and r 2 = q k 1 (n) must exist such that the interchange I(r 1 , r 2 ) is feasible.
The feasibility of I(r 1 , r 2 ) stems from the fact that server k 1 is allocated to queues r 1 and r 2 under two different policies, namely π * and π. This is possible only if g r 1 ,k 1 (n) = g r 2 ,k 1 (n) = 1 (A-27) Furthermore, using Equation (A-25) we can write From Equation (A-28) we concludê Equations (A-27) and (A-29) are sufficient for the feasibility of the interchange y(n) + I(r 1 , r 2 ).
Consider queue r 2 above. One of the following two cases may apply: Case (1) r 2 ∈ T : The proof of the lemma in this case is completed by letting f = r 1 and t = r 2 . The resulted interchange I(f, t), f ∈ F, t ∈ T is feasible by construction and the lemma follows.
Repeating the previous argument i times, 1 ≤ i ≤ K, we arrive at the following relationship: where by construction, each one of the i terms in Equation (A-34) above corresponds uniquely to one of the terms of the summation in Equation (A-23). For every i we check to see whether r i+1 ∈ T (in which case the lemma is proved) or not. If not then we have Repeating the argument K times (one for each term of the summation in Equation (A-23)) we will show that a queue r i+1 ∈ T , where I(r i , r i+1 ) is one of the terms in Equation (A-23), must exist.
In order to do that, we assume to the contrary that r i+1 / ∈ T, ∀i = 1, 2, . . . K. The K th (last) server reallocation I(r K , r K+1 ), r K = q * k K (n), r K+1 = q k K (n) will result in the withdrawal vector y K (n), such that, y K (n) = y(n) + K j=1 I(r j , r j+1 ) (A-36) Since there is one-to-one correspondence between the summation terms in Equation (A-36) and those in Equation (A-23) by construction, then we can write y K (n) = y(n) + K k=1 I(q * k (n), q k (n)) (A-37) hence y K (n) = y * (n). However, since r K+1 / ∈ T then y * r K+1 (n) ≥ y r K+1 (n) ≥ y K r K+1 (n) + 1 (A-38) have a contradiction. We conclude that there must exist a queue r i+1 ∈ T such that server k i reallocation I(r i , r i+1 ), r i = q * k i (n), r i+1 = q k i (n) is feasible. Let f = r 1 and t = r i+1 . It follows that the interchange I(f, t) = I(r 1 , r i+1 ) is feasible and the lemma follows.
Proof for Lemma 4. From its definition and Lemma Appendix .3.2, h π = L i=0 |D i |/2 is an integer between 0 and K. If h π = 0, then π has the MB property during time slot n according to Lemma 1.
So, suppose that h π > 0. From Equation A-15, we can pair queues f i and t i , 1 ≤ i ≤ h π , such that for every i, D f i ≥ +1 and D t i ≤ −1. From Lemmas Appendix .4.1 and 3, the interchange I(f i , t i ) is feasible and balancing.
Since we have h π such pairs of queues, then applying the h π balancing interchanges I(f i , t i ) described by Lemma 3 to policy π will result in a policy π * for which D * = 0, i.e., y * (n) = y M B (n). Hence the resulting policy π * ∈ Π n .
Appendix .5 Coupling Method and the Proof of Lemma 5
Appendix .
The Coupling Method
If we want to compare probability measures on a measurable space, it is often possible to construct random elements [1], with these measures as their distributions, on a common probability space, such that the comparison can be carried out in terms of these random elements rather than the probability measures. The term stochastic coupling (or coupling) is often used to refer to any such construction. In the notation of [1], a formal definition of coupling of two probability measures on the measurable space (E, E) (the state space, e.g., E = R, R d , Z + , etc.) is given below; further details regarding coupling method and its application can be found in [1]. A random element in (E, E) is a quadruple (Ω, F , P, X), where (Ω, F , P) is the sample space and X is the class of measurable mappings from Ω to E (X is an E-valued random variable, s.t. X −1 (B) ∈ F for all B ∈ E).
Definition: A coupling of the two random elements (Ω, F , P, X) and (Ω , F , P , X ) in (E, E) is a random element (Ω,F ,P, (X,X )) in (E 2 , E 2 ) such that Remark 4. The above definition makes no assumption about the distribution of the collection of random variables X; for example, X may be a sequence of non-i.i.d. random variables.
In the area of optimal control of queues, coupling arguments have been used extensively to prove characteristics of the optimal policies for many queuing systems, c.f. [19], [20], [3], [6] and many others.
We apply the coupling method to our proof as follows: Let ω and π be a given sample path of the system state process and scheduling policy. The values of the sequences {X(n)} and {Y (n)} can be completely determined by ω and π. We denote the ensemble of all random variables as system S. A new sample path,ω and a new policyπ are constructed as we specify in the proof. We denote the ensemble of all random variables (in the new construction) as systemS. Then, in the coupling definition,ω = (ω,ω) and the "coupled" processes of interest in Equation (A-39) will be the queue sizeŝ X = {X(n)} andX = {X(n)}.
The new policyπ is constructed (by showing howπ chooses the withdrawal vectorỹ(·)) as detailed in the proof. Then using Equation (1), the new states x(·),x(·) are determined under π andπ. The goal is to prove that the relationx (t) ≺ p x(t) (A-40) is satisfied at all times t. Towards this end, the preferred order (introduced in Section 5.1) can be described by the following property: Property 1:x is preferred over x (x ≺ p x) if and only if one of the statements R1, R2 or R3 (that we introduced in Section 5.1) holds. We restate these statements here for the sake of presentation: (R1)x ≤ x: the two vectors are component-wise ordered; (R2)x is a two-component permutation of x as described in Section 5.1.
Case (3)x(n) is obtained from x(n) by performing a balancing interchange for queues i and j as defined in property (R3). In this case x i (n) ≥ x j (n) + 1, by the definition in (R3) 8 . There are three cases to consider: (3.a) x i (n) = x j (n) + 1. Therefore,x i (n) = x j (n) andx j (n) = x i (n), i.e., the vectors x(n) andx(n) have components i and j permuted and all other components are the same. This case corresponds to case (2) above.
(3.b) x i (n) > x j (n) + 1 and y i (n) ≤ y j (n). We constructω as in case (1) above, and we letỹ m (n) = y m (n), ∀m = j. Note that it is not feasible for policy π to empty queue i in this case. Depending on whether π empties queue j or not at t = n, the construction ofπ will follow one of the following two cases: (i) y j (n) < x j (n), i.e., π does not empty queue j at t = n, then letỹ j (n) = y j (n) (i.e.,π is identical to π at t = n). In this case, property (R3) will be preserved regardless of the arrivals pattern 9 , hence (A-40) is satisfied at t = n + 1.
(ii) y j (n) = x j (n), i.e., π empties queue j at t = n. Then if under policy π all the servers connected to queue j are allocated, then letỹ j (n) = y j (n). As in case (i) above, property (R3) holds and (A-40) satisfied at t = n + 1.
In the event that π empties queue j without exhausting all the servers connected to queue j, thenπ will be constructed such that one of these idling servers is allocated to queue j, i.e.,ỹ j (n) = y j (n) + 1, so thatπ preserves the work conservation property at t = n. Sincex j (n) = x j (n) + 1 by property (R3) andz j (n) = z j (n) by construction, then we havẽ x j (n + 1) = x j (n + 1) = z j (n) Sincex i (n) = x i (n) − 1 by property (R3),z i (n) = z i (n) andỹ i (n) = y i (n) by construction, we havex i (n + 1) = x i (n + 1) − 1 The rest of the queues will have the same lengths in both systems at t = n + 1. Therefore, (R1) holds with strict inequality and (A-40) is satisfied at t = n + 1. This case shows that a "more" balancing policy results in a strict enhancement of the original policy.
Cases (i) and (ii) are the only possible ones, since π cannot allocate more servers to queue j than its length.
Note that policyπ belongs to Π h−1 τ by construction in Part 1; its dominance over π follows from relation (24).
The Development of Professional Identity and Professional Mentality of Youth
This article represents a theoretical analysis, systematization and generalization of the views of various scholars on understanding the content, structure and development of professional identity and professional mentality of university youth in the process of vocational training. It is proved that the current social, political, economic situation has led to the blurring of the guidelines necessary for both personal and professional self-determination, and as a consequence, the problem of finding professional identity is extremely important for modern young people. Professional identity is considered as a dynamic creation that includes a well-established, consistent, real and ideal professional image of the self, providing self-realization, development, inner integrity, personality determination, adequacy and stability of its self-concept regardless of situation changes, identity with profession and professional community, mature solution of professional tasks. It is shown that the content-forming goal of professional education, the result of professionalization is the development of a special professional mentality of the future specialist, which determines the peculiarities of perception of professionally significant objects, professional social attitudes and values of the individual and becomes a special form of their life and deep existence.
Introduction
The professional development of a person, taking place in youth, is intensive in the process of vocational training in a secondary vocational or higher educational establishment. At this time, young people purposefully master the system of knowledge and gain practical skills in the chosen career, acquire special personal traits and values, necessary in their professional activities.
Researchers of higher school problems note that it is during the student years that there is an intensive professional self-determination of the individual (V. Bodrov, R. Havighurst, E. Erikson, A. Markova, E. Sapogova, N. Tolstykh, E. Zeer). Stages of professional self-determination in youth, says V. Bodrov, are "characterized by considerable uncertainty" (Bodrov, 2006, p. 168). The fifth stage, the stage of professional training, has age limits either of 15-18 years or 17-23 years. At this stage, young people learn a system of knowledge, gain practical skills in the chosen career and have a conception of values in their activities. The sixth stage, the stage of professional adaptation, has an age range from 19-20 to 24-27 years. At this stage there is an adaptation to social and professional norms, working conditions, further development of self-determination in the chosen profession, awareness of the correct choice of a career path, coordination of life and professional goals and attitudes, formation of significant personality traits, development of professionally important qualities, special abilities, emotional and volitional qualities of personality. The next, seventh, stage is the stage of professional development; its limits are from 21-27 years to 45-50 years. According to this classification, the age of youth (the third decade of life) accounts for the fifth, the sixth, and the beginning of the seventh stage, which makes it difficult to generalize the characteristics of the process of professional selfdetermination at this age. In addition, this classification is more or less satisfactory to describe the professional development of people if they, having chosen a profession in adolescence, remain in it until the end of their lives. But this classification does not take into account the factor of professional mobility of young people.
N. Samoukina notes that the choice of profession is not always an indicator of professional self-determination: "professional self-determination may coincide with the choice of profession if a young person chooses a profession in accordance with their interests, aptitudes and abilities. The choice of profession may not coincide with the process of professional self-determination in cases where a young person "chooses" a profession by chance, for example, by the factor of proximity of work to a place of residence, fashion for the profession, work received by pulling strings, etc." (Samoukina, 2003, p. 29). Many modern young people deliberately do not link the initial choice of profession with a possible future career. Today, some undergraduate students seek to pursue a master's degree in another major. Therefore, a young person's choice of profession acquires a different personal meaning than it had even two or three decades ago.
Fewer and fewer modern students, as noted by N. Tolstykh, connect vocational education with their future work. The choice of the latter is influenced less and less by the content of training and more by such motives as salary, career prospects, etc. Young people aged 20-24 often combine study and work. Among the reasons that motivate students to work, "a large place is occupied by the needs associated with future professional employment, such as networking, self-realization in the profession, communication" (Tolstykh, 2016, p. 445). Today, when a large number of people hold higher education of uneven quality, the degree certificate itself is valued less than professional experience. At the same time, not only students but also employers often overlook such an important point as the formation of professional identity and professional mentality at the stage of study at the university. Whatever the grounds on which a young person chooses a profession and the educational institution in which they master it, the process of this training leaves a serious imprint on the whole personality of the young person.
Methodology
Theoretical understanding of research approaches to the problem of professional identity and professional mentality requires the implementation of general scientific methods of theoretical knowledge, including analysis, synthesis, disengagement, generalization, which allow deepening the understanding of the investigated concepts. With the help of content analysis, the structural components of professional identity and professional mentality are identified and the content of these concepts is determined. Of particular importance for the development of the theoretical and methodological basis of the study were: theoretical concepts of identity (E. Erikson, J. Marcia, T. Parsons, C. Cooley, J. Mead, P. Berger, T. Lukman, etc.), theory and concepts of social identity (E. Durkheim, S. Huntington, A. Kovalenko, N. Tajfel, J.S. Turner, V. Yadov, etc.), theoretical concepts of professionalization and professional self-determination (R. Havighurst, E. Zeer, A. Markova, Y. Povarenkov, N. Samoukina, N. Elman, J. Illfelder-Kaye, W. Robiner, etc.), the concepts of professional identity (K. Adams, S. Hean, P. Sturgis, J. Clark, L. Schneider, E. Ermolaeva, D. Isayeva, etc.), and theoretical concepts of professional mentality (E. Klimenko, D. Oborina, E. Sapogova, N. Tolstykh and others).
Results and Discussion
The question of the place of professional identity in the general structure of identification processes of the person is solved ambiguously. T. Parsons considers identity as a characteristic of the individual, which is formed in the process of interiorisation of social norms and values and passed on to subsequent generations in the process of socialization (Parsons, 1998). E. Durkheim developed a theory of transmission of social identities, according to which in traditional societies a person forms their identity directly from the culture, while in modern ones they are guided by general and specific for the type of social organization norms and values (Durkheim, 1997, p. 73-75). C. Cooley and J. Mead viewed identity as a result of social interaction, an ability to perceive oneself and the social world as a whole (Cooley, 2000). J. Mead distinguishes between conscious and unconscious identity: the unconscious one is a set of expectations emanating from the social environment of the individual; the conscious one is formed in the process of reflection by the personality of their self, their behaviour. Thus the conscious identity is formed by means of the categories fixed in a language as a result of social interactions (Mead, 1996, p. 225). I. Hoffman identified three types of identities: a social identity reflects the typification of the individual with others - "a social self"; a personal identity, a unique set of individual qualities of a particular person, which characterize them as an object in time and space - "a physical self"; a self-identity, the individual's subjective perception of their life situation and their own peculiarity - "a reflexive self" (Hoffman, 2000). J. Habermas defines personal and social identity as "two inseparable dimensions in which the balancing self-identity is realized: a personal identity provides a person's way of life and a social one - the ability to meet the requirements of all role systems to which a person belongs. In the interaction, a person clarifies their identity, striving to meet the normative expectations and expectations of the partner. At the same time, a person strives to express their uniqueness" (Habermas, 2002, p. 369). P. Berger and T. Lukman understand identity as a holistic "self-image" composed by an individual about themselves, which can be transformed under the influence of changes occurring in society and in the individual (Berger & Lukman, 1995, p. 279). According to E. Giddens, identity should be associated with a "social position that fixes the range of rights and responsibilities that a person can activate or perform" in different societal communities. E. Giddens considers identity as a cultural phenomenon of modern society that arises and is maintained in the daily life of the individual. The general identity is characterized by him as an often unconscious confidence of the individual in belonging to a collective, general feelings and percepts reflected in consciousness (Giddens, 2005, p. 142). S. Huntington identified several key points in the study of social identity: both individuals and groups have an identity, notably, individuals acquire and can change their identity only in groups; identities are determined by the "self", being the result of the interaction of a particular person or group with other people or groups; identities are constructs formed by people willingly or under duress; both groups and individuals have multiple identities (economic, cultural, political, national, professional) (Huntington, 2004, p. 50-53).
In Ukrainian and Russian science, the topic of social identity is revealed by V. Ageev, A. Kovalenko, O. Lytvynchuk, G. Kisla, V. Yadov and others. The concept of "social identity" is defined by V. Yadov as "awareness and feeling of belonging to different social communities ... a small group, class, family, territorial community, ethno-national group, people, social movement, state, humanity as a whole" (Yadov, 1995, p. 159). The sense of belonging to a social community performs important social and socio-psychological functions: it ensures the subordination of the individual to a social group, group protection, evaluation and self-evaluation criteria. As a result of research, the role of identity in the course of adaptation of the individual in the conditions of social changes, peculiarities of formation and integration into the integral structure of ethnic, professional and other significant identities of the individual has been studied.
The concept of identity in psychology is traditionally associated, above all, with the name of E. Erikson. Identity is a person's equivalence with themselves, a firmly mastered and personally accepted image of themselves in all the richness of the individual's attitude to the world, a sense of adequacy and stable ownership of one's own "self" regardless of its changes and the situation itself, the individual's ability to solve the problems arising before them at each stage of their development, integrity (continuity of personality over time). Identity is, first of all, an indicator of a mature personality, the origins of the organization of which are hidden in the previous stages of ontogenesis. Erikson identifies eight stages of identity development, in each of which a person chooses between two alternative phases of solving age and situational development problems. The fifth and sixth stages are characteristic of the period of youth. From Erikson's point of view, the fifth stage of 11-20 years is crucial for gaining a sense of identity. At this time, the adolescent oscillates between the positive pole of self-identification and the negative pole of role confusion. The teenager faces the task of combining everything he knows about himself as a son or daughter, schoolboy, friend, etc. He must unite everything into a single whole, comprehend it, connect it with the past and project it into the future. In the favorable course of the crisis, boys and girls develop a sense of identity. In the unfavorable course, a confused identity may form, with painful doubts about themselves, their place in the group and in society, and uncertainty about life prospects. Erikson calls such a crisis period between adolescence and adulthood, during which the individual undergoes complex processes of acquiring an adult identity and a new attitude to the world, a "psychological moratorium". Under certain conditions, this moratorium can last for years and form a state of "identity diffusion." The sixth stage of 21-25 years, according to Erikson, marks the transition to the solution of adult problems as such on the basis of the formed psychosocial identity. The question of the principal choice between establishing friendship or family ties and isolationism, characteristic of people with confused identities, is resolved (Erikson, 1996, p. 12-18).
The status model of identity is proposed by J. Marcia. Identity is the structure of the ego (self), the internal self-forming, dynamic organization of needs, abilities, beliefs and individual history. In the works by J. Marcia and his followers, considerable emphasis is placed on how a young person in the period of formation and search for the ego-identity wants and can meet the requirements of social reality, how effectively and in which way they "fit" into society. There are four options: diffusion of identity -there is neither effort to make a decision, nor the decision itself; resolved identity -the decision was made by someone else (suggested by parents, friends, etc.), there was no crisis as such; moratorium -postponement of decision-making in the presence of a crisis, active search for a solution; achieved identity -as a result of their own thoughts, efforts, decisions and commitments are made, certain life strategies are developed. For the period of adolescence and youth, the statuses of moratorium and achieved identity are the best (Marcia, 1980).
With all the variety of research approaches, the concept of identity remains one of the most complex and ambiguous, which requires new developments and applied research aimed at identifying the main tendencies and features of the formation of professional identity.
The key scientific works devoted to the study of professional identity belong to L. Schneider. Under professional identity, the author understands a complex integrative psychological phenomenon, the leading characteristic of a person's professional development, which indicates the degree of acceptance of the chosen professional activity as a means of self-realization and development, awareness of their identity with the group and assessment of their membership significance. It is the result of professional self-determination of a person who perceives the profession as a vocation. Possessing a formed professional identity, people identify themselves with the profession and consider themselves representatives of the professional community (Schneider, 2001).
From the point of view of E. Ermolaeva, professional identity is a product of long-term personal and professional development. It is formed at high levels of mastery of the profession, when there is a coordination of real and ideal professional images of the "self". E. Ermolaeva believes that "values regulate the direction, the degree of effort of the subject, determine the motives and goals of the professional activity organization. Values motivate activity and behavior, because the orientation of a man in the world and the desire to achieve certain goals are correlated with the values included into the structure of personality" (Ermolaeva, 2001, p. 51-59). A. Markova defines professional identity as a multilevel personal dynamic structure that includes conscious and unconscious aspects, which ensures inner integrity, identity and certainty of the individual at all stages of professional development, as well as their continuity and stability over time (Markova, 1996).
O. Nor-Arevyan, A. Shapovalova distinguish three groups of factors influencing the formation of professional identity: individual-personal (system of value orientations, motivation of the individual, the idea of the possibilities of the individual as a subject of activity, a high degree of responsibility for professional work), educational (professional training; practice-aimed orientation of education), socioprofessional (formation of the professional community, professional culture; demand for specialists in the field in the labor market, high prestige of the profession in society, stable working conditions, a sufficient salary for the future specialist and high social guarantees), which have both positive and negative aspects. In order to achieve professional identity and overcome crises of professional development, the following psychological qualities of personality play an important role: taking responsibility for professional work, establishing constructive relationships with colleagues, achieving goals, ability to respond to changes and adapt to them, tolerance to change, adequate perception of reality and themselves in the professional community (Nor-Arevyan & Shapovalova, 2016, p. 102-113).
Criteria for a successful process of formation of professional identity from the point of view of T. Malyutina are a positive self-esteem, a level of demands, satisfaction with professional tasks, positive attitude to professional activity, satisfaction of needs, responsibility for professional norms, requirements to the personality of the professional, acceptance of norms and values characteristic of the professional community, academic progress (Malyutina, 2014).
L. Schneider identified 4 types of professional identity: achieved professional identity -the most developed form of identity, which indicates that the identity crisis passed successfully; a person realizes what they want to achieve in the profession, has own professional ambitions, feels part of the professional community; premature professional identity is formed by the mechanism of imitation of parents and other important people; diffuse professional identity is characteristic of people with uncertain professional interests and goals; moratorium is lack of identity, because a person is in a state of identity crisis (Schneider, 2001).
The process of formation of professional identity has an uneven, crisis-prone nature, which, according to some authors, can lead to a professional crisis. At the stages of mastering the profession there is a conflict between the elements of the person's already existing identity and the situation, which changes as the person masters a new profession. To overcome the professional crisis, the individual must accept the new values of the professional community, adopt professional skills and qualities, and find ways to develop in professional activities.
In the work by N. Annenkova it is shown that the issue of the search for identity among modern youth, both in our country and in other countries, continues to be acutely relevant. Another important conclusion from this research lies in the fact that the formation of identity is uneven: "On the one hand, the spheres of identity formation act as areas of life development, in which it is impossible to move simultaneously and at an equal pace ... On the other hand, structural unevenness of identity formation manifests itself in its individual variants, where the statuses of identity, learned in the relevant areas of self-determination, can be intertwined in the most unusual way" (Annenkova, 2004, p. 22-23). While the statuses of moratorium and resolved identity are dominant for senior pupils in this area, students show a clear predominance of the achieved identity.
Y. Povarenkov identifies three stages of professional identity: the school stage, which is manifested in the inability of freshmen to realize themselves as students; the student stage, accompanied by increased self-esteem and the fixing of the social status of young people in the group; and the educational-professional stage, formed under the influence of industrial practice. The formation of the actual professional identity, according to the author, occurs only 3-4 years after the beginning of independent professional activity, after realizing the ineffectiveness of educational and professional identity (Povarenkov, 2002). D. Isayeva studied the peculiarities of the formation of personal and professional identity in adolescence at the stage of choosing a profession and in early adulthood at the stage of professional training of students. The age dynamics of the formation of personal identity in adolescence and early adulthood is reflected in the transition from the status of Diffusion in adolescence to the status of Moratorium in early adulthood. The most common status of professional identity at all the stages of development is Diffusion. The author suggested that the formation of personal identity is determined by the age stage of development, whereas the formation of professional identity is to a greater extent caused by the stage and conditions of professional development. The age of 19-20 years was identified by the researcher as sensitive for the formation of personal identity. During the same period, the preconditions for a crisis of professional identity are created, which is manifested in the increasing importance of "a professional self" in the structure of "a self-concept", a significant increase of anxiety about the professional future, and increasing intensity of crisis experiences of professional self-determination. Indicators of achieving identity in adolescence and early adulthood are satisfaction with the chosen profession, awareness of the possibility of professional self-realization, independence of professional choice, increasing importance of "a professional self" in the structure of "self-concept", and reduced fear, anxiety and indifference to the professional future (Isaeva, 2013, p. 84).
The results of the investigation carried out by K. Adams, S. Hean, P. Sturgis and J.M. Clark on a sample of 1254 freshmen from different professions in health and social care (H&SC) showed that professional identification begins at the stage of choosing a future profession, before a young person enters a higher education establishment. The investigators identified several factors that influenced the formation of professional identity: gender, profession and young people's perception of it, experience of work in the H&SC environment, understanding of the peculiarities of interpersonal cooperation and team work, and cognitive flexibility (Adams, Hean, Sturgis, & Clark, 2006). N. Elman, J. Illfelder-Kaye and W. Robiner believe that professional development (PD) is a broad, albeit vaguely defined, construct that underlies psychologists' education and training and is intrinsic to professional functioning, or professionalism, throughout psychologists' careers. They arrive at the conclusion that professionalism is the outcome of PD, and focus on two elements of professionalism - interpersonal functioning and thinking like a psychologist - and on training for professionalism as a foundation for competent practice in psychology (Elman, Illfelder-Kaye, & Robiner, 2005).
Many studies have been conducted comparing identity status with various other variables. In particular, it has been shown that young people with a resolved ego-identity tend to have warmer relationships with their parents, closer ties with the family than young people with other identity statuses. Boys and girls with resolved identities tend to go to their parents for advice and support when making vital decisions. Those who are in a state of moratorium or achieved identity are more critical of their parents and do not seek advice from them. Young people with a diffuse identity report the largest distance between them and their parents (Hjell, & Ziegler, 2005).
Thus, professional identity is a multidimensional, dynamic structure that includes a firmly mastered, consistent, real and ideal professional image of the self, the ability to fully solve professional problems at every stage of professional development. In the process of professional training in the period of youth there is a transformation from unstable, diffuse, narrowly localized identity into a more stable and conscious identity, aimed at self-realization in a wide professional community, combined with a higher level of awareness of their professional qualities.
The result of the professionalization of the individual carried out in higher education, apart from the ability to perform professional actions competently and a professional identity, is also a professional mentality, i.e. a system of personal characteristics of a person engaged in professional activities. Professional mentality is a complex and multidimensional formation that has not yet been sufficiently studied.
I. Kartushina understands the professional mentality as an integral quality of a specialist, which determines the choice of a way to solve professional issues on the basis of professional value orientations, traditional for a professional group attitudes and professional thinking (Kartushina, 2006, p. 69).
N. Tolstykh believes that the concept of professional mentality reflects the fact that when a person is involved in professional activities and in professional training for such activities, their attitude to the world, perception, thinking, and behaviour become professional. People are guided by the ideas that function at the level of mentality, consciously or more often unconsciously, not only in professional but also in everyday activities (Tolstykh, 2016, p. 439). D. Shtrikova understands professional mentality as "a system of conscious and unconscious socio-psychological attitudes of a man, including stereotypical thoughts, judgments, assessments that underlie collective ideas of professional activities, and individual ideas of their place in professional activities" (Shtrikova, 2012). The concept of "mentality" is defined by D. Oborina as "a set of deep, often unconscious and unreflected personal qualities that shape a person's attitude to the world and determine the choice of a particular behavior in everyday life situations" (Oborina, 1994, p. 42). Professional mentality is a number of common professional social attitudes, values, and peculiarities of perception of professionally significant objects and behavior in relation to them, which characterizes the professionals of a particular field. She believes that mentality refers not only to the intellectual sphere, but also to the emotional, motivational and behavioral spheres.
In the structure of professional mentality E. Klimenko identifies the following components: axiological, which is a system of moral values, social attitudes, moral and semantic constructs that are realized in professional activities; perceptual-mental ones are peculiarities of practical thinking and perception of situations of professional activity; regulatory that reflects cognitive styles that determine the manifestation of arbitrary intellectual control, metacognitive awareness, open cognitive position in professional activities (Klimenko, 2018, p. 39).
E. Sapogova believes that the professional mentality of a counseling psychologist acts as a sense-making component of "a psychologist's being" and is expressed by the ability to feel and comprehend reality psychologically, in the categories and concepts of psychological science, based on knowledge and understanding of the phenomenology of mental facts. In the training of professional psychological counseling not only the interiorization of special knowledge and skills is achieved, but also the development of special structures of consciousness - professional "functional organs", a professionally colored "image of the world" with its professional concepts and discourse, a system of attitudes to reality. Professionalization is associated with a kind of "tuning" of the student's consciousness to the perception of special psychological aspects of reality, which depends on understanding the essence of the activity for which higher education prepares a professional psychologist. The author defines such a "tuning" of consciousness through the category of empathy, going through fragments of the client's life, their problems and feelings, which may result in their personal development and an increase in cultural productivity. The ability to empathize, co-develop and co-create with the client, formed during vocational training, makes the psychologist a kind of psychological tool for professional work with other people and forms a special type of personal and professional identity. Notably, the main points, the essence of the professional counseling activity of a psychologist, become understanding and interpretation of the forms of identity of the people he consults. And it is such psychohermeneutic work that a psychologist must first be taught: a system of attitudes toward tolerant and positive perceptions of others and himself; deep reflexivity; a system of humanistically oriented internal norms, values and meaningful life orientations; a set of unconditional attitudes of self-acceptance, self-attitude, self-esteem, self-presentation, etc.; a dialogic communication culture; a set of skills and abilities that make it possible to "catch" significant moments of psychological reality, understand and summarize them, formulate hypotheses in the language of psychology, and, as a consequence, stimulate the personal growth of the client and update their internal psychological resources. It is consulting psychology, says E. Sapogova, that can claim the status of a system-forming subject in the process of learning the professional mentality and development of the professional identity of a psychologist. This process depends on the degree of identification with the profession, because being a psychologist is in itself a special form of a human's life. It is not just about the ability to consult, correct, and carry out therapy, but about the implementation of complex internal hermeneutic work to understand and interpret the forms of identity of people as a specifically organized quasi-reality, filled for them with real emotions and true meanings (Sapogova, 2018, p. 200-204).
From the point of view of V. Chupina, L. Gavrilenko and T. Serdyuk, professional mentality is determined by a special socio-psychological type of personality, the structure of which includes both typical forms of mental reflection of reality and specific systems of values, relationships and social attitudes, revealed in the focus on professional activity. The formation of the professional mentality of the student is presented by the authors as a dynamic process of changing the personality under pedagogical influence and through the student's own activity. This process is aimed at creating an image of the professional world in students' minds, the idea of themselves as a part of a professional community, and the feeling of belonging to this professional community and unity with it. All these factors allow the student to assess social reality and build a strategy of behavior in society in accordance with their internal position and beliefs about the profession (Chupina, Gavrilenko, & Serdyuk, 2015).
Conclusions
The results of the theoretical analysis of the problem of professional identity and professional mentality of students in their youth allow us to arrive at the following conclusions.
The content-forming goal of professional education is the formation of professional identity and special professional mentality of the future specialist.
We can assume that professional identity is a multidimensional, dynamic structure that includes a well-mastered, consistent, real and ideal professional image of the self, and the ability to fully solve professional problems at each stage of professional development. Professional identity provides self-realization, development, inner integrity, determination of the personality, identity with the profession and the professional community, and the adequacy and stability of the self-concept regardless of changes in the situation. Professional identity gives a person an idea of their place in a professional group and of the place of the professional group in the system of social relations. In the process of professional training in youth a transformation takes place from an unstable, diffuse, narrowly localized identity to a more stable and conscious identity, aimed at self-realization in a wide professional community and combined with a higher level of awareness of one's professional qualities.
Professional mentality is a system of personal characteristics of a person carrying out professional activities: professional social attitudes, values, and peculiarities of perception of professionally significant objects and of behavior in relation to them. Professional competencies act as practical means of implementing the professional mentality. The profession becomes a special form of the human's life, a deep existence, which radically changes the attitude of the professionalized subject to the fundamental phenomena of human existence. Having started to engage in certain activities, a person gradually acquires the traits inherent in these specialists. Commonality of conditions, modes of work, rest and everyday activities leads to the formation of a certain way of life inherent in the professionals of a certain group, which, in turn, largely determines the development of interests, attitudes, personal values, emotional orientation, special abilities, and manner of behavior and communication.
Having studied the content of all components of professional mentality and professional identity, it is possible to make empirical measurements and trace their development and structural and substantive transformations in students of different specialties and in people of different professional groups.
Applied hepatobiliary scintigraphy in chronic gallbladder diseases
Educational gaps are responsible for a significant variability in methodological practice in the use of CCK-CS with calculation of GBEF in patients with suspected gallbladder diseases. In addition, emerging evidence continues to refine the methodology of CCK-CS. Clinicians must remain abreast of current recommendations and innovative practices in order to provide the most specific CCK-CS test results in patients with suspected FGBD. This article will address the role of hepatobiliary scintigraphy (HBS) in the diagnosis and management of this often challenging clinical conundrum.
Chronic gallbladder-related conditions often manifest in abdominal biliary-type pains - also called biliary colic. As with any symptom, biliary colic is subjective and challenging to distinguish from abdominal pain originating from other organs. Hence, multiple imaging tests are used in what often ends up as a protracted diagnostic pursuit. This article will address the role of hepatobiliary scintigraphy (HBS) in the diagnosis and management of this often challenging clinical conundrum.
The terminology and disease nomenclature applicable in this context could be confusing. The starting point is differentiation of chronic gallbladder (GB) diseases based on their main pathophysiological sine qua non, which can be either structural (anatomical) or functional. In the former group one can directly visualize the culprit abnormality or its sequelae on anatomical imaging, such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI). The classic example of a structural condition would be chronic calculous cholecystitis (CCC), where US plays a central role by demonstrating stones in the gallbladder and other associated findings. However, gallstones seen on US could also be totally asymptomatic, and demonstration of poor GB contractility could differentiate incidental gallstones from those associated with symptomatic CCC. 1,2 In the functional group, there is typically no anatomical abnormality on imaging or even on the pathological specimen, such as in the case of functional gallbladder disorder (FGBD). This is where the diagnosis is based on characteristic symptoms and abnormal GB function, typically revealed by cholecystokinin HBS (CCK-HBS). 3 Other chronic conditions could be responsible for abdominal pain that is often clinically indistinguishable from the biliary colic. Some of them can be discerned from careful examination of CCK-HBS for non-GB clues.
Radiopharmaceuticals, pharmaceuticals and physiology
Development of Tc-99m labeled iminodiacetic acid (IDA) derivatives 4 provided us with an elegant way to study major elements of hepatobiliary physiology, pathophysiology and response to various stimuli. We can depict liver blood flow, hepatocellular function, bile formation and excretion into the bowel, as well as the dynamics of bile transit through the biliary tract into the bowel.
Hepatobiliary radiopharmaceuticals underwent a remarkable evolution that is detailed elsewhere. 5,6 Perfection of IDA derivatives led to development of the modern analogues that are probably the closest we have come in nuclear medicine to the concept of an ideal radiopharmaceutical. 7 They are actively taken up and transported intracellularly by the hepatocytes' organic anion-transporting polypeptide (similar to non-conjugated bilirubin). They are later excreted into the canaliculi unchanged via the apical ATP-dependent export pump. Tc-99m Disofenin or 2,6-diisopropylacetanilido iminodiacetic acid (DISIDA) and Tc-99m Mebrofenin or bromo-2,4,6-trimethylacetanilido iminodiacetic acid (BromIDA) are the two most commonly used today. Considering a patient's total bilirubin level, the clinical question addressed by the test, availability and cost, one can make the specific choice. If the latter two are of no concern, Tc-99m Mebrofenin is the ideal choice, for it has the best hepatic uptake and washout, while displaying the least activity outside the hepatobiliary system (vicarious excretion). 5 A typical adult dose is 200 MBq (5 mCi) of either compound injected as an intravenous bolus. BromIDA should be used in jaundiced patients, escalating the dose to 7.5 mCi and 10 mCi if the total bilirubin level reaches 4 mg/dl and 8 mg/dl, respectively. Pediatric patients should receive 7 MBq/kg (0.2 mCi/kg), but no less than 37 MBq (1 mCi). The gallbladder wall is the target radiation exposure organ, receiving a dose of 0.11 mGy/MBq. 8 The existence of cholecystokinin (CCK), a gastrointestinal hormone, was first suggested by physiologist Joy Simcha Cohen in 1905. Since then it became clear that CCK represents a family of peptides that vary in number of amino acids. It is the C-terminus with a sulfate group attached to the tyrosine in position 7 that is common to all of the family members and responsible for their hepatobiliary actions upon binding to the CCK-A receptor. CCK is synthesized and released by the mucosal lining cells of the duodenum and jejunum. The stimulus for its release is the entry of partially digested fats and proteins into the duodenum. CCK then hematogenously reaches the receptors of the liver (increasing blood flow and production of bile), pancreas (increasing production of pancreatic juice), the gallbladder (contracting its smooth muscle), and the sphincter of Oddi (relaxing it to allow bile and pancreatic juice to flow into the duodenum). When bile and pancreatic juice digest fats and proteins in the small intestine, the stimulation for release of CCK ends. Sincalide is a synthetic C-terminal octapeptide analogue of CCK that is readily available commercially for parenteral use and is used almost exclusively for diagnostic purposes in CCK-HBS.
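The dose-selection rules above reduce to a few numeric thresholds. The following minimal sketch, written in Python, simply restates those recommendations; the function name and its interface are invented for illustration and are not a substitute for institutional protocols.

```python
# Hypothetical sketch of the adult/pediatric dose-selection rules described above.
# Numeric thresholds come from the text; the function itself is illustrative only.

def ida_dose_mci(age_years: float, weight_kg: float, total_bilirubin_mg_dl: float) -> float:
    """Return a suggested Tc-99m IDA activity in mCi."""
    if age_years < 18:
        # Pediatric: 0.2 mCi/kg (7 MBq/kg), but no less than 1 mCi (37 MBq).
        return max(0.2 * weight_kg, 1.0)
    # Adult baseline: 5 mCi (200 MBq) of either Disofenin or Mebrofenin.
    dose = 5.0
    # In jaundiced patients the text recommends Mebrofenin (BromIDA) with dose escalation.
    if total_bilirubin_mg_dl >= 8.0:
        dose = 10.0
    elif total_bilirubin_mg_dl >= 4.0:
        dose = 7.5
    return dose

# Example: an adult with a total bilirubin of 5 mg/dl would receive 7.5 mCi.
print(ida_dose_mci(age_years=52, weight_kg=80, total_bilirubin_mg_dl=5.0))  # 7.5
```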
Patient preparation
Ensuring an optimal scintigraphic result starts with patient preparation. Biliary flow and GB motility constitute a complex process that can be dramatically altered by ingested food and medications. The patient must fast for at least four hours prior to the test. As explained above, most meals would produce a GB contraction that could last a long time, and radioactive bile would not be able to enter against the outgoing flow and high pressure generated by the contracted GB, possibly resulting in non-visualization (non-viz) of the GB. The last meal before a routinely scheduled morning test should occur at 9-10 PM and should contain a significant fatty and/or protein component that the patient is able to tolerate. This should empty the GB overnight, rendering it in the state of refilling at the start of scintigraphy in the morning. This ensures a prompt and optimal filling of the GB with radioactive bile during the test. After fasting for 12 hours or longer, on the other hand, the patient will have a partially filled GB and tracer appearance may be delayed and/or suboptimal. By 24 hours of fasting, the GB would be filled to capacity and may not accept any more secreted bile, causing GB non-viz during the test. This can be avoided by pretreating such patients with sincalide, infusing the total dose of 0.02 μg per kilogram over 15 to 60 minutes intravenously. The longer the infusion duration, the greater the emptying. 9 Greater emptying is expected to prompt greater refilling with radioactive bile during the test. Because sincalide has a short duration of action, the HBS can start 15-20 min after the sincalide infusion is completed. This preparation maneuver does not change the subsequent sincalide-stimulated GBEF. 10 Patients' medications must be screened for interaction and/or interference with CCK-HBS. The most impactful are opiates and opioid drugs, which must be discontinued for at least 4 half-lives of a given medication. They interfere with GB contractility by constricting the sphincter of Oddi (SO), which in turn increases resistance to GB emptying, preventing its effective contraction. They may also interfere directly with GB smooth muscle contractility. Other medications that interfere with GB contraction and should be withheld, if possible, include anticholinergic drugs, calcium channel blockers, oral contraceptive agents, histamine-2 receptor antagonists, and benzodiazepines. 11
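As a rough illustration of the preparation logic just described, the sketch below encodes the fasting window and the sincalide pretreatment rule for prolonged fasting. The function, its parameters and the returned dictionary keys are hypothetical conveniences, not part of any published protocol.

```python
# Hypothetical helper encoding the preparation rules described above (fasting window,
# sincalide pretreatment for prolonged fasting, and the total pretreatment dose).

def preparation_plan(hours_fasting: float, weight_kg: float) -> dict:
    if hours_fasting < 4:
        return {"proceed": False, "action": "Reschedule: patient must fast at least 4 hours."}
    plan = {"proceed": True, "action": "Proceed with tracer injection."}
    if hours_fasting >= 24:
        # Prolonged fasting: the GB may be full and refuse radioactive bile.
        # Pretreat with sincalide 0.02 micrograms/kg infused over 15-60 minutes,
        # then start imaging 15-20 minutes after the infusion ends.
        plan["action"] = "Pretreat with sincalide before tracer injection."
        plan["sincalide_total_dose_ug"] = round(0.02 * weight_kg, 2)
        plan["infusion_minutes"] = "15-60 (longer infusion gives greater emptying)"
        plan["start_imaging_after_min"] = "15-20"
    return plan

print(preparation_plan(hours_fasting=30, weight_kg=70))
```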
Imaging specifications
HBS includes an optional rapid blood flow (scintiangiography) phase and a slower dynamic (hepatobiliary) baseline phase. Optimal resolution and counting statistics can be obtained by acquiring images in a 128 x 128 matrix. The framing rate for the scintiangiography is one frame per second for 60 seconds, while the subsequent images for the parenchymal phase are acquired at one frame per 15 seconds for one hour. The flow is best viewed by re-framing the rapid phase into 3-5 seconds per displayed frame, while the slower dynamic parenchymal and biliary phases are reframed into 2-4 minutes per displayed frame. If further dynamic imaging is required following intervention, such as for imaging during sincalide stimulation of the GB (stimulation phase), it is typically acquired and displayed similarly to the hepatobiliary baseline phase. To minimize duodenal activity interference with GB activity measurements for ejection fraction (GBEF) calculation, the images are usually acquired in a 35-40° left anterior oblique projection, which renders the best separation of GB from duodenal activity (see Figure 1). Individualization on the basis of initial imaging may be needed in those who may have an unusual GB position (intrahepatic, vertical-posterior, etc.). SPECT, and particularly SPECT/CT, could sometimes be useful in resolving equivocal findings on delayed phases of HBS, especially in rare cases of ascertaining that a visualized activity accumulation is located within an atypically positioned GB. 12

FIGURE 1. The left upper image was taken in anterior projection at the end of the 60 min of the first-hour dynamic phase. There is a superimposition of the gallbladder (GB) activity (long-dashed outline) and duodenal activity (dotted outline). The schematic of the gamma camera taking the anterior image of the same patient is depicted below. Notice that the duodenal activity crosses the middle of the abdomen and the proximal small bowel curls on the left side of the abdomen. The axial CT taken from the same patient shows how the gamma rays emitted from the GB (red) and duodenum (green) project onto the camera with significant overlap. The images on the right side depict how projected activity from the GB is separated from the duodenal activity (no overlap) in the 40-degree left anterior oblique (LAO) view.
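For convenience, the acquisition and display parameters listed in this section can be collected in one place. The dictionary below is purely illustrative; its keys are invented for readability and do not correspond to any particular camera or workstation configuration format.

```python
# Illustrative summary of the acquisition and display parameters described above.
HBS_ACQUISITION = {
    "matrix": (128, 128),
    "flow_phase": {"frame_rate": "1 frame/second", "duration_s": 60,
                   "display_reframe": "3-5 seconds/frame"},
    "parenchymal_biliary_phase": {"frame_rate": "1 frame/15 seconds", "duration_min": 60,
                                  "display_reframe": "2-4 minutes/frame"},
    "stimulation_phase": "acquired and displayed like the baseline hepatobiliary phase",
    "projection_for_GBEF": "35-40 degree LAO (best separation of GB from duodenum)",
}
```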
Qualitative assessment of hepatobiliary scintigraphy
The scintiangiography phase may reveal gross abnormalities of the heart and the aorta, such as cardiomegaly or aneurysms. 13 Liver blood flow via the hepatic artery is typically faint, as it represents only 25% of the overall blood circulation through the organ. Activity in the liver begins to accumulate more rapidly upon recirculation, as blood returns via the portal vein 3 to 5 seconds later. It is just before the portal phase that a focal blush signals a lesion with arterial hypervascularization, such as hepatocellular carcinoma, adenoma, or focal nodular hyperplasia. Conversely, decreased flow can be seen in hypovascular lesions exemplified by a cyst. 14 Next is the hepatocellular or parenchymal imaging phase. The first 8-10 minutes of imaging offers a window into the functional hepatocellular integrity. Normally, the blood pool activity in the heart clears completely by the 8th minute (it may be faintly seen on the first 4-minute image), with the tracer concentrated densely in the liver and complete disappearance of cardiac activity. In cases of severe hepatocellular disease (hepatitis, cirrhosis, etc.) the cardiac blood pool activity can persist for hours following the injection. Avid tracer uptake allows one to examine the liver for a hepatocyte-replacing, space-occupying lesion (such as metastatic disease, hemangioma, liver abscess, liver cyst, etc.). An appearance of the tracer in the ductal system signals the beginning of the biliary dynamic phase. While the left and right hepatic ducts are typically seen, visualization of the segmental hepatic ducts is rare in the normal individual and may indicate pathology. Biliary tree dilation can range anywhere from a slight residual prominence in a previously obstructed system to a more pronounced appearance in a partial obstruction, such as sphincter of Oddi stricture or dysfunction. When the bile enters the duodenum it signals the beginning of the intestinal phase. The time from the radiotracer injection to this phase is commonly called the "biliary-to-bowel transit time." Depending on the bile production rate and the driving pressure gradients in the hepatobiliary system, this time may be as short as 10-15 minutes or as long as 1-2 hours. A recent meal, stimulating a lasting GB contraction with increasing bile flow, represents an example of the former. 15 In contradistinction, CCK administration 20 min before the study or a stimulating meal given 4-10 hours before the study usually causes preferential bile flow into the relaxing GB, with some activity in the CBD down to the level of the physiologically still constricted SO, which does not allow bile into the duodenum. 16 In some such cases activity in the duodenum may not be seen for up to 2 hours. A common response to this observation in practice is to wait for activity in the intestine before giving sincalide for CCK-HBS, supposedly to avoid stimulating the GB in cases of pathological CBD obstruction. The imaging characteristics of physiological and pathological delay of duodenal activity are distinctly different, as shown in Figure 2. There is no reason to delay CCK administration by waiting for activity appearance in the duodenum or small bowel if the imaging demonstrates the pattern of physiological delay. It is safe to proceed to CCK stimulation, particularly if the patient has no acute symptoms. Finally, the bile production rate plays an important role, explaining severe transit time delays in cases of nonobstructive intrahepatic cholestasis or severe hepatocellular disease.

FIGURE 2. A 45-year-old female patient with a history of chronic abdominal pain and normal ultrasound was referred for gallbladder (GB) function testing. (A) The first hour of hepatobiliary scintigraphy is shown. There is excellent tracer extraction, signifying excellent hepatocellular function. The biliary tree is beginning to fill with activity on the third frame, which is obtained at minute 8. It shows subtle activity in the GB (arrow) and the common bile duct (CBD) (arrowhead). This early appearance is secondary to the excellent patient preparation with a fatty meal ingested 8 hours prior that has emptied the GB overnight, rendering the GB in the re-filling phase at the time of testing. The sphincter of Oddi is physiologically closed in this phase, which in turn raises backpressure that drives the bile into the relaxing GB. The activity in the GB progressively increases while the activity in the liver parenchyma gradually decreases, and the activity of the common bile duct mildly fluctuates. Notice that no activity is entering the duodenum or proximal small bowel. This is a physiological variant that should raise no concern for CBD obstruction nor require any waiting before proceeding with sincalide stimulation. (B) Selected static images obtained at the times indicated on the corresponding labels in a different patient who had documented CBD obstruction. The liver takes up the radiotracer avidly, similar to the prior case, but it does not clear the activity into the biliary tree. There is no visualization of any part of the biliary tree nor is there activity in the GB or the bowel. At 24 hours there is vicarious excretion of some tracer into the urine, as seen in the bladder at the inferior edge of the field of view (dashed stem arrow).
FIGURE 3. A 32-year-old woman with abdominal pain after meals that she remembers suffering from for most of her life. The symptoms recently became more pronounced and she was referred for hepatobiliary scintigraphy with gallbladder (GB) ejection fraction. (A) The first-hour images show early GB visualization (arrow) with progressive accumulation of activity. The activity in the bowel is faintly seen towards the end of the imaging set because the intensity is set for optimal visualization of the GB, which is much more intense than the bowel. (B) The same image as shown in (A), but with an increased intensity setting to scale the activity for optimal bowel visualization (instead of optimal liver and GB visualization). There is a subtle visualization of blood pool activity in the abdominal aorta (arrow), which identifies the center of the abdomen. Using it as a guide to mark the center of the body on later images (green vertical lines
As the tracer fills the biliary system and the bowel, a concomitant parenchymal washout occurs that can be expressed numerically by T1/2. The washout is normally homogeneous throughout the liver. However, abnormally slow washout can be seen diffusely or focally. The former is best exemplified by hepatocellular dysfunction with intrahepatic cholestasis. The latter is typically seen in focal nodular hyperplasia (FNH), but also in hepatic adenoma and, rarely, in hepatocellular carcinoma. These three entities cannot be definitively differentiated by scintigraphy, but some general rules do apply. A typical triad for FNH consists of increased flow on the scintiangiography phase (76% of all FNH cases), increased or normal uptake of Tc-99m Sulfur Colloid and IDA compound, with frequent (92%) delayed focal washout on HBS. 17,18 Adenoma typically has unremarkable flow and reduced Tc-99m Sulfur Colloid uptake. Hepatocellular carcinoma usually has increased arterial flow, reduced Tc-99m Sulfur Colloid and IDA uptake, with rare instances of very slow washout that shows up as a "hot spot" on delayed HBS when normal parenchymal activity clears. However, the clinical value of these observations is limited, as there are no reliable data on sensitivity and specificity for these patterns.
Appearance of activity in the duodenum heralds the beginning of the intestinal phase on HBS. It is important to assess the pattern of bowel activity on all images. The most clinically consequential abnormal pattern of small bowel activity is visualizing activity in the second part of the duodenum on the patient's right side with failure to cross the midline, which strongly suggests the diagnosis of malrotation, as demonstrated in Figure 3. Such patients may present with abdominal pain that is clinically indistinguishable from the biliary colic. Symptomatic intestinal malrotation is a surgically curable cause of abdominal pain and can be identified on HBS, if a reader makes checking for duodenal activity going across the midline a habit.
Enterogastric reflux is a common incidental finding on HBS, depicting bile activity in the stomach. One study suggested that it is associated with chronic gastritis, which would be important to mention as it may explain abdominal symptoms in some patients. 19 Much less common is a finding of photopenic defects in the GB, which is most likely due to larger gallstones 20 and rarely could be mimicked by a bowel loop compressing the GB. 21 Another rare finding is a large abdominal mass that appears as a persistent focus of photopenia in the midst of activity-filled bowel loops. 22 An unusually dilated small bowel loop that fills only to a point may signal intestinal obstruction, 23 and a markedly dilated stomach filled with refluxed radioactive bile could be a secondary finding of a small bowel obstruction. 24
Quantitative analysis in hepatobiliary scintigraphy
By far the most common analytical application of HBS is quantitation of the GBEF. It is the difference in background-corrected GB counts between their maximal and minimal intensity, expressed as a percent of the former. While it is a simple calculation, care must be taken to confirm that the GB region of interest (ROI) contains the GB throughout this imaging segment. First, the images should be obtained in a projection that has the least likelihood of GB overlap with other activity-containing structures, of which the duodenum is the most important. Whenever available, examination of axial tomographic images, such as CT or MRI, should provide sufficient guidance in selecting the projection. In the typical anatomic arrangement of the abdominal organs the best projection for such imaging is around a 35- to 40-degree left anterior oblique view (Figure 1).
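Expressed as a formula, GBEF = (net maximum - net minimum) / net maximum x 100%, where "net" means the GB ROI counts after background correction. The following minimal sketch illustrates the arithmetic; the function and variable names, and the simple per-pixel scaling of the background ROI, are assumptions made for illustration only.

```python
import numpy as np

# Minimal sketch of the GBEF calculation described above: background-corrected GB
# counts at their maximum and minimum, expressed as a percentage of the maximum.

def gbef_percent(gb_counts, bkg_counts, gb_roi_pixels, bkg_roi_pixels):
    gb = np.asarray(gb_counts, dtype=float)
    bkg = np.asarray(bkg_counts, dtype=float)
    # Scale background counts to the size of the GB ROI before subtracting.
    net = gb - bkg * (gb_roi_pixels / bkg_roi_pixels)
    net_max = net.max()
    net_min = net[net.argmax():].min()  # minimum after the peak, i.e. post-stimulation
    return 100.0 * (net_max - net_min) / net_max

# Example with made-up per-frame counts during sincalide infusion:
print(round(gbef_percent([9000, 9500, 7000, 5200, 4100],
                         [1200, 1150, 1100, 1050, 1000],
                         gb_roi_pixels=300, bkg_roi_pixels=150), 1))  # 70.8
```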
It is common for the GB to change orientation with CCK infusion without any movement on the part of the patient, usually with the GB fundus moving up in the cranial direction and the GB assuming a more horizontal position. Such motion may cause a partial escape of the GB outside the ROI, which is commonly drawn on the early image and applied to the entire image set, leading to an erroneously higher GBEF. Patient motion can have a similar effect by moving the GB outside a stationary ROI, as shown in Figure 4. It is important for quality assurance to apply individual GB and background ROIs to each representative frame of the post-CCK phase of HBS to avoid making such a mistake. An inappropriately positioned background ROI that includes bowel activity could also occasionally lead to an erroneous result. Another common problem with a stationary GB ROI is an unintentional inclusion of nearby bowel activity that moves into the ROI towards the end of the study. All of these problematic and misleading results can be avoided by visually inspecting post-CCK images with the ROIs displayed. This must be available on all contemporary gamma camera systems along with the quantitative GBEF application. Interestingly, some colleagues advocate only visual (qualitative) assessment of images for characterization of gallbladder emptying. 25 Visual inspection is an important component of evaluation, but in contemporary nuclear medicine practice it is difficult to find justification for omitting computer processing for GBEF in favor of visual inspection alone.

FIGURE 4. (A) The images showed excellent accumulation of activity in the gallbladder (GB) (arrow), a small amount of activity in the biliary ducts, subtle activity in the liver parenchyma, and activity in the duodenum (arrowheads) that leads to activity in the proximal small bowel. (B) The result page after the processing of the post-sincalide stimulation study was done in the standard manner. The individual image frames are shown at the top in a 2-minutes-per-frame format. The composite image is created for drawing of the regions of interest (ROIs). The first 16 frames were added for the composite image. Shown are the ROI for the GB in green and for the background in blue. These ROIs produced the background-corrected time-activity curve for the GB (in green) and the background curve (in blue). The GB curve is somewhat noisy, which should suggest some interference from outside transiting activity or motion. The GB ejection fraction (GBEF) is 65%, which would be considered normal. The deficit of the display is that it does not show the ROIs displayed on each representative frame throughout the study. (C) This is another processing of the same data as in image (B), but using the program that displays the ROIs on each frame. In this processing the GB ROI is drawn on the very first image (not a composite) and then applied to the entire study. The quality control display is useful as it showed clearly how the GB moves outside of the ROI. This happens when the patient slides towards the feet on the table. In this processing attempt the GBEF was even higher at 98%. The correct approach to such a case is to either apply variable ROIs on each frame to outline the GB as it moves or to create the composite from the first to the last image and include the entire trajectory of the moving GB into a larger all-encompassing GB ROI. (D) This is an example of creating a composite image including all of the frames and drawing an all-inclusive GB ROI that takes into account all of the GB positions throughout its motion. In this case the fact that there was no interference from the bowel activity made it possible. If the bowel activity interfered with widening the GB ROI, the only other choice would be to draw an adjusted ROI on each frame of the study separately. The final alternative is to draw GB ROIs on the first and the last frames to calculate the EF.
It is critical to standardize the administration of sincalide to accurately assess GBEF. The SNMMI and inter-specialty groups have endorsed administration of 0.02 micrograms of sincalide per kilogram over 60 minutes. 11,26 The normal GBEF with the above recommended infusion rate is 38% or greater. This provides the least variability and the highest specificity as compared to shorter infusion times. 9 Taking a historical GBEF cutoff of 35% as normal, there are 27% and 10% false positive rates when the total dose of sincalide is infused over 15 and 30 min in normal volunteers, respectively. 9 The other finding of the study on infusion methodology conducted in 60 normal subjects is that all of those subjects showed GB activity by 60 min after radiotracer administration, and they all did so on three different occasions. 9 By inference, this should tell us that if the GB is not visualized by 60 min in a well-prepared patient, it is evidence of an abnormally functioning GB. Therefore, it is recommended not to pursue those cases with delayed imaging or, especially, the administration of morphine sulfate in order to visualize activity in the GB. It serves no purpose, except maybe in a rare instance of incidentally occurring acute cholecystitis in a patient referred for the GBEF study for chronic symptoms. Such a patient should be easily identifiable by a basic bedside evaluation.
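A tiny decision rule, shown below only as an illustration, applies the 38% cutoff discussed above to the 60-minute, 0.02 μg/kg infusion protocol; the function name and its behavior for non-standard infusions are assumptions of this sketch.

```python
# Illustrative rule: with 0.02 micrograms/kg of sincalide infused over 60 minutes,
# a GBEF of 38% or greater is considered normal. The cutoff applies only to this protocol.

def interpret_gbef(gbef_percent: float, infusion_minutes: int = 60) -> str:
    if infusion_minutes != 60:
        return "Non-standard infusion; the 38% cutoff is not validated for this duration."
    return "normal" if gbef_percent >= 38.0 else "abnormal (GB hypokinesia)"

print(interpret_gbef(29.0))  # abnormal (GB hypokinesia)
```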
It is common to image dynamically for the first hour after injection of the radiotracer. However, the first-hour imaging adds limited information when the goal is to determine GB function, and some have substituted one or a few static views to confirm activity in the GB prior to injecting CCK.27 We recently presented retrospective blinded reviews of our clinical experience, demonstrating that substituting the first-hour dynamic anterior-view imaging with 2 static views (anterior and right lateral, Figure 4) obtained one hour after radiotracer injection resulted in no significant diagnostic loss or misses.28
Clinical applications
In principle, the GBEF is used to identify an abnormally functioning GB (decreased GBEF; ie, GB hypokinesia) irrespective of the pathology causing the chronic symptoms. This group of conditions is rather heterogeneous, and the key symptom is a chronic, periodic abdominal pain, often biliary-like in character ("colicky"). First described with the help of cholecystokinin cholecystography,29-32 it remains a highly debated and evolving entity.33 It is attractive to consider GBEF as the objective differentiator between those who may respond to cholecystectomy and those who are not likely to be helped by this surgery. Most studies done to investigate this clinical application do not sub-select the subjects on the basis of pathology or lack thereof, but instead take all comers with abdominal symptoms that could have been biliary in origin. The other limitation is that all investigations on this topic, except for one study of patients with acalculous biliary pain,34 are retrospective. There are multiple factors, described below, that can influence GB contractile function and need to be considered as possible causes of false-positive or false-negative results, which would be difficult to identify in retrospective studies.
Application of GBEF in hospitalized patients, who are typically acutely ill, undergoing active medical treatment, and often suffering from nausea, abdominal pain and other gastrointestinal symptoms, warrants a word of caution. It is likely that in such cases GBEF may be abnormal as a result of pharmacological, hormonal, or neural influences, causing a significant reduction in the specificity of the test.35,36 Therefore, experts recommend using this test in clinically stable outpatients only.37 For example, increased GB contractility is observed with cholinergic agonists,38,39 hypercalcemia,40 erythromycin,41 nonsteroidal anti-inflammatory drugs,42 and vagotomy.43 These circumstances may promote false-negative results. But false-positive studies are probably even more common and can result from reduced GB contractility secondary to opioids,44 endotoxins associated with severe intercurrent illness,45 hyperglycemia,46 somatostatin,47-51 diabetic neuropathy,52 spinal cord injury,53 achalasia,54 inflammatory bowel syndrome,55 liver cirrhosis,56 and progesterone therapy.57 It is because of this often inconspicuous complexity that one finds such a diversity of results in the overwhelmingly retrospective literature. However, the majority of studies report a high value of abnormal GBEF in predicting success of cholecystectomy for pain relief,34,58-66 while a minority offer opposing views.1,67,68
Functional gallbladder disorder (FGBD)
The current principal criteria for the diagnosis of FGBD include biliary pain and the absence of GB stones or other structural pathology.3 A low GBEF was included as a supportive criterion for the diagnosis of FGBD in 2016.3 The other supportive criteria include normal liver enzymes, normal conjugated bilirubin, and normal amylase/lipase.3 This disorder is of as yet unknown etiology. It is important to recognize that designating FGBD a disorder emphasizes abnormality or disturbance of function. There is a great deal of confusion in the literature, which contains numerous names for it that were developed over the years and by different specialties. These include "biliary dyskinesia," "GB dyskinesia," "chronic acalculous cholecystitis," "acalculous cholecystopathy," "chronic acalculous biliary disease," "acalculous biliary disease," and probably some others. It is therefore not unusual to find one of the above names in reports of CCK-HBS. The multispecialty consensus statement of 2011 offered the following recommendation for the CCK-HBS impression statement in cases with abnormal GBEF and normal anatomical imaging findings: "Abnormal GBEF of X% is consistent with functional gallbladder disorder in the proper clinical setting."11 It is important to adhere to this recommendation in order to improve the consistency of our reporting.
Chronic calculous cholecystitis
The presence of gallstones (cholelithiasis) in the general population is as high as 1 in every 5 people.69 They are classified into asymptomatic and symptomatic. Establishing asymptomatic cholelithiasis is obvious when there are no abdominal complaints. But this seemingly simple division is complicated in many patients with abdominal pain because of inherent challenges in eliciting and interpreting subjective symptoms. The combination of typical chronic biliary symptoms and anatomical demonstration of cholelithiasis is reliable evidence of chronic cholecystitis, requiring no further diagnostic evaluation prior to cholecystectomy. However, additional diagnostic testing may be useful in patients with atypical abdominal symptoms and cholelithiasis in order to affirm a causal relationship by demonstrating abnormal GBEF. Administration of sincalide in patients with cholelithiasis could be viewed by some professionals as unsafe because of the concern of dislodging a stone and precipitating biliary tract obstruction and/or pain. The fact remains that there are studies that used sincalide in patients with known gallstones, and none reported obstructive complications. Abdominal pain was reported in 1/67 patients with gallstones during sincalide infusion.1 The consensus of specialists also found no evidence for this concern and considered sincalide testing safe.11 The literature experience shows that the majority of patients (>75%) with gallstones and abdominal symptoms have normal GBEF.2 This means that their abdominal pain is of non-GB etiology. On the other hand, abnormal GBEF was a strong predictor of biliary pain recurrences.
Non-gallbladder findings as the cause of abdominal pain
It is important to carefully examine the images for other potential causal findings. The finding of malrotation, as shown in Figure 3, is one such causal finding that, while very rare, is definitely most consequential. Another finding to watch for is increased peristalsis, which may indicate irritable bowel syndrome when activity transits rapidly into the colon after sincalide administration.70 This finding needs to be clinically correlated by the referring service. In many cases one can observe some duodenogastric reflux of bile, which, when prominent, should be suggested as a potential cause of bilious gastritis.71
Practical interpretation algorithm
In patients with chronic abdominal pain and abnormal GBEF on CCK-HBS, the pertinent information should be queried for the absence or presence of gallstones and/or sludge. If the anatomical study is normal, the most appropriate interpretation would be to suggest the diagnosis of FGBD. If, on the other hand, stones and/or sludge are present, the interpretation should implicate chronic calculous cholecystitis. While this represents a simple algorithm, it is probably too simple to fully capture the clinical reality. It is well understood that with time the poor GB motility observed in FGBD may lead to the formation of sludge and later probably results in GB stones. Yet the above interpretational algorithm offers a reasonable and logical approach. In those with a normal CCK-HBS, it is important to scrutinize the study for causal non-GB findings, as sketched below.
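The branch points above can be written down directly. The sketch below (Python; the 38% cutoff is the normal threshold for the 60-minute infusion quoted earlier, and all names are illustrative) is a literal transcription of the text, not a validated clinical tool.

```python
def interpret_cck_hbs(gbef_percent, stones_or_sludge):
    """Suggested impression for chronic abdominal pain worked up with CCK-HBS."""
    NORMAL_CUTOFF = 38.0  # % for the standard 60-min, 0.02 µg/kg sincalide infusion
    if gbef_percent < NORMAL_CUTOFF:
        if stones_or_sludge:
            return "Abnormal GBEF with stones/sludge: chronic calculous cholecystitis"
        return (f"Abnormal GBEF of {gbef_percent:.0f}% is consistent with functional "
                "gallbladder disorder in the proper clinical setting")
    return "Normal GBEF: scrutinize the study for causal non-GB findings"
```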
Conclusion
HBS continues to enjoy frequent application in clinical gastroenterology, particularly in the workup of chronic biliary pain. The most appropriate indication remains suspected FGBD in patients with biliary-type or atypical chronic abdominal pain and negative findings on anatomical imaging. The preponderance of evidence favors using abnormal GBEF as a pathophysiological rationale for identifying abnormal GB function in these patients and for selecting patients for cholecystectomy. Adherence to the standard GB stimulation methodology is critical for preventing false-positive results and calls for administration of 0.02 micrograms of sincalide per kilogram of body weight infused over 60 minutes. However, more evidence is needed to establish the utility of this test in patients with cholelithiasis now that the sincalide infusion has been optimized and standardized.
FIGURE 3. A 32-year-old woman with abdominal pain after meals that she remembers suffering from for most of her life. The symptoms recently became more pronounced and she was referred for hepatobiliary scintigraphy with gallbladder (GB) ejection fraction. (A) The first-hour images show early GB visualization (arrow) with progressive accumulation of activity. The activity in the bowel is faintly seen towards the end of the imaging set because the intensity is set for optimal visualization of the GB, which is much more intense than the bowel. (B) The same image as shown in (A), but with an increased intensity setting to scale the activity for optimal bowel visualization (instead of optimal liver and GB visualization). There is subtle visualization of blood-pool activity in the abdominal aorta (arrow), which identifies the center of the abdomen. Using it as a guide to mark the center of the body on later images (green vertical lines) shows that the duodenum and the proximal small intestine (arrowheads) stay on the right side of the abdomen, without the expected transition of duodenal activity to the left side of the abdomen. (C) The patient underwent sincalide stimulation over 60 minutes with the standard dose of 0.02 micrograms per kilogram of body weight. The images obtained dynamically in the 40-degree left anterior oblique view show moderate GB emptying with a calculated ejection fraction of 45% (the result page is not shown). Notice that it is considerably more difficult to appreciate the lack of bowel activity in the left abdomen in this projection (the first image in the left upper corner). The bowel findings are always more conspicuous in the anterior projection. (D) This drawing shows the normal 270-degree rotation and fixation of the midgut that results in normal positioning of the bowel. The mesenteric attachment is normally broad (dotted line) and provides for normal small bowel mobility. (E) This drawing shows the position of the bowel in malrotation, which also results in a much narrower mesenteric fixation (dotted line), making the midgut prone to volvulus. Abnormal fibrous peritoneal bands of Ladd (the 4 thin lines) that attach to the right colon predispose to internal hernia in older patients. Notice the similarity between the pattern of the duodenal and small bowel layout on this schematic and image (B), where the bowel activity is annotated with arrowheads.
Association of a variant in the gene encoding for ERV1/ChemR23 with reduced inflammation in visceral adipose tissue from morbidly obese individuals
Obesity comorbidities are closely associated with chronic low-grade adipose tissue inflammation. A number of SNPs associated with inflammation have been identified, underscoring the impact of genetic determinants on this process. Here, we screened SNPs in genes with pro-inflammatory (IL-1β, IL-6, STAT3 and JAK2), anti-inflammatory (IL-10 and SOCS3) and pro-resolving (ERV1/ChemR23) properties in 101 obese and 99 non-obese individuals. Among the SNPs analyzed, we identified that individuals carrying a C allele in the rs1878022 polymorphism of the ERV1/ChemR23 gene, which encodes the receptor of the pro-resolving mediator RvE1, had increased ERV1/ChemR23 protein expression and reduced levels of the inflammatory cytokine IL-6 in adipose tissue. Moreover, patients carrying the C allele in homozygosity had lower plasma levels of IL-6, IFN-α2, IL-15, IL-1ra, IL-10, GM-CSF, G-CSF and VEGF and enhanced leukocyte responsiveness to RvE1. C-carriers also exhibited a decreased TAG to HDL ratio, a surrogate marker of insulin resistance and a predictor of incident fatty liver. Finally, we confirmed in vivo that the ERV1/ChemR23 receptor regulates systemic and tissue inflammation, since mice lacking ERV1/ChemR23 expression showed increased IL-6 levels in adipose tissue and peritoneal macrophages. Together, our study identified an ERV1/ChemR23 variant that protects patients with obesity from an excessive inflammatory burden.
levels interferes with the timely resolution of inflammation as recently demonstrated in patients with chronic metabolic diseases and experimental models of obesity 7,[9][10][11][12] . Although the biological actions of SPMs have been a subject of ample research, whether these lipid mediators and their role in the resolution process are influenced by genetic factors is currently unknown.
In the current study, we identified a functional SNP in the gene encoding for ERV1/ChemR23, the receptor that recognizes the anti-inflammatory and pro-resolving mediator resolvin E1 (RvE1), a SPM endogenously derived from the long-chain highly-unsaturated fatty acid, eicosapentaenoic acid (EPA) 13,14 . In particular, in this study we provide evidence that in comparison to obese individuals carrying the ancestral allele T, those individuals carrying the C variant in the ERV1/ChemR23 rs1878022 SNP exhibit a higher expression of this receptor in visceral adipose tissue, a reduced degree of adipose tissue inflammation and hepatic insulin resistance and significantly lower levels of circulating inflammatory cytokines and chemokines. In addition, LPS-stimulated leukocytes from obese individuals carrying the C variant were more responsive to RvE1, suggesting that this variant enhances RvE1-induced anti-inflammatory responses. The role of ERV1/ChemR23 in the regulation of the inflammatory tone was confirmed in mice lacking this receptor (ChemR23 −/− ), which, as compared to wild-type (WT) mice, displayed a greater degree of inflammation in visceral adipose tissue, liver tissue and peritoneal macrophages. Altogether, our data provide evidence of the influence of genetic factors on SPM actions and resolution of inflammation in patients with obesity.
Results
The demographic and clinical characteristics of the patient cohorts are shown in Table 1. One hundred one of the 200 participants were morbidly obese, and 68% were women. The obese cohort was significantly younger than the non-obese group (45 ± 1 vs. 63 ± 3 years, P < 0.001). Non-obese patients had a body mass index (BMI) of 26.4 ± 0.9 kg/m2, whereas patients from the obese cohort had a significantly increased BMI (45.3 ± 0.7 kg/m2). There were no statistically significant differences in serum glucose, triglyceride (TG), total cholesterol, gamma-glutamyl transferase (GGT), alanine aminotransferase (ALT) and aspartate aminotransferase (AST) levels between the two study cohorts (Table 1). No statistically significant differences were observed in blood leukocyte, monocyte and platelet counts (Table 1).
Using TaqMan® allelic discrimination, we first analyzed seven SNPs in genes involved in the regulation of the inflammatory response: interleukin (IL)-1β (rs1143634), IL-6 (rs1800795), signal transducer and activator of transcription 3 (STAT3, rs8069645), Janus kinase 2 (JAK2, rs7849191), IL-10 (rs1800871), suppressor of cytokine signaling 3 (SOCS3, rs8064821) and ERV1/ChemR23 (rs1878022). These SNPs have previously been associated with inflammation or insulin resistance in the setting of metabolic, hepatic or proliferative disorders (Supplementary Table 1). All the selected candidate gene variants had minor allele frequencies (MAF) consistent with those reported in the HapMap project CEPH-CEU (Utah Residents with Northern and Western European Ancestry) (Table 2). The distribution of genotypes for all the SNPs was consistent with Hardy-Weinberg equilibrium (Table 2). Single-locus analysis under the dominant inheritance model in the whole study group identified a minor allele variant (C) in the ERV1/ChemR23 gene sequence in association with the cohort of obese individuals (Table 3). This SNP (rs1878022) is an intronic variant located within ERV1/ChemR23 on human chromosome 12 (Supplementary Fig. 1). This gene encodes a seven-transmembrane G-protein-coupled receptor that binds the specialized pro-resolving lipid mediator RvE1 14 as well as chemerin, a protein that participates in numerous cellular processes such as adipogenesis 15. No other SNP studied was found in association with the obese cohort (Table 3). TaqMan® SNP genotyping results for the ERV1/ChemR23 rs1878022 SNP were confirmed by Sanger DNA sequencing (Supplementary Fig. 2A). The genotype analysis revealed that the CC and TC genotypes were more frequent in obese individuals, whereas the TT genotype was more frequent in non-obese individuals (Supplementary Fig. 2B). We next determined whether the ERV1/ChemR23 variant had a functional role in omental adipose tissue from obese individuals. Morphometric assessment of ERV1/ChemR23 protein levels as detected by immunohistochemistry revealed increased positive staining for this receptor in adipose tissue from obese patients carrying the C allele, reaching statistical significance in individuals that were homozygous (CC) (Fig. 1A). The adipose tissue expression of ERV1/ChemR23 at the mRNA level was significantly higher in heterozygous obese patients carrying the C allele (Fig. 1B). Notably, IL-6, a marker of inflammation, was significantly reduced in omental adipose tissue from obese individuals carrying the C allele (Fig. 1C). We were able to confirm the inverse relationship between the expression of ERV1/ChemR23 and IL-6 in 3T3-L1 adipocytes, in which ERV1/ChemR23 expression increases during differentiation (Supplementary Fig. 3). Of interest, no differences in CD68, a macrophage surface marker, were observed in omental adipose tissue from obese individuals carrying the C allele (Fig. 1D), suggesting that variations in the degree of inflammation associated with the ERV1/ChemR23 variant were probably related to changes in the expression of inflammatory genes rather than to the number of infiltrated macrophages.
We next explored whether changes in the degree of adipose tissue inflammation in obese patients carrying the different ERV1/ChemR23 rs1878022 genotypes were mirrored by differential circulating levels of cytokines. As shown in Fig. 2A, obese patients carrying the minor C allele had lower plasma IL-6 levels than those carrying the TT genotype, reaching statistical significance in homozygous CC individuals. Other inflammatory and immunomodulatory cytokines such as interferon alpha-2 (IFN-α2) and IL-15 were also significantly reduced in plasma from CC individuals (Fig. 2B). Of interest, decreased levels of anti-inflammatory (interleukin-1 receptor antagonist (IL-1ra) and IL-10) (Fig. 2C), hematopoietic (granulocyte/macrophage colony-stimulating factor (GM-CSF) and granulocyte colony-stimulating factor (G-CSF)) (Fig. 2D) and pro-angiogenic (vascular endothelial growth factor (VEGF)) (Fig. 2E) cytokines were also seen in obese patients carrying the C allele. No differences were found in IL-8 and monocyte chemoattractant protein-1 (MCP-1) (Supplementary Fig. 4). In view of these findings, we hypothesized that circulating leukocytes from individuals carrying the C variant in the ERV1/ChemR23 receptor, which recognizes the pro-resolving mediator RvE1, would exhibit a more favorable anti-inflammatory environment than leukocytes from non-C carriers. To address this, we triggered LPS-induced inflammation in leukocytes from C-carrier and non-C-carrier individuals in the presence or absence of RvE1 and quantified the inflammatory response by means of IL-6 expression. As shown in Fig. 2F, leukocytes from C-carrier patients were more sensitive to RvE1 and showed a higher capacity to attenuate the LPS induction of IL-6 than those isolated from non-C carriers (∼34% vs 15% reduction). Obesity-induced inflammation is a well-established risk factor for metabolic comorbidities, such as insulin resistance and non-alcoholic fatty liver disease. As shown in Fig. 3A, obese patients carrying the minor C allele in homozygosity exhibited a significantly reduced TG to high-density lipoprotein cholesterol (HDL-c) ratio, a surrogate marker of insulin resistance and a predictor of incident fatty liver and increased risk for cardiovascular disease in human subjects independently of obesity 16-19. Plasma insulin and GGT showed a trend towards reduced levels in patients carrying the minor C allele (Fig. 3B and C). Together, these findings identify obese patients carrying the minor C allele as a subset of subjects with a lower risk of metabolic syndrome.
To confirm that changes in the expression of ERV1/ChemR23 are linked to variations in the degree of systemic and tissue inflammation, we next explored the inflammatory phenotype of mice lacking ERV1/ChemR23 (ERV1/ChemR23 −/− mice). As compared to WT mice, no differences were observed in ERV1/ChemR23 −/− mice with respect to body weight and white adipose tissue and liver to body weight ratios (Supplementary Table 2). No changes were seen in total serum cholesterol, HDL-c, low-density lipoprotein cholesterol (LDL-c) and TGs (Supplementary Table 2). As expected, mice lacking ERV1/ChemR23 showed decreased expression of this receptor in peritoneal macrophages, visceral adipose tissue and liver (Fig. 4A). Compared to their WT controls, and consistent with the view that ERV1/ChemR23 is a key regulator of inflammation, ERV1/ChemR23 −/− mice had significantly increased expression of the pro-inflammatory cytokine IL-6 in peritoneal macrophages and adipose tissue (Fig. 4B). MCP-1 was also up-regulated in adipose tissue of ERV1/ChemR23 −/− mice (Supplementary Fig. 5). Expression of IL-6 was slightly higher in livers from ERV1/ChemR23 −/− mice, although the differences did not reach statistical significance (Fig. 4B). The absence of changes in hepatic IL-6 in ERV1/ChemR23 −/− mice was confirmed at the protein level (Supplementary Fig. 6). ERV1/ChemR23 −/− mice had increased levels of ALT, a surrogate serum marker of liver injury, and showed an increasing trend in insulin resistance and serum glucose levels (Fig. 5A and B). However, no evidence of steatosis was observed in these mice, since the hepatic expression of CD36 and SREBP-1c, two genes invariably up-regulated during the steatotic process, remained unchanged (Fig. 5C).
Discussion
Inflammation plays a critical role in host defense against invasive pathogens and in tissue and wound repair. Inflammation not only occurs in response to pathogens but can also be induced without active infection (sterile inflammation). This is the case in obesity, a condition in which the immune system is engaged in a low-grade inflammatory response in several insulin-sensitive tissues, especially adipose tissue 20. The stressors of this inflammatory response are diverse, including over-nutrition, high levels of lipids, free fatty acids and glucose, oxidative stress, and hypoxia secondary to the expansion of adipose tissue volume 21. A particular feature of obesity-induced inflammation is that it is chronic and of low intensity 20. Importantly, this state of chronic low-grade inflammation is central to the pathogenesis of obesity-associated comorbidities such as insulin resistance 3,4. For example, insulin resistance closely correlates with visceral obesity, ectopic fat deposition in muscle and liver, hypertension, dyslipidemia, endothelial dysfunction and elevated levels of adipokines such as TNFα, IL-6 and leptin 22,23. This inflammatory environment progresses to an overproduction of reactive oxygen species and adipokines, which perpetuate the inflammatory response, ultimately leading to obesity-related complications 3,4.
Given that persistent unresolved inflammation is detrimental to the host, higher organisms have evolved protective mechanisms to ensure resolution of the inflammatory response in a specific, time-limited manner 24. Among the mechanisms that facilitate resolution, the biosynthesis of SPMs, a class of endogenous lipid mediators which includes, among others, resolvins, protectins and maresins generated from the omega-3 fatty acids EPA and docosahexaenoic acid (DHA), has been described to efficiently resolve inflammation with minimal damage to the surrounding tissue 8,25. In particular, RvE1 is formed from EPA during the resolution phase of acute inflammation via cell-cell interactions such as endothelial cell-leukocyte interactions 13,14. RvE1 biosynthesis involves cells bearing acetylated cyclooxygenase (COX)-2 and cells that possess 5-lipoxygenase (5-LOX), although it can also be generated by cytochrome P450 processing of EPA 26. RvE1 blocks and counter-regulates the production of inflammatory mediators, inhibits polymorphonuclear leukocyte transendothelial migration, and stimulates macrophages to enhance phagocytosis and clearance of apoptotic leukocytes 27,28. RvE1 actions are mediated by its binding to ERV1/ChemR23, a G-protein-coupled receptor expressed in monocytes/macrophages, immature dendritic cells, and adipocytes 14,28. Therefore, our observation that cells and tissues from patients carrying the C variant in the rs1878022 polymorphism of the ERV1/ChemR23 gene exhibit a more favorable anti-inflammatory environment in the context of higher expression of this receptor can be explained by these individuals being more responsive to the anti-inflammatory actions of RvE1. Indeed, our data demonstrate that leukocytes from C carriers show enhanced responsiveness to RvE1 in blocking LPS-induced inflammation compared with those from non-C carriers. Similar results were obtained in differentiating adipocytes, in which higher expression of the RvE1 receptor ERV1/ChemR23 inversely correlated with lower expression of the inflammatory cytokine IL-6. However, it must be considered that ERV1/ChemR23 has other ligands, such as the peptide chemerin, which possesses antimicrobial and immunomodulatory properties 29,30. Consequently, we cannot exclude mechanisms other than RvE1, such as that described by Luangsay et al. 31, who reported that the ERV1/ChemR23 receptor mediates anti-inflammatory actions of chemerin in a lung disease model. Several SNPs located within the ERV1/ChemR23 DNA sequence have been identified by genome-wide association studies, demonstrating the impact of these variants on the natural course of multifactorial diseases. For example, rs17040430 has been associated with mental disorders in a cohort of 4436 patients with schizophrenia and/or bipolar disorder 32. Another ERV1/ChemR23 SNP, rs107291463, has been strongly associated with the development of erectile dysfunction after radiotherapy 33. Finally, the SNP rs1878022, identified in the current study as a protective variant against obesity-induced inflammation, has previously been demonstrated to be associated with poor overall survival in non-small cell lung cancer patients 34. As far as we know, our study is the first investigation in which a genetic variant of ERV1/ChemR23 has been associated with the degree of inflammation in both visceral adipose tissue and the systemic circulation.
Indeed, our observation reinforces the idea that resolution of inflammation, like many other biological processes, is influenced by our genetic background. In a previous report, Simiele and collaborators reported the presence of a SNP variant in the promoter of the gene coding for formyl peptide receptor 2 (FPR2)/ALX, a G-protein-coupled receptor that binds the pro-resolving mediators lipoxin A4 and RvD1 35. These investigators reported that individuals carrying this SNP had a reduced expression of the pro-resolving FPR2/ALX receptor and were more vulnerable to developing cardiovascular disease 35. On the other hand, Kim and collaborators have recently reported a gene variant in FPR2/ALX that conferred significant protection to asthma patients against the development of aspirin-exacerbated respiratory disease, which in this case was associated with an increased protein expression of this pro-resolving receptor 36. Finally, a previously unappreciated pattern of methylation that renders FPR2/ALX transcriptionally inaccessible, leading to low expression of this pro-resolving receptor at both the mRNA and protein level, has recently been uncovered 37. This latter finding suggests that the expression of pro-resolving receptors is not only dependent on genetic variants but is also regulated by epigenetic factors.
An interesting aspect of our study is that we were able to confirm our findings in obese individuals at the experimental level, describing a close relationship between ERV1/ChemR23 expression and the degree of inflammation in mice lacking the ERV1/ChemR23 receptor. In these mice, we confirmed that the expression of this receptor inversely correlates with the levels of inflammatory markers in visceral adipose tissue. Our findings are consistent with those of Demoor et al. 38, who reported that mice lacking ERV1/ChemR23 exhibit impaired resolution of cigarette smoke-induced inflammation. Moreover, our data point to the presence of insulin resistance in these mice, results consistent with those previously reported by Ernst et al. 39, showing that ERV1/ChemR23 knockout mice are glucose intolerant. However, these findings do not agree with those of Rouger et al. 40, who reported unaltered glucose tolerance in these mice.
In summary, the results of the present study provide evidence that a SNP variant in the ERV1/ChemR23 gene is associated with increased expression of this receptor and with a lower degree of tissue inflammation and lower circulating levels of cytokines and chemokines in a population of morbidly obese individuals. Importantly, patients carrying this SNP variant exhibit a lower risk of obesity-associated comorbidities such as insulin resistance. The inverse relationship between ERV1/ChemR23 expression and the degree of inflammation was confirmed in ERV1/ChemR23 knockout mice, which displayed a greater degree of inflammation in visceral adipose tissue, liver, and peritoneal macrophages. Collectively, our data indicate that the ERV1/ChemR23-RvE1 axis is susceptible to modulation by genetic changes, corroborating that the resolution of inflammation is also regulated at the DNA level.
Material and Methods
Study participants. One hundred one patients with morbid obesity (BMI > 30 kg/m2) undergoing laparoscopic bariatric surgery and 99 non-obese patients (BMI < 30 kg/m2) undergoing elective gastric surgery were included in the study. BMI was calculated as mass/(height)2. Individuals with inflammatory bowel disease or cancer, and obese patients with previous bariatric surgery, were excluded from the study. Demographic and clinical data and drug use were collected from the electronic medical records of the patients. Venous blood samples (5 ml) were collected in EDTA tubes, and visceral (omental) adipose tissue samples were intra-operatively harvested with sharp dissection, for genotyping and gene expression analysis, respectively. Harvested adipose tissue samples were weighed on a precision balance, washed twice with DPBS, cut into 100 mg pieces, placed in either 10% formalin or snap-frozen in liquid nitrogen, and stored in Nunc® CryoTubes at −80 °C for further analysis. The study protocol (protocol #2012-7239) was approved by the Investigation and Ethics Committee of the Hospital Clínic, and all methods were carried out in accordance with the guidelines and regulations dictated by this Committee. Written informed consent was obtained from all participants.
Biochemical analyses and cell count. Serum concentrations of glucose, total cholesterol, HDL-c, TG, ALT, AST and GGT, and blood leukocyte, monocyte and platelet counts, were determined by standard laboratory procedures. LDL-c was calculated as (total cholesterol − HDL-c) − (TG/5), i.e., the Friedewald estimate.
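Both derived quantities used in this study, the LDL-c estimate above and the TG/HDL-c ratio described in the next subsection, are simple arithmetic. A minimal sketch follows (Python; function names are ours, and we assume values in mg/dL, the units for which the TG/5 factor of the Friedewald formula applies):

```python
def friedewald_ldl(total_chol, hdl_c, tg):
    """LDL-c estimate in mg/dL: (total cholesterol - HDL-c) - (TG/5).
    Standard caveat: the estimate is unreliable when TG exceed ~400 mg/dL."""
    return (total_chol - hdl_c) - (tg / 5.0)

def tg_hdl_ratio(tg, hdl_c):
    """TG/HDL-c ratio, used here as a surrogate marker of insulin resistance."""
    return tg / hdl_c
```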
Assessment of insulin resistance. Insulin resistance was assessed by calculating the ratio between serum TG and HDL-c levels, a surrogate marker of insulin resistance and a predictor of incident fatty liver and increased risk for cardiovascular disease in human subjects independently of obesity 16-19.

DNA extraction. Blood samples were centrifuged at 800 g for 15 min, and genomic DNA was isolated from the buffy coat containing the nucleated cells using the Qiasymphony MIDI Kit on a Qiasymphony SP instrument (Qiagen, Hilden, Germany). In some patients, DNA was extracted from visceral adipose tissue using the Qiasymphony MINI Kit (Qiagen) for tissue samples.
Allelic Discrimination. Seven SNPs from seven candidate genes involved in the inflammatory response were screened. SNP selection was based on the inclusion of genes related to the inflammatory process. Among these, the screening included SNPs in genes coding for pro-inflammatory (IL-1β, IL-6, STAT3 and JAK2), anti-inflammatory (IL-10 and SOCS3) and pro-resolving (ERV1/ChemR23) factors and signaling pathways. The selection was based on previously published results showing allelic associations between these polymorphisms and human diseases (Supplementary Table 1). SNPs were genotyped using TaqMan® SNP Genotyping assays in an ABI 7900HT Sequence Detection System (Applied Biosystems, Foster City, CA). The TaqMan® SNP Genotyping assay was based on an oligonucleotide (probe) labeled with a reporter fluorescent dye (FAM™ or VIC™) and a quencher dye (TAMRA™), linked covalently to the 5′ and 3′ ends, respectively. The TaqMan probes used for genotyping were: IL-10 rs1800871 (C_1747362_10), IL-6 rs1800795 (custom assay), STAT3 rs8069645 (C_30301828_10), SOCS3 rs8064821 (C_43672951_10), ERV1/ChemR23 rs1878022 (C_11698200_10), JAK2 rs7849191 (C_2008287_10) and IL-1β rs1143634 (C_9546517_10). Reactions took place in 96-well plates in 25 µl total volume containing 11.25 µl of genomic DNA (1.77 ng/µl), 12.5 µl of Master Mix and 1.25 µl of a mix of probes and primers. During PCR amplification, the probe specifically anneals between the forward and reverse primer sites, and the nuclease activity of Taq DNA polymerase cleaves the probe and frees the fluorescent dye, allowing allele identification. Data were analyzed by Sequence Detector Software (SDS) version 2.1 (Applied Biosystems).
DNA Sequencing. The TaqMan® SNP genotyping results were confirmed by Sanger DNA sequencing. PCR products were purified using the ExcelaPure™ 96-well ultrafiltration-based system (EdgeBio, Gaithersburg, MD), and sequencing reactions were performed with BigDye Terminator v.3.1 (Applied Biosystems). Sequencing reactions were cleaned up by removal of unincorporated dyes with the Performa® DTR system (EdgeBio) before electrophoresis analysis on an ABI Prism 3130xl Genetic Analyzer (Applied Biosystems).
RNA extraction, reverse transcription and real-time PCR analysis. Isolation of total RNA was performed from 70 mg (human) and 40 mg (mouse) of adipose tissue by phenol-chloroform extraction using the TRIzol reagent. Dry RNA pellets were resuspended in 20 μl of DEPC water, and RNA concentrations were assessed on a NanoDrop-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE). Five hundred ng of total RNA were taken for cDNA synthesis using the High-Capacity cDNA Archive Kit (Applied Biosystems).
Isolation of human peripheral blood leukocytes.
Human peripheral blood leukocytes were isolated from C-carrier and non-C-carrier individuals. Briefly, blood samples collected with EDTA were centrifuged at 200 g for 10 min, and sedimented cells were incubated with pre-warmed ammonium-chloride-potassium lysis buffer for 5 min at room temperature to remove red blood cells. Samples were then centrifuged at 400 g for 10 min and the supernatants decanted. The red blood cell lysis procedure was repeated twice for 10 min each, and the resultant pellet was finally washed with DPBS without calcium and magnesium. The isolated leukocytes were resuspended in RPMI 1640 medium containing penicillin (100 U/mL), streptomycin (100 U/mL) and L-glutamine (4 mM) with 0.5% FBS. Leukocytes at a density of 3 × 10^6 cells/mL were stimulated with LPS (100 ng/ml) for 2 hours at 37 °C (5% CO2) in the absence or presence of RvE1 (10 nM). At the end of the incubations, cells were centrifuged at 400 g for 10 minutes at 4 °C, and the pelleted cells were collected for RNA extraction, reverse transcription and real-time PCR analysis as described above.
Studies in ERV1/ChemR23 gene-deficient mice. Male and female ERV1/ChemR23 knockout mice were obtained from Deltagen (San Mateo, CA). Heterozygous mice on the C57BL/6 background were interbred to generate homozygous ERV1/ChemR23 knockout mice (n = 12) and their WT controls (n = 12). All experimental protocols were approved by the Ethical Committee of Animal Experimentation of the University of Barcelona (authorization #9362) in accordance with the guidelines set by the Direcció General de Polítiques Ambientals i Medi Natural of the Generalitat de Catalunya and the regulations of the European Union legislation. Genomic DNA from the ear was isolated using the Omni-Pure Tissue Genomic DNA System (Gene Link, Hawthorne, NY) following the manufacturer's protocol and genotyped by PCR. Two different PCR reactions of 20 μL were performed for detection of the endogenous (E) and targeted (T) alleles. Expected product sizes in base pairs (bp) and a schematic diagram of the ERV1/ChemR23 knockout construct are shown in Supplementary Fig. 7A. The PCR reactions contained 0.5 µM primers (forward direction: 5′-TACAGCTTGGTGTGCTTCCTCGGTC-3′ (ChemR23 primer, E) and 5′-GGGTGGGATTAGATAAATGCCTGCTCT-3′ (Neo primer, T); reverse direction: 5′-TGATCTTGCACATGGCCTTCCCGAA-3′ (common to E and T)), 0.2 mM dNTP mix, 1.5 mM MgCl2, and 1 U Platinum Taq DNA Polymerase (Invitrogen, Carlsbad, CA). PCR cycle conditions were 15 min at 95 °C, followed by 30 cycles of 20 s at 94 °C, 40 s at 62 °C and 1 min at 72 °C, and a final step of 10 min at 72 °C before cooling to 4 °C. PCR bands were separated by electrophoresis in 2.5% LM Sieve agarose gels and visualized by GelRed™ Nucleic Acid Gel Stain (Biotium, Hayward, CA) using a 100-bp DNA ladder marker (Invitrogen). Gel images of the resulting PCR products of a typical offspring, composed of 3 wild-type, 5 heterozygous ERV1/ChemR23 +/− and 1 homozygous ERV1/ChemR23 −/− littermates, are shown in Supplementary Fig. 7B. The mice were housed in wood-chip bedding cages at 50%-60% humidity under a 12 h light/12 h darkness cycle with unlimited access to food and water. At 18 weeks of age, the mice were euthanized via ketamine/xylazine injection (i.p., 4:1), and visceral adipose tissue and liver were excised, rinsed in DPBS and snap-frozen in liquid nitrogen for RNA analyses. All animal studies were conducted in accordance with the Investigation and Ethics Committee criteria of the Hospital Clínic and European Union legislation.
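The genotype call follows mechanically from which of the two PCR products appears on the gel. A trivial sketch of that logic (our own illustrative helper, not part of the published protocol):

```python
def call_chemr23_genotype(endogenous_band, targeted_band):
    """Genotype from presence of the endogenous (E) and targeted (T) PCR products."""
    if endogenous_band and targeted_band:
        return "ERV1/ChemR23 +/- (heterozygous)"
    if endogenous_band:
        return "ERV1/ChemR23 +/+ (wild type)"
    if targeted_band:
        return "ERV1/ChemR23 -/- (knockout)"
    return "no call - repeat PCR"
```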
Isolation of mouse peritoneal macrophages. Isolation of peritoneal macrophages from ChemR23 +/−, ChemR23 −/− and ChemR23 +/+ mice was performed as described by Titos et al. 9. Briefly, peritoneal macrophages were collected by peritoneal lavage with 7 ml ice-cold DPBS −/− three days after the i.p. injection of 2.5 ml of 3% thioglycolate. The exudates were centrifuged at 500 g for 5 min at 4 °C and resuspended in DMEM supplemented with penicillin (100 U/ml), streptomycin (100 µg/ml), 2 mM L-glutamine and 5% FBS. The cells (150,000 cells/well) were allowed to adhere on 6-well culture plates for 2 h at 37 °C in a humidified 5% CO2 incubator. Non-adherent cells were removed by washing twice with DPBS −/−, and the remaining adherent cells were used for the experiments. Finally, the macrophages were washed twice, collected in TRIzol reagent and kept at −80 °C for RNA extraction and gene expression analysis.
Differentiation and incubation of 3T3-L1 adipocytes. Mouse 3T3-L1 cells were seeded onto six-well plates (250,000 cells per well) in DMEM supplemented with 10% (vol/vol) FBS, 100 U/mL penicillin/streptomycin and 4 mM L-glutamine in a humidified atmosphere of 5% CO2 at 37 °C and allowed to grow to confluence for 2 days. Confluent 3T3-L1 cells were either differentiated with adipocyte induction medium or left undifferentiated. The adipocyte induction medium contained insulin (5 μg/mL), isobutylmethylxanthine (0.5 mM), dexamethasone (0.25 μM), penicillin/streptomycin (100 U/mL), and L-glutamine (4 mM) in DMEM supplemented with 10% FBS. After 2 days, the induced cells were cultured in continuation medium (5 μg/mL insulin) for 72 hours and then maintained in DMEM supplemented with 10% FBS until exhibiting an adipocyte phenotype at day 8 of differentiation.
Statistical analysis. Study groups were tested for Hardy-Weinberg equilibrium, and observed and expected allele and genotype frequencies were compared by χ2 analysis. Sample size calculations were conducted using Quanto software version 1.2.4. Allele and genotype frequencies for each SNP were compared between the obese and control groups using a 2 × 2 contingency table and calculation of the odds ratio with a 95% confidence interval. The relation of dependence between the minor allele and obesity was studied by Spearman correlation for non-parametric variables. Statistical analysis of anthropometric data and gene expression results was performed using the unpaired Student's t-test. Results are expressed as means ± SEM, and the level of statistical significance was set at P < 0.05.
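For illustration, the two core genetic calculations in this subsection can be reproduced in a few lines (a sketch under our own naming, not the authors' code; the confidence interval uses the Wald approximation, which is one common choice the paper does not specify). The Hardy-Weinberg test compares observed genotype counts with those expected from the estimated allele frequency, and the case-control association is summarized as an odds ratio from a 2 × 2 table.

```python
import math
from scipy.stats import chi2

def hwe_chi2(n_tt, n_tc, n_cc):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_tt + n_tc + n_cc
    p = (2 * n_tt + n_tc) / (2.0 * n)          # estimated frequency of the T allele
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    stat = sum((o - e) ** 2 / e for o, e in zip((n_tt, n_tc, n_cc), expected))
    return stat, chi2.sf(stat, df=1)           # df = 3 classes - 1 - 1 estimated parameter

def odds_ratio_95ci(a, b, c, d):
    """OR and Wald 95% CI for a 2x2 table, e.g. a = obese C-carriers,
    b = non-obese C-carriers, c = obese non-carriers, d = non-obese non-carriers.
    All four cells must be non-zero."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)  # SE of log(OR)
    return or_, (or_ * math.exp(-1.96 * se), or_ * math.exp(1.96 * se))
```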
Association of Nuclear-Localized Nemo-Like Kinase with Heat-Shock Protein 27 Inhibits Apoptosis in Human Breast Cancer Cells
Nemo-like kinase (NLK), a proline-directed serine/threonine kinase regulated by phosphorylation, can be localized in the cytosol or in the nucleus. Whether the localization of NLK affects cell survival or apoptosis has yet to be established. In the present study we found that NLK was mainly localized in the nuclei of breast cancer cells, in contrast to a cytosolic localization in non-cancerous breast epithelial cells. The nuclear localization of NLK was mediated through direct interaction with heat-shock protein 27 (HSP27), which further protected cancer cells from apoptosis. The present study provides evidence of a novel mechanism by which HSP27 recognizes NLK in breast cancer cells and prevents NLK-mediated cell apoptosis.
NLK has been shown to negatively regulate Wnt/β-catenin signaling by phosphorylation of the LEF1/TCF complex, which facilitates ubiquitination and degradation of this complex [7]. The ubiquitination of TCF/LEF is executed by NARF (NLK-associated RING finger protein), acting as an E3 ligase [14]. In addition, β-catenin-induced transcriptional activation can be antagonized by NLK through activation of the TAK1-mediated non-canonical Wnt pathway [7]. It was recently shown that TAK1 activation of NLK does not occur through direct interaction; rather, TAB2 may scaffold the association between TAK1 and NLK [15,16]. Furthermore, SETDB1 (SET domain bifurcated 1), a histone methyltransferase, is phosphorylated by NLK upon Wnt5a stimulation. Phosphorylation of SETDB1 leads to disruption of PPAR-gamma function through methylation, a mechanism shown to be vital for the lineage decision of mesenchymal stem cells [15,17,18]. Besides Wnt, NLK was shown to antagonize Notch signaling during neurogenesis. NLK negatively regulated Notch-dependent transcriptional activation by phosphorylating a member of the Notch-mediated transcriptional complex, NotchICD. The phosphorylation of NotchICD by NLK blocked its ability to form a transcriptionally active ternary complex [12]. c-Myb [2,5], Smad4 [19], and STAT3 [20,21] are other targets for phosphorylation by NLK. Serine phosphorylation of STAT3 is necessary for mesoderm induction [21], whereas phosphorylation of c-Myb promotes its proteasome-dependent degradation [3-5,21]. FOXO1 [22] and myocyte enhancer factor 2A (MEF2) [23] are two recently identified transcription factors regulated by NLK. The phosphorylation of FOXO1 by NLK inhibits its transcriptional activity through a nuclear export process [22], while phosphorylation of MEF2 by NLK is crucial for Xenopus laevis development [23]. NLK also contributes to the reorganization of the cytoskeleton. Phosphorylation of microtubule-associated protein-1B (MAP1B) and of the focal adhesion protein paxillin stimulates NGF-induced redistribution of F-actin as well as neurite outgrowth [24].
The role of NLK in cancer is not well known. Induction of wildtype NLK in human colon carcinoma cells (DLD-1) was shown to trigger programmed cell death [25,26]. This mechanism involved phosphorylation of CBP and consequent suppression of the transcriptional activity of AP-1, Smad, and p53, all of which use CBP as a co-activator [4,26]. In prostate cancer, NLK expression was decreased at the mRNA level in the tumor site, but no significant differences in NLK protein expression were observed. Furthermore, overexpression of NLK prompted a more effective induction of apoptosis in AR-expressing prostate cancer cells than in AR-negative cells [27]. Conversely, although NLK was found to be overexpressed in hepatocellular carcinomas, depletion of NLK reduced cell growth, and did so by inhibiting the expression of cyclin D1 and CDK2, both essential for the mitogenic potential of tumor cells [28].
Recent studies reported that NLK can be localized in the cytosol or in the nucleus, and that homodimerization of NLK is essential for nuclear localization [29]. However, the impact of specific subcellular localization of NLK is not well established. The present paper discloses that NLK was localized mainly in the nuclei of breast cancer cells. Moreover, the association of NLK with HSP27, which was identified as a novel binding partner for NLK, protected the cancer cells from apoptosis.
Tumor material and ethical approval
Full-face formalin-fixed, paraffin-embedded (FFPE) tumor and non-tumor tissues were obtained from the Department of Pathology at Sahlgrenska University Hospital in accordance with the Declaration of Helsinki. Our study is not a clinical trial and the tumor specimens were used anonymously; therefore, patient consent was not needed, and the research on these tumors was approved by the Medical Faculty Research Ethics Committee, Gothenburg, Sweden (s164-02). In addition, the review board waived the need for written informed consent from the participants. All samples were obtained from patients undergoing surgical resection in Gothenburg, Sweden, between 1990 and 2006. FFPE sections, 4 µm thick, were applied onto positively charged slides (FLEX IHC microscope slides, Dako, Sweden) for immunostaining, in order to assess NLK protein expression in tumor as well as adjacent normal breast tissue.
For DNA sequencing of breast cell lines we used the following amplifying (Ampl.) and sequencing (Seq.) primers: Ampl
Transient Transfection
Transient transfection assays were carried out in six-well plates at 80% confluence, using PolyFect Transfection Reagent (Qiagen) in accordance with the manufacturer's recommendations. Twenty-four hours prior to transfection, cells were seeded in medium containing 10% FBS. Cells were incubated at 37 °C and 5% CO2 for 24-48 hours. The medium was removed and the cells were washed with PBS, after which fresh serum-containing medium was added to the cells. DNA (1.5-4.0 µg) was mixed with an appropriate volume of OptiMEM (Sigma), followed by addition of 10-25 µl PolyFect transfection reagent. Samples were incubated at room temperature for 10 minutes to allow complex formation to be completed, prior to being transferred to the cells. Cells were subsequently incubated for 24, 48, or 72 hours. For HSP27 or NLK knockdown, MCF7 cells were transfected once or twice with 50 nM HSP27-siRNA (Santa Cruz Biotechnology, Inc.).
Cell lysis and fractionation
Cells were grown on 10 cm plates to 80% confluence, harvested in PBS, and lysed in buffer comprising 1 M Tris-HCl (pH 7.6), 5 M NaCl, 0.5 M EDTA, 1% Triton X-100, and Complete protease inhibitor cocktail (Roche). Protein concentration was assessed using the Bradford method, and sample concentrations were adjusted accordingly. Lysates were boiled for 5 minutes at 100 °C with 2× loading buffer containing 1 mM dithiothreitol (DTT).
For cell fractionation, cells were lysed with cytosolic lysis buffer containing 10 mM Hepes (pH 7.9), 10 mM KCl, 0.1 mM EDTA, 1 mM DTT, and protease inhibitors, and the nuclei were subsequently harvested by centrifugation. Supernatants, containing the cytosolic fraction, were further purified through an additional centrifugation step. The nuclei were ruptured by treatment with nuclear extraction buffer (20 mM Hepes [pH 7.9], 0.4 M NaCl, 0.1 mM EDTA, 0.1 mM EGTA, 1 mM DTT, 1 mM PMSF and 0.1% NP40) and by passing 10 times through a 27G needle, followed by centrifugation for purification. A Bradford assay was performed, and both fractions were boiled with 2× loading buffer containing 1 mM DTT.
Immunoprecipitation and Western blotting
Cells were harvested and lysed for immunoprecipitation as previously described [43]. Lysates were probed overnight with the selected antibodies, and complexes were captured using Protein A Sepharose beads (Sigma) or Anti-FLAG M2 affinity gel (Sigma). The beads/gel were washed four times with lysis buffer, after which the protein complexes were extracted either with FLAG peptide solution (Sigma) or by boiling with 2× loading buffer containing 1 mM DTT. Immunoprecipitates were subjected to SDS-PAGE, and the resolved proteins were transferred to PVDF membranes (Amersham Biosciences). Western blotting was carried out following standard protocols. Chemiluminescence was detected with a charge-coupled device camera (Alpha Innotech).
Immunofluorescence and confocal microscopy
Cells were grown on glass coverslips in 6-well plates for 24, 48, and 72 hours. After paraformaldehyde (PFA) fixation (4% for 15 minutes), cells were permeabilized with 0.25% Triton X-100 solution, blocked with 1% BSA, and subsequently probed with antibodies against NLK, FLAG, or HSP27. After washing the coverslips, fluorescent secondary antibodies, including Alexa 488 goat anti-rabbit and Alexa 568 donkey anti-goat (Invitrogen Life Technologies), were applied. Nuclei were stained with DAPI, and the coverslips were washed again before mounting onto glass slides.
MALDI-TOF mass spectrometry analysis
Immune complexes were obtained as described under the immunoprecipitation section. Iodoacetamide (100 mM) was added to the complexes, and the precipitated proteins were incubated for 20 minutes in darkness. The samples were separated by SDS-PAGE (Invitrogen). The gel was silver-stained as previously described [30]; i.e., it was first washed for 20 minutes in 50% methanol with 5% acetic acid, then placed in 50% methanol for 10 minutes, followed by sensitization with 0.02% sodium thiosulphate, prior to staining with 0.1% silver nitrate for 20 minutes at room temperature. The gel was developed for less than 10 minutes in a solution of 0.04% formalin and 2% sodium carbonate, followed by 5% acetic acid to stop the reaction. Bands in the lanes with full-length and catalytically inactive mutants (K155M and T286V) of NLK precipitates were excised and subjected to in-gel digestion, as previously described [31]. Concisely: after destaining and rehydration with neat acetonitrile, the samples were proteolyzed overnight using porcine modified trypsin (Promega). The generated peptides were analyzed by peptide mass fingerprinting using a matrix-assisted laser desorption/ionization time-of-flight mass spectrometer (Ultraflex tandem time-of-flight, Bruker Daltonics). The instrument settings were optimized for analytes with masses from 600 to 4500 Da, and α-cyano-4-hydroxycinnamic acid was used as the matrix. The peptide mass list was used for protein identification when searching sequence databases, and ProFound at the PROWL web site was used as the search engine.
Immunohistochemistry analysis
The immunohistochemical staining was performed in an automated immunostainer (Dako Autostainer Plus, Dakocytomation, Denmark). The tissue sections were treated with Dako EnVision™ FLEX antigen retrieval EDTA buffer (pH 9) using the Dako PT Link module (PT Link, Dakocytomation, Denmark), in accordance with the manufacturer's instructions, and were subsequently incubated with the NLK antibody at a dilution of 1:200 (AbCam ab26050). A breast pathologist, who at the time of examination was withheld information on diagnosis and other clinicopathological data, evaluated the antibody staining.
Apoptosis and cell proliferation
Cells were grown on six-well plates, harvested at 24, 48, and 72 hours post-transfection, and counted using a Bürker chamber or a Countess Automated Cell Counter (Invitrogen). For apoptosis analyses, the cells were fixed in PFA on coverslips and stained with Vindelöv solution containing propidium iodide. After washing, the coverslips were mounted onto glass slides and examined by fluorescence microscopy. Cells were scored for apoptosis based on nuclear morphology. Apoptosis was furthermore evaluated using a NucleoCounter NC-3000 (Chemometec) with the DNA fragmentation assay. Concisely: transfected and non-transfected (control) cells were grown on 6-well plates. Cells were harvested by trypsinization, and the trypsinized cells were pooled with the cells floating in the medium. After a short centrifugation, the supernatant was removed and the pelleted cells were washed once with PBS. After a second centrifugation, the cells were resuspended in a small volume of PBS, and the single-cell suspensions were added to 70% ethanol for fixation. Samples were vortexed and stored for 12-24 hours at −20 °C. Ethanol-suspended cells were centrifuged and the ethanol carefully decanted. Cells were washed once with PBS and then resuspended in NucleoCounter Solution 3 (1 µg/ml DAPI, 0.1% Triton X-100 in PBS), followed by incubation for 5 minutes at 37 °C. Samples of 10 µl were loaded into a slide chamber (NC-slide A8), and the DNA fragmentation protocol was employed according to the manufacturer's instructions (Chemometec).
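The quantitative readout of the DNA fragmentation assay is the fraction of events with sub-G1 DNA content. Computationally that is a one-liner, sketched here with hypothetical inputs (our own helper, not Chemometec's software):

```python
import numpy as np

def sub_g1_fraction(dna_content, g1_threshold):
    """Fraction of events below the G1 fluorescence peak, i.e. cells with
    fragmented (sub-G1) DNA, taken as the apoptotic fraction."""
    x = np.asarray(dna_content, dtype=float)
    return float((x < g1_threshold).mean())
```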
In vitro kinase assay
To investigate NLK activity, we examined the phosphorylation of LEF1 in MCF10A, MCF7 and MDA231 cells by performing immunoprecipitation using an anti-NLK antibody. Immunoprecipitates were purified by washing two times with lysis buffer containing high salt (0.625 M NaCl), three times with PBS containing high salt (0.625 M NaCl), and two times with PBS. The pellets were incubated with 0.5 µg LEF1 (Novus Biologicals H00051176-P01) and 1 mM ATP in 40 µl of kinase buffer (10 mM MgCl2, 10 mM HEPES (pH 7.4), and 1 mM DTT) for 30 minutes at 30 °C.
Statistical analysis
Data are presented as mean ± s.e.m. Statistical comparisons were assessed by ANOVA or by Student's t-test (p < 0.05).
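For completeness, the descriptive and inferential statistics described here amount to the following (a sketch with scipy; array contents and the function name are placeholders of ours):

```python
import numpy as np
from scipy import stats

def summarize_and_compare(group_a, group_b, alpha=0.05):
    """Mean ± s.e.m. per group and an unpaired, two-sided Student's t-test."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    sem = lambda x: x.std(ddof=1) / np.sqrt(x.size)   # standard error of the mean
    t, p = stats.ttest_ind(a, b)
    return {"a_mean_sem": (a.mean(), sem(a)),
            "b_mean_sem": (b.mean(), sem(b)),
            "t": t, "p": p, "significant": p < alpha}
```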
Endogenous NLK is localized to the nuclei of breast cancer cells
The total levels of NLK in human breast cancer cells (MDA231 and MCF7) and non-cancerous human breast epithelial cells (MCF10A) revealed that NLK was significantly down-regulated in MDA231 but not in MCF7 cells, compared to MCF10A cells (Figure 1A). In terms of NLK localization assessed by subcellular fractionation, reduced NLK expression was observed in the cytosolic fractions of the breast cancer cell lines, compared with the non-cancerous human breast epithelial cells MCF10A (Figure 1B). Confocal microscopy and statistical analysis of the endogenous distribution of NLK disclosed that NLK was localized specifically in the nuclei of cancer cells, whereas in MCF10A cells we observed a cytosolic predominance (Figure 1C), suggesting that NLK localization differs between breast cancer cells and non-cancerous cells. Immunohistochemical staining of human breast tissue, cancerous and normal, showed NLK to be localized in the nuclei of cancer cells, while normal breast tissue contained cytosolic NLK (Figure 1D). Furthermore, by treating MCF7 cells with siRNA against NLK, we could demonstrate the specificity of the NLK antibody for immunohistochemistry (Figure 1E).
Prevention of NLK autophosphorylation directs NLK to the cell nucleus
The NLK mutations Lys155 (K155M) and Thr286 (T286V) have been reported to abolish the ability of NLK to autophosphorylate [32]. To investigate whether phosphorylation of NLK is involved in its localization, we analyzed the subcellular distribution of wildtype NLK and of the two kinase mutants (K155M and T286V). Overexpressed wildtype NLK displayed a dominant cytosolic accumulation in both the breast cancer cell line MCF7 and the non-cancerous human breast epithelial cells MCF10A (Figure 2A), whereas the NLK mutants were located mainly in the nuclei of MCF7 and MCF10A cells (Figure 2A). To determine whether loss of NLK in the cytosol would have an oncogenic effect, we examined proliferation and survival of cancer cells in the presence of wildtype or catalytically inactive NLK mutants. MCF7 cells transiently expressing wildtype NLK showed a reduced proliferation rate and cell number at 24, 48, and 72 hours post-transfection, in comparison with non-transfected cells (Figure 2B). In contrast, no differences in cell number were observed when MCF7 cells were transfected with NLK-K155M or NLK-T286V (Figure 2B). To establish whether cell death was responsible for the reduced cell number observed in cells transfected with wildtype NLK, we performed two different apoptosis assays (Figure 2C and 2D). The number of apoptotic cells was increased in wildtype-expressing, but not in catalytically inactive NLK-expressing, cells when compared to non-transfected controls (Figure 2C and 2D). Furthermore, induction of apoptosis by TNF-α promoted accumulation of NLK in the cytoplasm of MCF7 cells (Figure 3A), whereas NLK shRNA-treated MCF10A cells were less sensitive to TNF-α- or etoposide-induced apoptosis than control cells (Figure 3B). However, down-regulation of NLK in MCF7 cells did not affect etoposide- or TNF-α-induced apoptosis (Figure 3C), suggesting that reduction of the NLK levels is not essential for survival of MCF7 cells. In addition, treatment of MCF10A cells with etoposide did not affect the localized distribution of NLK (Figure 3D). Next, we immunoprecipitated NLK from MCF10A, MCF7 and MDA231 cells, mixed it with purified LEF1 as substrate, and could show that endogenous NLK expressed in MCF10A cells induces LEF1 phosphorylation at Thr155 (Figure 3E). These results suggest that NLK expressed in MCF7 and MDA231 cells is catalytically inactive.
Heat-shock protein 27 recognizes and binds to catalytically inactive NLK in the nucleus
Seeking an explanation for NLK inactivation in cancer cells, we performed a MALDI-TOF mass spectrometry analysis. Immunoprecipitation of wildtype NLK and of the two NLK mutants (NLK-K155M and NLK-T286V) in transfected MCF7 cells disclosed heat-shock protein 27 (HSP27; Figure 4A). This novel binding partner was detected in cells transfected with NLK-K155M and NLK-T286V, but not in cells overexpressing wildtype NLK (Figure 4A). These results were confirmed by an immunoprecipitation analysis in MCF7 cells transfected with wildtype, NLK-K155M, or NLK-T286V expression plasmids (Figure 4B). Endogenous immunoprecipitation of NLK demonstrated a clear interaction with HSP27 in MCF7 cells, but not in MCF10A cells (Figure 4C). The lack of an observed interaction between NLK and HSP27 in MDA231 cells might be due to the low level of NLK expression in the total cell lysate (Figure 4C). Since we found no NLK mutations (K155 or T286) in four breast cancer cell lines or in non-cancerous MCF10A cells, these mutations alone are not responsible for the NLK association with HSP27 (Figure 4D). Furthermore, we detected co-localization and an endogenous association between NLK and HSP27 in the nuclei of MCF7 cells, but not in MCF10A cells (Figures 4E and 4F). To investigate whether NLK associates directly with HSP27, we performed an in vitro pull-down assay and found a direct interaction between NLK and HSP27 (Figure 4G). HSP27 is ubiquitously expressed in different tissues and cells and localizes under normal conditions to the cytosol. However, HSP27 is able to translocate into the nucleus in response to heat shock or stress [33,34]. Although we observed an increase in the levels of HSP27 in MCF7 cells compared with MCF10A or MDA231 cells (Figure 4H), we noted that HSP27 localization paralleled that of NLK (Figure 4I and Figure 1B). The level of HSP27 in the nuclear fraction of MCF10A cells was distinctly lower than the HSP27 levels in the different breast cancer cell lines (Figure 4I). Since we could not observe any differences in the phosphorylation status of HSP27 (serine 82) in cells transfected with wildtype NLK, NLK-K155M or NLK-T286V expression plasmids, NLK appears unable to increase the levels of HSP27 phosphorylated at serine 82 (Figure 4J). However, we cannot rule out the possibility that NLK phosphorylates HSP27 at another site besides serine 82. Together, these results suggest that, in an inactive form, NLK is able to associate with HSP27 in the nucleus.
Down-regulation of HSP27 releases NLK to the cytosol, which induces further cell death in cancer cells
To test the hypothesis that HSP27 is responsible for the sustained nuclear localization of NLK, we reduced the level of HSP27 in cancer cells using two different siRNA oligos (Figure 5A) and investigated NLK localization. HSP27 depletion in MCF7 cells prompted re-localization of NLK from the nucleus to the cytoplasm and reduced the levels of NLK in the nucleus (Figure 5B and 5C). Down-regulation of NLK did not affect the localization of HSP27 in MCF7 cells, implying that NLK levels are not essential for the localization of HSP27 (Figure 5D). Furthermore, we did not observe any differences in total NLK levels after treating the cells with siRNA for a knockdown of HSP27 (Figure 5E). To explore whether NLK-mediated apoptosis is dependent upon the association with HSP27, we down-regulated HSP27 in MCF7 cells and subjected the cells to survival and apoptosis assays. As expected, a reduced level of HSP27 in MCF7 cells diminished cell survival (Figure 5F) and induced apoptosis (Figure 5G). In contrast, HSP27 overexpression in MCF10A cells was unable to re-localize NLK to the nucleus (Figure 5H). These results suggest that NLK is captured by HSP27 in the nucleus, whereas depletion of HSP27 releases NLK to the cytosol, which induces further cell death in cancer cells.
Discussion
The serine/threonine kinase NLK is an atypical MAP kinase, able to phosphorylate several target proteins in the cytosol and nucleus and to regulate different signaling pathways. Evidence for the endogenous subcellular localization of NLK, and for the mechanism regulating its nuclear import, is limited. In the present study, we found that endogenous NLK was localized predominantly in the nuclei of breast cancer cells, in contrast to a cytosolic localization in non-cancerous cells. The endogenous nuclear localization of NLK in breast cancer cells did not change when the cancer cells were transfected with NLK mutants that prevent NLK activation. We further confirmed that overexpressed wildtype NLK localizes predominantly in the cytosol. Contrasting these results, in prostate cancer cells (LNCaP), endogenous NLK was localized in the nucleus, and wildtype NLK overexpressed in LNCaP cells was also directed to the nucleus [35]. This discrepancy may be explained by cell type specificity as well as androgen-mediated receptor activation, both of which proved to be essential for NLK localization in prostate cancer cells. Furthermore, our data agree with previous observations of kinase-negative NLK mutants accumulating in the nucleus upon overexpression [29,36].
The tumor suppressor function of NLK was recently established. For example, overexpression of wildtype NLK in colon carcinoma cells boosted programmed cell death through phosphorylation of CBP and, hence, suppression of the transcriptional activity of AP-1, Smad, and p53, all of which use CBP as a coactivator [4,26]. In prostate cancer, overexpression of NLK induced apoptosis in AR-expressing prostate cancer cells, but not in AR-negative cells [27]. To determine whether NLK localization regulates cell survival in breast cancer cells, wildtype or phospho-mutant NLK was overexpressed in these cells. Wildtype NLK reduced the proliferation rate and cell survival, while the NLK mutants had no significant effect, suggesting that, in contrast to inactive nuclear-localized NLK, cytosolic localization of NLK triggers cell death. This was evident upon stimulation of breast cancer cells with TNF-α, which promoted re-localization and accumulation of NLK in the cytoplasm. In contrast to the endogenous NLK expressed in MCF10A, which was phosphorylated and catalytically active, NLK expressed in breast cancer cells proved to be unphosphorylated and catalytically inactive.
To find an explanation for the inactivation of NLK in cancer cells, we explored whether any putative interaction partners for NLK might hold NLK in an inactive form in the nucleus of cancer cells. Cells were thus transfected with wildtype and phospho-mutant NLK prior to MALDI-TOF mass spectrometry analysis. HSP27 was identified as a novel binding partner for the inactive form of NLK. Endogenous immunoprecipitation of NLK in MCF7 cells, and in cells transfected with the phospho-mutant form of NLK, both confirmed this finding. As expected, we were unable to co-precipitate HSP27 in non-cancerous MCF10A cells, or when wildtype NLK was overexpressed. HSP27 is a ubiquitously expressed protein, able to translocate into the nucleus in response to heat shock or stress [33,34]. We could also show that NLK is unable to phosphorylate HSP27 at serine 82 in cells transfected with wildtype NLK, but we cannot rule out the possibility that NLK phosphorylates HSP27 at another site besides serine 82.

Figure 4. Heat-shock protein 27 binds to NLK in the nucleus. (A) MCF7 cells, transfected with FLAG-tagged WT-, K155M-, or T286V-NLK plasmids, were harvested for immunoprecipitation 24 hours post-transfection, using a FLAG antibody. The protein lysates were separated by SDS-PAGE, and the protein bands were cut and processed for the MALDI-TOF analysis. Arrows indicate a specific HSP27 band in NLK-K155M as well as NLK-T286V transfected cells. (B) Immunoprecipitation of FLAG in total cell lysates from MCF7 cells transfected with wildtype (WT) and catalytically inactive mutants (K155M and T286V) of NLK, followed by a Western blot, using FLAG or HSP27 antibodies. (C) Endogenous immunoprecipitation of NLK in total cell lysates from MCF10A, MCF7, and MDA231 cells, and a Western blot, using NLK or HSP27 antibodies. As a control, we used anti-rabbit negative control IgG for immunoprecipitation in total cell lysates from MCF10A cells. (D) DNA sequencing and alignment of the NLK gene in the MCF10A, MCF7, MDA231, BT549 and CAMA1 cell lines. (E) The cellular distribution of NLK and HSP27 in MCF10A and MCF7 cells, assessed by immunofluorescence staining and confocal microscopy. (F) Endogenous immunoprecipitation of NLK in nuclear fractions of MCF10A, MCF7, and MDA231 cells, and a Western blot, using NLK or HSP27 antibodies. The nuclear-fraction lysates were further analyzed by Western blot, using lamin B or tubulin antibodies. (G) Recombinant GST-NLK and His-HSP27 were incubated in a binding buffer, and GST was then pulled down with GSH-coupled sepharose. Pull-downs were analyzed by Western blot, using GST or HSP27 antibodies. (H) The levels of HSP27 in total cell lysates of MCF10A, MCF7, and MDA231 cells. (I) Isolation of nuclei by a subcellular fractionation assay, followed by a Western blot analysis, using HSP27, tubulin, or lamin B antibodies. (J) The levels of HSP27 and phosphorylated HSP27 in total cell lysates of MCF7 cells transfected with wildtype (WT) and catalytically inactive mutants (K155M and T286V) of NLK, followed by a Western blot. doi:10.1371/journal.pone.0096506.g004
In breast cancer cells, we noted that HSP27 localization was similar to NLK localization, and by confocal imaging, we confirmed the co-localization of these two proteins in cancer cells. Our results suggest that HSP27 recognizes and binds to the non-phosphorylated form of NLK, which is exclusively located in the nuclei of cancer cells. Since we found no NLK mutations at lysine 155 or threonine 286 in the breast cancer cell lines, these mutations alone are not responsible for the NLK association with HSP27. Further studies are needed to define how the inactive mutants of NLK can associate with HSP27 in breast cancer cells.
HSP27 functions mainly as a chaperone in protein folding, but it is also implicated in cytoskeleton rearrangement, cell movement, cell survival, and tumor progression [37]. In general, the aberrant expression of HSP27 is associated with aggressive tumor behavior, increased resistance to chemotherapy, and poor prognosis for the patient [38,39]. More specifically, high levels of HSP27 have been observed in breast cancer cells, in comparison with normal cells [40][41][42].

Figure 5. (A) MCF7 cells were mock-treated or transfected with two different siRNAs targeting HSP27 (siRNA1 HSP27 or siRNA2 HSP27) or with a control oligonucleotide, twice for 24 hours, followed by a Western blot analysis, using HSP27 and actin antibodies. (B) MCF7 cells were transfected with siRNAs targeting HSP27 or with a control oligonucleotide, twice for 24 hours. Nuclear (N) and cytosolic (C) fractions of the total cell lysate were separated and analyzed by Western blot, using NLK, lamin B, or tubulin antibodies. (C) The cellular distribution of NLK in MCF7 cells transfected two times within 48 hours with siRNA oligos against HSP27. After fixation, cells were permeabilized with 0.25% Triton X-100 solution, blocked with 1% BSA, and subsequently probed with antibodies against NLK (1:100) and HSP27 (1:100). After washing the coverslips, fluorescent antibodies (Alexa Fluor 488 goat anti-rabbit, 1:1000, or Alexa Fluor 568 donkey anti-goat, 1:1000) were applied, and nuclei were stained with DAPI. (D) MCF7 cells were transfected with siRNAs targeting NLK or with a control oligonucleotide, twice for 24 hours. Nuclear (N) and cytosolic (C) fractions of the total cell lysate were separated and analyzed by Western blot, using NLK, HSP27, tubulin, or lamin B antibodies. (E) Endogenous levels of NLK and HSP27 in total cell lysates from MCF7 cells, mock-treated, or transfected with siRNA against HSP27 or with a control oligonucleotide, twice for 24 hours.

To evaluate whether the HSP27 association with NLK in breast cancer cells may affect NLK localization, we treated cancer cells with siRNA against HSP27 and found that HSP27 depletion in MCF7 cells elicited NLK re-localization from the nucleus to the cytosol. In contrast, HSP27 overexpression in non-cancerous epithelial cells was unable to re-localize NLK to the nucleus. Treatment of breast cancer cells with siRNA against HSP27, performed to explore whether the HSP27 association with NLK regulates NLK-mediated cell death, reduced cell survival and increased the rate of apoptosis, suggesting that an inactive form of NLK is captured by HSP27 in the nucleus, while depletion of HSP27 releases NLK to the cytosol, which consequently induces further cell death in breast cancer cells. Since down-regulation of NLK in MCF7 cells did not affect cell survival, this suggests that it is not the reduction of NLK levels but rather the translocation of NLK from the nucleus to the cytoplasm that executes cell death. We could also observe that NLK overexpression-induced apoptosis was reduced by HSP27 overexpression. Further investigations are required to identify the signaling pathway enabling cytosolic NLK to induce apoptosis.
In conclusion, our findings unveil a novel mechanism by which HSP27 recognizes NLK in breast cancer cells and prevents NLK-mediated apoptosis.
Plasmid-Mediated Transfer of Antibiotic Resistance Genes in Soil
Due to selective pressure from the widespread use of antibiotics, antibiotic resistance genes (ARGs) are found in human hosts, plants, animals and virtually all natural environments. Their migration and transmission in different environmental media are often more harmful than the antibiotics themselves. ARGs mainly move between different microorganisms through a variety of mobile genetic elements (MGEs), such as plasmids and phages. The soil environment is regarded as the most microbially active biosphere on the Earth's surface and is closely related to human activities. With the increase in human activity, soils are becoming increasingly contaminated with antibiotics and ARGs. Soil plasmids play an important role in this process. This paper reviews the current scenario of plasmid-mediated migration and transmission of ARGs in natural environments and under different antibiotic selection pressures, summarizes the current methods of plasmid extraction and analysis, and briefly introduces the mechanism of plasmid conjugative transfer, using the F factor as an example. As the global spread of drug-resistant bacteria increases and the knowledge of MGEs improves, the contribution of soil plasmids to resistance gene transmission needs to be further investigated. The prevalence of multidrug-resistant bacteria has also made the effective prevention of the transmission of resistance genes through the plasmid-bacteria pathway a major research priority.
Introduction
Antibiotics support healthcare and animal husbandry by inhibiting the growth and reproduction of microorganisms and by treating and preventing bacterial infections. However, the chronic use of large amounts of antibiotics creates selection pressures under which bacteria develop resistance and acquire antibiotic resistance genes (ARGs). ARGs are widespread in clinical settings, human hosts, plants, animals, and virtually all natural environments [1][2][3][4].
Vertical gene transfer (VGT) and horizontal gene transfer (HGT) are the main routes by which ARGs proliferate and spread among host bacterial cells. HGT, which includes transformation, conjugation and transduction, transfers genetic material such as resistance genes between conspecific or different species of bacteria via mobile genetic elements (MGEs), rather than by reproductive processes. MGEs include transposons, integrons, phages, plasmids, etc. These critical MGEs may facilitate the spread of multidrug resistance. Plasmids carry a wide range of drug resistance genes, such as tet and qnr variants [5], aac(6')-Ib-cr, and the efflux pump genes oqxAB and qepA, and are the major vectors for HGT. HGT is the main mechanism for the production and spread of ARGs and drug-resistant bacteria in the environment [6][7][8]. Chen et al. [4] identified the dynamic migration of the intI and sul genes between water and sediment, with intI being closely associated with some specific genes in sediment. intI is present in the vast majority of bacteria and contributes to the transfer of ARGs in soil.
Soils play a vital role in the healthy functioning of the biosphere and in the continuation of the human race [9]. However, the current epidemic of antibiotic resistance in soil is an urgent environmental issue affecting human health worldwide. In China, for example, ensuring soil health is an important strategic goal for sustainable development. Exploring the plasmid-mediated transfer of ARGs in soil is important for understanding soil antibiotic resistance and for ensuring soil safety. This paper reviews the conjugation mechanism of plasmids and the plasmid-mediated transfer of resistance genes in the soil environment and lays the foundation for further experimental studies.
Comparison of Plasmid Extraction and Analysis Methods
Plasmid isolation is usually performed using endogenous culture methods from the host or by independent isolation methods based on plasmid-encoded traits. Plasmid extraction is used to isolate plasmids from bacterial genomic DNA, remove impurities such as proteins and RNA, and obtain relatively pure plasmids, for example by alkaline lysis. Alkaline lysis is the most widely used method for preparing plasmid DNA. Chromosomal DNA is denatured in an alkaline environment and is not easily renatured, whereas plasmid DNA, owing to its circular structure, renatures more quickly under neutral conditions and can therefore be separated from chromosomal DNA. Agarose gel electrophoresis assists in the detection of plasmid DNA in bacterial DNA extracts, but the subsequent isolation and purification of plasmids is very difficult. Methods to extract plasmids from complex environments are quite limited. Existing general commercial plasmid DNA purification kits are not suitable for environmental samples and often suffer from chromosomal DNA contamination [10]. The transposon-aided capture system (TRACA) for plasmids facilitates the isolation of resistance-gene-encoding plasmids from samples of complex composition. In this system, genomic DNA is digested with DNase, and plasmids are captured with transposons carrying replication origins and selectable markers. This method is ideal for isolating plasmids with small copy numbers, but it does not capture linear plasmids and may even yield an incorrect total number of plasmids. Jones et al. [11] used TRACA to obtain plasmids from metagenomic DNA extracts and stably maintain them in surrogate hosts. Plasmids isolated using TRACA acquire traits that are independent of the plasmid encoding them, such as selectable markers, mobilization traits, and the ability to replicate in host species. This means that even if a plasmid lacks traditional selectable markers, it can still be isolated from Gram-negative (G−) and Gram-positive (G+) bacteria using TRACA and maintained in Escherichia coli. Metagenomics-based extraction and sequencing approaches also have limitations. An insufficient sequencing depth usually makes it difficult to extract intact plasmid sequences from the data. Additionally, genes in low abundance are easily lost due to their small fragment size and difficulty of assembly. Furthermore, plasmids usually contain repetitive sequences compared to genomic DNA, which makes assembly from short-read data challenging. Exogenous plasmid isolation [12] allows the isolation of linear plasmids by using recipient bacteria to capture plasmids directly from parental crosses of complex samples. However, this method is highly dependent on the stability of the plasmid in the host and on the conjugation of the plasmid in the sample. Currently, multiple displacement amplification (MDA) based on metagenomic analysis is widely used.
The MDA method is a process in which all linear bacterial genomic DNA is removed from the total DNA sample using a plasmid-safe DNase, and the remaining circular plasmid DNA is amplified and detected using phi29 DNA polymerase, which works by a rolling-circle mechanism. This method amplifies all the extracted circular plasmid DNA and produces a large amount of plasmid DNA, regardless of the number of plasmids. However, as with TRACA, linear plasmids are not recovered by this method. Larger plasmids are easily degraded by DNase into smaller fragments during the extraction process, while nucleotides on short circular plasmids can be copied each time they bind to the polymerase. By this method, Kav et al. [13] isolated and purified total bovine rumen plasmid DNA and performed deep sequencing using Illumina technology. The improved plasmid purification method can also be used to obtain plasmids from other ecological sites and to analyse the plasmid population in a culture-independent mode using deep sequencing and metagenomic approaches [10].
Plasmid metagenomic analysis contributes to the understanding of the structure and function of environmental plasmid communities. It identifies the sites of plasmid enrichment and the additional genetic elements of a plasmid based on the environmental sample from which the plasmid was obtained. Li et al. [14] used a combination of MDA and pyrosequencing to construct a microbial library and performed a comparative analysis against existing gene libraries. The method has not been fully optimized, because steps such as exonuclease treatment and whole-genome amplification favour small and gap-free plasmids. Jørgensen et al. [15] proposed a method with which to identify intact small plasmids from a genome-wide shotgun metagenomic sequencing dataset. A total of 616 circular sequences were identified in the rat caecum, of which 160 carried plasmid replication domains. In silico plasmid identification on the Illumina platform was extremely successful (95%), with a minimal risk of in vitro false positives.
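A common heuristic behind such in silico identification of small circular plasmids is to flag assembled contigs whose ends overlap, since a circular molecule assembled as a linear contig repeats its start at its end. The following Python sketch illustrates this idea only; it is not the pipeline of [15], and the overlap threshold and input file name are illustrative assumptions.

```python
# Minimal sketch: flag assembled contigs as putatively circular when their
# 3' end repeats their 5' start (a common heuristic for small plasmids).
# Illustrative only; MIN_OVERLAP and the FASTA layout are assumptions.

MIN_OVERLAP = 55  # assumed minimum terminal overlap, in bp

def read_fasta(path):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(seq)
                header, seq = line[1:], []
            else:
                seq.append(line.upper())
    if header is not None:
        yield header, "".join(seq)

def terminal_overlap(seq, min_len=MIN_OVERLAP, max_len=500):
    """Return the length of the longest prefix of `seq` that also ends it."""
    for k in range(min(max_len, len(seq) // 2), min_len - 1, -1):
        if seq[:k] == seq[-k:]:
            return k
    return 0

if __name__ == "__main__":
    for name, seq in read_fasta("contigs.fasta"):  # hypothetical input file
        k = terminal_overlap(seq)
        if k:
            print(f"{name}\tputatively circular (terminal overlap {k} bp)")
```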
Plasmid Transfer Mechanisms
Conjugative transfer is the process of plasmid exchange between bacteria through direct or indirect contact. Plasmids can carry ARGs to the recipient cell, thus facilitating the transfer of antibiotic resistance (Figure 1). More than 50% of plasmids can be transferred by conjugation [16]. ARG-carrying MGEs have been widely reported in a variety of settings. Conjugative plasmids usually carry all the genes required for transfer. These genes encode different modules or functions.
Figure 1. The tra regions encode all genes involved in conjugational transfer (green); the origin of transfer oriT (yellow); the leading gene (red) is the first to be transferred into the recipient cell. Other Tra proteins (TraI, TraM, and TraY) constitute the relaxosome, which, in combination with the integration host factor (IHF), binds to oriT; SSB is the chromosomal single-strand binding protein; the leading region contains a specific 328 bp Frpo region (for F plasmid RNA polymerase).

A growing number of studies on plasmid isolation and sequence analysis have indicated great diversity in the genetic characteristics and structures of plasmids. This diversity suggests that different plasmids may use different regulatory systems, molecular responses or strategies to accomplish gene transfer. Conjugation can occur between identical bacterial species or between unrelated groups at large taxonomic distances [17]. Environmental factors play an essential role in plasmid conjugation efficiency. In sludge sediment, Pseudomonas, Acinetobacter, Enterobacter and Aeromonas are known to be the genera most amenable to plasmid transfer [18][19][20].
Within the Donor Cell
For conjugation, the transfer genes of the donor strain first need to be expressed from the transfer region of the plasmid. Plasmids encode all of the type IV secretion system (T4SS)-related protein factors required for mating-pair formation, as well as the relaxosome components required prior to transfer. Prior to DNA transfer, protein complexes (relaxosomes) begin to assemble and carry out their activities. The Tra proteins TraI, TraM and TraY form the relaxosome, which, together with the integration host factor (IHF), binds to oriT; transfer then proceeds via the cleavage reaction of the TraI relaxase. The TraI relaxase protein catalyses the nicking reaction, leading to a relaxation of the plasmid dsDNA. After the nicking reaction, the circular ssDNA remaining in the donor is turned into dsDNA by rolling circle replication (RCR), at which point the linearized T-strand DNA, with TraI bound at its 5' end, enters the recipient cell via the conjugative pore. Briefly, the interaction of the relaxosome with the type IV coupling protein (T4CP) initiates the transfer.
Furthermore, T4CPs are DNA-dependent ATPases that are anchored in the cell membrane through their N-terminal structural domain. Membrane-anchored T4CPs interact directly with relaxases to form a hexameric structure on the T-strand, which is actively translocated through the conjugative pore during transfer. RCR is fundamental to the conjugative plasmid transfer process in many bacteria. In the conjugative plasmid, the RCR reaction is carried out by the relaxase protein, which initiates RCR mainly by cleaving double-stranded DNA at the double-stranded origin (dso) or oriT site [5]. Notably, the replication of the two ssDNA strands occurs in different cells, whereby the leading strand is replicated in the donor cell, while the transferred strand (T-strand) is replicated in the recipient cell. Thus, while the TraI-bound T-strand is transferred, the ssDNA remaining in the donor is converted to dsDNA by RCR, and the incoming T-strand is likewise converted to dsDNA in the recipient.
Within the Recipient Cell
The relaxase is moved to the recipient, where it is refolded and primed to undertake the physiological tasks required for the conjugative transfer process. The pull of the relaxase from the recipient side and the push of the T4CP from the donor side may facilitate the passage of the T-strand through the conjugative pore. Once the two ends of the T-strand are brought together in the recipient, the relaxase performs the ligation reaction, leading to recircularization of the ssDNA plasmid. Upon entry into the recipient, the ssDNA T-strand is wrapped by the single-stranded binding protein (SSB) of the host chromosome. The single-stranded promoter Frpo has a stem-loop structure that can be recognized by the host RNA polymerase to trigger the synthesis of RNA primers. In other words, Frpo assists in initiating DNA synthesis reactions and early gene expression. After the T-strand enters the recipient cell, the ssDNA is converted to dsDNA, and the transferred plasmid genes are then expressed in the recipient cell. The phenotype of the recipient cell is thus transformed into that of a transconjugant with additional metabolic properties.
Plasmid-Mediated Transfer of Antibiotic Resistance Genes
An MGE identified in a bacterial strain in 2003 provided one of the early indicators of the environmental existence of antibiotic resistance [21]. Since then, bacterial strains with resistance to ampicillin, chloramphenicol, erythromycin, streptomycin and tetracycline have been found in frozen soil samples [22,23]. Antibiotic resistance genes are widely present in a variety of environments, whether natural and without human intervention or heavily contaminated with antibiotics (Table 1). The well-known dominant phyla in soil are Proteobacteria, Acidobacteria, Actinobacteria, Verrucomicrobia, Bacteroidetes, Chloroflexi, Gemmatimonadetes and Firmicutes [24]. A recent study found that drug-resistant bacteria such as Actinobacterium, Bacillus, Xanthobacteraceae and Geobacter species are common latent hosts for multidrug resistance genes (MRGs) [25]. Polymyxins have therefore been repurposed for infections caused by multidrug-resistant Gram-negative bacteria [26]. Colistin possesses antibacterial activity against members of the Enterobacteriaceae family, including Klebsiella species, Escherichia coli (E. coli), Shigella species, Enterobacter species, and Salmonella species [27]. The main pathway through which bacteria obtain external ARGs and develop resistance is HGT, which mainly occurs through transformation, conjugation and transduction [28]. The horizontal transmission of ARGs among bacteria is primarily driven by bacterial plasmids, which facilitate the transfer of resistance genes. ARGs such as those encoding extended-spectrum β-lactamases (ESBLs) (e.g., CTX-M), carbapenemases (e.g., KPC, NDM, and OXA-58) [29], and colistin resistance (e.g., MCR-1) [30] are prevalent in Gram-negative bacteria. Several Gram-negative bacteria isolated by Kudinova et al. [31], such as Pseudomonas, Acinetobacter and Stenotrophomonas species, have simultaneously developed resistance to multiple antibiotics. Plasmids were also detected in some dominant Gram-positive bacteria, such as Bacillus, Microbacteriaceae, and Methanobacterium species, suggesting that ARGs are highly likely to be transferred in both G− and G+ bacteria [32].
Presence of ARGs in the Natural Environment
ARGs are ubiquitous in the natural environment. On the one hand, they originate from the production of antibiotics or their derivatives by microorganisms in the soil. On the other hand, biological interactions between bacteria and other microorganisms, such as antagonistic interactions between fungi and bacteria, affect bacterial community composition and the abundance of ARGs directly [51].
ARGs have been found in most terrestrial ecosystems on Earth with no or limited anthropogenic disturbance, including the seabed, primeval forests, and even polar regions. Inka et al. [33] identified three sulfonamide-resistant synthases of different taxonomic origins in beech and pine forest soils. This suggests that sulfonamide antibiotic resistance occurs naturally in bacterial communities in forest soil. Song et al. [34] detected a large number of ARGs conferring resistance to modern antibiotics in soils of primary forests in China with very low levels of antibiotics, indicating that forest soils are highly likely to be a source of potential resistance traits. The low abundance of MGEs in forest soils and their lack of positive association with ARGs reflect the minimal likelihood of HGT in forest soil environments. Kim et al. [39] detected a total of 70 independent ARGs related to 18 antibiotics in the Arctic permafrost zone using a metagenomic approach. The genomes of permafrost and clinical strains contain similar mobile elements and prophages [52], suggesting that strains in the natural environment exhibit an extremely strong horizontal transfer of genetic material. Permafrost strains, although related to various clinical isolates, do not form separate clusters in the phylogenetic tree. Belov et al. [53] analysed the metagenomes of perennial permafrost and sediments; Proteobacteria, Firmicutes, Chloroflexi, Acidobacteria, Actinobacteria and Bacteroidetes were the most common taxa, and the bacterial abundance was high in the microbial communities of the Canadian Arctic. Paun et al. [54] obtained and identified the first bacterial strains from 13,000-year-old ice cores that have accumulated in caves since the Late Glacial period. Among the isolated bacteria, Gram-negative bacteria were more resistant than Gram-positive bacteria, and over 50% of the strains showed high resistance to 17 antibiotics. Some of these strains can inhibit the growth of typical clinically resistant strains, revealing a metabolic profile with potential applications. Mootapally et al. [40] evaluated antibiotic resistomes in pelagic sediments and found that the dominant genes carA, macB, bcrA, taeA, srmB, tetA, oleC and sav1866 mainly conferred resistance to macrolides, glycopeptides, and tetracyclines. Nathani et al. [41] studied a pelagic sediment microbiome for marine resistomes and their corresponding bacterial communities. A total of 2354 unique resistance genes were identified in a comparison with samples from the open Arabian Sea, showing the presence of tlrC genes in addition to carA, macB, bcrA, taeA, srmB, tetA, sav1866 and oleC. Moreover, Proteobacteria, Actinobacteria and Bacteroidetes were the predominant phyla in the deep-sea sediments.
Transfer of ARGs from Severely Contaminated Sites
Antibiotics have been extensively used in healthcare and farm animal husbandry to treat or prevent bacterial infections. However, the overuse of antibiotics has led to antibiotic residues in clinical settings and in the soil of farms, sewage treatment plants, and other sites. These residues are potentially toxic to organisms and result in the enrichment of ARGs, making them emerging and persistent environmental pollutants [55]. Hospitals consume large amounts of antibiotics, especially β-lactams, quinolones and methotrexate [56], but their residues in hospital wastewater are poorly characterized. The efficiency of antibiotic removal by hospital wastewater treatment processes was reported to be 74–81% [57]. Among the various classes of antibiotics, the removal efficiency for β-lactams was high (84.4–99.5%) [58], while ofloxacin was more difficult to remove and was detected in wastewater at a higher rate than other antibiotics [49]. The improper disposal of antibiotics and medical waste in hospitals can thus introduce antibiotic residues into soil and groundwater.
Most of the antibiotics administered to people outside hospitals are used in homes and end up in domestic wastewater. Thus, municipal wastewater treatment plants (WWTPs) are one of the major sources of antibiotic-resistant bacteria (ARB) and ARGs released into the environment and have become a hotspot for HGT. Osinska et al. [59] showed a high potential for bacterially mediated HGT in wastewater environments. Single ARB are consistently associated with multiple ARGs. Once ARB enter a WWTP, ARGs can be transmitted between the bacteria of the endogenous microbial community and the bacteria passing through the WWTP. Guo et al. [60] found that MGEs, including plasmids, transposons, integrons (intI1) and insertion sequences (e.g., ISSsp4, ISMsa21 and ISMba16), were abundant in sludge samples. Additionally, a network analysis indicated that some environmental bacteria might be potential hosts for multiple ARGs. Isolates resistant to β-lactams most frequently carried the blaTEM and blaOXA genes. The genes encoding resistance to tetracyclines were most commonly tetA, tetB and tetK, while the qnrS gene was found in isolates resistant to fluoroquinolones [61]. Munir et al. [62] showed that the concentration of ARB decreased by several orders of magnitude compared to that in the original influent, but the concentration of ARGs remained quite similar in pre- and post-disinfection effluents. There was no significant reduction in the abundance of MGEs in the effluent either [63]. By contrast, other studies reported that, compared with the original influent, most of the ARGs were effectively removed after wastewater treatment [64,65]. The specific environmental conditions in WWTPs offer a selective advantage for the HGT of ARGs and ARB in bacterial communities.
The plasmid-mediated transfer of ARGs poses a grave danger to global public health. The use of amoxicillin on farms has made the poultry farm environment an essential reservoir of blaNDM-carrying bacteria [42,43]. Additionally, blaNDM contamination was also detected in the farm environment (soil, sewage, feed, dust) of commercial goose farms [66]. Moreover, IncX3- and pM2-1-type plasmids contribute to the prevalence and spread of ARGs in different bacteria. Mohsin et al. [44] detected IncFII- and IncQ-type plasmids carrying the tet(X4) gene in E. coli from four different sources (poultry, chicken, wild birds and slaughterhouse wastewater). In another study, all mcr-1-positive E. coli strains isolated from poultry were multidrug resistant, with up to 88.24% of the isolates containing blaTEM genes as well as tetracycline (tetA and tetB) and sulfonamide (sulI, sulII and sulIII) resistance genes [45]. The antibiotics commonly used in aquaculture are aminoglycosides, β-lactams, sulfonamides and tetracyclines [67]. Residual antibiotics leached from fish feed are often present in effluents. The levels of ARGs in fish farm effluents were found to be significantly higher than those in the surrounding water environment, and most of the ARGs were located on plasmids [68].
Human Activities Affect the Transfer of ARGs in the Environment
The major dominant groups in agricultural sediments are Actinobacteria, Chlamydomonas and Firmicutes [69]. Wendi et al. [70] detected no antibiotic-associated resistance genes in aquaculture farm sediments, suggesting that natural resistomes may be present in farm sediments. However, the application of organic fertilizers to agricultural soils contributes greatly to resistance gene contamination. ARGs carried by bacteria in organic fertilizers, together with the antibiotics themselves, have caused a significant increase in the abundance of resistance genes in fertilized soils [71,72]. Pu et al. [46] isolated two transferable aminoglycoside resistance plasmids, pRKZ3 and pKANJ7, from pig and chicken manure. pRKZ3 is a non-conjugative IncQ plasmid with the resistance-conferring genes arr-3 and aacA, encoding plasmid replication and stabilization (repA, repB and repC) and mobilization (mob) functions, whereas pKANJ7 is a conjugative plasmid encoding an IncX-type T4SS. Wang et al. [47] analysed the contamination of agricultural soils with ARGs after the long-term application of organic fertilizers. There is a high abundance of macrolide- and quinolone-resistant bacteria and resistance genes in fertilized soils, in contrast to unfertilized soils. In addition, the abundances of intI1 and intI2 were significantly correlated with the abundances of qnrS and ermB, respectively. In general, intI1 is located on the Tn21 transposon and intI2 on the Tn7 transposon, so these genes can be transmitted among bacteria via transposons. The intI1 and intI2 genes are frequently found in manure-treated agricultural soils and greenhouse soils. The broad availability of integrase genes can facilitate gene transfer, thereby increasing the persistence and accumulation of ARGs [73,74]. Zhao et al. [48] also found that the total relative abundance of the intI gene in manure-amended soil positively correlated with those of tetW, tetO, sulI and sulII. However, it has also been shown that the production of drug-resistant bacteria is negatively correlated with the dose of antibiotic exposure, possibly because high antibiotic concentrations affect the community structure and function of soil microorganisms. Some developed countries have applied sludge to agricultural production to reduce production costs [75]. The direct application of sludge also leads to the introduction of ARGs into agricultural systems. Markowicz [76] identified 16 resistance genes and four integron classes in sewage sludge containing plasmids with extreme resistance to β-lactams as well as tetracyclines. Iwu et al. [77] isolated multidrug-resistant E. coli containing plasmids harbouring AmpC and ESBLs from irrigation water and agricultural soil samples, as well as a plasmid harbouring a multigene sequence.
Talukder et al. [78] isolated multidrug-resistant P. aeruginosa from industrial-area soils, and 60% of the multiple-antibiotic-resistant (MAR) isolates carried 1000–2000 bp double plasmids, which suggests the occurrence of plasmid-mediated transfer of ARGs in industrial soils. This is most likely due to the targeted selection of resistant bacteria by certain concentrations of antibiotic residues. The horizontal transfer of ARGs in sediments is rarely reported compared to that in agricultural soils, but sediments are considered to be the main vector for the multiplication and translocation of antibiotics and ARGs [79]. Chen et al. [80] found that, in the Pearl River basin, the intI and sul genes were dynamically transported between water and sediment, and intI was closely associated with some specific genes in the sediment [81]. Yang et al. [79] detected a higher variety and relative abundance of resistance genes in the sediments of East Dongting Lake than in those of Hong Lake. Another study found that the most common ARGs in the coastal sediments of the East China Sea were sulfonamide resistance genes [82].
Transfer of ARGs under Other Selection Pressures
The co-selection of ARGs by heavy metals and antibiotics also increases ARG contamination in soil [83,84]. Xu et al. [85] reported correlations between heavy metals and some ARG subtypes and observed positive correlations between Zn and the intI gene, with Cu and Zn showing stronger positive correlations with ARGs than antibiotics did. This implies that metals may play an important role in increasing the integration frequency of ARGs in various bacteria in agricultural soils. Both copper oxide nanoparticles and copper ions (Cu2+) can facilitate the conjugative transfer of multiple resistance genes [86]. Heavy metal exposure accelerates the plasmid-mediated conjugative transfer of ARGs. Although nanomaterials can remove heavy metals by adsorption, Cd2+ and high concentrations of Fe2O3 nanoparticles significantly increase the frequency of the conjugative transfer of RP4 plasmids [87]. High concentrations of metals in soil affect the composition and function of soil bacterial communities. Klümper et al. [88] demonstrated for the first time that metal stress can modulate the permissiveness of different soil bacteria towards IncP plasmids. Soil minerals also affect the rate of the conjugative transfer of plasmids carrying ARGs, and this effect varies among different types of soil minerals [89]. Herbicides can change the susceptibility of certain strains to antibiotics and can also accelerate the HGT of ARGs in soil bacteria [90]. It has been shown that herbicide use has a weak effect on the abundance and composition of soil microbial communities but can increase the abundance of the corresponding ARGs and MGEs as well as the conjugation frequency of plasmids [91].
Phage-Mediated Transfer of Antibiotic Resistance Genes
Phages can transfer genes by specialized or generalized transduction. Specialized transduction involves the transfer of only a few specific genes, whereas generalized transduction can move any segment of the bacterial genome. Another mechanism that is similar to transduction but different in nature is lysogenic conversion. When a temperate phage infects a host bacterium, the phage DNA integrates into the host chromosome, causing the host to become lysogenic and leading it to acquire certain characteristic traits. Certain phenotypes of the host can also be altered by lysogenic conversion, leading to the acquisition or loss of a trait. Among the several mechanisms of DNA transfer, phage-mediated lysogenic conversion is more dominant and efficient [92]. Once phage-transferred ARGs reach the recipient bacteria by either mechanism, the survival of the ARGs depends on the ability of the sequence to integrate into the bacterial genome. If ARGs are transferred by specialized transduction, an intact phage genome including the integrase gene will increase the chances of successful integration. If the gene is transferred by generalized transduction, then the successful transfer of ARGs requires the recombination of the exogenous gene into the host chromosome. Thus, the genes encoding recombinase and integrase determine the efficiency of the acquisition of ARGs by the recipient bacterium [93]. The presence of phages in aqueous environments and their potential for the HGT of ARGs have been widely demonstrated [94] but have been less studied in soil environments. Blance et al. [95] isolated phage particles carrying five ARGs (blaTEM, blaCTX-M-1, blaCTX-M-9, sul1 and tetW) from seawater. Another study found that fluoroquinolone exposure of multidrug-resistant Salmonella induced phage-mediated gene transfer [96]. Moreover, several studies have found phages carrying ARGs in the faeces of poultry, cattle, pigs and even humans [97,98]. In manure-amended agricultural soils, this undoubtedly gives rise to a significant risk of phage-mediated transfer of ARGs.
One Health Approach to Antibiotic Resistance
The United Nations has set the goal of "Good Health and Well-being" to ensure healthy lives and to promote well-being for all at all ages [99]. However, the use of antibiotics in humans, livestock farming, and agricultural lands has led to significant environmental stress, which in turn has contributed to the prevalence of antibiotic resistance. As a large agricultural country, China undoubtedly has a great risk of antibiotic contamination in the soil environment and in the spread of ARGs. The application of animal manure with high levels of residual antibiotics, ARB and ARGs increases the risk of introducing ARGs into agricultural soils [100,101]. In manure-amended soils, increased antibiotic concentrations and the associated abundance of resistance genes are accompanied by enhanced correlations between class I integrons and ARGs [102].
In recent years, phytochemicals such as alkaloids and phenolic compounds have been shown to be alternatives to traditional antibiotics for the treatment of infections caused by antibiotic-resistant bacterial pathogens. These phytochemicals act on membrane proteins, biofilms, efflux pumps and other structures closely related to gene transfer at the level of ARGs, thus inhibiting the growth of resistant bacterial pathogens [103]. Functional antimicrobial peptides (AMPs) are an important class of effector molecules in the innate host immune defense against pathogen invasion. AMPs extracted from insects (cecropin A and melittin) do not induce stress pathways in bacteria, and Hermetia illucens AMPs have been demonstrated to have the potential to replace antibiotics in animal husbandry [104].
Outlook
In bacteria, the HGT of ARGs is mainly carried out through MGEs such as phages and plasmids. Phage-mediated HGT occurs mainly within species, because phage transmission is limited by the genetic similarity of hosts, whereas plasmids can cross interspecies barriers, so plasmid-mediated HGT has a larger range and a higher frequency [105]. Invasive bacteria can carry plasmids into plant and animal cells, plasmids can integrate into the genome for stable expression in daughter cells, and some chromosomally integrated plasmids can even be vertically transferred with the bacteria carrying them. The conjugation-related transfer mechanism has now been demonstrated in model plasmids, but the presence and nature of potential signals activating mating-pair formation have yet to be addressed. Plasmids can mediate the HGT not only of antibiotic resistance but also of virulence genes and other adaptive traits. Although whole-genome sequencing data have been used to investigate how HGT promotes the transmission, persistence, and maintenance of virulence in pathogenic bacteria, the scope of such studies is relatively narrow [106,107]. For mobile ARGs, most studies have focused only on specific classes of ARGs, such as sulfonamide and tetracycline resistance genes, and there is a lack of systematic analyses of the general migration and transformation mechanisms of ARGs. The contribution of soil plasmids to the spread of resistance genes needs to be further investigated as drug-resistant bacteria spread globally and the understanding of phages improves.
Bacteria are involved in HGT as vectors for the spread of ARGs in different environments (sewage sludge, manure, agricultural soil, etc.), posing a great threat to the natural environment and to human life. Plasmid-bacteria interactions are extremely complex, and multidrug-resistant bacteria are now commonly observed, so the effective prevention of the transmission of resistance genes through the plasmid-bacteria pathway needs to be further explored. There are many studies on the transmission mechanisms of ARGs in aquatic environments, including the linkage of ARGs between pristine polar glaciers and urban rivers or coastal seas. The transport and transmission of ARGs between soils and plants has also been reported, but the transport pathways of ARGs between aqueous and soil environments, or even atmospheric environments, have been less well studied. Colistin is the last-resort drug for the treatment of Gram-negative infections, and further studies on plasmid-mediated colistin resistance genes should be performed. When crops are grown on antibiotic-contaminated soils with a high abundance of resistance genes, resistance determinants can eventually move through the food chain to the next level of consumers, forming a chain of resistance-gene transmission. Tracking a specific class of ARGs to characterize this entire cycle is a worthy direction for future research.
Conflicts of Interest:
The authors declare no conflict of interest.
A New Decomposition of the Graph Laplacian and the Binomial Structure of Mass-Action Systems
We provide a new decomposition of the Laplacian matrix (for labeled directed graphs with strongly connected components), involving an invertible core matrix, the vector of tree constants, and the incidence matrix of an auxiliary graph, representing an order on the vertices. Depending on the particular order, the core matrix has additional properties. Our results are graph-theoretic/algebraic in nature. As a first application, we further clarify the binomial structure of (weakly reversible) mass-action systems, arising from chemical reaction networks. Second, we extend a classical result by Horn and Jackson on the asymptotic stability of special steady states (complex-balanced equilibria). Here, the new decomposition of the graph Laplacian allows us to consider regions in the positive orthant with given monomial evaluation orders (and corresponding polyhedral cones in logarithmic coordinates). As it turns out, all dynamical systems are asymptotically stable that can be embedded in certain binomial differential inclusions. In particular, this holds for complex-balanced mass-action systems, and hence, we also obtain a polyhedral-geometry proof of the classical result.
Introduction
The Laplacian matrix (or graph Laplacian) is a matrix representation of a graph. It can be seen as a discrete version of the Laplace operator defined on graphs. On the one hand, the Laplacian matrix of an undirected graph, its spectrum, and its eigendecomposition have a variety of applications ranging from organic chemistry to signal processing and machine learning [24,22,31,3]. On the other hand, labeled, directed graphs underlie dynamical systems ranging from continuous-time Markov processes (linear stochastic models) [23] to mass-action systems (non-linear deterministic models of chemical reaction networks) [20].
In the linear setting, the vertices V of a simple digraph G = (V, E) represent states, and the edges E represent transitions. Moreover, edge labels k represent transition rate constants. The dynamical system for a state variable ψ is given by

  dψ/dt = A_k ψ,   (1)

where A_k is the Laplacian matrix of the labeled digraph G_k = (V, E, k). That is, (A_k)_{i,j} = k_{j→i} if there is a transition (j → i) ∈ E, (A_k)_{i,i} = −∑_{(i→j)∈E} k_{i→j}, and (A_k)_{i,j} = 0 otherwise. (As in chemical reaction network theory, we use the letter A for the graph Laplacian and indicate its dependence on the edge labels k by a subscript.) The linear system can be called "Laplacian dynamics"; it is equivalent to the stochastic master equation, and it is studied in applications ranging from biochemistry to systems biology [17,23].
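As a minimal numerical illustration of Eqn. (1), the following Python sketch builds the Laplacian matrix A_k of a small labeled digraph and integrates the resulting linear system; the 3-cycle and its rate constants are arbitrary choices, not taken from the literature cited above.

```python
# Minimal numerical sketch of "Laplacian dynamics" dpsi/dt = A_k psi for a
# labeled digraph. The 3-cycle and its rate constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

edges = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 3.0}  # k_{i->j} on edges i -> j
n = 3

A = np.zeros((n, n))
for (i, j), k in edges.items():
    A[j, i] += k   # inflow to j from i: (A_k)_{j,i} = k_{i->j}
    A[i, i] -= k   # outflow from i:     (A_k)_{i,i} = -sum of k_{i->j}

assert np.allclose(A.sum(axis=0), 0)  # column sums vanish: total mass conserved

psi0 = np.array([1.0, 0.0, 0.0])
sol = solve_ivp(lambda t, psi: A @ psi, (0.0, 10.0), psi0, rtol=1e-9)
print("psi(10) ~", sol.y[:, -1])      # converges to the stationary distribution
```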
In the nonlinear setting, the dynamical system for the species concentrations x is given by

dx/dt = Y A_k x^Y.    (2)

All notation is defined at the end of this introduction, and mass-action systems are introduced in Section 3. Here, we motivate Eqn. (2) in an informal way. As an example, we consider the chemical reaction 1X_1 + 1X_2 → X_3 with "stoichiometric" coefficients equal to 1. Under the assumption of mass-action kinetics, its rate is given by v = k (x_1)^1 (x_2)^1, where k > 0 is the rate constant, and x_1, x_2 ≥ 0 are the concentrations of the species X_1, X_2. More abstractly, we can write the reaction as y → y′ with (educt and product) "complexes" y = (1, 1, 0, 0, …)^T and y′ = (0, 0, 1, 0, …)^T, and we can write its rate as v = k x^y with the monomial x^y := ∏_j (x_j)^{y_j} = (x_1)^1 (x_2)^1 (x_3)^0 (x_4)^0 ⋯ in the species concentrations x = (x_1, x_2, x_3, x_4, …)^T. In a network, an individual reaction y → y′ contributes the summand k x^y (y′ − y) to the dynamical system for x, where the reaction vector y′ − y captures the consumption of educts y and the formation of products y′. For the example reaction, x^y = x_1 x_2 (as stated above) and y′ − y = (−1, −1, 1, 0, …)^T. Now, we can introduce a mass-action system as a simple digraph G = (V, E), a map y (assigning complexes to vertices), and edge labels k. In particular, every edge (i → i′) ∈ E defines a reaction y(i) → y(i′) with rate constant k_{i→i′}. Hence, the associated dynamical system dx/dt = ∑_{(i→i′)∈E} k_{i→i′} x^{y(i)} (y(i′) − y(i)) involves a sum over all edges, and every summand is a product of a reaction rate and a reaction vector. Using the Laplacian matrix A_k, the right-hand side can be decomposed as shown in Eqn. (2). The matrix Y collects the complexes y(i) for i ∈ V, and the vector of monomials x^Y is defined via (x^Y)_i = x^{y(i)}. Altogether, the dynamical system is polynomial. It is determined by the complex matrix Y (by stoichiometry) as well as by the Laplacian matrix A_k (by the graph), and chemical reaction network theory studies the interplay of these two matrices to understand dynamics and steady states of mass-action systems, starting from the foundational 1972 papers [20,18,12] until today.
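The decomposition in Eqn. (2) is easy to evaluate numerically. The following sketch does so for the single reaction X_1 + X_2 → X_3 discussed above; the rate constant and the concentrations are hypothetical values chosen for illustration, not data from the paper.

```python
import numpy as np

Y = np.array([[1, 0],       # complex matrix: column 0 is y = (1,1,0)^T, column 1 is y' = (0,0,1)^T
              [1, 0],
              [0, 1]])
k = 2.0                     # rate constant of the single reaction y -> y'
A_k = np.array([[-k, 0.0],  # Laplacian of the two-vertex graph 1 -> 2
                [ k, 0.0]])

def f(x):
    x_Y = np.prod(x[:, None] ** Y, axis=0)    # vector of monomials, (x^Y)_i = x^{y(i)}
    return Y @ (A_k @ x_Y)                    # right-hand side Y A_k x^Y

x = np.array([1.0, 2.0, 0.5])
print(f(x))                                   # equals k*x1*x2*(y' - y) = [-4, -4, 4]
```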
A steady state x > 0 with A_k x^Y = 0 is called a positive complex-balanced equilibrium (CBE), also known as vertex-balanced steady state. Indeed, at a CBE, the sum of all "flows" k_{i→i′} x^{y(i)} from vertex i / complex y(i) equals the sum of all k_{i′→i} x^{y(i′)} to the latter. As shown by Horn [18] and Horn & Jackson [20] in 1972, the existence of a CBE has three important consequences: the components of the graph are strongly connected (the network is "weakly reversible"); all equilibria are complex-balanced and asymptotically stable; and there is a unique equilibrium in every dynamically invariant affine subspace ("stoichiometric compatibility class"). More technically, complex-balanced equilibria are given by binomial equations and have a monomial parametrization.
For symmetric digraphs ("reversible" networks), detailed-balanced equilibria are given by binomial equations (by definition). Moreover, the polynomial dynamical system is a sum of binomials. (Just note that every reversible reaction y ⇄ y′ contributes the summand (k_{y→y′} x^y − k_{y′→y} x^{y′}) (y′ − y) to the dynamical system for x.) We show that this also holds for weakly reversible networks. To this end, we provide a new decomposition of the graph Laplacian, involving an invertible core matrix, based on an order on the vertices. Further, we extend the classical result by Horn and Jackson on the asymptotic stability of complex-balanced equilibria. In addition to a Lyapunov function (as in classical proofs), we consider regions in the positive orthant with given monomial evaluation orders (and corresponding polyhedral cones in logarithmic coordinates). As it turns out, all dynamical systems are asymptotically stable that can be embedded in certain binomial differential inclusions. In particular, this holds for complex-balanced mass-action systems, and hence we also obtain a polyhedral-geometry proof of the classical result.
Organization of the work. In Section 2, we provide a new decomposition of the graph Laplacian (for labeled directed graphs with strongly connected components), involving an invertible core matrix, based on an order on the vertices. Depending on the particular order, the core matrix has additional properties.
In Section 3, we apply the graph-theoretic/algebraic results to mass-action systems. In Subsection 3.1, we demonstrate their binomial structure, and in 3.2, we introduce monomial evaluation orders and corresponding geometric objects (polyhedra and polyhedral cones). In Subsection 3.3, we embed complex-balanced mass-action systems in binomial differential inclusions and show that all equilibria of the latter are asymptotically stable, and in 3.4, we discuss our results.
In Appendix A, we provide explicit formulas for the vector of tree constants and the Laplacian matrix, using cycle decomposition. In Appendix B, we state auxiliary results used in the new decomposition of the graph Laplacian. In Appendix C, we give another proof of the asymptotic stability of complex-balanced equilibria (and the non-existence of other steady states) without using differential inclusions.
Notation. We denote the positive real numbers by R_> and the nonnegative real numbers by R_≥. Throughout the work, we use index notation: for a finite index set I, we write R^I for the real vector space of vectors x = (x_i)_{i∈I} with x_i ∈ R, and analogously we write R^I_≥ and R^I_>. (For I = {1, …, n}, we have the standard case R^I = R^n.) We write x > 0 for x ∈ R^I_> and x ≥ 0 for x ∈ R^I_≥.
For vectors x, y ∈ R^I, we denote their scalar product by x · y ∈ R and their componentwise (Hadamard) product by x ∘ y ∈ R^I. For x ∈ R^I_>, y ∈ R^I, we define the (generalized) monomial x^y = ∏_{i∈I} (x_i)^{y_i} ∈ R_>, and for x ∈ R^I_>, Y ∈ R^{I×J}, we define the vector of monomials x^Y ∈ R^J_> via (x^Y)_j = x^{y(j)}, where y(j) is the column of Y with index j ∈ J.
The graph Laplacian
In the following, we assume that the components of a digraph are strongly connected. For the simplicity of the presentation, we first consider one strongly connected component separately.
One component
We consider a strongly connected, simple, directed graph G = (V, E) with a finite set of vertices V = {1, …, m} and a set of edges E ⊆ V × V. Further, we consider positive edge labels k ∈ R^E_> and the resulting labeled digraph G_k = (V, E, k) with Laplacian matrix A_k = I_E diag(k) I_{E,s}^T, where I_E ∈ R^{V×E} is the incidence matrix and I_{E,s} ∈ R^{V×E} is the "source matrix". Explicitly, (I_E)_{v,e} = −1 if e = (v → v′), (I_E)_{v,e} = 1 if e = (v′ → v), and (I_E)_{v,e} = 0 otherwise; further, (I_{E,s})_{v,e} = 1 if e = (v → v′), and (I_{E,s})_{v,e} = 0 otherwise. This definition is used in dynamical systems. For example, k is the vector of transition rate constants in the continuous-time, linear process dψ/dt = A_k ψ (with ψ ∈ R^V_≥ and ∑_{i∈V} ψ_i = 1). In other fields, the Laplacian matrix is defined differently; throughout, 1 ∈ R^V denotes the vector with all entries equal to one. Further, ker A_k = im K_k with a positive vector K_k ∈ R^V_> (depending on the rate constants). The entries of K_k (the tree constants) can be given explicitly in terms of k,

(K_k)_i = ∑_{T ∈ T_i} ∏_{(j→j′)∈T} k_{j→j′},    (3)

where T_i is the set of directed spanning trees of G rooted at vertex i ∈ V (and directed towards the root). For a minimal proof of Eqn. (3), see [21, Lemma 1] or Appendix A. We note that the explicit formula is not crucial for our analysis. Finally, the tree constants K_k correspond to minors of the matrix −A_k, which is the content of the matrix-tree theorem (for labeled, directed graphs) [34, Theorem 3.6].
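The correspondence between tree constants and minors of −A_k can be checked numerically. The sketch below (a hypothetical three-vertex example, not from the paper) computes each tree constant as the determinant of −A_k with row and column i removed and verifies that K_k spans ker A_k.

```python
import numpy as np

E = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 0.5, (1, 0): 3.0}   # a strongly connected digraph
m = 3
A_k = np.zeros((m, m))
for (i, j), k in E.items():
    A_k[j, i] += k
    A_k[i, i] -= k

def tree_constant(i):
    idx = [j for j in range(m) if j != i]
    return np.linalg.det(-A_k[np.ix_(idx, idx)])           # principal minor of -A_k

K = np.array([tree_constant(i) for i in range(m)])
print(K)              # [2. 1. 2.]: sums over spanning trees rooted at 0, 1, 2
print(A_k @ K)        # numerically zero, since ker A_k = im K_k
```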
Clearly, the matrix −A_k diag(K_k) has positive diagonal entries and nonpositive off-diagonal entries. Most importantly, it has zero row and column sums: indeed, 1^T A_k diag(K_k) = 0 (the column sums of A_k vanish), and also A_k diag(K_k) 1 = A_k K_k = 0. As a consequence, the matrix is diagonally dominant.
The entries of A_k diag(K_k) can be given explicitly in terms of k. For a derivation of this formula and a discussion of the Birkhoff/von Neumann Theorem [5,35], see Appendix A. Again, we note that the explicit formula is not crucial for our analysis.
Example. Throughout this section, we consider a fixed labeled directed graph G_k = (V, E, k) on three vertices.
Most importantly, we introduce an auxiliary connected directed graph G_E = (V, E) with the same set of vertices V as in G = (V, E), but with an arbitrary set of edges such that G_E is connected and has |V| − 1 edges. In particular, it has no cycles. Further, G_E need not be a subgraph of G nor be directed towards a root. The corresponding incidence matrix I_E ∈ R^{V×E} is defined exactly as for the original graph; note that the definitions of the incidence matrices of G and of G_E agree formally.
(Just the sets of edges differ.) Clearly, ker I_E = {0} and ker I_E^T = im 1 for the auxiliary incidence matrix.
Proposition 1. Let G_k = (V, E, k) be a strongly connected, labeled, simple digraph and G_E = (V, E) be an auxiliary digraph. Then, there exists a unique invertible matrix A_{k,E} ∈ R^{E×E}, called the core matrix of the graph Laplacian, such that A_k diag(K_k) = −I_E A_{k,E} I_E^T, where I_E denotes the incidence matrix of the auxiliary digraph. Proof. Since G is strongly connected, A_k diag(K_k) = B_{k,E} I_E^T for a unique matrix B_{k,E} ∈ R^{V×E}, where uniqueness follows from ker I_E = {0}.
For the same reason, we have B_{k,E} = −I_E A_{k,E} for a unique matrix A_{k,E} ∈ R^{E×E} (cf. Lemmas 12 and 13 in Appendix B). Finally, we obtain the stated decomposition. In the following two results, we assume the auxiliary digraph G_E to be either a chain graph or a star graph. Proposition 2. Let G_k = (V, E, k) be a strongly connected, labeled, simple digraph, and let G_E = (V, E) be an auxiliary digraph that is a chain graph. Then A_{k,E} ∈ R^{E×E}, the core matrix of the graph Laplacian, is non-negative with positive diagonal.
Proof.Let G E = (V, E) be the chain graph It induces a natural order on the set of vertices V (and on the set of edges E).
For i, j ∈ V , we write i ≤ j if i = j or i → . . .→ j.An "inverse" of the incidence matrix Explicitly, using the order i 1 , i 2 , . . ., i m on V , , and indeed, J E I E = −I, where I ∈ R E×E is the identity matrix.That is, −J E is a generalized left-inverse of I E .Hence, by Proposition 1,
For an arbitrary matrix
Explicitly, (σ) is the sum of all entries in the upper left i × j block of A. Now, recall that the matrix A = −A k diag(K k ) has positive diagonal entries and nonpositive off-diagonal as well as zero row and column sums.Hence, the sum (σ) is nonnegative.Finally, recall that the underlying graph G is strongly connected.If i → i ′ equals j → j ′ , then the sum (σ) is positive, since the corresponding subgraph with vertices {i 1 , i 2 , . . ., i} has incoming and outgoing edges.
Example (continued).In the labeled digraph G k = (V, E, k) introduced above, there are 3 vertices and hence 6 possible chain graphs.For example, for ) be a strongly connected, labeled, simple digraph, and let G E = (V, E) be an auxiliary digraph that is a star graph.Then A k,E ∈ R E×E , the core matrix of the graph Laplacian, is (row and column) diagonally dominant with positive diagonal and non-positive off-diagonal entries.
Explicitly, let Proof.Let G E = (V, E) be the star graph An "inverse" of the incidence matrix Explicitly, using the order i 1 , i 2 , . . ., i m on V , , and indeed, That is, (σ ⋆ ) is the sum of all entries of A except the entries in row i and column j.Now, recall that the matrix A = −A k diag(K k ) has zero row and column sums.Hence, (σ ⋆ ) equals the sum of all entries (which is zero) minus the sums of all entries in row i and column j (which are zero) plus the common entry of row i and column j.That is, As claimed, A k,E equals A =−A k diag(K k ) with row i m and column i m removed.Like A, it has positive diagonal entries and nonpositive off-diagonal entries and is (row and column) diagonally dominant.(However, not all row and column sums are zero.) Example (continued).In the labeled digraph G k = (V, E, k) introduced above, there are 3 vertices and hence 3 possible star graphs.For example, for Remark.In applications to mass-action systems in Section 3, we use chain graphs (rather than star graphs).
Several components
In general, we consider a labeled, simple digraph whose components are strongly connected. Accordingly, an auxiliary digraph G_E = (V, E) is chosen component-wise, e.g., as a chain graph or a star graph on each component. Propositions 1, 2, and 3 imply the main result of this section. Theorem 4. Let G_k = (V, E, k) be a labeled, simple digraph with strongly connected components, and let G_E = (V, E) be an auxiliary digraph. Then, there exists an invertible, block-diagonal matrix A_{k,E} ∈ R^{E×E}, called the core matrix of the graph Laplacian, such that A_k diag(K_k) = −I_E A_{k,E} I_E^T. If G_E is a chain graph on every component, then A_{k,E} is non-negative with positive diagonal; if G_E is a star graph on every component, then A_{k,E} is diagonally dominant with positive diagonal and non-positive off-diagonal entries.
Mass-action systems
We apply the graph-theoretic/algebraic results from the previous section to mass-action systems. We start with a brief summary of fundamental concepts and results.
A chemical reaction network (G, y) is given by a simple directed graph G = (V, E) with a finite set of vertices V = {1, …, m} and a set of edges (reactions) E ⊆ V × V together with an injective map y : V → R^n_≥ (assigning complexes to vertices). If the components of G (the linkage classes) are strongly connected, then the network is called weakly reversible.
A mass-action system (G_k, y) is a chemical reaction network (G, y) where every edge (i → i′) ∈ E is labeled with a rate constant k_{i→i′} > 0. (If the network is weakly reversible, then also the mass-action system is called weakly reversible.) The resulting dynamical system for x ∈ R^n_≥ (the concentrations of n molecular species) is given by dx/dt = ∑_{(i→i′)∈E} k_{i→i′} x^{y(i)} (y(i′) − y(i)). The right-hand side of the ODE can be decomposed as Y I_E diag(k) I_{E,s}^T x^Y, where I_E ∈ R^{V×E} is the incidence matrix, I_{E,s} ∈ R^{V×E} is the "source matrix", and A_k = I_E diag(k) I_{E,s}^T is the resulting Laplacian matrix of the labeled, simple digraph G_k. In the following, we consider the dynamical system in the form

dx/dt = f_k(x) = Y A_k x^Y.    (4)

The stoichiometric subspace is given by S = im(Y I_E). Clearly, dx/dt = f_k(x) ∈ S, and hence x(t) ∈ x(0) + S.
If a positive steady state x ∈ R^n_> fulfills A_k x^Y = 0, then it is a positive complex-balanced equilibrium (CBE), also known as vertex-balanced steady state.
Remark. In the linear setting, the Laplacian matrix captures state transitions on a graph. Let ψ = x^Y be the state variable, given by the vector of monomials.
If A_k ψ = 0, then transitions are balanced (at every vertex of the graph). As shown by Horn [18] and Horn & Jackson [20], if there exists a positive CBE (in some stoichiometric class), then 1. the mass-action system is weakly reversible [18, Theorem 3C], 2. the equilibrium is asymptotically stable, and all equilibria are complex-balanced [20, Theorem 6A], and 3. there exists a unique positive (necessarily complex-balanced) equilibrium in every stoichiometric class [20, Lemma 4B].
In the following remarks, we elaborate on results 1, 2, and 3.
Remark (result 1). Let G be weakly reversible and G_E = (V, E) be some auxiliary digraph. By Theorem 4, given a particular positive CBE x* ∈ R^n_>, Eqn. (6) is equivalent to (y(i′) − y(i))^T ln(x/x*) = 0 for (i → i′) ∈ E; hence, the set of all positive CBEs is given by the monomial parametrization x = x* ∘ e^{S⊥}. Remark (result 2). In Section 3.3, we extend the classical stability result. As it turns out, it holds not only for complex-balanced equilibria of mass-action systems, but for all equilibria of binomial differential inclusions.
In Appendix C, we give another proof for the asymptotic stability of complex-balanced equilibria (and the non-existence of other steady states) without using differential inclusions.
Binomial structure
Given that the network is weakly reversible (the components of the graph G are strongly connected), our main graph-theoretic/algebraic result, Theorem 4, implies that the dynamical system (4) for the mass-action system (G_k, y) can be decomposed via the core matrix A_{k,E}, where G_E = (V, E) is some auxiliary digraph.
Again, we have a closer look at the individual terms of this decomposition. That is, the right-hand side of the dynamical system is a sum of binomials. This is obvious for symmetric digraphs (reversible networks); cf. [9, Eqn. (14)].
By Theorem 4, it also holds for digraphs with strongly connected components (weakly reversible networks).
In particular, for a complex-balanced equilibrium, not just the right-hand side of (7) is zero, but every individual binomial is zero. In this sense, the ODE (7) does not only have binomial steady states (positive complex-balanced equilibria, given by binomial equations), but truly is a binomial dynamical system.
Monomial evaluation orders and corresponding polyhedra/polyhedral cones
Let (G k , y) be a mass-action system based on the labeled, simple digraph G k = (V, E, k) and the map y (the matrix Y ).
For fixed x ∈ R n > , the values of the monomials x y(i) with i ∈ V are ordered (using the order on R).For simplicity, we first consider a connected graph G = (V, E).Obviously, the total order x y(i1) ≤ x y(i2) ≤ . . .≤ x y(im) can be represented by a chain graph, If the order is non-strict (if some monomials have the same value), then the representation is not unique.Analogously, the partial order x y(i1) ≤ x y(im) , x y(i2) ≤ x y(im) , . . ., x y(im−1) ≤ x y(im) can be represented by a star graph, In general, every auxiliary graph G E = (V, E) represents a partial order on the vertices of G and hence on the values of the monomials.
In the following, we will consider monomials with coefficients: , for weakly reversible networks with tree constants K k ∈ R V > , and
Weak reversibility
Let (G_k, y) be a weakly reversible mass-action system, and fix x ∈ R^n_>. We call an order on the entries of x^Y K_k ∈ R^V_> that is total within connected components, but does not relate entries in different components, a monomial evaluation order (since the notion monomial order(ing) has a different meaning in algebra). We represent the order by a chain graph G_E = (V, E) and often just by the set of edges E. Explicitly, (i → i′) ∈ E indicates that the entry at vertex i is less than or equal to the entry at vertex i′. Thereby, the vertices i, i′ ∈ V are necessarily in the same component. If the order is non-strict, then E is not unique. Analogously, the maximal entries of x^Y K_k ∈ R^V_> within connected components are greater or equal than all other entries in the respective components. We represent this order by a star graph G_E = (V, E). If there is more than one maximal entry within a component, then E is not unique.
Conversely, fix an auxiliary graph G_E = (V, E), for example, a chain graph or a star graph. The subset S_{k,E} of R^n_> with monomial evaluation order represented by E is given by Eqn. (8); by the monotonicity of the logarithm, it corresponds, in logarithmic coordinates, to the polyhedron P_{k,E} of Eqn. (9).
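To make the notion concrete, the following sketch reads off a monomial evaluation order (and hence a representing chain graph) for a given point x > 0 by sorting the monomial values attached to the vertices. The complexes are hypothetical, and the tree constants are set to 1 so that the precise scaling by K_k is immaterial here.

```python
import numpy as np

Y = np.array([[2, 1, 0],       # three complexes on one connected component
              [0, 1, 2]])
K = np.ones(3)                 # tree constants, set to 1 for illustration

def chain_graph(x):
    vals = np.prod(x[:, None] ** Y, axis=0) * K     # monomial value attached to each vertex
    order = np.argsort(vals)                         # vertices sorted by monomial value
    return [(int(order[r]), int(order[r + 1])) for r in range(len(order) - 1)]

x = np.array([0.5, 2.0])
print(chain_graph(x))          # e.g. [(0, 1), (1, 2)]: a chain graph representing the order
```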
Complex balancing
If there exists a positive CBE x * ∈ R n > , then the polyhedra become polyhedral cones.
Fix an auxiliary graph G E = (V, E).Using complex balancing (6) for x * , the subset (8) can be written as By the monotonicity of the logarithm, which does not depend on k. (Of course, x * depends on k.)The lineality space of C E does not even depend on E, Obviously, S ⊥ = lineal C E ⊆ C E .For fixed E, there are two possibilities: • C E = S ⊥ .Then, all defining (non-strict) inequalities of C E (and S k,E ) are fulfilled with equality, and S k,E = x * • e S ⊥ equals the set of complexbalanced equilibria.
• C E ⊃ S ⊥ .Then C E and S k,E are full-dimensional, and the monomial evaluation order is strict in the interior of S k,E and non-strict on the boundary (where some monomials have the same value).
In the following study of complex-balanced mass-action systems (and their extension to binomial differential inclusions), we use chain graphs G E , representing monomial evaluation orders.In this setting, a full-dimensional subset S k,E is called a stratum, cf.[32].This term has also been used for partial orders related to the original graph, rather than to an auxiliary graph, cf.[9].
Remark.As stated above, for every x ∈ R n > , there is a (non-unique) E such that x ∈ S k,E .In particular, R n > is a union of strata which intersect only on their boundaries.Correspondingly, R n is a union of polyhedral cones C E .Indeed, by the monotonicity of the logarithm, an order on the entries of ( (within components) is equivalent to an order on the entries of Y T z ∈ R V with z = ln x x * , and the set of pairs of vertices within components, induces an arrangement of central hyperplanes, The central hyperplane arrangement decomposes R n into open polyhedral cones called faces; full dimensional faces are called cells.In our terminology, a cell is the interior of a polyhedral cone C E and hence corresponds to the interior of a stratum S k,E .
Figure: The positive orthant is a union of strata corresponding to monomial evaluation orders. In particular, consider the stratum given by the order x^{y(1)} ≤ x^{y(2)} ≤ x^{y(3)}, that is, S_{k,E} with E = {1 → 2, 2 → 3}, bounded by the green and blue lines. The green line specifies x^{y(1)} = x^{y(2)}; above it, x^{y(2)} > x^{y(1)}, as indicated by the corresponding vertices 2 and 1. The blue line specifies x^{y(2)} = x^{y(3)}; below it, x^{y(3)} > x^{y(2)}. (The dashed black line specifies x^{y(1)} = x^{y(3)}, which does not bound the particular stratum.) In the interior of S_{k,E}, the order is strict. In logarithmic coordinates z = ln(x/x*), the stratum corresponds to the polyhedral cone C_E. In general, S_{k,E} = x* ∘ e^{S⊥} (equals the set of complex-balanced equilibria) if and only if C_E = S^⊥. In the example, S = R² and S^⊥ = {0}.
Binomial differential inclusions
Finally, we extend a classical result by Horn and Jackson from 1972.
Theorem 5 (cf. [20], Theorem 6A). Let (G_k, y) be a mass-action system and x* ∈ R^n_> be a positive CBE of the dynamical system (4). Then, (ln(x/x*))^T f_k(x) < 0 for all x ∈ R^n_> that are not complex-balanced equilibria. Hence, (i) all positive equilibria are complex-balanced, and (ii) x* is asymptotically stable.
All proofs are based on the entropy-like Lyapunov function L : R^n_> → R, L(x) = ∑_i (x_i (ln(x_i/x*_i) − 1) + x*_i). Its gradient is ln(x/x*), and hence d/dt L(x(t)) = (ln(x/x*))^T f_k(x). If (ln(x/x*))^T f_k(x) ≤ 0 with "=" if and only if x = x*, then L(x) is a strict Lyapunov function, and x* is asymptotically stable.
Previous proofs further use inequalities for the exponential function or the logarithm and cycle decomposition of the graph, cf. [20,33,1,15]. For a new proof using monomial evaluation orders and corresponding geometric objects (strata and polyhedral cones), see Appendix C.
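The Lyapunov property is easy to observe numerically. The sketch below evaluates the entropy-like function in its classical Horn-Jackson form, L(x) = ∑_i x_i(ln(x_i/x*_i) − 1) + x*_i, along a crude Euler simulation of the reversible system X_1 ⇄ X_2 with both rate constants equal to 1 (a hypothetical example; there, x* = (1, 1) is a complex-balanced equilibrium).

```python
import numpy as np

Y = np.array([[1, 0],
              [0, 1]])
A_k = np.array([[-1.0,  1.0],
                [ 1.0, -1.0]])           # Laplacian of 1 <-> 2 with unit rate constants
x_star = np.array([1.0, 1.0])            # complex-balanced equilibrium: A_k x*^Y = 0

def f(x):
    return Y @ (A_k @ np.prod(x[:, None] ** Y, axis=0))

def L(x):
    return float(np.sum(x * (np.log(x / x_star) - 1.0) + x_star))

x, dt = np.array([1.8, 0.2]), 0.05
for _ in range(6):
    print(round(L(x), 5))                # L decreases along the trajectory
    x = x + dt * f(x)
```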
In the following, we extend the stability result and provide a maximally transparent, polyhedral-geometry proof.First, we relate the dynamics in a given stratum to the corresponding polyhedral cone.
Polar cone
In Proposition 6 below, we use the concept of the polar cone. Proposition 6. Let (G_k, y) be a complex-balanced mass-action system, G_E be a chain graph, and S_{k,E} ⊂ R^n_> be a stratum. Then, for all x ∈ S_{k,E} that are not positive complex-balanced equilibria, f_k(x) lies in the interior of the polar cone of C_E. Proof. Let x ∈ S_{k,E} and u ∈ C_E. Using the dynamical system (4) and Theorem 4, we have u^T f_k(x) = −a^T A_{k,E} b for suitable vectors a, b ∈ R^E. Using S_{k,E} and C_E as in Eqns. (8) and (10), we have b ≥ 0 and a ≥ 0.
By Theorem 4, the core matrix of the graph Laplacian, A_{k,E} ∈ R^{E×E}, is non-negative with positive diagonal. Hence, u^T f_k(x) = −a^T A_{k,E} b ≤ 0, and the claim follows. Now, let (G_k, y) be a mass-action system and x* ∈ R^n_> be a positive CBE of the dynamical system (4). Proposition 6 suggests to introduce a corresponding piece-wise constant binomial differential inclusion (12), thereby explicitly specifying the set of positive equilibria x* ∘ e^{S⊥}. Proposition 6 immediately implies the following result.
Theorem 7. Let (G k , y) be a mass-action system and x * ∈ R n > be a positive CBE of the dynamical system (4).Then, the mass-action system can be embedded in the binomial differential inclusion (12).
Finally, we extend Theorem 5 (from complex-balanced mass-action systems to binomial differential inclusions). Theorem 8. Let x* ∈ R^n_> be a positive equilibrium of the binomial differential inclusion (12). Then, (ln(x/x*))^T f < 0 for all x ∈ R^n_> that are not positive equilibria and all f ∈ F(ln(x/x*)). Hence, x* is asymptotically stable.
Proof. Let S_{k,E} ⊂ R^n_> be a stratum and x ∈ S_{k,E} not be a positive equilibrium. On the one hand, ln(x/x*) lies in C_E, but not in the lineality space lineal C_E = S^⊥. On the other hand, every f ∈ F(ln(x/x*)) lies in the interior of the polar cone of C_E; hence (ln(x/x*))^T f < 0, and L(x) is a strict Lyapunov function.
Remark 9.Even if a weakly reversible mass-action system (G k , y) does not admit a complex-balanced equilibrium x * , it can be embedded in a piece-wise constant differential inclusion.Technically, the absence of a CBE x * does not allow to pass from the polyhedron P k,E (with given monomial evaluation order) to the cone C E , cf.Eqns.( 9) and (10).That is, instead of a central hyperplane arrangement (that defines the cones C E ), one considers a non-central hyperplane arrangement (that defines the polyhedra P k,E ).In analogy to Proposition 6, one can show that, for a chain graph G E and a stratum S k,E , it holds that f k (x) ∈ int(rec(P k,E ) pol ), for all x ∈ S k,E .Here, rec(C) denotes the recession cone of a set C.
Discussion
As Horn and Jackson in 1972 [20, Theorem 6A], we have shown that, in mass-action systems with a positive complex-balanced equilibrium, every positive equilibrium is complex-balanced and asymptotically stable. For a proof using the new decomposition of the graph Laplacian, monomial evaluation orders, and corresponding geometric objects (strata and polyhedral cones), see Appendix C.
In fact, we have extended the result to binomial differential inclusions (BDIs), introduced in this work. Every positive equilibrium of a BDI is asymptotically stable, see Theorem 8.
Binomial and toric differential inclusions
Given a reaction network (G, y) with graph G = (V, E) and "complex" map y : V → R n ≥ , a BDI depends on the components of the graph (but not on the exact edge set E) and on some positive equilibrium x * (but not explicitly on the rate constants).In fact, it is mainly determined by stoichiometry, namely by pairwise differences of complexes, defining a hyperplane arrangement.In particular, monomial evaluation orders correspond to polyhedral cones (in logarithmic coordinates) and strata (in the original positive variables).More formally, a BDI is given by a hyperplane arrangement (with lineality space S ⊥ ) and a positive equilibrium x * , see Equation (12).Most importantly, complex-balanced massaction systems can be embedded in BDIs.
Recently, toric differential inclusions (TDIs) have been used in a proposed proof [7,8] of the global attractor conjecture [19], stating that complex-balanced equilibria are not just asymptotically, but also globally stable. In fact, TDIs also allow to tackle the persistence and permanence conjectures for (weakly reversible) mass-action systems with (time-)variable rate constants. In the classical setting, rate constants k > 0 are fixed, whereas, in the study of the conjectures mentioned above, rate constants ε ≤ k(t) ≤ 1/ε may vary over time, but are bounded [1,11]. To address this complication, "uncertainty regions" with thickness δ(ε) around the boundaries of "regions with definite monomial order" are introduced. On the one hand, BDIs are special cases of TDIs with δ → 0 (modulo a translation of the hyperplane arrangement by log x*), and also the piece-wise constant differential inclusions mentioned in Remark 9 can be embedded in TDIs (with δ > 0). On the other hand, BDIs allow to consider (the asymptotic stability of) positive equilibria, whereas TDIs capture the dynamics close to the boundary of the positive orthant without being explicit about equilibria.
Generalized mass-action systems
In previous work, we have studied generalized mass-action systems [27,28,25,26,10,6].In order to motivate the setting, we consider the reaction 1X 1 + 1X 2 → X 3 with "stoichiometric" coefficients equal to 1.Under the assumption of generalized mass-action kinetics, its rate is given by v = k (x 1 ) a (x 2 ) b with arbitrary "kinetic orders" a, b > 0 (in particular, different from 1).Using the complexes y = (1, 1, 0, 0, . ..)T , y ′ = (0, 0, 1, 0, . ..)T , and the kinetic-order complex ỹ = (a, b, 0, 0, . ..)T , we can write the reaction as y → y ′ with rate v = k x ỹ .For a network, the resulting dynamical system, is determined by the matrices Y (by stoichiometry), Ỹ (by kinetics), and A k (by a graph).For generalized mass-action systems, asymptotic stability of complexbalanced equilibria and non-existence of other steady states are not guaranteed (as for classical mass-action systems, cf.Theorem 5).We have already provided necessary conditions for linear stability of complex-balanced equilibria [6].In parallel work [29], we use the new decomposition of the graph Laplacian and monomial evaluation orders to study sufficient conditions for linear stability of complex-balanced equilibria and non-existence of other steady states.
On the other hand, every subgraph S ∈ G i gives rise to a spanning tree T ∈ T i ′ and vice versa (by removing/adding the edge i ′ → i that is in the cycle).For i ∈ V , Hence, where the sum is over all cycles C contained in G, and A C is the Laplacian matrix of the cycle C with k = 1 ∈ R E > (all edge labels set to 1).
Proof.Both matrices, A k diag(K k ) and C λ k,C A C , have zero row and column sums.Hence, it is sufficient to compare the off-diagonal entries.
On the one hand, every spanning tree in T i gives rise to a subgraph in G C that contains the edge i → i ′ in the cycle and vice versa (by adding/removing the edge i On the other hand, and the two matrices, A k diag(K k ) and C λ k,C A C , agree.
Remark.In a time-discrete, linear process otherwise, the edge labels k ∈ R n > do not represent transition rates, but transition probabilities.Then, (i→i ′ )∈E k i→i ′ ≤ 1, and B k is simply the matrix of transition probabilities with "k i→i "= 1 − (i→i ′ )∈E k i→i ′ column sums equal to one.That is, B k = A k + I, the identity matrix.Obviously, ψ = B k ψ if and only if A k ψ = 0. Whereas A k diag(K k ) always has zero row and column sums, B k may (or may not) be doubly stochastic (have column and row sums equal to one).
The Birkhoff/von Neumann Theorem [5,35] states that every doubly stochastic (d.s.) matrix B ∈ R n×n ≥ is the convex sum of permutation matrices; however, this decomposition is not unique.In fact, there are n! permutation matrices.Still, the polytope of d.s.matrices lies in an (n− 1) 2 -dimensional affine subspace of R n×n ≥ , and hence every d.s.matrix can be written as the sum of at most (n − 1) 2 + 1 permutation matrices.
On the contrary, the matrix A k diag(K k ) is the unique sum of all Laplacian matrices of cycles.However, there are more than (n − 1) 2 + 1 cycles, in general.
B Auxiliary graph-theoretic results
Lemma 12 (cf.[13], Lemma 2).Let G k = (V, E, k) be a connected, labeled, simple digraph with one absorbing strong component, and A k and I E be the corresponding Laplacian and incidence matrices.Then, im(A k ) = im(I E ).Lemma 13 (cf.[28], Proposition 5).Let G = (V, E) be a connected, simple digraph, G E = (V, E) be an auxiliary digraph, and I E and I E be the corresponding incidence matrices.Then, im(I E ) = im(I E ).
Proof.From graph theory and the definition of an auxiliary graph, we know that dim im(I E ) = dim im(I E ) = |V | − 1.In the rest of the proof, we show that im(I E ) ⊆ im(I E ).We consider the edge (i → j) ∈ E and the corresponding column e j − e i of I E , where e i denotes the ith standard basis vector in R V .
Since G E is a directed tree, there is a path from i to j in the undirected version of G E , that is, i = i 1 − − i 2 − − . . .− − i l = j with either (i k → i k+1 ) ∈ E or (i k ← i k+1 ) ∈ E for k = 1, . . ., l − 1.Hence, where α k ∈ {−1, 1} and e i k+1 − e i k is the column of I E corresponding to either the edge (i k → i k+1 ) ∈ E or (i k ← i k+1 ) ∈ E.
C A proof of Theorem 5
We provide a proof of Theorem 5 in the main text, based on the entropy-like Lyapunov function. Previous proofs further use inequalities for the exponential function or the logarithm and cycle decomposition of the graph, cf. [20,33,2,15]. We use monomial evaluation orders and corresponding geometric objects (strata and polyhedral cones).
Theorem. Let (G_k, y) be a mass-action system and x* ∈ R^n_> be a positive CBE of the dynamical system (4). Then, (ln(x/x*))^T f_k(x) < 0 for all x ∈ R^n_> that are not complex-balanced equilibria. Hence, (i) all positive equilibria are complex-balanced, and (ii) x* is asymptotically stable.
Proof. Let x ∈ R^n_> not be a CBE. Then there is a full-dimensional subset (a stratum) S_{k,E} ⊂ R^n_> for some chain graph G_E = (V, E) such that x ∈ S_{k,E}, that is, ln(x/x*) ∈ C_E. Using the dynamical system (4) and Theorem 4, we have (ln(x/x*))^T f_k(x) = −a^T A_{k,E} b. Using S_{k,E} and C_E as in Eqns. (8) and (10), we have b ≥ 0 and a ≥ 0.
Since x is not a CBE, b ≠ 0, that is, there is (i → i′) ∈ E such that b_{i→i′} > 0. By complex balancing (6), also a_{i→i′} > 0. (i) If there is a positive equilibrium x ∈ R^n_> that is not complex-balanced, then f_k(x) = 0, contradicting (ln(x/x*))^T f_k(x) < 0.
Lemma 12 in Appendix B, and further im I E = im I E , cf.Lemma 13 in Appendix B. Altogether, we have im B k,E = im I E and hence B k,E = −I E A k,E for a unique matrix A k,E ∈ R E×E .(The minus sign ensures positive diagonal entries of A k,E for particular auxiliary graphs; see below.
Proof.
From graph theory, we know that dim im(I E ) = |V | − 1 and ker(A k ) = im ξ, where ξ ∈ R V ≥ has support on the absorbing strong component of G. Hence, also dim im(A k ) = |V | − 1.By definition, im(A k ) ⊆ im(I E ) and hence im(A k ) = im(I E ).
a_{i→i′} = (y(i′) − y(i))^T ln(x/x*) > 0. By Theorem 4, the core matrix of the graph Laplacian, A_{k,E} ∈ R^{E×E}, is non-negative with positive diagonal. Hence, (ln(x/x*))^T f_k(x) = −a^T A_{k,E} b < 0.
(ii) Recall that a positive CBE x* is the unique steady state in its stoichiometric compatibility class (forward invariant set). Hence, d/dt L(x(t)) = (ln(x/x*))^T f_k(x) ≤ 0 with "=" if and only if x = x*, and L(x) is a strict Lyapunov function.
and y ∈ int C pol if and only if y • x < 0 for all x ∈ C \ lineal C.
The Weight Function in the Subtree Kernel is Decisive
Tree data are ubiquitous because they model a large variety of situations, e.g., the architecture of plants, the secondary structure of RNA, or the hierarchy of XML files. Nevertheless, the analysis of these non-Euclidean data is difficult per se. In this paper, we focus on the subtree kernel that is a convolution kernel for tree data introduced by Vishwanathan and Smola in the early 2000’s. More precisely, we investigate the influence of the weight function from a theoretical perspective and in real data applications. We establish on a 2-classes stochastic model that the performance of the subtree kernel is improved when the weight of leaves vanishes, which motivates the definition of a new weight function, learned from the data and not fixed by the user as usually done. To this end, we define a unified framework for computing the subtree kernel from ordered or unordered trees, that is particularly suitable for tuning parameters. We show through eight real data classification problems the great efficiency of our approach, in particular for small data sets, which also states the high importance of the weight function. Finally, a visualization tool of the significant features is derived.
Analysis of tree data
Tree data naturally appear in a wide range of scientific fields, from RNA secondary structures in biology (Le et al., 1989) to XML files (Costa et al., 2004) in computer science through dendrimers (Martín-Delgado et al., 2002) in chemistry and physics. Consequently, the statistical analysis of tree data is of great interest. Nevertheless, investigating these data is difficult due to the intrinsic non-Euclidean nature of trees.
Several approaches have been considered in the literature to deal with this kind of data: edit distances between unordered or ordered trees (see Bille, 2005, and the references therein), coding processes for ordered trees (Shen et al., 2014), with a special focus on conditioned Galton-Watson trees (Bharath et al., 2016). One can also mention the approach developed in (Wang and Marron, 2007). In the present paper, we focus on kernel methods, a complementary family of techniques that are well-adapted to non-Euclidean data.
Kernel methods consists in mapping the original data into a (inner product) feature space. Choosing the proper feature space and finding out the mapping might be very difficult. Furthermore, the curse of dimensionality takes place and the feature space may be extremely big, therefore impossible to use. Fortunately, a wide range of prediction algorithms do not need to access that feature space, but only the inner product between elements of the feature space. Building a function, called a kernel, that simulates an inner product in an implicit feature space, frees us from constructing a mapping. Indeed, K : X 2 → R is said to be a kernel function on X if, for any (x 1 , . . . , x n ) ∈ X n , the Gram matrix [K(x i , x j )] 1≤i,j≤n is positive semidefinite. By virtue of Mercer's theorem (1909), there exists a (inner product) feature space Y and a mapping ϕ : X → Y such that, for any (x, y) ∈ X 2 , K(x, y) = ϕ(x), ϕ(y) Y . This technique is known as the kernel trick. Algorithms that can use kernels include Support Vector Machines (SVM), Principal Components Analyses (PCA) and many others. We refer the reader to the books (Cristianini et al., 2000;Schölkopf and Smola, 2001;Shawe-Taylor et al., 2004) and the references therein for more detailed explanations of theory and applications of kernels.
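As a tiny illustration of the definition recalled above (not taken from the paper), the Gram matrix of a kernel must be positive semidefinite; for the ordinary dot product on R^2 this is guaranteed, and the property can be checked via the eigenvalues. The data points below are arbitrary.

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 0.5]])    # three arbitrary points in R^2
gram = X @ X.T                                        # Gram matrix of the linear kernel
print(np.all(np.linalg.eigvalsh(gram) >= -1e-12))     # True: the Gram matrix is PSD
```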
To use kernel-based algorithms with tree data, one needs to design kernel functions adapted to trees. Convolution kernels, introduced by Haussler (1999), measure the similarity between two complex combinatorial objects based on the similarity of their substructures. Based on this strategy, many authors have developed convolution kernels for trees, among them the subset tree kernel (Collins and Duffy, 2002), the subtree kernel (Vishwanathan and Smola, 2002) and the subpath kernel (Kimura et al., 2011). A recent state-of-the-art on kernels for trees can be found in the thesis of Da San Martino (2009), as well as original contributions on related topics. In this article, we focus on the subtree kernel as defined by Vishwanathan and Smola (2002). In this introduction, we develop some concepts on trees in Subsection 1.2. They are required to deal with the precise definition of the subtree kernel in Subsection 1.3 as well as the aim of the paper presented in Subsection 1.4.
Unordered and ordered rooted trees
Rooted trees A rooted tree T is a connected graph with no cycle such that there exists a unique vertex R(T), called the root, which has no parent, and any vertex different from the root has exactly one parent. The leaves L(T) are all the vertices without children. The height of a vertex v of a tree T can be recursively defined as H(v) = 0 if v is a leaf of T and H(v) = 1 + max_{w∈C(v)} H(w) otherwise. The height H(T) of the tree T is defined as the height of its root, i.e., H(T) = H(R(T)). The outdegree of T is the maximal branching factor that can be found in T, that is deg(T) = max_{v∈T} #C(v), where C(v) denotes the set of children of v. For any vertex v of T, the subtree T[v] rooted in v is the tree composed of v and all its descendants D(v). S(T) denotes the set of subtrees of T.
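A minimal sketch (not from the paper's implementation) of a rooted tree with the quantities just defined: height, outdegree, and enumeration of the subtrees T[v]. The later snippets below reuse this Tree class.

```python
class Tree:
    """A rooted tree given by the list of subtrees rooted at the children of its root."""
    def __init__(self, children=()):
        self.children = list(children)

    def height(self):
        return 0 if not self.children else 1 + max(c.height() for c in self.children)

    def outdegree(self):
        return max([len(self.children)] + [c.outdegree() for c in self.children])

    def subtrees(self):
        yield self                          # T[v] for v the root
        for c in self.children:
            yield from c.subtrees()         # subtrees rooted at the descendants

T = Tree([Tree([Tree(), Tree()]), Tree()])
print(T.height(), T.outdegree(), sum(1 for _ in T.subtrees()))   # 2 2 5
```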
Unordered trees Rooted trees are said unordered if the order between the sibling vertices of any vertex is not significant. The precise definition of unordered rooted trees, or simply unordered trees, is obtained from the following equivalence relation: two trees T 1 and T 2 are isomorphic (as unordered trees) if there exists a one-to-one correspondence Φ from the set of vertices of T 1 into the set of vertices of T 2 such that, if w is a child of v in T 1 , then Φ(w) is a child of Φ(v) in T 2 . The set of unordered trees is the quotient set of rooted trees by this equivalence relation.
Ordered trees In ordered rooted trees, or simply ordered trees, the set of children of any vertex is ordered. As before, ordered trees can be defined as a quotient set if one adds the concept of order to the equivalence relation: two trees T 1 and T 2 are isomorphic (as ordered trees) if there exists a one-to-one correspondence Φ from the set of vertices of T 1 into the set of vertices of T 2 such that, if w is the r th child of v in T 1 , then Φ(w) is the r th child of Φ(v) in T 2 .
In the whole paper, T * denotes the set of * -trees with * ∈ {ordered, unordered}.
Subtree kernel
The subtree kernel has been introduced by Vishwanathan and Smola (2002) as a convolution kernel on trees for which the similarity between two trees is measured through the similarity of their subtrees. A subtree kernel K on *-trees is defined as

K(T_1, T_2) = ∑_{τ ∈ T*} w_τ κ(N_τ(T_1), N_τ(T_2)),    (1)

where w_τ is the weight associated to τ, N_τ(T) counts the number of subtrees of T that are isomorphic (as *-trees) to τ, and κ is a kernel function on N, Z or R (see Schölkopf and Smola, 2001, Section 2.3 for some classic examples). Assuming κ(0, n) = κ(n, 0) = 0, the formula (1) of K becomes K(T_1, T_2) = ∑_{τ ∈ S(T_1)∩S(T_2)} w_τ κ(N_τ(T_1), N_τ(T_2)), making the sum finite. Indeed, all the subtrees τ ∈ T* \ S(T_1) ∩ S(T_2) do not count in the sum (1). In this paper, as for Vishwanathan and Smola (2002), we assume that κ(n, m) = nm, then

K(T_1, T_2) = ∑_{τ ∈ S(T_1)∩S(T_2)} w_τ N_τ(T_1) N_τ(T_2),    (2)

which is the subtree kernel as introduced by Vishwanathan and Smola (2002). The weight function τ → w_τ is the only parameter to be tuned. In the literature, the weight is always assumed to be a function of a quantity measuring the "size" of τ, in particular its height H(τ). Then w_τ is taken as an exponential decay of this quantity, w_τ = λ^{H(τ)} for some λ ∈ [0, 1] (Aiolli et al., 2006; Collins and Duffy, 2002; Da San Martino, 2009; Kimura et al., 2011; Vishwanathan and Smola, 2002). This choice can be justified in the following manner. If a subtree τ is counted in the kernel, then all its subtrees are also counted. Then an exponential decay counterbalances the exponential growth of the number of subtrees.
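The following naive sketch evaluates the kernel in the form of Eqn. (2) for unordered trees with the exponential weights w_τ = λ^{H(τ)}, using the Tree class sketched earlier; it is meant only to make the definition concrete, since the DAG-based algorithm of Section 3 computes the same quantity far more efficiently.

```python
from collections import Counter

def canon(t):
    """Canonical form of an unordered tree: sorted tuple of the children's canonical forms."""
    return tuple(sorted(canon(c) for c in t.children))

def height_of(c):
    """Height read directly off a canonical form."""
    return 0 if not c else 1 + max(height_of(ch) for ch in c)

def subtree_counts(t):
    """N_tau(T) for every subtree class tau occurring in T, indexed by canonical form."""
    counts = Counter()
    for s in t.subtrees():
        counts[canon(s)] += 1
    return counts

def subtree_kernel(t1, t2, lam=0.5):
    n1, n2 = subtree_counts(t1), subtree_counts(t2)
    return sum(lam ** height_of(tau) * n1[tau] * n2[tau] for tau in n1.keys() & n2.keys())

T1 = Tree([Tree(), Tree()])
T2 = Tree([Tree([Tree()]), Tree()])
print(subtree_kernel(T1, T2))    # 4.0: only the leaf class is common to both trees
```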
In the literature, two algorithms have been proposed to compute the subtree kernel for ordered trees. The approach of (Vishwanathan and Smola, 2002) is based on string representations of trees, while the authors of (Aiolli et al., 2006;Da San Martino, 2009) extensively use DAG reduction of tree data, an algorithm that achieves lossly compression of trees. To the best of our knowledge, the case of unordered trees has only been considered through the arbitrary choice of a sibling order.
Aim of the paper
The aim of the present paper is threefold: 1. We investigate the theoretical properties of the subtree kernel on a 2-classes model of random trees in Section 2. More precisely, we provide a lower-bound for the contrast of the kernel in Proposition 2. Indeed, the higher the contrast, the less data are required to achieve a given performance in prediction (see Balcan et al., 2008, for general similarity functions and Corollary 3 for the subtree kernel). We exploit this result to show in Subsection 2.4 that the contrast of the subtree kernel is improved if the weight of leaves vanishes. The relevance of the model is discussed in Remark 1.
2. We rely on Aiolli et al. (2006); Da San Martino (2009) on ordered trees to develop in Section 3 a unified framework based on DAG reduction for computing the subtree kernel from ordered or unordered trees, with or without labels on their vertices. Subsection 3.1 is devoted to DAG reduction of unordered then ordered trees. DAG reduction of a forest is introduced in Subsection 3.2. Then, the subtree kernel is computed from the annotated DAG reduction of the data set is Subsection 3.3. We notice in Remark 13 that DAG reduction of the data set is costly but makes possible super-fast repeated computations of the kernel, which is particularly adapted for tuning parameters. This is the main advantage of the DAG computation of the subtree kernel compared to the algorithm based on string representations (Vishwanathan and Smola, 2002). Our method allows the implementation of any weighting function, while the recursive computation of the subtree kernel proposed by (Da San Martino, 2009, Chapter 6) also uses DAG reduction of tree data but makes an extensive use of the exponential form of the weight (combining equations (3.12) and (6.2) from Da San Martino (2009)). We also investigate the theoretical complexities of the different steps of the DAG computation for both ordered and unordered trees (see Proposition 7 and Remark 13). This kind of question has been tackled in the literature only for ordered trees and from a numerical perspective (Aiolli et al., 2006, Section 4).
3. As aforementioned, we show in the context of a stochastic model that the performance of the subtree kernel is improved when the weight of leaves is 0 (see Section 2). Relying on this (see also Remark 5 on the possible generalization of this result), we define in Section 4 a new weight function, called discriminance, that is not a function of the size of the argument as in the literature, but is learned from the data. The learning step of the discriminance weight function strongly relies on the DAG computation of the subtree kernel presented above because it allows the enumeration of all the subtrees composing the data set without redundancies. We explore in Section 5 the relevance of this new weighting scheme across several data sets, notably on the difficult prediction problem of the language of a Wikipedia article from its structure in Subsection 5.2. Beyond very good classification results, we show that the methodology developed in the paper can be used to extract the significant features of the problem and provide a visualization at a glance of the data set. In addition, we remark that the average discriminance weight decreases exponentially as a function of the height (except for leaves). Thus, the discriminance weight can be interpreted as the second order of the exponential weight introduced in the literature. Application to real-world data sets in Subsections 5.3, 5.4 and 5.5 shows that the discriminance weight is particularly relevant for small databases when the classification problem is rather difficult, as depicted in Fig. 18.
Finally, concluding remarks are presented in Section 6. Technical proofs have been deferred into Appendices A and B.
Theoretical study
In this section, we define a stochastic model of 2-classes tree data. From this ideal data set, we prove the efficiency of the subtree kernel and derive the sufficient size of the training data set to get a classifier with a given prediction error. We also state on this simple model that the weight of leaves should always be 0. We emphasize that this study is valid for both ordered and unordered trees.
Two trees as different as possible
Our goal is to build a 2-classes data set of random trees. To this end, we first define two typical trees T 0 and T 1 that are as different as possible in terms of subtree kernel.
Let T_0 and T_1 be two trees that fulfill the following conditions: 1. Two distinct subtrees of T_i are not isomorphic (except leaves).
2. Any subtree of T_0 is not isomorphic to a subtree of T_1 (except leaves).
These two assumptions ensure that the trees T_0 and T_1 are as different as possible. Indeed, it is easy to see that K(T_0, T_1) = ω_• #L(T_0) #L(T_1), which is the minimal value of the kernel, where ω_• is the weight of leaves. We refer to Fig. 1 for an example of trees that satisfy these conditions. Trees of class i will be obtained as random editions of T_i. In the sequel, T_i(u → τ) denotes the tree T_i in which the subtree rooted at u has been replaced by τ. These random edits will tend to make trees of class 0 closer to trees of class 1. To this end, we introduce the following additional assumption. Let (τ_h) be a sequence of trees such that H(τ_h) = h.
3. Let u ∈ T_0 and v ∈ T_1. We consider the edited trees T_0^u = T_0(u → τ_{H(u)}) and T_1^v = T_1(v → τ_{H(v)}). In other words, if one replaces subtrees of T_0 and T_1 by subtrees of the same height, then any subtree of T_0^u is not isomorphic to a subtree of T_1^v (except the new subtrees and leaves). This means that the similarity between random edits of T_0 and T_1 will come only from the new subtrees and not from collateral modifications. We refer to Fig. 2 for an example of trees that satisfy these conditions. Figure 2: Two trees T_0 and T_1 that fulfill conditions 1, 2 and 3.
A stochastic model of 2-classes tree data
From now on, we assume that, for any h > 0, τ h is not a subtree of T 0 nor T 1 . For the sake of simplicity, T 0 and T 1 have the same height H. In addition, if u ∈ T i then T u i denotes T i (u → τ H(u) ).
The stochastic model of 2-classes tree data that we consider is defined from the binomial distribution P_ρ = B(H, ρ/H) on support {0, …, H} with mean ρ. The parameter ρ ∈ [0, H] is fixed. In the data set, class i is composed of random trees T_i^u, where the vertex u has been picked uniformly at random among vertices of height h in T_i, where h follows P_ρ. Furthermore, the considered training data set is well-balanced in the sense that it contains the same number of data of each class.
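The sketch below samples the edit heights h ~ B(H, ρ/H) that drive this model and reports, for a few illustrative values of ρ (assumptions, not values used in the paper), the mean edit height and the fraction of completely edited trees (h = H); as stated next, larger ρ makes the two classes harder to separate.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 10
for rho in (1.0, 3.0, 8.0):
    h = rng.binomial(H, rho / H, size=10_000)      # heights of the edited subtrees
    print(rho, h.mean(), (h == H).mean())          # mean edit height ~ rho; P(h = H) grows with rho
```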
Intuitively, when ρ increases, the trees are more degraded and thus two trees of different class are closer. ρ somehow measures the similarity between the two classes. In other words, the larger ρ, the more difficult is the supervised classification problem.
Remark 1
The structure of a markup document such as an HTML page can be described by a tree (see Subsection 5.1 and Fig. 6 for more details). In this context, the tree T_i, i ∈ {0, 1}, can be seen as a model of the structure of a webpage template. By assumption, the two templates of interest are as different as possible. However, they are completed in a similar manner, for example to present the same content in two different layouts. Edition of the templates is modeled by random edit operations. They tend to bring trees from different templates closer.
Theoretical guarantees on the subtree kernel
Balcan et al. (2008) have introduced a theory that describes the effectiveness of a given kernel in terms of similarity-based properties. A similarity function over X is a pairwise function K : X^2 → [−1, 1] (Balcan et al., 2008, Definition 1). It is said (ε, γ)-strongly good (Balcan et al., 2008, Definition 4) if, with probability at least 1 − ε, x is on average γ-more similar to examples x′ of its own class than to examples y of the other class, i.e., E[K(x, x′)] ≥ E[K(x, y)] + γ, where label(x) = label(x′) ≠ label(y). From this definition, the authors derive the following simple classifier: the class of a new data x is predicted by 1 if x is more similar on average to points in class 1 than to points in class 0, and 0 otherwise. In addition, they prove (Balcan et al., 2008, Theorem 1) that a well-balanced training data set of size 32/γ² log(2/δ) is sufficient so that, with probability at least 1 − δ, the above algorithm applied to an (ε, γ)-strongly good similarity function produces a classifier with error at most ε + δ.
We aim to prove comparable results for the subtree kernel, which is not a similarity function. To this end, we focus for i ∈ {0, 1} on the conditional expectations Δ^i_x defined in (3). We emphasize that the two following results (Proposition 2 and Corollary 3) assume that the weight of leaves ω_• is 0. For the sake of readability, we introduce, for any 0 ≤ h ≤ H and i ∈ {0, 1}, the notations C_{i,h} and G_ρ(h) used below. The following results are expressed in terms of a parameter 0 ≤ h < H. The statement is then true with probability G_ρ(h). This is equivalent to stating a result that is true with probability 1 − ε, for any ε > 0.
Proof The proof lies in Appendix A.
This result shows that the two classes can be well-separated by the subtree kernel. The only data that can not be separated are the trees completely edited. In addition, the lower-bound in (4) is of order H exp(−ρ) (up to a multiplicative constant).
Corollary 3 For any 0 ≤ h ≤ H, a well-balanced training data set of the size given in (5) is sufficient so that, with probability at least 1 − δ, the aforementioned classification algorithm produces a classifier with error at most 1 − G_ρ(h) + δ.
Proof The proof is based on the demonstration of (Balcan et al., 2008, Theorem 1). However, in our setting, the kernel K is bounded by max_i K(T_i, T_i) and not by 1. Consequently, by Hoeffding bounds, the sufficient size of the training data set is of order (max_i K(T_i, T_i)/γ)² log(2/δ), where γ can be read in Proposition 2, γ = P_ρ(0) C_{i,h} ≥ P_ρ(0) min_i C_{i,h}. The factor 2 appears because we consider here the total size of the data set and not only the number of examples of each class. Together with P_ρ(0) ∼ exp(−ρ) for large H, we obtain the expected result.
Weight of leaves
Here K + is the subtree kernel obtained from the weights used in the computation of K together with a positive weight on leaves, w • > 0. We aim to show that K + separates the two classes less than K. ∆ +,i x denotes the conditional expectation (3) computed from K + .
Proposition 4 For any x ∈ T i , Proof We have the following decomposition, for any trees T 1 and T 2 , in light of the formula (2) of K. Thus, with (3), , which ends the proof.
The sufficient number of data provided in Corollary 3 is obtained (5) through the square ratio of max In addition, by virtue of Proposition 4, either ∆ +,0 x ≤ ∆ 0 x or ∆ +,1 x ≤ ∆ 1 x (and the inequality is strict if trees of classes 0 and 1 have not the same number of leaves on average). Consequently, min thus the sufficient number of data mentioned above is minimum for ω • = 0.
Remark 5
The results stated in this section establish that the subtree kernel is more efficient when the weight of leaves is 0. It should be placed in perspective with the exponential weighting scheme of the literature (Aiolli et al., 2006;Collins and Duffy, 2002;Da San Martino, 2009;Kimura et al., 2011;Vishwanathan and Smola, 2002) for which the weight of leaves is maximal. We conjecture that the accuracy of the subtree kernel should be in general improved by imposing a null weight to any subtree present in two different classes. This can not be established from the model for which the only such subtrees are the leaves. Relying on this, one of the objectives of the sequel of the paper is to develop a learning method for the weight function that improves in practice the classification results (see Sections 4 and 5).
DAG computation of the subtree kernel
In this section, we define DAG reduction, an algorithm that achieves both compression of data and enumeration of all subtrees of a tree without redundancies. DAG reduction of a tree is presented in Subsection 3.1, while Subsection 3.2 is devoted to the compression of a forest. In Subsection 3.3, we state that the subtree kernel can be computed from the DAG reduction of data set of trees.
DAG reduction of a tree
Trees can present internal repetitions in their structure. Eliminating these structural redundancies defines a reduction of the initial data that can result in a Directed Acyclic Graph (DAG). In particular, beginning with Sutherland (1963), DAG representations of trees are also much used in computer graphics where the process of condensing a tree into a graph is called object instancing (Hart and DeFanti, 1991). DAG reduction can be computed upon unordered or ordered trees. We begin with the case of unordered trees.
Unordered trees We consider the equivalence relation "existence of an unordered tree isomorphism" on the set of the subtrees of a tree T : Q(T ) = (V, E) denotes the quotient graph obtained from T using this equivalence relation. V is the set of equivalence classes on the subtrees of T , while E is a set of pairs of equivalence classes (C 1 , C 2 ) such that R(C 2 ) ∈ C(R(C 1 )) up to an isomorphism. The graph Q(T ) is a DAG (Godin and Ferraro, 2010, Proposition 1) that is a connected directed graph without path from any vertex v to itself. Let (C 1 , C 2 ) be an edge of the DAG Q(T ). We define L(C 1 , C 2 ) as the number of occurrences of a tree of C 2 just below the root of any tree of C 1 . The tree reduction of T is defined as the quotient graph Q(T ) augmented with labels L(C 1 , C 2 ) on its edges. We refer to Fig. 3a for an example of DAG reduction of an unordered tree. Two different algorithms that allow the computation of the DAG reduction of an unordered tree but that share the same time-complexity in O(#T 2 deg(T ) log(deg(T ))) are presented by Godin and Ferraro (2010).
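One simple way to realize the quotient construction for unordered trees (a hedged sketch, not the algorithms of Godin and Ferraro, 2010) is to map every subtree to a canonical form and merge subtrees with equal forms; edge multiplicities then give the labels L(C_1, C_2). The snippet reuses the Tree class sketched in Subsection 1.2.

```python
from collections import Counter

def dag_reduction(t):
    """DAG reduction of an unordered tree: vertices are equivalence classes of subtrees."""
    vertices = {}                  # canonical form -> DAG vertex id
    edges = {}                     # (parent id, child id) -> multiplicity L(C1, C2)

    def reduce(u):
        child_ids = [reduce(c) for c in u.children]
        key = tuple(sorted(child_ids))          # unordered: the order of children is irrelevant
        if key not in vertices:
            vertices[key] = len(vertices)
            for cid, mult in Counter(child_ids).items():
                edges[(vertices[key], cid)] = mult
        return vertices[key]

    root_id = reduce(t)
    return root_id, vertices, edges

T = Tree([Tree([Tree(), Tree()]), Tree([Tree(), Tree()])])   # two isomorphic subtrees
root_id, vertices, edges = dag_reduction(T)
print(len(vertices), edges)      # 3 vertices; {(1, 0): 2, (2, 1): 2}
```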
Ordered trees In the case of ordered trees, it is required to preserve the order of the children in the DAG reduction. As for unordered trees, we consider the quotient graph Q(T ) = (V, E) obtained from T using the equivalence relation between ordered trees. V is the set of equivalence classes on the subtrees of T . Here, the edges of the graph are ordered as follows. (C 1 , C 2 ) is the r th edge between C 1 and C 2 if R(C 2 ) is the r th child of R(C 1 ) up to an isomorphism. We obtain a DAG with ordered edges that compresses the initial tree T . An example of DAG reduction of an ordered tree is presented in Fig. 3b. Polynomial algorithms have been developed to allow the computation of a DAG, with complexities ranging in O(#T 2 ) to O(#T ) for ordered trees (Downey et al., 1980).
Figure 3: A tree (left) and its DAG reduction (right) seen (a) as an unordered tree and (b) as an ordered tree. In each figure, roots of isomorphic subtrees are displayed with the same color, which is reproduced on the corresponding vertex of the DAG. Note that the subtree on the left is colored differently in the two cases, whether the order of its children is relevant or not. If no label is specified on an edge (in the unordered case), it is equal to 1.
In this paper, R * (T ) denotes the DAG reduction of T as * -tree, * ∈ {ordered, unordered}. It is crucial to notice that the function R * is a one-to-one correspondence, which means that DAG reduction is a lossless compression algorithm. In other words, T can be reconstructed from R * (T ) and (R * ) −1 stands for the inverse function.
The DAG structure inherits some of the properties of trees. For a vertex ν in a DAG D, we will denote by C(ν) (P(ν), respectively) the set of children (parents, respectively) of ν. H(ν) and deg(ν) are inherited as well. Similarly to trees, we denote by D[ν] the sub-DAG rooted at ν, composed of ν and all its descendants in D.
DAG reduction of a forest
Let T_{F_T} be the super-tree obtained from a forest of *-trees F_T = (T_1, ..., T_N) by placing in this order each T_i as a subtree of an artificial root. We define the DAG reduction of the forest as R*(F_T) = R*(T_{F_T}). However, if the forest F_T is stored as a forest of compressed DAGs, that is, F_D = (D_1, ..., D_N) with D_i = R*(T_i), it would be superfluous to decompress all trees before reducing the super-tree. So, one would rather compute R*(F_T) directly from F_D. From now on, we consider only forests of DAGs, which we will denote unambiguously F. In this context, R*(F) stands for the DAG reduction of the forest of trees ((R*)^{-1}(D_1), ..., (R*)^{-1}(D_N)). We define the degree of the forest as deg(F) = max_{1≤i≤N} deg(D_i). To compute R*(F), (i) we build the super-DAG D_F by placing in this order each D_i as a subtree of an artificial root, and (ii) we recompress D_F using Algorithm 1. Fig. 4 illustrates step by step Algorithm 1 on a forest of two trees seen as unordered then ordered trees.
Algorithm 1: DagRecompression
Data: D_F, the super-DAG obtained from a forest of DAG reductions of *-trees. It should be noticed that Im f (which appears at line 3 of the algorithm) depends on *. Indeed, if * = ordered, Im f is the set of all lists of children; otherwise, Im f is the set of all multisets of children.
Proof Starting from the leaves, we examine all vertices of the same height in D_F. Those with the same children (with respect to *) are merged into a single vertex. The algorithm stops when, at some height h, we cannot find any vertices to be merged. Vertices that are merged in the algorithm represent isomorphic subtrees, so it suffices to prove that the algorithm stops at the right time. Let h be the first height for which σ(h) = ∅. Suppose by contradiction that some vertices were to be merged at some height h' > h. They represent isomorphic subtrees, so their respective children should also be merged together, and all of their descendants by induction. As any vertex of height h + 1 admits at least one child of height h, σ(h) would not be empty, which is absurd.
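As a rough illustration of the merging scheme that the proof describes (and not the authors' exact pseudocode), the sketch below recompresses a super-DAG stored as a dictionary mapping each vertex to the list of its children, proceeding height by height and merging vertices with the same children; all names are illustrative.

```python
def recompress(children, ordered=True):
    """Merge, height by height, the vertices of a super-DAG that have the
    same children (same list if ordered, same multiset otherwise).

    `children` maps a vertex id to the list of ids of its children; leaves
    map to []. Returns a dict with merged vertices removed and child
    references rewritten. A minimal sketch, not Algorithm 1 verbatim."""
    # Compute heights bottom-up.
    height = {}
    def h(v):
        if v not in height:
            height[v] = 0 if not children[v] else 1 + max(h(c) for c in children[v])
        return height[v]
    for v in list(children):
        h(v)

    alias = {v: v for v in children}   # each vertex points to its representative
    current = 0
    while True:
        seen, merged = {}, []
        for v in children:
            if height[v] != current or alias[v] != v:
                continue
            kids = tuple(alias[c] for c in children[v])
            key = kids if ordered else tuple(sorted(kids))
            if key in seen:
                alias[v] = seen[key]   # merge v into its representative
                merged.append(v)
            else:
                seen[key] = v
        if not merged:                 # sigma(h) is empty: stop, as in the proof
            break
        current += 1

    # Rebuild the DAG keeping only representatives.
    return {v: [alias[c] for c in kids]
            for v, kids in children.items() if alias[v] == v}
```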
Proposition 7 Algorithm 1 has polynomial time-complexity. Proof The proof lies in Appendix B.

Figure 4: DAG recompression of a forest of two trees. One can observe the DAGs (left) and the execution of the algorithm (right). At each of steps 1, 2 and 3, we examine vertices at heights 0, 1 and 2 and merge those which have the same children. At step 4, we cannot find any vertex to merge and we stop. Note that in (c), at step 3, we find two pairs of vertices to be merged: we are not restricted to one pair per height. Merged vertices are colored in red. The artificial root is colored in black.

Remark 8 One might also want to treat online data without recompressing the whole data set when adding a single entry to the forest. Let R*(F) be the already recompressed forest and D a new DAG to be introduced in the data. It suffices to place D as the rightmost child of the artificial root of R*(F) to get D_{F∪D}, then run Algorithm 1 to obtain R*(F∪D).
DAG annotation and kernel computation
We consider a data set composed of two parts: the train data set X_train = (T_1, ..., T_n) and the data set to predict X_pred = (T_{n+1}, ..., T_N). In the train data set, the classes of the data are assumed to be known. Our aim is to compute two Gram matrices whose entries are the kernel values K(T_i, T_j), where:
• (i, j) ∈ X_train × X_train for the training matrix G_train;
• (i, j) ∈ X_pred × X_train for the prediction matrix G_pred.
SVM algorithms will use G_train to learn their classifying rule, and G_pred to make predictions (Cristianini et al., 2000, Section 6.1). Other algorithms, such as kernel PCA, would also require the computation of a Gram matrix before processing (Schölkopf and Smola, 2001, Section 14.2). We denote by ∆ = R*(X_train ∪ X_pred) the DAG reduction of the data set and, for any 1 ≤ i ≤ N, D_i = R*(T_i). DAG computation of the subtree kernel requires annotating the DAG with different pieces of information.
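For illustration, here is a minimal sketch of how precomputed Gram matrices can be fed to an SVM in scikit-learn; the file names, and the fact that G_train and G_pred have been computed beforehand, are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Assumed inputs: G_train has shape (n, n) with entries K(T_i, T_j) on the
# training trees; G_pred has shape (N - n, n) with entries K(T_i, T_j)
# between the trees to predict (rows) and the training trees (columns).
G_train = np.load("gram_train.npy")      # hypothetical files
G_pred = np.load("gram_pred.npy")
y_train = np.load("labels_train.npy")

clf = SVC(kernel="precomputed")          # the SVM consumes the Gram matrix directly
clf.fit(G_train, y_train)
y_pred = clf.predict(G_pred)
```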
Origins In order to compute the subtree kernel, it will be necessary to retrieve, from the vertices of ∆, their origin in the data set, that is, from which trees they come. For any vertex ν in ∆ \ R(∆), the origin of ν is defined as the set of indices of the trees that contain the corresponding subtree, o_ν = {1 ≤ i ≤ N : (R*)^{-1}(∆[ν]) is a subtree of T_i}. Assuming that (D_1, ..., D_N) are children of the root of ∆ in this order (which is achieved if ∆ has been constructed following the ideas developed in Subsection 3.2) leads to the following proposition.
Proposition 9 Origins can be calculated using the recursive formula o_ν = {1 ≤ i ≤ N : ν is the i-th child of R(∆)} ∪ ⋃_{p ∈ P(ν) \ {R(∆)}} o_p. Proof Using the assumption, origins are correct for the children of R(∆). If D_i ∋ ν for some i ∈ {1, ..., N} and ν ∈ ∆, then D_i ⊇ D(ν). The statement follows by induction.
Frequency vectors Remember that in (2) N_τ(T) counts the number of subtrees of a tree T that are *-isomorphic to the tree τ. To compute the kernel, we need to know this value, and we claim that we can compute it using only ∆. We associate to each vertex ν ∈ ∆ \ R(∆) a frequency vector ϕ_ν where, for any 1 ≤ i ≤ N, ϕ_ν(i) = N_{(R*)^{-1}(∆[ν])}(T_i). Proposition 10 Frequency vectors can be calculated using the recursive formula ϕ_ν(i) = 1 if ν ∈ C(R(∆)) represents the root of T_i, and ϕ_ν(i) = Σ_{p ∈ P(ν)} L(p, ν) ϕ_p(i) otherwise, where either L(p, ν) = 1 if * = ordered, or L(p, ν) is the label on the edge between p and ν in ∆ if * = unordered.
Proof Let ν be in ∆ \ R(∆). If ν ∈ C(R(∆)), then ν represents the root of a tree T_i (possibly of several trees if there are repetitions in the data set), and therefore ϕ_ν(i) = N_{T_i}(T_i) = 1. Otherwise, suppose by induction that ϕ_p(i) is correct for all p ∈ P(ν) and any i. We fix p ∈ P(ν). ν appears L(p, ν) times as a child of p, so if (R*)^{-1}(∆[p]) appears ϕ_p(i) times in T_i, then the number of occurrences of (R*)^{-1}(∆[ν]) contributed by these copies is L(p, ν) ϕ_p(i). Summing over all p ∈ P(ν) leads ϕ_ν(i) to be correct as well.
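The two recursions above can be evaluated in a single exploration of ∆ processed from the root towards the leaves, as in the following minimal sketch; the representation of ∆ by children lists and edge labels, and the assumption that the i-th child of the root represents T_i, are illustrative.

```python
import numpy as np

def annotate(delta_children, edge_label, root, N):
    """Compute origins o_nu (sets of tree indices) and frequency vectors
    phi_nu (length-N arrays) for every vertex of the DAG reduction Delta.

    `delta_children[v]` lists the children of v, `edge_label[(p, v)]` is
    L(p, v) (1 for ordered trees, where repeated children appear repeatedly
    in the list), and `delta_children[root]` are, in order, the vertices
    representing T_1, ..., T_N. A minimal sketch under these assumptions."""
    # Topological order (parents before children) via reverse DFS post-order.
    order, seen = [], set()
    def topo(v):
        if v in seen:
            return
        seen.add(v)
        for c in delta_children[v]:
            topo(c)
        order.append(v)
    topo(root)
    order = list(reversed(order))

    origins = {v: set() for v in order if v != root}
    phi = {v: np.zeros(N) for v in order if v != root}

    # Initialisation on the children of the root: the i-th child represents T_i.
    for i, v in enumerate(delta_children[root]):
        origins[v].add(i)
        phi[v][i] = 1.0

    # Downward propagation: each vertex passes its counts to its children.
    for p in order:
        if p == root:
            continue
        for c in delta_children[p]:
            origins[c] |= origins[p]
            phi[c] += edge_label[(p, c)] * phi[p]
    return origins, phi
```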
DAG weighting The last ingredient needed to compute the kernel is the weight function. Remember that it is defined for trees as a function w : T → R_+. As we only need to know the weights of the subtrees associated to vertices of ∆, we define the weight function for the DAG as, for any ν ∈ ∆, ω_ν = w((R*)^{-1}(∆[ν])).
Remark 11
In light of Propositions 9 and 10, it should be noted that both o and ϕ can be calculated in one exploration of ∆. By definition, this is also true for ω.
DAG computation of the subtree kernel We introduce the matching subtrees function M : {1, ..., N}² → 2^∆ defined by M(i, j) = {ν ∈ ∆ \ R(∆) : i ∈ o_ν and j ∈ o_ν}, where 2^∆ is the powerset of the vertices of ∆. Note that M is symmetric. This leads us to the following proposition. Proposition 12 For any 1 ≤ i, j ≤ N, K(T_i, T_j) = Σ_{ν ∈ M(i,j)} ω_ν κ(ϕ_ν(i), ϕ_ν(j)).
Remark 13 M can be created in O(N 2 #∆) within one exploration of ∆ and allows afterward computations of the subtree kernel K(T i , T j ) in O(#M(i, j)) = O(min(#D i , #D j )), which is more efficient than the O(#T i + #T j ) algorithm proposed by Vishwanathan and Smola (2002) (the time-complexity is announced by Kimura et al. (2011, Section 1)). However, since the whole process through Algorithm 1 is costly, the global method that we propose in this paper is not faster than existing algorithms. Nonetheless, our algorithm is particularly adapted to repeated computations from the same data, e.g., for tuning parameters. Indeed, once M and ∆ have been created, they can be stored and are ready to use. An illustration of this property is provided from experimental data in Fig. 19.
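As an illustration of Remark 13, once ∆ has been annotated, each kernel entry reduces to a sum over the matching vertices; the sketch below assumes the annotations of the previous paragraphs and uses the product kernel as an illustrative choice of κ.

```python
import numpy as np

def matching_subtrees(origins):
    """M(i, j): vertices of Delta whose subtree occurs in both T_i and T_j."""
    M = {}
    for v, occ in origins.items():
        for i in occ:
            for j in occ:
                M.setdefault((i, j), []).append(v)
    return M

def subtree_kernel(i, j, M, weight, phi, kappa=lambda a, b: a * b):
    """K(T_i, T_j) as a sum over nu in M(i, j) of w_nu * kappa(phi_nu(i), phi_nu(j))."""
    return sum(weight[v] * kappa(phi[v][i], phi[v][j]) for v in M.get((i, j), []))

def gram_matrix(rows, cols, M, weight, phi):
    """Gram matrix between the trees indexed by `rows` and `cols`."""
    return np.array([[subtree_kernel(i, j, M, weight, phi) for j in cols]
                     for i in rows])
```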
Remark 14
The DAG computation of the subtree kernel investigated in this section relies on Aiolli et al. (2006); Da San Martino (2009). Our work and the aforementioned papers are different and complementary. First, our framework is valid for both ordered and unordered trees, while these papers focus only on ordered trees. In addition, the method developed by Aiolli et al. (2006); Da San Martino (2009) is only adapted to exponential weights (see equations (3.12) and (6.2) from Da San Martino (2009)). Thus, even if this algorithm is also based on DAG reduction of trees, it is less general than ours, in which the weight function is not constrained (see in particular Section 4 where the weight function is learned from the data). Finally, in Aiolli et al. (2006, Section 4), the time-complexities are studied only from a numerical point of view, while we state theoretical results.
Discriminance weight function
For a given probability level and a given classification error, and under the stochastic model of Subsection 2.2, we state in Subsection 2.4 that the sufficient size of the training data set is minimal when the weight of leaves is 0. In other words, counting the leaves, which are the only subtrees that appear in both classes, does not provide relevant information for the classification problem associated to this model. As mentioned in Remark 5, we conjecture that, in a more general model, this result would be true for any subtree present in both classes. In this section, we propose to rely on this idea by defining a new weight function, learned from the data and called discriminance weight, that assigns a large weight to subtrees that help to discriminate the classes, i.e., that are present or absent in exactly one class, and a low weight otherwise. The training data set is divided into two parts: X_weight = (T_1, ..., T_m) to learn the weight function, and X_class = (T_{m+1}, ..., T_n) to estimate the Gram matrix. For the sake of readability, ∆ denotes the DAG reduction of the whole data set, including X_weight, X_class and X_pred. In addition, we assume that the data are divided into K classes numbered from 1 to K.
For any vertex ν ∈ ∆ \ R(∆), we define the vector ρ_ν of length K as ρ_ν(k) = #{T_i ∈ C_k : i ∈ o_ν} / #C_k, where (C_k)_{1≤k≤K} forms a partition of X_weight such that T_i ∈ C_k if and only if T_i is in class k. In other words, ρ_ν(k) is the proportion of data in class k that contain the subtree (R*)^{-1}(∆[ν]). Therefore, ρ_ν belongs to the K-dimensional hypercube. It should be noticed that ρ_ν is a vector of zeros as soon as (R*)^{-1}(∆[ν]) is not a subtree of a tree of X_weight. For any 1 ≤ k ≤ K, let e_k (ē_k, respectively) be the vector of zeros with a unique 1 in position k (the vector of ones with a unique 0 in position k, respectively). If ρ_ν = e_k, the vertex ν corresponds to the subtree (R*)^{-1}(∆[ν]), which only appears in class k: ν is thus a good discriminator of this class. Otherwise, if ρ_ν = ē_k, the vertex ν appears in all the classes except class k and is still a good discriminator of the class. For any vertex ν, δ_ν measures the distance between ρ_ν and its nearest point of interest e_k or ē_k, that is, δ_ν = min_{1≤k≤K} min(‖ρ_ν − e_k‖, ‖ρ_ν − ē_k‖). It should be noted that the maximum value of δ_ν depends on the number of classes and can be larger than 1. If δ_ν is small, then ρ_ν is close to a point of interest. Consequently, since ν tends to discriminate a class, its weight should be large. In light of this remark, the discriminance weight of a vertex ν is defined as ω_ν = f(1 − δ_ν), where f is increasing with f(x) = 0 for x ≤ 0 and f(1) = 1. Fig. 5 illustrates some usual choices for f. In the sequel, we chose ω_ν = f*(1 − δ_ν) with the smoothstep function f* : x → 3x² − 2x³. We borrowed the smoothstep function from computer graphics (Ebert and Musgrave, 2003, p. 30), where it is mostly used to obtain a smooth transition in a threshold function. Since leaves appear in all the trees of the training data set, ρ_• is a vector of ones and thus δ_• = 1, which implies ω_• = 0. This is consistent with the result developed in Subsection 2.4 on the stochastic model. As aforementioned, the discriminance weight is inspired by the theoretical results established in Subsection 2.4 and the conjecture presented in Remark 5. The relevance in practice of this weight function will be investigated in the sequel of the paper through two applications.
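A minimal sketch of this construction, assuming that the origins o_ν and the class of each tree of X_weight are already available (all names are illustrative):

```python
import numpy as np

def smoothstep(x):
    x = np.clip(x, 0.0, 1.0)        # f(x) = 0 for x <= 0 and f(1) = 1
    return 3 * x**2 - 2 * x**3

def discriminance_weights(origins, tree_class, K):
    """Compute w_nu = f*(1 - delta_nu) for every vertex of Delta.

    `origins[v]` is the set of indices of trees containing the subtree
    represented by v; `tree_class[i]` in {0, ..., K-1} is the class of T_i
    for the trees of X_weight (other indices are ignored)."""
    class_sizes = np.bincount(list(tree_class.values()), minlength=K)
    weights = {}
    for v, occ in origins.items():
        rho = np.zeros(K)
        for i in occ:
            if i in tree_class:
                rho[tree_class[i]] += 1
        rho = rho / np.maximum(class_sizes, 1)      # proportion per class
        # Distance to the nearest point of interest e_k or e_k-bar.
        delta = min(
            min(np.linalg.norm(rho - np.eye(K)[k]) for k in range(K)),
            min(np.linalg.norm(rho - (1 - np.eye(K)[k])) for k in range(K)),
        )
        weights[v] = smoothstep(1 - delta)
    return weights
```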
Remark 15
The discriminance weight is defined from the proportion of data in each class that contain a given subtree, for all the subtrees appearing in the data set. It is thus required to enumerate all these subtrees. This is done, without redundancy, via the DAG reduction ∆ of the data set defined and investigated in Section 3. As the m trees of the training data set dedicated to learning the discriminance weight are partitioned into K classes, computing one ρ_ν vector is of complexity O(m). Therefore, computing all of them is in O(#∆ m). In addition, computing all values of δ_ν is in O(#∆ K²), as there are 2K Euclidean distances to be computed for each vector of length K. All gathered, computing the discriminance weight function has an overall complexity of O(#∆ (m + K²)).
Real data analysis
This section is dedicated to the application of the methodology developed in the paper to eight real data sets with various characteristics in order to show its strengths and weaknesses. The related questions are supervised classification problems. As mentioned in Subsection 3.3, our approach consists in computing the Gram matrices of the subtree kernel via DAG reduction and with a new weight function called the discriminance (see Section 4). In particular, we aim to compare the usual exponential weight of the literature and the latter in terms of prediction capability. In all the sequel, the Gram matrices are used as inputs to SVM algorithms in order to tackle these classification problems. We emphasize that this approach is not restricted to SVM but can be applied with other prediction algorithms.
Preliminaries
In this subsection, we introduce (i) the protocol that we have followed to investigate several data sets, together with a description of (ii) the classification metrics that we use to assess the quality of our results, (iii) an extension of DAG reduction to take into account discrete labels on vertices of trees, and (iv) the standard method to convert a markup document into a tree. It should be already noted that all the data sets presented in the sequel are composed of trees (that can be ordered or unordered, labeled or not) together with their class.
Protocol For each data set, we have followed the same presentation and procedure. First, a description of the data is made, notably via histograms describing the size, outdegree, height and class repartition of trees. Given the dispersion of some of these quantities, we have binned together the values that do not fit inside the interval [Q_1 − 1.5 · IQR; Q_3 + 1.5 · IQR], where IQR = Q_3 − Q_1 is the interquartile range. Therefore, the flattened large bins that appear in some histograms represent those outlier bins. The objective of this part is to show the wide range of data sets considered in the paper.
In a second step, we evaluated the performance of the subtree kernel on a classification task via two methods: (i) for exponential weights τ → λ^{H(τ)}, we randomly split the data in thirds, two for training a SVM and one for prediction; (ii) for the discriminance weight, we also randomly split the data in thirds, one for training the discriminance weight, one for training a SVM, and the last one for prediction. We repeated this random split 50 times for the discriminance weight and for different values of λ. The classification results are assessed by the metrics defined in the upcoming paragraph, and gathered in boxplots. The first application example, presented in Subsection 5.2, is slightly different since (i) we have worked with 50 distinct databases, and (ii) the results have been completed with a deeper analysis of the discriminance weights, in relation with the usual weighting scheme of the literature.
Classification metrics
To quantify the quality of a prediction, we use four standard metrics: accuracy, precision, recall and F-score. For a class k, one can count true positives TP_k, false positives FP_k, true negatives TN_k and false negatives FN_k. In a binary classification problem, those metrics are defined as accuracy = (TP + TN) / (TP + TN + FP + FN), precision = TP / (TP + FP), recall = TP / (TP + FN) and F-score = 2 · precision · recall / (precision + recall). For a problem with K > 2 classes, we adopt the macro-average approach, that is, Metric = (1/K) Σ_{k=1}^{K} Metric(k).
We used the implementation available in the scikit-learn Python library, via the two functions accuracy_score and precision_recall_fscore_support.
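For reference, a minimal usage sketch of these two scikit-learn functions with macro averaging, on toy labels:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 0, 1, 2, 2, 1]          # toy ground-truth classes
y_pred = [0, 1, 1, 2, 0, 1]          # toy predictions

accuracy = accuracy_score(y_true, y_pred)
precision, recall, fscore, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(accuracy, precision, recall, fscore)
```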
DAG reduction with labels In the sequel, some of the presented data sets are composed of labeled trees, that is, trees in which each vertex possesses a label. Labels are supposed to take only a finite number of different values. Two labeled *-trees are said to be isomorphic if (i) they are *-isomorphic, and (ii) the underlying one-to-one correspondence Φ mapping vertices of T_1 into vertices of T_2 is such that, for all v ∈ T_1, v and Φ(v) have the same label. The set of labeled *-trees is the quotient set of rooted trees by this equivalence relation. It should be noticed that the subtree kernel as well as DAG reduction are defined through only the concept of isomorphic subtrees. As a consequence, they can be straightforwardly extended to labeled *-trees. This formalization is an extension of the definition introduced by the authors of Aiolli et al. (2006); Da San Martino (2009), as they consider only ordered labeled trees, whereas we can consider unordered labeled trees as well.
From a markup document to a tree Some of the data sets come from markup documents (XML or HTML files). From such a document, one can extract a tree structure, identifying each couple of opening and closing tags as a vertex, whose children are the inner tags. It should be noticed that, during this transcription, semantic data is forgotten: the tree only describes the topology of the document. Fig. 6 illustrates the conversion from HTML to tree on a small example. Such a tree is ordered but can be considered as unordered. Finally, a tag can also be chosen as a label for the corresponding vertex in the tree.
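A minimal sketch of this conversion using only the Python standard library; it assumes well-formed, paired tags and deliberately discards text content, keeping only the topology (and, optionally, the tag name as a label):

```python
from html.parser import HTMLParser

class Tree:
    def __init__(self, label=None):
        self.label, self.children = label, []

class HTMLToTree(HTMLParser):
    """Build a tree in which each open/close tag pair becomes a vertex whose
    children are the inner tags; text content is ignored."""
    def __init__(self):
        super().__init__()
        self.root = Tree("root")
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        node = Tree(tag)
        self.stack[-1].children.append(node)
        self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()

parser = HTMLToTree()
parser.feed("<html><body><p>Hello</p><p><b>world</b></p></body></html>")
tree = parser.root.children[0]          # vertex corresponding to <html>
print(len(tree.children[0].children))   # <body> has 2 children: the two <p>
```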
Prediction of the language of a Wikipedia article from its topology
Classification problem and results Wikipedia pages are encoded in HTML and, as aforementioned, can therefore be converted into trees. In this context, we are interested in the following question: does the (ordered or unordered) topology of a Wikipedia article (as an HTML page) contain the information of the language in which it has been written? This can be formulated as a supervised classification problem: given a training data set composed of the tree structures of Wikipedia articles labeled with their language, is a prediction algorithm able to predict the language of a new data only from its topology? The interest of this question is discussed in Remark 16.
In order to tackle this problem, we have built 50 databases of 480 trees each, converted from Wikipedia articles as follows. Each of the databases is composed of 4 data sets:
• a data set to predict X_pred made of 120 trees;
• a small train data set X_train^small made of 40 trees;
• a medium train data set X_train^medium made of 120 trees;
• and a large train data set X_train^large made of 200 trees.
For each data set, and each language, we picked Wikipedia articles at random using the Wikipedia API 1, and converted them into unlabeled trees. It should be noted that the probability of having the same article in at least two different languages is extremely low. For each database, we aim at predicting the language of the trees in X_pred using a SVM algorithm based on the subtree kernel for ordered and unordered trees, and trained with X_train^size where size ∈ {small, medium, large}. Fig. 7 provides the description of one typical database. All trees seem to share common characteristics, regardless of their class.
Classification results over the 50 databases are displayed in Fig. 8. Discriminance weighting achieves much better results than exponential weighting, with all metrics greater than 90% on average from only 200 training data. This points out that the language information exists in the structure of Wikipedia pages, whether they are considered as ordered or unordered trees, unlike what intuition as well as the subtree kernel with exponential weighting suggest. It should be added that the variance of all metrics seems to decrease with the size of the training data set when using discriminance. The colors of the boxplots indicate, for each size ∈ {small, medium, large}, the results obtained for the classification of X_pred from X_train^size.
These numerical results show the great interest of the discriminance weight, in particular with respect to an exponential weight decay. Nevertheless, it should be compelling in this context to understand the classification rule learned by the algorithm. Indeed, this could lead to explain how the information of the language is present in the topology of the article.
Comprehensive learning and data visualization When a learning algorithm is efficient for a given prediction problem, it is interesting to understand which features are significant. In the subtree kernel, the features are the subtrees appearing in all the trees of all the classes. Looking at (2), the contribution of any subtree τ to the subtree kernel with discriminance weighting is the product of two terms: the discriminance weight w τ quantifies the ability of τ to discriminate a class, while κ(N τ (T 1 ), N τ (T 2 )) evaluates the similarity between T 1 and T 2 with respect to τ through the kernel κ. As explained in Section 4, if w τ is close to 1, τ is an important feature in the prediction problem.
As shown in Section 3, DAG reduction provides a tool to compress a data set without loss. We recall that each vertex of the DAG represents a subtree appearing in the data. Consequently, we propose to visualize the important features on the DAG of the data set, where the radius of the vertices is an increasing function of the discriminance weight. In addition, each vertex of the DAG can be colored as the class that it helps to discriminate, either positively (the vertex of the DAG corresponds to a subtree that is present almost only in the trees of this class) or negatively. This provides a visualization at a glance of the whole data set that highlights the significant features for the underlying classification problem. We refer the reader to Fig. 10 for an application to one of our data sets. Thanks to this tool, we have remarked that the subtree corresponding to the License at the bottom of any article highly depends on the language, and thus helps to predict the class.
Distribution of discriminance weights To provide a better understanding of our results, we have analyzed in Fig. 9 the distribution of discriminance weights of one of our large training data sets. It shows that the discriminance weight behaves on average as a shifted exponential. Considering the great performance achieved by the discriminance weight, this illustrates that the exponential weighting presented in the literature is indeed a good idea, when setting w_• = 0 as shown in Subsection 2.4 or suggested in (Vishwanathan and Smola, 2002, Section 6, Experimental results). However, a closer look at the distribution in Fig. 9 (left) reveals that important features in the kernel are actually outliers: relevant information is both far from the average behavior and scarce. To a certain extent and regarding these results, the discriminance weight is the second order of the exponential weight.

Figure 9: Estimation of the distribution of the discriminance weight function h → {w_ν : H(ν) = h, ν ∈ R*(X)} from one large training Wikipedia data set of unordered trees (left) and fit of its average behavior (in red) to an exponential function (in blue). All ordered and unordered data sets show a similar behavior.
Remark 16
The classification problem considered in this subsection may seem unrealistic, as ignoring the text information is obviously counterproductive when predicting the language of an article. Nevertheless, this application example is of interest for two main reasons. First, this prediction problem is difficult, as shown by the bad results obtained from the subtree kernel with exponential weights (see Fig. 8). As highlighted in Fig. 10 and 9 (left), the subtrees that can discriminate the classes are very unfrequent and diverse (in terms of size and structure), and thus difficult to identify. On a different level, as Wikipedia has a very large corpus of pages, it provides a practical tool to test our algorithms and investigate the properties of our approach. Indeed, we can virtually create as many different data sets as we want by randomly picking articles, ensuring that we avoid overfitting.

Figure 10: DAG of one Wikipedia data set in which the radius of each vertex ν is an increasing function of (1 − δ_ν), so that the largest vertices are those that best discriminate the different classes. For each ν, we find the class k such that ρ_ν has minimal distance to either e_k or ē_k. If it is e_k, we say that ν discriminates by its presence, and if it is ē_k, ν discriminates by its absence. We color ν following this distinction according to the legend, where "en" is for English, "de" for German, "fr" for French, and "es" for Spanish.
Markup documents data sets
We present and analyze in this subsection three data sets obtained from markup documents.
INEX 2005 and 2006
These data sets originate from the INEX competition (Denoyer and Gallinari, 2007). The trees of INEX 2005 gather into a few distinct groups, as pointed out in Fig. 11 (left). However, inside each group, all trees are alike. In the case of INEX 2006, no special group seems to emerge from the topological characteristics of the data, as pointed out in Fig. 11 (right).
The classification results are depicted in Fig. 12, for both data sets, and with trees considered successively as ordered and unordered. For INEX 2005, both exponential decay and discriminance achieve similarly good performance. However, for INEX 2006, neither of them is able to achieve significant results. Actually, discriminance performs slightly worse than exponential decay. From these results we deduce that subtrees do not seem to form the appropriate substructure to capture the information needed to properly classify the data.

Videogame sellers We manually collected, for two major websites selling videogames 2, the URLs of the top 100 best-selling games, and converted them into ordered labeled trees. Although the webpages might seem similar to some extent, the trees are actually very different, as highlighted in Fig. 13. We found that the subtree kernel retrieves this information as, for both exponential decay and discriminance weights, we achieved 100% of correct classifications in all our tests.
Biological data sets
In this subsection, three data sets from the literature are analyzed, all related to biological topics.
Vascusynth The Vascusynth data set from Hamarneh and Jassi (2010); Jassi and Hamarneh (2011) is composed of 120 unordered trees that represent blood vasculatures with different bifurcation numbers. In a tree, each vertex has a continuous label describing the radius r of the corresponding vessel. We have discretized these continuous labels into three categories: small if r < 0.02, medium if 0.02 ≤ r < 0.04 and large if r ≥ 0.04 (all values are in arbitrary units). We split up the trees into three classes, based on their bifurcation number. Based on Fig. 14 (left), we can distinguish between the three classes by looking only at the size of trees. Contrary to the videogame sellers data set, which had the same property, the classification does not achieve 100% of good classification, as depicted in Fig. 14 (right). On average, discriminance performs better than the other weights, despite having a larger variance. This is probably due to the small size of the data set, as the discriminance is learned with only around 13 trees per class.

From the encoding of the data that they have provided as a supplementary material 3, we have extracted ordered unlabeled trees that are presented in Fig. 15 (left). The data set contains, for two classes, trees of outdegree 0 (i.e., isolated leaves) that can be considered as noise. With respect to the exponential weight, the value of the kernel between such trees will be identical, whether they belong to the same class or to two different classes. They therefore contribute to reducing the kernel's ability to effectively discriminate between these two classes. On the other hand, the discriminance weight will assign them a zero value, "de-noising", in a way, the data. This observation may explain why the discriminance weight achieves better results than the exponential weight.

Faure et al. (2015) have developed a method to construct cell lineage trees from microscopy and provided their data online 4. We extracted 300 unordered and unlabeled trees, divided between three classes. It seems from Fig. 16 (left) that one class among the three can be distinguished from the two others. Classification results can be found in Fig. 16 (right): the discriminance weight performs better than the exponential weight, whatever the value of the parameter.
LOGML
The LOGML data set is made of user sessions on an academic website, namely the Rensselaer Polytechnic Institute Computer Science Department website 5, that registered the navigation of users across the website. 23 111 unordered labeled trees are present, divided into two classes. The trees are very alike, as shown in Fig. 17 (left), and the classification results of Fig. 17 (right) are very similar to INEX 2005, where all weight functions behave similarly, without any advantage for the discriminance weight in terms of prediction.

6. Concluding remarks

6.1 Main interest of the DAG approach: learning the weight function

In Section 2, we have shown on a 2-classes stochastic model that the efficiency of the subtree kernel is improved by imposing that the weight of leaves is null. As explained in Remark 5, we conjecture that the weight of any subtree present in two different classes should be 0. The main interest of the DAG approach developed in Section 3 is that it allows to learn the weight function from the data, as developed in Section 4 with the discriminance weight function. Our method has been implemented and tested in Section 5 on eight real data sets with very different characteristics that are summed up in Table 1. As a conclusion of our experiments, we have analyzed the relative improvement in prediction obtained with the discriminance weight against the best exponential weight, in order to show both the importance of the weight function and the relevance of the method developed in this paper. More precisely, for each data set and each classification metric, we have calculated this relative improvement from the average values of the different metrics. The results are presented in Fig. 18. We have found that, except in one case, discriminance behaves as well as exponential weight decay and even performs better on most of the data sets. Furthermore, one can observe a kind of trend, where the relative improvement decreases when the number of trees in the training data set increases, which proves the great interest of the discriminance to handle small data sets, provided that (i) the problem is difficult enough that the exponential weights are not already high performing, as is the case in the Videogame sellers data set, and (ii) the data set is not too small, as for Vascusynth. Indeed, as the discriminance is learned independently from the SVM, one must have enough training data to divide them efficiently. Nevertheless, it should be noted that, in the framework of the DAG approach, results from the discriminance weight can be obtained much faster, due to the fact that the Gram matrices are estimated from one half of the training data set, while learning the discriminance is very fast as it can be done in one traversal of the DAG (see the time-complexity presented in Remark 15). Finally, we have investigated on a single example some properties of the discriminance, discovering that it can be interpreted as a second-order exponential weight, as well as a method for visualizing the important features in the data.

Table 1: Summary of the 8 data sets.
Interest of the DAG approach in terms of computation time
As shown in Fig. 16 (right), the exponential decay classification results for the Faure et al. data set are very dependent on the value chosen for the parameter λ. In this case, it can be interesting to tune this parameter and estimate its best value with respect to a prediction score. This requires computing the Gram matrices from different weight functions. We present in Fig. 19 the computation time required to compute the Gram matrices for a given number of values of the parameter. As expected from the theoretical results, we observe a linear dependency: the intercept corresponds to the computation time required to compute and annotate the DAG reduction, while the slope is associated with the time required to compute the Gram matrices, which is proportional to the average of O(min(#T_i, #T_j)) (see Remark 13). This can be compared to the time-complexity of the algorithm developed in Vishwanathan and Smola (2002), which is the average of O(#T_i + #T_j). Consequently, the corresponding computation times should be proportional to at least twice the slope that we observe with the DAG approach. This shows another interest of our method that is not related to the discriminance weight function: it should be faster to compute several repetitions of the subtree kernel from the DAG approach than from the previous algorithm (Vishwanathan and Smola, 2002), provided that the number of repetitions is large enough.

Figure 19: Computation time required to compute several repetitions of the kernel on the Faure et al. data set. All these calculations have been repeated 50 times for each number of repetitions. The intercept corresponds to the DAG compression of the data set, which is independent of the number of repetitions. The blue curve is a linear fitting of all the measurement points.
Implementation and reproducibility
The treex library for Python is designed to manipulate rooted trees, with a lot of diversity (ordered or not, labeled or not). It offers options for random generation, visualization, edit operations, conversions to other formats, and various algorithms. We implemented the subtree kernel as a module of treex so that the interested reader can manipulate the concepts discussed in this paper in a ready-to-use manner.
Basically, the subtree_kernel module allows the computation of formula (2) with options for choosing (i) κ among some classic choices of kernels (Schölkopf and Smola, 2001, Section 2.3) and (ii) the weight function, among exponential decay or discriminance. Resorting to scikit-learn as a dependency, tools for processing databases and computing SVMs are also provided for the sake of self-containedness. Finally, visualization tools are also made available to perform the comprehensive learning approach discussed in Subsection 5.2.
Installation instructions and the documentation of treex are available online. For the sake of reproducibility, the databases used in Section 5, as well as the scripts that were designed to create and process them, can be made available upon request to the authors.
Proof We begin with the case u ≠ v. The result relies on the following decomposition, which is valid under the assumptions made on T_i and the sequence (τ_h): ... {T_i[z] : z ∈ F(u) ∪ F(v)}) ∪ (S(τ_{H(u)}) ∩ S(τ_{H(v)})).
Together with (2), if θ ∈ S(τ_{H(u)}) ∩ S(τ_{H(v)}), then N_θ(T_i^z) = N_θ(τ_{H(z)}), z ∈ {u, v}, because, for any h > 0, τ_h is not a subtree of T_0 nor of T_1 by assumption. Thus, in light of (2) again. Furthermore, if θ ∈ S(T_i) \ {T_i[z] : z ∈ F(u) ∪ F(v)}, then N_θ(T_i^z) = N_θ(T_i), z ∈ {u, v}, and N_θ(T_i) = 1 because of the first assumption on T_i. (7) and (8) state the first result. When u = v, the decomposition is slightly different, ... {T_i[z] : z ∈ {u} ∪ D(u)}) ∪ S(τ_{H(u)}), but the rest of the proof is similar. Finally, the formula for K(T_1^u, T_2^v) is a direct consequence of the third assumption on T_1, T_2 and the sequence (τ_h).
By virtue of the previous lemma, one can derive the following result on the quantity ∆ i x defined by (3).
Lemma 18 Let x ∈ T i , i ∈ {1, 2}. One has Proof In light of Lemma 17, one has By assumption on the stochastic model of random trees, H(u) and H(v) have the same distribution and thus E u [K(τ H(x) , τ H(u) )] = E v [K (τ H(x) , τ H(v) )], which states the expected result.
|
2019-04-10T20:11:13.000Z
|
2019-04-10T00:00:00.000
|
{
"year": 2019,
"sha1": "236fa49075147ef684f659fda744070c3fb99318",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "236fa49075147ef684f659fda744070c3fb99318",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
}
|
4564036
|
pes2o/s2orc
|
v3-fos-license
|
Chemical and Microbiological Parameters in Fresh Water and Sediments to Evaluate the Pollution Risk in the Reno River Watershed (North Italy)
The European Water Framework Directive (WFD) establishes a framework for the protection and the monitoring of the condition of all natural surface waters of the Member States. The Italian Legislative Decree n. 152/2006 implements the WFD, establishing a monitoring system which foresees a detailed detection of several physical, chemical and microbiological parameters in order to assess the qualitative status of the water body. This study reports the freshwater quality in the Reno river basin (North Italy) from 2003 to 2011. The Reno surface water was classified as "good" in the mountain stations and at the closure of the mountain basin, while in all the other stations of the Po plain the quality ranged from "mediocre" to "poor". The decrease of water quality was due to the inflow of artificial canals that collect the discharges of sewage treatment plants, and drainage and run-off from urban, industrial and agricultural lands. In spring-summer 2011, characterized by severe drought, a study on the distribution of pollutants and nutrients in the water of the Reno river and its tributaries highlighted the impact of the main road (Via Emilia) that closes the mountain basin of the water courses. Along this road, cities and industrial and craft settlements have developed, increasing the discharges of pollutants and nutrients into the rivers. An increase of metals and nutrients was found from upstream to downstream; furthermore, the concentrations of the microbiological fecal indicators were two to three times higher than those determined in the water upstream of the urban/industrial settlements. The thresholds of Italian Law for Hg and Pb were exceeded in most rivers. Sediment analysis was also performed, because sediments can be considered a sink and/or source of pollutants. In many monitoring sites the metal concentrations were higher than the thresholds of Italian Law (data not shown), and the availability of these metals was tested with extracting solutions of different strength (EDTA, DTPA and water). The solid/water partition coefficient (Kd) was calculated to evaluate the affinity of the metals for the aqueous phase, and it decreases as follows: Cr > Mn > Ni > Pb > Zn > Cu > Cd.
Introduction
The chemical, physical and biological pollution of surface water is a topic of great attention all over the world. Rivers play an important role since they collect municipal and industrial wastewaters, but they also collect pollution from agricultural activities. Studies on water contamination have developed over the last 20 years with a view to monitoring and preventing water pollution. At the international level there are different guideline systems for monitoring and assessing the quality of aquatic ecosystems. The European Community, in the Water Framework Directive 2000/60/EC (WFD), proposes an analytic method based on the detection of several physical, chemical and biological parameters to be summarized in a quality index. The European objective is to obtain a "good" quality status for all the natural surface waters within the Member States by 2015.

Chemical analysis of freshwater can quantify nutrients and pollutants in aquatic environments, but provides no direct indication of the potential toxic effects of these metals on aquatic biota. Excess nutrients may lead to various problems including an increase of algal blooms, loss of oxygen, fish deaths and loss of biodiversity. Agricultural and urban activities are considered to be major sources of N and P in aquatic ecosystems [1][2][3][4], while metals are collected in rivers from a variety of sources, either natural or anthropogenic [5,6]. Pollutants may accumulate in microorganisms, aquatic flora and fauna and enter the human food chain. In the aquatic environment there is a continuous adsorption and desorption process between the water column and the bed sediment: studying these dynamics is the key to understanding the behavior of toxicants in a lotic system and its biological life. Because of different physico-chemical processes (e.g. adsorption, hydrolysis, co-precipitation) only a small portion of free metals is dissolved in water, while a larger amount is deposited in sediments [7]. The ability of sediments to faithfully record the "environmental impact" on freshwater ecosystems over time is well demonstrated [8][9][10]. Sediment provides habitats for benthic organisms, for microbial and fungal populations, and for all the species which reflect the quality and health of the ecosystem [11]. Contaminants are not necessarily fixed permanently by the sediments, and under changing environmental conditions they may be released into the water column by various processes of remobilization [12]. The marked tendency of heavy metals towards solid and water phase partitioning and the ability of sediments to integrate long-term information make the sediments attractive for assessing the impact of industry and urban development on the fluvial environment [13,14]. Sediment contamination is indeed a worldwide problem, especially in industrialized countries, even though the response to this problem varies across jurisdictions [15].

The Reno river is one of the most important water bodies of northern Italy and has affected the hydrographic system and development of the Po Valley (North Italy). The current configuration of the Reno basin is due to historical remediation works (from the Roman Age to the early years of the last century). The protection of the hydraulic system has led to heavily engineered waterways to the detriment of their naturalness, and the presence of industrial and urban settlements has similarly decreased the quality of the water. The aim of this work was therefore to evaluate the water quality of the Reno watershed from 2003 to
2008. Besides, to understand the dynamics of water and sediment pollution in the Reno basin, the Via Emilia, a main road on which the majority of civil and industrial settlements are located and which is crossed by 10^4 - 10^5 vehicles per day [16], was taken as a watershed of human activities. For this reason, the variations of microbiological and chemical pollutants in the river Reno and its tributaries were studied upstream and downstream of the Via Emilia during the period spring-summer 2011. In fact, in this season the rainfall is low and the rivers are fed only by wastewater from urban and industrial centers.
Study Area
The basin of the river Reno has a total area of 4930 sq km. The territorial network composed of the river and its tributaries is the result of the countless conversions made since Roman times, in terms of extensive remediation and hydraulic protection. The reclamation of the plain has led to a radical change in the river basin, where the surface water, beyond the Via Emilia, flows within artificial embankments that carry the river water to the Adriatic Sea. The mountains of Corno alle Scale (1945 m a.s.l.) and Monte Orsigna (1555 m a.s.l.) bound the watershed of the Reno river basin, with elevations varying from 1500 to 500 m a.s.l., and the territory is formed by a wide range of major and minor valleys arranged in a south-western and north-eastern direction. The hills finally meet the Via Emilia, where many towns and artisan industries have developed. The geology of the hills is characterized by gullies and outcrops of gypsum, while behind them the last stretch of the upper plain, between 100 and 50 m a.s.l., consists of fluvial sediment that has given rise to the formation of cones. The area of the Via Emilia is characterized by alluvial deposits, formed by river waters over the centuries. The alluvial deposits (gravel, sand, silt) are of different sizes and have formed cones in the direction of the plain. The plain itself is characterized by the vertical overlap of sedimentary bodies and was formed during the flooding of many rivers in the course of history.
The Reno river basin lies between the Apennines and the Adriatic sea, in a northern temperate zone, to the south-central border of the Po Valley.
Table 1 shows the average rainfall in the mountain basin of the river Reno. The least rainy months were from May to August, while the autumn-winter months were characterized by higher rainfall.
Water and Sediment Sampling Survey
The water quality of the Reno and its tributaries was monitored each month in different stations from 2003 to 2011, as established by Italian law (Legislative Decree 152/2006), by the Agency of Environmental Protection of the Emilia Romagna Region (ARPA-Emilia Romagna). The monitoring stations (Figure 1), starting from the Apennines (193 m a.s.l.) and going from upstream to downstream, are: Vergato (VE), Casalecchio di Reno (CR, closure of the mountain basin, 70 m a.s.l.), Pieve di Cento (PC, after the confluence of the artificial high-water canals of the Dosolo and those of low water, and of the stream Samoggia, into the river Reno), Malabergo (MA, after the confluence into the river Reno of the artificial canal that passes through Bologna) and Bastia (BA, where the streams Idice and Sillaro flow into the Reno river). The Reno tributaries were monitored as well.

During spring-summer 2011, a new survey was performed. Water and sediment samples were collected from the stations of the river Reno and its tributaries (Figure 1) upstream and downstream of the Via Emilia. The sampling stations upstream of the road are situated within the hilly/mountainous morphology, characterized, as reported above, by evaporitic formations (Vena del gesso) and gullies. The downstream stations are characterized by alluvial gravel deposits in the western area and more silt-clay in the eastern area. On the plain, human activity increases and industrialization becomes heavier. The latitude and longitude coordinates of the monitoring stations are shown in Figure 1.
Water samples in the 2011 survey were collected in glass bottles and kept refrigerated until analysis; microbiological analysis was performed within 24 h of sampling. Bottles for chemical analysis were washed with diluted nitric acid to remove trace elements and then flushed with milli-Q water, while those for microbiological analysis were sterilized before use. Sediment samples (0 - 10 cm) were collected in sterilized plastic bags and kept refrigerated at 4˚C until analysis. Microbiological analysis was performed within 48 h of sampling, while for chemical analysis samples were first air dried. In the Idice upstream station (IDIu) it was not possible to collect any sample because of the high percentage of gravel in the bed.
Chemical and Microbiological Analysis of Fresh Water Samples
Water samples were processed for the following analyses: electrical conductivity (EC), temperature and pH were measured in the field (Hach-Lange probe). The concentration of the carbonate ion was determined in the laboratory by titration with 0.02 N HCl to the end point of pH 4.4. Dissolved organic C and N (DOC and DON) were determined by a TOC analyzer (TOC-UV series, Shimadzu Instruments) on unfiltered samples.
The detection limit was 0.5 ppb for both total C and N. Major and trace elements were determined by Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES) (Ametek, Spectro) [17]. Anions (NO₃⁻, NO₂⁻, Cl⁻, SO₄²⁻, PO₄³⁻) were also determined.

For microbiological analysis, samples were first diluted (up to 1:10000) in Phosphate Buffered Saline (PBS buffer, BR0014G, Oxoid) and filtered through nitrocellulose membranes (0.45 μm pore size, 47 mm diameter, Sartorius). Filters were placed on solid selective media for the detection and enumeration of fecal indicators (E. coli, Enterococcus spp.). Chromocult Coliform Agar (1.00850, Merck) was used for Escherichia coli detection after incubation in aerobic conditions at 44˚C for 24 h. E. coli typically appears as blue/purple colonies whereas coliforms appear as red/rose colonies. Testing for indole production and cytochrome oxidase activity gave further confirmation of the microorganism identity.
Slanetz & Bartley (1.05262, Merck) selective agar was used for Enterococcus spp. detection after incubation in aerobic conditions for 48 h at 37˚C. Membranes with red-maroon or pink colonies were then transferred to plates with Bile Aesculin Azide Agar (100072, Merck) and incubated for 2 h at 44˚C. Colonies that turned dark brown to black with a typical dark halo were considered to be fecal enterococci colonies.
Colonies were enumerated and the results were expressed in CFU/100 mL, according to the following equation: C = (A / B) × N × (Vs / T) × F, where C = n. of colonies confirmed in 100 mL; A = n. of colonies confirmed; B = n. of colonies submitted to confirmation; N = n. of colonies suspected; T = volume of sample analyzed; Vs = reference volume (100 mL); F = dilution factor.
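As a minimal illustration, the formula above can be coded as follows (variable names mirror the definitions; the numerical values in the example are purely illustrative):

```python
def cfu_per_100ml(A, B, N, T, F, Vs=100.0):
    """Confirmed colonies per 100 mL: C = (A / B) * N * (Vs / T) * F.

    A: colonies confirmed, B: colonies submitted to confirmation,
    N: suspected colonies counted on the membrane, T: volume of sample
    analyzed (mL), F: dilution factor, Vs: reference volume (100 mL)."""
    return (A / B) * N * (Vs / T) * F

# Example: 18 of 20 picked colonies confirmed, 250 suspected colonies,
# 10 mL filtered from a 1:100 dilution.
print(cfu_per_100ml(A=18, B=20, N=250, T=10, F=100))   # 225000.0 CFU/100 mL
```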
Chemical and Microbiological Analysis of Sediment Samples
Sediment samples were air-dried and sieved to 2 mm. Electrical conductivity (EC, Orion) was measured on a 1:2.5 (w:v) suspension in distilled water, and on the same suspension pH was determined with a potentiometric pH meter (Crison). Total carbonate (CaCO₃) was quantified by the volumetric method, according to Dietrich-Frühling. Total organic C (TOC) was determined according to the Springer and Klee (1954) methodology [18], while total nitrogen (TN) was determined with the Kjeldahl method [19]. The major and trace element concentrations were determined in aqua regia: 250 mg of sample, finely ground in an agate mortar, were digested in a microwave oven (Milestone 1200) with 6 mL HCl and 2 mL HNO₃, suprapure (Carlo Erba), brought to 20 mL with milli-Q water, and filtered on Whatman 42. The solution was analyzed by ICP-OES. For the available metal fraction, 2.5 g of sediment sample were extracted in 25 mL EDTA (ethylenediaminetetraacetic acid buffered to pH 4.65 with ammonium acetate and acetic acid) and 10 g were extracted in 20 mL DTPA (diethylenetriamine pentaacetic acid + TEA buffered to pH 7.3) [20]. The suspensions were shaken for 1 h and, after filtration on Whatman 42, the solutions were analyzed for Cd, Co, Cr, Cu, Ni, Pb, Zn by ICP-OES. In addition, 10 g of sediment sample were extracted with distilled water (1:10 w:v), shaken for 16 h, centrifuged and filtered with a Millipore system at 0.45 µm; the solution was analyzed by ICP-OES. The partitioning coefficient Kd (L/kg, [21]) was calculated according to the following equation: Kd = Cs / Cw, where Cs is the pseudo-total metal concentration extracted with aqua regia (mg•kg⁻¹), and Cw is the dissolved metal concentration extracted with deionized water (mg•kg⁻¹). Results were then expressed as log values. Spores of Clostridium spp. were detected according to [22] with slight modifications. Briefly, 15 g of sediment sample were first placed in 135 mL of sterile Phosphate Buffered Saline (PBS buffer, BR0014G, Oxoid) plus Tween 80 (0.1%, v/v) and then stirred for 30 min to standardize the mixing. A further serial dilution (1:100) of each sample was heated at 85˚C for 10 min to facilitate the sporulation. Each dilution (10⁻¹ and 10⁻²) was tested in quintuplicate by seeding 0.5 mL of the suspension on Sulfite Polymyxin Sulfadiazine Agar (110,235, Merck) and incubating in anaerobic conditions at 37˚C for 24 ± 1 h. Suspected Clostridium perfringens black colonies were purified on Tryptone Soya Agar (1.05458, Merck) and identified by catalase production and biochemical profile (API 20 A, Biomerieux).
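A minimal sketch of the partitioning-coefficient calculation described above (the concentrations in the example are purely illustrative):

```python
import math

def log_kd(cs, cw):
    """log10 of the solid/water partitioning coefficient Kd = Cs / Cw.

    cs: pseudo-total metal concentration extracted with aqua regia (mg/kg),
    cw: water-extractable metal concentration (mg/kg)."""
    return math.log10(cs / cw)

# Example: a sediment with 40 mg/kg total Cu of which 0.34 mg/kg is water-extractable.
print(round(log_kd(40.0, 0.34), 2))   # ~2.07, of the order reported for Cu in this study
```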
Water Quality
The water quality of the Reno river from 2003 to 2011 was classified as "good" in the mountain stations and at the closure of the mountain basin (VE and CR, respectively), while in all the other stations of the plain the quality was "mediocre" and "poor". This trend is clearly evident in the box plots constructed from the chemical and physical data of the time series (Figure 2). The deterioration of the water quality followed the increase in electrical conductivity (EC) and nutrient load, and the first critical point was the PC station, after the closure of the basin, where the Samoggia (SAM) river and the network of artificial drainage canals with a high pollutant load [23] flow into the Reno. Similarly, in the Bastia station (BA) a high pollutant load was found after the confluence of the Sillaro (SIL), Idice (IDI) and artificial canals (Canale Navile and Savena Abbandonato), which cross the city of Bologna and collect its municipal and industrial wastewater. The nutrient concentrations were high on average (data not shown, authors' communication). The percentage of dissolved oxygen decreased to 4% - 5%, so that the quality of the water in the Reno was compromised. From station PC to BA the Reno river was able to self-purify, decreasing the pollutant concentrations at the Malabergo (MA) station, as shown in Figure 2.
The self-purification capacity of the Reno was compromised by the poor water quality of its tributaries and of the artificial canal network (Table 2). The water quality of the tributaries was classified "poor" every year according to Italian Law (D. lgs 152/2006). The EC values in the water of the tributaries followed an increasing order among the streams (Table 2), while the highest nutrient load was detected in the water of the SIL and SAM streams.
The dissolved oxygen varied from an average of 90% in SAN to 66 - 75% for the other tributaries. BOD5 and COD in SAM water were two and three times higher than in the other fresh waters, while a high amount of chlorides and sulfates was detected in the SIL stream. The water quality of both the Reno and its tributaries was compromised by a high number of colonies of pathogenic microorganisms. In particular, the maximum numbers of colonies of Escherichia coli and Enterococcus spp. in the Reno were 15,000 and 10,000 CFU 100 mL⁻¹ in the VE and CR stations, respectively, increasing to 68,000 and 63,000 CFU 100 mL⁻¹ in the stations on the plain (e.g. PC and BA). The presence of E. coli colonies varies as follows: IDI (220,000 CFU 100 mL⁻¹) > SIL (127,000 CFU 100 mL⁻¹) > SAM (11,800 CFU 100 mL⁻¹) > SAN (8,000 CFU 100 mL⁻¹), while the averages of the Enterococcus spp. colonies are 300,000 CFU 100 mL⁻¹ in SIL and 73,000 CFU 100 mL⁻¹ in IDI; lower values are found for SAM and SAN (7,800 and 600 CFU 100 mL⁻¹, respectively). The percentage of Salmonella spp. in the Reno water ranges from 21 to 31% of positive cases of pathogen presence, while in the tributaries it is as follows: SIL 20%, IDI 28%, SAM 31% and SAN 37%.
The results obtained in the Spring-Summer 2011 survey are shown in Table 3. A high amount of major anions (e.g. Cl⁻ and SO₄²⁻) was clearly detected in the downstream stations. The concentrations of the microbiological fecal indicators were two to three times higher than those determined in the water upstream of the urban/industrial settlements. The Reno river had a lower nutrient load than the other rivers, while SIL had the highest load. In these two rivers the Hg concentration was higher than the threshold of Italian law (1 μg•L⁻¹), compromising their water quality, which was classified as "poor". In all samples the Pb concentrations in fresh waters were higher than the legal threshold (10 μg•L⁻¹).
Sediment Quality
The sediments of IDI, SAM and SAN had higher percentages of skeleton upstream than at the plain stations, while those of REN and SIL had a homogeneous fine texture (Table 5). Total organic C (TOC) and total N (TN) ranged from 0.4 to 2.4 g•kg⁻¹ and from 0.1 to 1.4 g•kg⁻¹, respectively, whereby the C/N ratio was low (8 - 12), except for the Reno (32 and 22). pH values ranged from 8.1 to 9.3 and the highest values were found in the SAN and SAM downstream stations. The fecal contamination in sediments was estimated by the content of Clostridium spp. spores, which was higher in the upstream Reno and SAN than in the other rivers; an increasing trend was found in SIL. The correlation coefficients (data not shown) showed that the enrichment of Clostridium spores was influenced by an increase of EC and of total organic C and S, and by a decrease of the pH value. The metal concentrations in sediments increased from upstream to downstream, and in SAN they exceeded the threshold values for Pb and Sn (100 and 1 mg•kg⁻¹, respectively) (Table 6). The percentage of available metals with respect to the total aqua regia determination decreased as a function of the extracting solutions (Figure 3). As expected, EDTA, an acid chelating agent (pH 4.65), extracted a greater amount than DTPA (pH 7.3), while deionized water was the weakest extracting agent. The mobility of metals in the aquatic system is usually studied through the partitioning coefficient (Kd) between the liquid phase and the sediments. The logarithmic values of this coefficient ranged from 1.9 for Cd to 4.3 for Cr, whereby the Kd decreased as follows: Cr > Mn > Ni > Pb > Zn > Cu > Cd.
Discussion
The water quality of the Reno basin is strongly influenced by human impact related to the very high density of inhabitants. When the Reno river flows in the plain, its bed is hanging and its banks show reduced vegetation biodiversity [23]; despite this, the Reno river is able to sustain processes of self-purification. The type of land cover can influence the water quality, which can improve greatly in forested areas [24] compared to agricultural land, where pollution is widespread. The riparian vegetation of the Reno is able to decrease the pollutant and nutrient loads between the PC and BA stations, where no natural tributaries or artificial canals flow into its water, even though land use on the plain is prevalently agricultural.
The Spring-Summer 2011 survey highlights how the settlements along the Via Emilia are the main cause of pollution, due to an increase in nutrient load, pathogens and contaminants [25]. The high concentrations of N and P downstream are due to urban wastewater discharge [7] rather than to runoff from agricultural land [25]. In recent years, severe drought in spring and summer has been characterized by low rainfall and increased evaporation, while the minimum vital outflow of the rivers decreases and consequently the concentrations of pollutants increase. Indeed, under this latter condition the increase in EC was correlated with an increase in the nutrient load [26]. In this season only episodic storms, which increase soil losses through erosion, were observed, and these fail to dilute the water pollutant content. Temperature and rainfall can affect the growth and persistence of some microorganisms, such as E. coli and Enterococcus spp. [27,28]. An increase in the pollutant load (e.g. Pb and Zn) in water can be expected with the reduction of flow, whereby the decrease in pathogen concentrations can be due to the high concentrations of metals (e.g. Hg and Pb) discharged into the rivers. The enrichment of metal and nutrient concentrations in sediments, especially in those with a fine texture, highlights that adsorption is the prevailing self-purification mechanism [29]. As sediments represent a memory of the aquatic ecosystem [30], their role as a sink of pollutants and nutrients is correlated with the fine size fractions (silt and clay) and with iron and manganese oxides. The Clostridium spores in sediments indicate fecal contamination [22], and their growth depends on the S and C content because they are sulphur-reducing bacteria involved in the breakdown of sulphur compounds in anoxic environments [31].
The high concentration of metals in sediments can compromise the life of the aquatic ecosystem [11], but the pollution risk from metals depends on their chemical speciation rather than on their total elemental contents [32]. Extraction with EDTA and DTPA solutions reveals a different percentage of availability of the metals in sediments: the higher amount extracted by EDTA is due to its acid pH [33,34], while DTPA, with its neutral pH, does not extract the metals immobilized in sediments [34,35]. High percentages of Pb and Cu were extracted by both solutions, highlighting a greater pollution risk than for the other metals, which are poorly extractable. The same trends were found with the water-sediment partitioning coefficient (Kd) [21]: the relatively low values of log Kd for Cd (1.85), Cu (2.07), Pb (2.76) and Zn (2.35) suggest that these metals are less strongly associated with sediments and more free for transport and mobilization in water, whereas log Kd values higher than 2.8 suggest low geochemical mobility in water.
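For readers who want to reproduce the partitioning calculation, a minimal sketch is given below; the function is ours, not the authors' code, and the concentration values are hypothetical, chosen only to land near the reported endpoints (log Kd of about 1.9 for Cd and 4.3 for Cr).

```python
import math

def log_kd(c_sediment_mg_kg: float, c_water_mg_l: float) -> float:
    """log10 of the water-sediment partitioning coefficient.

    Kd = C_sediment / C_water; with C_sediment in mg/kg and C_water
    in mg/L, Kd carries units of L/kg.
    """
    return math.log10(c_sediment_mg_kg / c_water_mg_l)

# Hypothetical concentrations, not measured values from the paper.
print(round(log_kd(0.4, 0.005), 2))   # Cd-like endpoint: ~1.9
print(round(log_kd(60.0, 0.003), 2))  # Cr-like endpoint: ~4.3
```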
Conclusion
The mediocre and poor water quality determined in the Reno watershed is due to the anthropogenic impact of the municipal and industrial waters discharged into the rivers. The nutrient and pollutant loads affect the self-purification capacity of the Reno river. Adsorption of pollutants onto sediments seems to be the main self-purification mechanism, while the low pathogenic contamination is related to the severe drought during the spring-summer period which, by lowering the river flow, increases the concentration of pollutants in the water.
The study of the release of pollutants at the sediment-water interface is therefore a very important goal for increasing the self-purification capacity. Sediment is the sink of nutrients and pollutants, and their hazard can be evaluated by metal speciation and availability. The partition coefficient (Kd) of metals between water and sediment seems to be a good source of information about pollution risks. Therefore, the impact of anthropogenic activities on fluvial ecosystems should be studied taking into account the water-sediment interface.
Figure 1. Map of the Reno river basin and location of the sampling stations on the Reno river and on some of its tributaries.
Figure 2. Boxplot of the historical trend in the Reno river and its tributaries. Data present the mean, maximum and minimum values of some fundamental water parameters.

Table 2. Average, maximum and minimum concentrations of water quality parameters of the Reno tributaries (rivers Samoggia, Idice, Sillaro and Santerno). Suspended solids (SS), total nitrogen (TN), ammonium (NH4+), nitrate (NO3−), nitrite (NO2−), dissolved oxygen (DO), biochemical and chemical oxygen demand (BOD5 and COD, respectively), total phosphorus (TP), orthophosphate (ORT-P), chloride and sulphate (Cl− and SO42−) are expressed as mg·L−1; electrical conductivity (EC) is expressed as µS·cm−1. Microbiological parameters are expressed as log Escherichia coli (log ES) and Enterococci (log EN).
Table 3. Mean concentrations of chemical-physical and microbiological parameters in up- and downstream stations. Chemical parameters are expressed as mg·L−1, EC as µS·cm−1, and microbiological parameters as CFU 100 mL−1.
Figure 3. Mean, minimum and maximum values of metal availability for the EDTA, DTPA and water extractions. Data are expressed as a percentage of the total fraction determined by aqua regia.
Catchment-Scale Modeling of Nitrogen Dynamics in a Temperate Forested Watershed, Oregon: An Interdisciplinary Communication Strategy
We present a systems modeling approach to the development of a place-based ecohydrological model. The conceptual model is calibrated to a variety of existing observations, taken in watershed 10 (WS10) at the HJ Andrews Experimental Forest (HJA) in Oregon, USA, a long term ecological research (LTER) site with a long history of catchment-scale data collection. The modeling framework was designed to help document and evaluate an evolving understanding of catchment processing of water, nitrogen, and carbon that has developed over the many years of on-going research at the site. We use the dynamic model to capture the temporal variation in the N and C budgets and to evaluate how different components of the complex system may control the retention and release of N in this pristine forested landscape. Results indicate that the relative roles of multiple competing controls on N change seasonally, between periods of wet/dry and growth/senescence. The model represents a communication strategy to facilitate dialog between disciplinary experimentalists and modelers, to produce a more complete picture of nitrogen cycling in the region.
In regions that lie outside of urban or other point sources of atmospheric nitrogen, N continues to be a major limiting nutrient, losses of plant-available N remain very low, and nitrogen saturation is not currently part of the developmental trajectory [12]. The models that have been developed in regions of excess N deposition are not necessarily applicable in these places, a fact that underscores the suitability of low deposition regions as a useful counterpoint in the evaluation of ecosystem N cycling.
The HJ Andrews Experimental Forest (HJA) in the Pacific Northwest of the USA is one such region. A variety of studies, based in small watersheds at HJA and focused on different components of the nitrogen cycle, have been developed since its inception in the 1960s. The synthesis of these various disciplinary studies is an ongoing effort. The application and development of a conceptual numerical model, which attempts to incorporate key components of the evolving understanding of N dynamics, provides an opportunity to inject some temporal dynamism into the ecosystem budget approach and, in this sense, to contribute to the overall direction of field-based research and interpretation.
The overall objective of this paper is to outline a model formulated to describe the processes controlling N cycling in low deposition, small and primarily coniferous forest watersheds at HJA. The basis for this objective is twofold. First, some potential factors associated with the low deposition nature of the region are not explicitly captured by the current suite of standard ecosystem models. These include the relative importance of losses of organic nitrogen, as well as the potential importance of instream processing of nitrogen, and epiphytic and asymbiotic nitrogen fixation. Second, a range of disciplinary experimentalists, ranging from forest ecologists to soil biogeochemists to hillslope hydrologists and aquatic biologists, collect data and develop expertise at the site. Much of this expertise has a direct bearing on nitrogen cycling, yet because it emerges from different disciplines it has been difficult to integrate it and develop a more complete understanding of the ecosystem as a whole. The model has been developed to explicitly capture a temporally dynamic N budget, by directly capturing observed rates and states that operate at HJA and forests like it (Figure 1).
The model represents a communication strategy to facilitate dialog between disciplinary experimentalists, to produce a more complete picture of nitrogen cycling in the region. This kind of modeling has seen recent growth in the environmental sciences [13,14] and we view this explicit development of complete, yet conceptually simplified models as a mechanism to more fully evaluate complex environmental dynamics. In particular, our work contributes to the idea that conceptual systems modeling can contribute to interdisciplinary science as a means of providing the capacity for individual researchers to contribute to evolving models of complex systems [13]. The models that result from such a process are not necessarily designed as predictive tools, but instead as a means to document key system details and how components may interact. Such a model may provide a useful point to begin the development of predictive modeling tools, either through detailed sensitivity and uncertainty analysis, or through the development of process-based algorithms. Nonetheless, roughly calibrated conceptual models, like we present here, present a useful framework for discussion. Note also that the litter input into aquatic biomass is derived directly from the stocks of foliage and branches/coarse wood.
Ecosystem Modeling
A wide variety of models have been developed to evaluate questions related to nitrogen dynamics, with the variation primarily manifested as different levels of complexity, different scales of application, and different questions of interest. These range from 3-dimensional physically-based research models, which are applicable across a broad range of environments and time scales (ecosys; [14]), to lumped data-based techniques (UNERF; [15]), which are reliant upon long term input-output measurements and maintain no physical basis. In this paper we are primarily interested in forested watershed nitrogen dynamics, which represents only a small subset of this broad range of models. For these ecosystems, modeling strategies generally incorporate associated carbon and water cycling, and while there is considerable overlap, the models tend to focus on one of three basic lines of inquiry. The first of these relates to forest productivity and succession. Productivity models are developed primarily to predict the successional evolution of above ground biomass (e.g., 3PG [16]) or focus on carbon dynamics and primary productivity (e.g., the PnET models [17]). Models in this group tend to include a more robust depiction of primary production, canopy processes, and carbon allocation, with somewhat less detail in terms of soil organic matter processing. The second general class of ecosystem models focuses more heavily on soil nutrient cycling and soil organic matter (SOM) dynamics. This group includes many agricultural models, some of which have been adapted to represent forested landscapes (CENTURY [18]). A third approach focuses not on vegetation or SOM properties, but rather on hydrology and nitrogen export, in an effort to provide a predictive tool to quantify nitrogen leaching potential (MERLIN [10]).
Regional to global scale biogeochemical models, a somewhat different class of simulation tools, make use of many of the concepts outlined in the models above, coupling water, carbon, and nitrogen cycles to represent complete regional scale ecosystems. These models display less disciplinary focus, and are used primarily to evaluate the function of whole ecosystems under changing climate or deposition patterns. Representative models in this group include GEM [19], BIOME-BGC [20] and INCA [21].
Most of these ecosystem scale models maintain a spatially lumped approach, though a number of spatially distributed simulation tools, those that include lateral interaction terms, have been developed (e.g., RHESSys [22]). With the exception of MEL [23], none of these models treat the production or mobility of dissolved organic nitrogen, an oversight representing a potentially significant structural error in terms of potential maximum amounts of sequestered carbon [23]. Additionally, while a variety of aquatic simulation models have been developed [2], these concepts have not been explicitly included in any of the ecosystem models.
The modeling we developed here relies fundamentally upon a variety of ideas and algorithms proposed within the range of existing ecosystem models. Our intention is not to replace any of these models, but rather to select elements from them, and to put those elements together to represent a particular location with a particular set of processes.
Study Site
The HJ Andrews Experimental Forest (HJA) Watershed 10 (WS10) is located in the western Cascades of Oregon. Soils are predominantly composed of weakly developed Inceptisols, with local areas of Alfisols and Spodosols made up of thick organic horizons over weathered parent materials [24]. The geology of WS10 is characterized by Miocene volcanics, primarily breccia and massive tuff [25]. Glacial, alluvial, and mass movement processes have resulted in a deeply dissected, locally steep drainage with highly variable regolith depths [25]. Vegetation is primarily composed of Douglas-fir (Pseudotsuga menziesii), western hemlock (Tsuga heterophylla), and western red cedar (Thuja plicata). Average annual precipitation ranges from 2300 mm at lower elevations to over 3550 mm at the highest elevations, and the climate is Mediterranean, with wet mild winters characterized by long duration, low to moderate intensity frontal storms. WS10 is rather unique in that a significant debris flow in 1996 effectively removed the entire riparian area, and most of the stream channel currently flows on bedrock. We limit our model explorations in WS10 to the pre-1996 period, when the riparian area was intact. A more complete description of WS10 and other small watersheds at HJA is provided in [26].
Over the 55 years of small watershed studies at HJA, researchers have investigated recurring themes including material and elemental budgets, forest hillslope-stream interactions, biogeochemical and hydrologic responses to disturbances, and forest ecology. Framing this long-term research is a unique long term dataset documenting the seasonal input/output response, including organic and inorganic nitrogen and water, of six small catchments for over 30 years [27]. In this paper, we focus on one of the small catchments, WS10, where terrestrial [28,29] and aquatic [30] elemental budgets have previously been developed. These studies provide a key set of measurements, which we revisit in this paper as calibration and evaluation terms for the numerical model.
The HJA-N Model
The HJA-N model is cast as a set of mass balance equations, and uses various strategies to represent rate terms. The formulation is designed to explicitly track fluxes and masses that pass through a set of roughly defined environmental storages, differentiated in both vertical and lateral terms. It is a dynamic model that incorporates transient input data at monthly time steps, and as such is useful for evaluating potential effects of seasonality on N cycling and long term trends in N cycling in response to environmental disturbances including climate change, increasing CO2 concentration, and large scale vegetation manipulation. It is, however, not currently designed for the short time scales necessary to consider storm events. Table 1 outlines the naming convention used in the tables outlining the model. The full list of mass balance equations is included in Table 2, with the rate terms outlined in Tables 3-6. Model parameters are listed in Tables 7 and 8. The model was implemented using the Stella© systems modeling framework [31] to more fully communicate the developing model structure with the disciplinary experts providing the perceptual model of catchment process.

Table 5. Rate terms comprising the soil-processing mass balance equations. See Table 7 for parameter definitions.

Table 7. Parameters and auxiliary equations completing the production and allocation portions of the model.
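As a minimal illustration of how such a stock-and-flow formulation is stepped forward in time (this is not the Stella implementation; the function and names are ours), each state variable is updated from the sum of its rate terms at the monthly step:

```python
def step_stock(stock, inflows, outflows, dt=1.0):
    """Explicit Euler update of one mass-balance stock:
    dS/dt = sum(inflows) - sum(outflows), with dt in months."""
    return stock + dt * (sum(inflows) - sum(outflows))

# Example: a root-zone N stock receiving deposition and mineralization
# and losing uptake and leaching (all values are placeholders).
n_rz = step_stock(12.0, inflows=[0.02, 0.31], outflows=[0.30, 0.001])
```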
Measured Data
Atmospheric inputs included precipitation, air temperature, radiation, and atmospheric deposition from observational records at the primary meteorological station at the Andrews Forest, composited to 3-weekly sampling intervals. Stream outputs, used for model evaluation, include discharge and fluxes of dissolved organic and inorganic N from stream chemistry sampling at Watershed 10 (WS10) in the Andrews Forest, also at a 3-weekly sampling resolution [27].
Hydrologic Model
The hydrologic model is conceptually similar to models such as HBV [32] in that process descriptions involving the filling and drainage of storages are based on first order assumptions. While a variety of more sophisticated techniques are readily available, our simplifications are consistent with the monthly timescales of both the input and output data. A complete description of the rate terms is included in Table 3. Five pools are used to define the hydrology (Table 2). These pools represent the canopy, the root zone, the region below the root zone, the instream aquatic environment, and a separate storage representing the riparian/hyporheic zone.
Interception is treated as a linearly decreasing function of canopy storage. Canopy evaporation is calculated independently of evapotranspiration and, along with canopy throughfall, is treated as a first order loss term from canopy water storage. Evapotranspiration from the rooting zone is calculated using a simple air temperature index approach, limited by water content, following [33]. The runoff generation model is comprised of two vertically-oriented storages, conceptually corresponding to the rooting zone and to the region below the rooting zone and above bedrock, which is considered to be impermeable. Surface soils in the region are highly permeable, and surface ponding or infiltration-excess overland flow has not been observed [34]. Following from these observations, the modeled infiltration capacity is assumed to be larger than the rainfall rate, and all throughfall enters the upper soil layer. The upper water storage feeds water vertically into the lower storage unit. Downslope flows are assumed to occur within the lower storage as saturated subsurface stormflow, which has been demonstrated in the catchment during high input events [26]. Water exiting the lower soil zone enters the near-stream zone, where exchanges between the riparian zone, hyporheic zone and surface water are depicted again using a series of first order storage terms.
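A minimal sketch of the two stacked first-order storages follows; the function and rate constants are illustrative assumptions, not the calibrated HJA-N code.

```python
def hydro_step(s_upper, s_lower, throughfall, et, k_perc=0.3, k_lat=0.2):
    """One (monthly) step of the two stacked linear reservoirs.

    All throughfall infiltrates (no infiltration-excess overland flow);
    the upper (root zone) store drains vertically into the lower store,
    which drains laterally as saturated subsurface stormflow.
    """
    s_upper = max(s_upper + throughfall - et, 0.0)
    percolation = k_perc * s_upper           # first-order vertical drainage
    s_upper -= percolation
    s_lower += percolation
    lateral_flow = k_lat * s_lower           # first-order downslope flow
    s_lower -= lateral_flow
    return s_upper, s_lower, lateral_flow
```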
Vegetation Model
Carbon and nitrogen in pools representing wood, foliage, and fine roots are included in the model. The woody pool includes coarse roots, logs, and branches (Table 9). The three pools were utilized primarily because they are consistent with a variety of measurements that have been made at HJA, and because they are functionally useful, in that the CN ratios and decomposition rates of these three pools are distinct.
Biomass production follows closely from the 3-PG model [16]. Gross primary production (GPP) is estimated from measured net shortwave radiation, using a simple empirical relationship between shortwave radiation and the photosynthetically active fraction (PAR). Beer's law is utilized to approximate light attenuation through the single-layer canopy and the fraction of incident PAR absorbed by the vegetation. The leaf area index (LAI) is calculated from the simulated foliar biomass, where the specific leaf area is assumed to be a species-dependent constant.
A collection of five functional modifiers is utilized to reflect the role of environmental conditions in limiting the quantity of absorbed radiation utilized by the vegetation. These modifiers relate to soil moisture, vapor pressure deficit, stand age, and air temperature, and, in an extension of the original 3-PG concept [16], we have also included the availability of plant-available nitrogen in the root zone. The resulting estimate of utilized radiation governs, in combination with an estimate of canopy quantum efficiency, the estimated GPP. Net primary production (NPP) is then estimated as a constant fraction of GPP (see the sketch below). Live allocation of NPP (as carbon) to the three major biomass stocks follows from [16]. Nitrogen storages follow from the production of carbon, and are based on targeted CN ratios for each of the pools. NPP is initially allocated to fine roots, as a function of absorbed and utilized radiation. More limiting growth conditions (captured by the five modifiers defined above) result in a larger allocation to roots, following directly from [19]. After fine root NPP is calculated, the remaining fraction of NPP is allocated to woody material and foliage using set fractions developed to maintain the targeted CN ratios of wood and foliage.
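The production pathway just described can be condensed into a few lines. This sketch is ours, with illustrative parameter values; the 0.53 NPP fraction follows from the 47% respiration assumption discussed in the Results.

```python
import math

def gpp(par, lai, alpha=1.8, k=0.5, modifiers=(1.0,)):
    """GPP from absorbed PAR (illustrative parameters).

    Beer's law gives the fraction of incident PAR absorbed by the
    single-layer canopy; the product of the environmental modifiers
    (soil moisture, VPD, stand age, temperature, N availability),
    each in [0, 1], scales the radiation actually utilized, and
    alpha plays the role of the canopy quantum efficiency.
    """
    fapar = 1.0 - math.exp(-k * lai)   # Beer's law light attenuation
    utilized = par * fapar
    for m in modifiers:
        utilized *= m
    return alpha * utilized

# NPP as a constant fraction of GPP (1 - 0.47, per the Results section).
npp = 0.53 * gpp(par=100.0, lai=6.0, modifiers=(0.8, 0.9, 1.0, 0.95, 0.9))
```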
Uptake of N is a rate term which transfers mass from the dissolved inorganic nitrogen (DIN) pool into the living biomass N pools, and is similar to that employed within MERLIN [10]. Michaelis-Menten kinetics are used to develop a non-linear rate which depends upon DIN availability and also plant nutritional requirements (see Table 4), as inferred from NPP and the targeted CN ratio of the three vegetation pools. The dependence of uptake on production is incorporated by allowing KNup to vary linearly with NPP. The half-saturation constant likely varies in time, depending upon the plant CN ratio [8]; however, this additional complexity is not incorporated into the current model. Nup is allocated to each of the three live biomass storages based upon deviations of the current CN ratios from target live CN ratios (defined as CN/CNt) for the fine root components. As the CN ratio for the fine roots deviates further from the targeted value, a larger percentage of Nup is allocated to the fine roots. The portion of Nup which is not allocated to the fine roots is partitioned between the wood and foliar components based upon a constant allocation fraction.
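A hedged sketch of the uptake kinetics follows; the parameter names and values are invented for illustration, but the structure (Michaelis-Menten in DIN, with the half-saturation constant varying linearly with NPP) is as described above.

```python
def n_uptake(din, npp, u_max=1.0, k_base=0.5, k_slope=0.1):
    """Michaelis-Menten uptake of DIN by vegetation.

    k_nup varies linearly with NPP so that uptake tracks plant
    nutritional demand; all parameter values are placeholders.
    """
    k_nup = k_base + k_slope * npp
    return u_max * din / (k_nup + din)
```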
Stoichiometric N requirements of living biomass are also satisfied through N fixation, which occurs primarily in the canopy, given the presence of lichen. The rate of symbiotic fixation is calculated using a maximum fixation rate modified by the air temperature modifier [26]. Nitrogen fixed within the canopy is distributed based upon the nitrogen allocation fractions. Nitrogen is also introduced into the system through asymbiotic fixation, calculated analogously to symbiotic fixation. The moisture status of the substrate may play a role in the rate of fixation, but given the overall degree of model complexity, we did not attempt to include this factor directly. The important point is that we have tried to include key features, and to parameterize them based upon available observations and/or acceptable estimates. A symbiotic fixation rate of 2.8 kg/ha-year has been estimated at WS10 [29]; for our modeling we used a maximum value of 4 kg/ha-year, modified by temperature, to yield a somewhat more dynamic value that is approximately similar to the older estimate. Asymbiotic fixation was not estimated by [29], but in the intervening years it has become clear that it is a potential N source, one which we did include in our modeling. Without additional information, we assumed the maximum rate was 1 kg/ha-year, added into each of the dead biomass pools. Turnover of each of the vegetation pools is assumed to proceed as a first order loss rate.
Transfers of C and N from plant residue into SOM are based upon fixed turnover rates, with fluxes dependent upon C and N concentrations and air temperature. Dependencies of turnover rates on other physical factors, such as moisture status, ET, CO2 concentration, fire patterns or surface-to-volume ratios, are not incorporated. A more mechanistic model could provide better estimates, but our goal was to balance model simplicity with an interest in capturing key stocks and flows; here we felt that simplicity in the SOM turnover was justified. The plant availability of N is largely determined by decomposition; however, the microbial populations present within decomposing material typically immobilize any available N prior to its release to SOM. This results in a typical pattern comprised of an initial decrease in the CN ratios of fresh plant residues, with release of N and stabilization of CN ratios after some period of time [35].
This observation is incorporated into the model through the specification of the stable CN ratio below which N is transferred to SOM. As substrate CN ratios fall below these critical CN values, N is lost to SOM at the same rate as C. Initial CN ratios of the different residue pools exert a strong influence on N losses through decomposition in this lumped model. Refer to Table 4 for a complete description of the rate terms defining the vegetation sub-model.
Soils Model
The soil organic matter (SOM) sub-model is defined similarly to that utilized in the PnET-CN model of [17], in that the number of SOM pools is very small, particularly when compared with standard SOM models. The current version of HJA-N includes two SOM pools, the first representing root zone SOM and the second representing below root zone SOM, with carbon and nitrogen explicitly represented in both (Table 2). Although the inclusion of additional pools could be used to more precisely describe the wide distribution of temporal SOM stability, an evaluation of the simpler definition against the long term measured data represents a useful first step, and is consistent with the soil nitrogen budget developed in WS10 in the early 1980s [29].
Four additional below-ground nitrogen pools are included to represent DIN and DON in both the rooting and below rooting zones. A kinetic sorption isotherm is used to separate soil bound nitrogen from dissolved forms, assuming that the proportion of each stays constant through time. Hydrologic losses are defined based upon the flux rates calculated by the hydrologic components of the model and the concentrations of freely available DON and DIN. Landscape scale denitrification rates are not well understood, but the model does maintain a first order denitrification loss pathway from the DIN storage.
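Hydrologic N loss can then be sketched as the product of water flux and dissolved concentration, with a fixed dissolved fraction standing in for the constant-proportion sorption assumption; the names and values here are illustrative, not HJA-N parameters.

```python
def leaching_loss(n_pool, water_flux, water_storage, f_dissolved=0.3):
    """N leached from a storage during one step.

    The sorption isotherm is reduced to a constant dissolved
    fraction f_dissolved (a placeholder); concentration times the
    outgoing water flux gives the hydrologic loss.
    """
    concentration = (f_dissolved * n_pool) / water_storage
    return concentration * water_flux
```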
The soil respiration model is defined similarly to that for respiration from plant residues as a first order rate, which includes temperature dependence based upon the q10-based temperature modifier. The respiration rate is assumed to represent the production of both CO2 and DOC. The mobilization of both DIN and DON is calculated as a proportion of the soil respiration rate.
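The respiration and mobilization terms can be sketched as follows, again with invented parameter values; the 50/50 DIN/DON split mirrors the fixed partitioning assumption described below and in the Results.

```python
def soil_respiration(c_som, t_air, k=0.02, q10=2.0, t_ref=10.0):
    """First-order SOM respiration with a q10 temperature modifier.

    Returns total respired C (CO2 plus DOC production); k, q10 and
    t_ref are placeholder values, not calibrated HJA-N parameters.
    """
    f_temp = q10 ** ((t_air - t_ref) / 10.0)
    return k * c_som * f_temp

def n_mobilization(respired_c, cn_soil, f_din=0.5):
    """Dissolved N production proportional to respiration, split
    between DIN and DON by a fixed partitioning constant."""
    n_total = respired_c / cn_soil
    return f_din * n_total, (1.0 - f_din) * n_total
```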
The incorporation of DON production and loss is a key feature of the model. Very few ecosystem models include DON as a component of the nitrogen cycle, however [19] proposed and evaluated four potential definitions of DOC mobilization, and then used soil CN ratios to proportionally estimate the production of DON. These definitions included a constant loss model, a first order model, a model based upon soil CN ratios, and lastly a model where the rate of mobilization was proportional to the microbial respiration rate. The last of these rate definitions is consistent with our definition of SOM production, and as such was incorporated into the model.
At WS10 we have a long term record of streamwater DON and DIN export, but production rates in soils have not been studied. Our model reflects this in its simplicity. We assume that the total production of dissolved N is proportional to the soil respiration rate, depending upon the soil CN ratios. This overall production rate of dissolved nitrogen is then split into DIN and DON assuming a fixed partitioning constant. The DON pool includes an additional respiration term that is used to simulate the continuous decomposition of DON, which we assume results in the further production of DIN.
A more compelling definition of these dissolved N pools would separate plant-available N from unavailable N, rather than organic from inorganic [36,37]. Such a distinction would recognize the fact that organic nitrogen is a term that represents a wide variety of compounds, with a significant range in molecular weights [38], and would then allow the lower molecular weight fraction of that distribution to interact more directly with the vegetation and microorganisms. However, in this case we are limited by the available long term records of aquatic DON and DIN, which do not support such a distinction. To be consistent with these data, we make the simplifying assumption that the DON pools, throughout the model domain, are unavailable forms. A complete description of the rate terms defining the SOM sub-model is included in Table 5.
Aquatic Environment Model
Most watershed-level studies of nutrient retention and release focus primarily upon terrestrial processes, and most watershed-level ecosystem models maintain this focus, and do not explicitly include nutrient dynamics within near stream areas. Yet it is known that the aquatic and hyporheic environment in small streams can exert significant influence on both the quantity and the forms of exported nutrients [32,33]. The residence time of water and solutes can also be extended based upon hyporheic exchange flows [39]. A 15N addition experiment [40] demonstrated that 32.5% of N added over a 6-week period during the growing season was retained by a second order stream in HJA. HJA-N explicitly includes a set of stocks (Table 2) and flux terms (Table 6) designed to capture the potential role of the aquatic system in regulating the export of terrestrial N fluxes.
The model makes use of three pools to represent nitrogen in the aquatic environment, and in this version of the model carbon is not accounted for within the aquatic environment. The pools that are included correspond to DON, DIN, and the aquatic biomass. These pools are assumed to represent the combination of the channel and hyporheic zones. The aquatic biomass pool contributes, through a first order respiration model, to both the DIN and DON pools. In addition, gross immobilization of DIN, as an addition input into the aquatic biomass, is also included as a first order term. Nitrogen is lost from the stream system as DON and DIN export, and also through a first order denitrification term. The aquatic biomass also includes a loss rate associated with particulate export, conceptually associated only with the near stream area environment. Inputs of particulate matter from the upslope region are defined based upon the turnover terms (litterfall and mortality) of foliage and woody material.
The loss of aquatic biomass is treated as a first order rate that is activated only above a discharge-based threshold. At high flows, accumulated biomass is quickly lost from the system, with periods of accumulation during lower flow conditions.
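A sketch of this threshold behavior follows (function and values are ours, for illustration only):

```python
def particulate_loss(aquatic_biomass_n, discharge, q_thresh=5.0, k_loss=0.8):
    """First-order loss of aquatic biomass N, active only above a
    discharge threshold: biomass accumulates at low flow and is
    flushed rapidly at high flow. Parameter values are placeholders."""
    if discharge <= q_thresh:
        return 0.0
    return k_loss * aquatic_biomass_n
```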
Control Capacity
To facilitate a discussion of the seasonal variation in the features controlling nitrogen cycling, we propose the nitrogen control capacity, a set of normalized rate terms which can be derived from the temporally varying model results. Hydrology controls nitrogen dynamics through the export of dissolved nitrogen compounds. To capture this flushing behavior, we define the transport control as

$$C_{transport} = F_{DIN} + F_{DON}$$

where $F_{DIN}$ and $F_{DON}$ are the simulated export (flushing) rates of DIN and DON. Hydrologists tend to view nitrogen dynamics through the lens of the flushing hypothesis, and this term is designed to capture the contribution of flushing to the movement of N in the system. The contributions of simulated DIN and DON to the flushing index were normalized by the area of the stream, rather than the area of the watershed; we used an area of 767 m2, following [30]. The vegetation controls N cycling through nitrogen mobilization, which we define as the difference between litter decomposition rates and uptake. We have elected to group litter and vegetation together, though clearly they could also be treated independently. Under this definition, the above ground control term is defined as

$$C_{above} = D_{N,f} + D_{N,w} + D_{N,r} - U_{DIN,RZ}$$

where $U_{DIN,RZ}$ is the uptake rate into vegetation and $D_{N,f}$, $D_{N,w}$ and $D_{N,r}$ are the decomposition rates contributing nitrogen from foliar, woody, and root litter, respectively. Soil control is defined analogously as the net mobilization rate, in this case

$$C_{soil} = M_{DIN,RZ} + M_{DON,RZ} + M_{DIN,BRZ} + M_{DON,BRZ} - I_{DIN,RZ} - I_{DIN,BRZ}$$

where $M_{DIN,RZ}$ and $M_{DON,RZ}$ are the mobilization rates of DIN and DON within the root zone, $M_{DIN,BRZ}$ and $M_{DON,BRZ}$ are the mobilization rates of DIN and DON below the root zone, and $I_{DIN,RZ}$ and $I_{DIN,BRZ}$ are the immobilization rates of DIN in and below the root zone, respectively. Note that DON is assumed to be unavailable to plants or the microbial complex, and as such has no immobilization rate. We then define the aquatic control in a somewhat different fashion, including the net mobilization of nitrogen but also the simulated rate of particulate export:

$$C_{aquatic} = M_{DIN,IS} + M_{DON,IS} - I_{DIN,IS} + P_{N,IS}$$

where $M_{DIN,IS}$ and $M_{DON,IS}$ are the mobilization rates of DIN and DON from the aquatic biomass, $I_{DIN,IS}$ is the immobilization rate of DIN into the aquatic biomass, and $P_{N,IS}$ is the export rate of particulate N, which again originates from the aquatic biomass. Like the simulated values of stream DIN and DON, the instream process control was normalized by stream area rather than the full watershed area. For comparative purposes, the four values are then normalized by the overall sum of the included rate terms to produce a ratio of control for each term, which varies throughout the model timeframe, given the system dynamics.
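The normalization step can be written compactly; note that the text does not state a sign convention for the net terms, so taking absolute values here is our assumption.

```python
def control_capacity(transport, above_ground, soil, aquatic):
    """Normalize the four control terms so they sum to one.

    Absolute values are used so that net sinks still register as
    control (an assumption; the paper does not specify how negative
    net mobilization should be handled)."""
    terms = [abs(transport), abs(above_ground), abs(soil), abs(aquatic)]
    total = sum(terms)
    return [t / total for t in terms] if total > 0 else [0.0] * 4
```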
Results
The model includes a variety of rate terms, many of which have not been independently measured. In order to accommodate the resulting uncertainty we approached model evaluation using a parameter adjustment strategy based primarily upon expert judgment. During this phase of application, the model was evaluated against both measurements and, for those terms where measurements in WS10 were unavailable, more qualitative estimates of reasonability. The model was run for a total of 80 years, using a repeated 20-year input dataset, which was based on the 3-week compositing of inputs and outputs from 1968 to 1988. It is important to note that the watershed was clearcut in 1975, and that the effects of the harvest were evident in the observed N export. Reported here are only the last 20 of those years, with the first 60 years acting as a period to allow the differential equations which make up the model to come to a relatively steady state with respect to the initial values of all of the state variables.
Evaluation of Budget Estimates of N and C Stocks
The average modeled results of the key nitrogen and carbon stocks are consistent with budget-based measurements that are available from WS10 [29], or have been taken from similar forested regions [41]. Key features of the results include a dominance of carbon storage in woody material (65% of total carbon storage) and of nitrogen storage in soils (75% of total nitrogen storage) (Figure 2). Differences between the modeled values and the measurements were anticipated, particularly because the measured stocks were not necessarily binned in the same manner as the model description, and because we are comparing average model results representing 20 years of simulation to measurements that were developed to represent a full year and, in the case of the Wind River data [41], at a different location. Direct comparisons between the available budget-based measurements and the model-based estimates (Figures 3 and 4; Table 1) indicate that the model is able to capture the general magnitude of carbon and nitrogen storage, and also the differences between the key environmental compartments.
Figure 2. A comparison of simulated stocks of nitrogen (a) and carbon (b) against budget estimates from [29]. Note that the estimates of observed C stocks (except for SOM, labelled above) were originally derived by [28], and that we assumed the carbon content of dry mass was 50%. Additionally, we assumed that 50% of the category "Fallen foliage and fine woody litter" from [29] was dead foliage and that 50% was dead wood.
Model Evaluation against Observations
The long-term record, which includes stream water discharge, and DON and DIN export, is rare, and provides an opportunity to constrain model operation. A comparison of the time series records to the modeled result is included in Figure 5, and demonstrates that the model effectively captures the seasonal pattern outlined by the measured discharge and DON. For these variables, the modeled Nash-Sutcliffe efficiency [42] values are 0.71 and 0.53, respectively. The efficiency for DIN is −0.12, which clearly indicates that the model does not capture the measured response. This apparent failure of the model is likely because we did not attempt to simulate the impacts of the clear-cut harvest that occurred in 1975. The removal of the trees, and pre-treatment activity, led to elevated DIN export after the harvest. The effects of this disturbance on DIN have been explored by [43], are clearly evident in the long-term record, and have been attributed to reduced uptake of N by vegetation. It is worth noting that the Nash-Sutcliffe efficiency calculated for the years prior to the management activity (1968 to 1973) is higher, 0.33, lending additional support to this suggestion.
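For reference, a short implementation of the Nash-Sutcliffe efficiency [42] (ours, not from the paper):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values below 0
    (as for DIN here) mean the observed mean is a better predictor
    than the model."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```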
Evaluation against Budget Estimates of N and C Fluxes
Given that the model is capable of representing key long term output measurements of water, DIN and DON, and also the overall trends in storage, the next step is to evaluate the degree to which the model is working for the right reasons; this we accomplish through an evaluation of the internal rate terms. These rates include respiration, mobilization, immobilization, internal solute fluxes and the aquatic processes that follow from them. Here we return to the existing budget studies, which provide a set of annual flux estimates that we utilize in calibration and to better understand model function. Rate terms are broadly grouped into four categories representing carbon fluxes, the sources of nitrogen, SOM dynamics, and processes occurring in the aquatic environment. This division is not to suggest that these categories are independent of one another, but only to facilitate presentation of the results. In all cases, we present continuous model results (Figures 6-9) and, in addition, time-integrated average yearly values (included in Figures 6-9 and Tables 9 and 10), which can be directly compared against yearly values from the budget studies.

Figure 5. Observed and simulated time series of discharge, DON and DIN. Nash-Sutcliffe efficiencies are 0.71 for discharge and 0.53 for DON. For DIN the efficiency is −0.13; as discussed in the text, this is likely due to the clearcut that occurred in the watershed in the early 1970s, whose effects the model did not attempt to incorporate.

Table 9. Comparison of modeled pool sizes (averaged over the 20 year simulation period) against measured values as reported in (a) [29] from WS10 and (b) [41] from Wind River, WA. We assumed a carbon content of 50% to estimate C from the dry mass reported in [29]. The standard deviation of the modeled values is included in parentheses.
Carbon Fluxes
The key rate terms capturing carbon dynamics are gross and net primary production, turnover (including mortality and litterfall), and both autotrophic and heterotrophic respiration. Model results for each rate term were integrated over the 20 year simulation period, and the yearly averages over that period are presented in Table 9. The average yearly gross primary productivity is estimated as 1.4 kg/m2-year, with the daily rates ranging from nearly 0 kg/m2-year (periodically during the winter period) to over 3 kg/m2-year (Figure 6). The average yearly rate is at the low end of the range of 1.4 to 3.3 kg/m2-year estimated by [41] for a similar old growth forest, and within the range of 1.08-1.92 kg/m2-year estimated using remote sensing techniques across a Douglas-fir western hemlock forest on Vancouver Island, Canada, by [44]. The simulated values are considerably lower than the 11.1-21.7 kg/m2-year estimated by [28] at HJA in 1970; different respiration estimates explain the discrepancy. We assumed a respiration value of 47% of GPP, a heuristic generated as part of the collaborative model development process, whereas the older budgets of [28] resulted in respiration values of 92%-94% of GPP. Given the assumption that Ra is 47% of GPP, the yearly average NPP from the model is 0.60 kg/m2-year, similar to estimates of NPP reported at other similar sites, for example [41] (Figure 6). Turnover is treated as a first order model which does not include any outside dependence (for example on air temperature or soil water content), which is reflected in the low temporal variation displayed by the turnover rate. The modeled value of average yearly turnover (0.572 kg/m2-year) is similar to that measured by [41], which ranged from 0.370 to 0.690 kg/m2-year. Modeled heterotrophic respiration (Rh) is defined to include the production of CO2 and DOC from litter and soils. The average yearly value of 0.397 kg/m2-year is within the range of 0.341 to 0.509 kg/m2-year estimated by [43].
Input of N to SOM
External inputs of N include deposition and both symbiotic and asymbiotic fixation. Internal inputs of N to SOM include both mortality/litterfall and decomposition. Decomposition and turnover rates (internal recycling) are simulated to be at least an order of magnitude larger than the external rates (Figure 7). The external rates of deposition (which include both DIN and DON) are simply measurements, and the fixation rate terms have been calibrated to mimic the few estimates that are available for these sites. Symbiotic fixation in WS10 has been estimated at 0.280 g/m2-year [29], of a similar order to our average yearly modeled estimate of 0.305 g/m2-year. The modeled estimate of asymbiotic fixation is 0.107 g/m2-year. Actual asymbiotic fixation rates of, on average, 0.45 µmol/g/day have been estimated for the tree species which dominated WS10 prior to the clearcut in 1975 (Pseudotsuga menziesii) [46]. Given the model's average annual litter estimate of 43.55 kg/m2, this measured fixation rate is equivalent to 0.100 g/m2-year, in close agreement with the model rate. A complete listing of time integrated average nitrogen fluxes is included in Table 10.
Root Zone N Dynamics
The dynamics of nitrogen in the root zone are defined primarily by a series of seven rate terms: the mobilization and flushing of DON and DIN, the immobilization and uptake by vegetation of DIN, and the breakdown of DON to produce DIN, which occurs through respiration processes. There is also the potential for denitrification as a loss pathway; however, given the well aerated nature of the soils in WS10, we assume that denitrification does not occur in the root zone. These terms are outlined in Figure 8, as both time series and box plots. The results indicate a net mobilization (mineralization) of inorganic nitrogen of 2.51 g/m2-year, and a mobilization of DON of 2.51 g/m2-year. The values are equivalent because, without additional data, we simply assume that total nitrogen mobilization is proportional to respiration and that the product is half organic and half inorganic nitrogen. Mobilized DON is continuously decomposed to further add to the DIN pool, most of which is utilized through plant uptake. The average yearly uptake rate is 3.62 g/m2-year, similar to the 2.29 g/m2-year estimated by [13].
The rate of flushing for DON (0.337 g/m 2 -year) is considerably larger than for DIN, which was effectively 0 for our simulations. This result is consistent with the well-established high N retention capacity of these watersheds. The rates of flushing periodically fall to zero, indicated as breaks in the time series in Figure 8. This occurs when amounts of DIN or DON are not sufficient to support all of the simulated loss pathways.
N Dynamics below the Root Zone

Below root zone dynamics are similar to those simulated in the root zone, with flushing rates of both DON and DIN significantly lower than the rates of internal recycling (Figure 9). However, flushing rates of DON are lower below the root zone (467.7 mg/m2-year in the root zone compared to 283.6 mg/m2-year below it), while flushing of DIN is somewhat higher from below the rooting zone (0.36 mg/m2-year in the root zone compared to 23.61 mg/m2-year below it), primarily because vegetation is no longer able to mediate the flux of DIN from below the rooting zone. The model results indicate that the region below the root zone is a moderate nitrogen sink, with a net mobilization value of −99.40 mg/m2-year. No measurements exist within this region of the watershed, and Figure 9 therefore represents only one possible result which is consistent with both the simulated inputs from the root zone and, more importantly, the measured outputs of water and dissolved nitrogen from the catchment.

The discussion of the aquatic environment focuses on the mobilization and immobilization of N, as well as flushing. Particulate export of N dominates the model results in the aquatic region, with simulated values of 1.73 g/m2-year; a value of 2.53 g/m2-year was estimated by [30] for a particular year. The next largest rate term in the aquatic environment, DON export, is 0.024 g/m2-year, and because of the approximately two orders of magnitude difference, particulate export has not been included in Figure 10. The results indicate that net immobilization of DIN (3.42 mg/m2-year) occurs in the aquatic region, consistent with [47] for a somewhat larger stream in HJA. The dynamic model results also indicate, however, that during the growing season the instream biomass can function as a source of N, primarily because DIN limitation caps the growth rate, while DIN mobilization is treated with first order dependence on the aquatic biomass, and so proceeds at a relatively higher rate during periods of low DIN availability (Figure 10). Nevertheless, even excluding simulated particulate export, the model results indicate that more nitrogen is lost from the stream environment on a yearly basis in dissolved forms (24.8 mg/m2-year) than is retained within it by the aquatic biomass (3.42 mg/m2-year).
Discussion
HJA-N was constructed in an effort to explore relationships between biotic and abiotic processes in the retention and release of nitrogen from small watersheds. An elementary, yet key, finding is that the model can be parameterized so as to produce results that are consistent with a wide range of measurements from WS10 or from similar sites; this result is a prerequisite for any further analysis. Having established consistency with available measurements, the model results can be further evaluated to provide a number of intriguing insights into how watershed components interact over seasonal timescales to recycle nitrogen. The important contribution of the model is that it allows us to quantify the seasonal variability of rate terms, greatly extending the budget-based estimates of storage and fluxes which comprise a significant amount of the available measurements [28-30,41].
Relative Importance of Various System Components
A key theme of this research is the development of quantitative, internally consistent estimates of the relative roles of watershed components in the retention and release of nitrogen over seasonal timescales. The overall goal is the exploration of the temporally varying relationship between these components (vegetation, soils, hydrology, and the aquatic environment) in regulating the release of nitrogen from the system. In the hydrologic literature, much work in this direction has focused on the concept of hydrologic flushing [48]. At the same time, it is often assumed in both the soils and forest ecology literature (e.g., [49]) that the temporal variation in net mobilization is ultimately responsible for the availability of any nitrogen that might be flushed out of the system by the hydrology. This mobilization potential is particularly relevant in regions, like the PNW, where atmospheric deposition remains low. All the while, the role of the stream channel, as well as the riparian vegetation [47], in immobilizing significant amounts of available nitrogen from the aquatic system, and hence modifying cross-weir export measurements, frequently remains unnoticed in catchment studies. And perhaps even more importantly, in systems where nitrogen is strongly retained, the particulate export of organic nitrogen, over which the aquatic system exerts significant control, is often of a similar magnitude, if not considerably larger, than the export of dissolved nitrogen [30].
Our modeling work indicates that hydrologically-mediated fluxes are much smaller in magnitude (DON + DIN export of 22.1 mg/m2-year) than the mobilization fluxes that occur within the vegetation (747.7 mg/m2-year N mobilization, decomposition minus uptake) or within the root zone SOM (3838.3 mg/m2-year N mobilization). This finding is consistent with the wide variety of observational work at HJA [29], and has also been demonstrated at other forested watersheds [43]. However, the hydrologically-mediated fluxes are larger in magnitude than the immobilization potential of the aquatic environment (3.42 mg/m2-year) (Figures 6-10). These observations provide a useful means of understanding the system and ranking system components as to their role in the regulation of nitrogen cycling. In addition, the model results provide data that can be evaluated at finer seasonal time scales.
Control Capacity
We interpret the N control capacity ratio as the degree of control that each system component exerts on the release dynamics of nitrogen cycling within the model domain (Figure 11). There is, of course, a degree of subjectivity in the definitions; nonetheless, evaluating the results in this fashion provides a unique mechanism to evaluate the relative importance of components, and how this degree of control varies with time. These kinds of analyses are available only through the use of the continuously varying results, which are not typically measured over long periods, providing significant further utility in the application and development of conceptual numerical simulation. As expected, given the well-established capacity of these types of watersheds to retain N, vegetation and soils exert a primary control on N dynamics. Hydrologic flushing of nitrogen is well-represented during the winter period, though even during periods of elevated N export and flushing, vegetation and soils still represent important controls (Figure 11). The explanation behind this result is clear: the Mediterranean climate in the Pacific Northwest results in significant winter moisture, which produces increased soil moisture, higher soil water flux, and larger stream discharge than seen during the dry summer period. In addition, the lower temperatures that dominate during the winter period suppress primary production and SOM dynamics. During the growing season, however, hydrologic flushing moves into the background, while the control capacity of the other components increases. This change in control is most evident in June-August, when mobilization rates tend to increase and flushing rates decrease. SOM contributes a larger portion of available nitrogen than the vegetation during this period, primarily because uptake increases and the vegetation acts as a stronger sink of N than SOM during the summer period. This difference is in part a reflection of our lumping of litter and live biomass in our definition of control capacity. Particulate flushing of nitrogen mimics hydrologic flushing because of increased mobilization of aquatic biomass during the wet season, but the signal is muted when compared to hydrologic flushing. The lower seasonal variability develops at least in part because particulate inputs are derived directly from the litterfall/mortality model, which did not include seasonal effects.
These results lend credence to the idea that understanding the dynamics of nitrogen, carbon, and water in ecosystems requires a multidisciplinary approach [39]. This approach certainly includes attention to flushing behavior, but the level of attention given to flushing must be on par with that given to production of available nitrogen, which may be dissolved or in particulate forms. Furthermore, the seasonality of the system imposes a series of constraints that result in predictable temporal variation in the activity of different system components. This variation is difficult to approach through standard field-based budget techniques; however numerical modeling can be used to extend budget results to provide a clearer picture of the seasonality.
Limitations of the Modeling Framework
The model framework and analyses presented here represent a step in our evolving understanding of how small catchments at HJA function. There are, however, a number of limitations to this work. The modeling focuses on an approximately monthly time series of input-output data, which provides insight into seasonal dynamics. However, this time step is too coarse to understand the finer time dynamics that are often the focus of experimentalists working within the wide variety of contributing disciplines. At the same time, it may be too coarse to understand the longer term evolution of catchments, both in terms of nitrogen availability and changing climate. We envision significant potential to redevelop these modeling ideas to correspond to these different timescales, but the model as presented herein is not suited to either. Similarly, while we included a simple model of instream particulate retention and release, extreme events are not included. These events include, for example, large storms, mass movement, and fire, and in terms of nitrogen control it is clear that over long time scales these are at least as significant as the biotic controls and flushing that we explored here. These limitations simply mean that interpreting the results outside of the timeframe over which the model was run is not possible.
The model outlined here includes a variety of parameters that cannot be measured directly. For example, to separate net nitrogen mineralization into gross mineralization and gross immobilization requires a set of measurements that are very difficult to perform at point scales [50]. Furthermore, the relationship between these point-scale observations and catchment-level simulations is not well established. Yet competition between vegetation and microbes is a key feature of nitrogen cycling [51], and microbial immobilization and mobilization, as well as plant uptake, must be included in a model designed to explore the effects of this competition. The parameters involved in this portion of the model (and others) are developed based solely upon educated guesses and calibration procedures relying upon evaluation of the resulting rate terms. In the case of models developed for predictive purposes, simpler tends to be more effective, and incorporating net mineralization, which can be more readily measured, would make more sense. Nonetheless, while some of our decisions clearly reduce the predictive capability of the model through increased parameter uncertainty, the more complete structural definition provides a framework to outline both what is known and what is hypothesized about how these catchments function with respect to N cycling.
Some of the most interesting aspects of watershed nitrogen work to emerge in the last two decades involve the role of spatially disaggregated watershed components in the processing of key stocks. This spatial dependence is evident throughout the literature, including terrestrial biogeochemistry, aquatic processes, catchment hydrologic processes, hyporheic zone interactions, climate science, and forest ecology. As constructed, HJA-N does include some quasi spatial distribution, but this distribution simply separates the upslope processes from the near stream processes. From both a biotic and abiotic standpoint, this is a large simplification. An important next step is the incorporation of similar mass balance equations within a spatially-distributed model that would better allow for the incorporation of key differences in spatial distribution of ecosystem processes.
Lastly, a more complete evaluation of the model is needed to provide further guidance in terms of parameter sensitivity and associated model uncertainty. Sensitivity analyses also have the potential to provide insight into the degree of understanding we have regarding different model components, and as such assist in the prioritization of experimental studies designed to improve both the general understanding of the system and the predictive capability of the model.
Conclusions
The primary goal behind the development of HJA-N was to distill knowledge from a variety of disciplinary scientists, including key observational datasets, to succinctly describe N dynamics in WS10 at HJA. In doing so, we have produced a temporally dynamic simulation, which produces results that are consistent with existing water, N, and C budgets. One of the motivations was to construct a tool that could be used to quantify the relative roles of vegetation, hydrology, soils and SOM, and the near stream zone in controlling the release of N in retentive regions like HJA. The key finding is that each of the different catchment elements plays a significant role in the retention of N, and that those roles vary seasonally. While in and of itself, this is not entirely surprising, the ability to quantify those contributions provides a useful means to more fully understand how the catchment functions with respect to N dynamics.
|
2016-03-22T00:56:01.885Z
|
2015-10-12T00:00:00.000
|
{
"year": 2015,
"sha1": "a1d7c4d3d494666e62d3fea8ce6e94c1f3964886",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/7/10/5345/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "667981949176f117feaf75013a45469ef5196ff1",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
234844757
|
pes2o/s2orc
|
v3-fos-license
|
Cutting Performance of Austenitic and Duplex Stainless Steels with Drills of Three Cutting Edges
João Marouvo. Polytechnic Institute of Coimbra Coimbra Institute of Engineering, DEM, 3030-199 Coimbra, Portugal I2A Institute of Applied Research Polytechnic Institute of Coimbra, 3030-199 Coimbra, Portugal Pedro Ferreira. Polytechnic Institute of Coimbra Coimbra Institute of Engineering, DEM, 3030-199 Coimbra, Portugal I2A Institute of Applied Research Polytechnic Institute of Coimbra, 3030-199 Coimbra, Portugal Fernando Simões. Polytechnic Institute of Coimbra Coimbra Institute of Engineering, DEM, 3030-199 Coimbra, Portugal I2A Institute of Applied Research Polytechnic Institute of Coimbra, 3030-199 Coimbra, Portugal CEMMPRE Center for Mechanical Engineering, Materials and Processes University of Coimbra, Portugal Corresponding author: Simões Fernando. E-mail address: fsimoes@isec.pt
Introduction
Stainless steels are Fe-C alloys with more than 11% Cr. In this family of alloys, austenitic and duplex stainless steels are considered to have the best corrosion resistance. Among the austenitic steels, the AISI 304 grade is widely used for its low corrosion rate and high mechanical properties. However, when better mechanical properties (tensile and yield strength) are needed, without neglecting high corrosion resistance, duplex stainless steels are a good alternative. The development reported in the construction sector indicates emerging applications of duplex stainless steels in structural design. In this sense, the machining study of these materials is an important issue, in order to better understand the performance of the tools and the quality of the parts manufactured for high-demand industries such as food processing, chemical shipping vessels, and oil and gas extraction platforms [1][2][3][4].
Austenitic steels are formed by the γ-austenite phase, which is responsible for ductility and resistance to uniform corrosion. Duplex stainless steel consists of equal amounts of α-ferrite and γ-austenite phases and combines the inherent benefits of both. The α-ferrite phase has a body-centred cubic crystal structure; in duplex steels this phase is responsible for the excellent pitting and crevice corrosion resistance properties. The γ-austenite phase, with a face-centred cubic structure, promotes the superior strength and toughness [1].
Stainless steels are often considered poorly machinable materials, which leads to rapid wear and tool failure. Indeed, stainless steels are considered difficult-to-machine materials due to their tendency to work hardening, their toughness and their relatively low thermal conductivity. The high fracture toughness increases the temperature at the tool/chip interface, leading to poor surface finish, poor chip breaking and built-up-edge (BUE) formation, even at elevated cutting speeds [2]. In addition to the preceding properties, stainless steels have a high alloy content, which forms abrasive carbide phases that lead to faster tool wear.
Stainless steels are difficult to drill, particularly without high-pressure coolant through the spindle and drill, because of their high ductility. Coolant must blow new chips out of the drilled holes, sliding the chips along the rake faces of the tool flutes. The high-pressure system improves not only the cooling rate and chip transportation but also chip breakage [5].
Drills with three cutting edges are reported in the literature as being capable of drilling holes with better circularity, eccentricity, straightness and cylindricity than ordinary drills with two cutting edges. In the three-flute drill, the whirling vibration frequency associated with two-flute drills disappears [6], and therefore rifling marks do not appear on the hole surface. This is partly explained by its geometry: the chisel edges of ordinary drills are formed by the intersection of two adjacent flank surfaces, giving them the approximate shape of a line. On the other hand, drills with three cutting edges have a star-shaped chisel edge due to the intersection of three flank surfaces. Indeed, the star-shaped chisel edge converges at one point, making these drills more stable [7]. However, drills with three cutting edges are shown to be more sensitive to cutting parameters, while drills with two cutting edges withstand more severe conditions [8]. In order to evaluate the effect of the work material on the performance of the drills, two distinct stainless steel alloys, AISI 304 and Duplex GX6CrNiN26-7 (EN 1.4347), were selected as the workpiece material. The chemical composition and relevant mechanical properties are given in Table 1 and Table 2.
The workpieces were prepared with dimensions of 120×70×70 mm³ and 300×100×40 mm³ for stainless steel 304 and duplex, respectively. The samples were firmly secured in a vise during the drilling operation.
A Mitutoyo SJ-201 surface roughness tester was used to measure the surface roughness of the machined holes at two opposite positions; a total of six measurements were made, three at each position. Based on the ISO 4288 standard and the available space, a sampling length of 2.5 mm was used with an evaluation length of 7.5 mm.
The diameter of the holes was measured several times all the way around in order to find the maximum and minimum values. Measurements were made at 8 mm and 25 mm depth with a Bowers XTDU10-BT 3-point internal micrometer, with an accuracy of 0.003 mm.
A piezoelectric triaxial accelerometer, model 356B08 manufactured by PCB Piezotronics, was glued to the CNC spindle according to the respective system axes. A data acquisition card, National Instruments NI 9234, was used to convert the analog signal to digital and then processed at a sample rate of 1613 Hz.
The machining performance is evaluated by observing the tool wear, surface roughness, enlargement in hole size and vibration analysis. Electrical discharge machining (EDM) by wire erosion was used for both materials to cut some holes up to the total depth, in order to separate the hole into two parts and carry out a visual analysis of the hole surface. A total of four tests were carried out under different conditions, as shown in Table 3. As recommended in the standard, the type of deterioration that contributed most to the end of the useful tool life was used as the criterion. Thus, for tests A and C it was considered that the main damage observed was the loss of tool fragments in random positions (non-uniform chipping - CH2). In this case a maximum chipping length of 0.4 mm was admitted.
During tests B and D, the main damage observed was a progressive development of flank wear by abrasion (uniform flank wear - VB1). While for test B a small loss of tool fragment was detected, as shown in Fig. 1 (test B), for test D no chipping was observed up to the end. Since for tests B and D the flank wear stayed well below the criterion VB1 = 0.35 mm, it was decided to stop the tests after 60 holes. For all tests, adhesion of the chip to the tool was observed. However, chip adhesion was more pronounced when external cooling was used and with AISI 304 stainless steel. For this reason, in test A it was only possible to drill 3 holes, where the built-up edge is clearly evident (Fig. 2). When internal cooling is used, the deterioration values tend to stabilize after the first holes, reaching a steady wear stage up to hole number 60. On the other hand, when external cooling is used, the values tend to increase rapidly until the criterion is reached, as shown in Fig. 3.
The presence of built-up edge promotes chipping of the flank surface. The high-pressure system improves not only the cooling rate and chip transportation but also chip breakage. Since the tensile strength and hardness of duplex stainless steel are higher, lower adhesion is expected in this material, as effectively observed, extending the tool life. Since, for either test material, the tool life is always lower with external cooling, this suggests that the type of cooling used has a greater influence on tool deterioration than the test material. Since, for either type of cooling, the surface roughness is always lower for duplex stainless steel than for AISI 304, this suggests that the test material has a greater influence on roughness than whether the cooling is internal or external. Conversely, as observed before, whether the cooling is internal or external has a greater impact on tool deterioration than the test material used.
Hole size
In order to determine the hole diameter, the diameter of the tool was evaluated before carrying out any drilling operation. For a drill with a nominal value of 10 mm, it was observed that the tool diameters varied between 10.000 and 10.016 mm. All hole diameters were larger than the tool diameter, because of vibration, chatter, and drilling temperatures.
In test A the tool deterioration evolved very fast, not allowing a significant sample of values to be taken, making it difficult to draw conclusions. However, there is a noticeable difference between the maximum and minimum diameter, indicating a larger out-of-roundness (ovalization) compared to the other tests.
In tests B and D, in which internal cooling was used for different test materials, the maximum and minimum hole diameter values remained almost constant up to the end. In these tests, the drills practically did not wear out and the tool geometry remained constant. On the other hand, in test C, as the number of holes increases, the hole diameter tends to decrease. When comparing the graph of flank deterioration (Fig. 3 - test C) with the diameter variation (Fig. 5 - test C), it is possible to conclude that the lower diameters registered are related to the faster deterioration of the tool, which occurs from hole number 17 to hole number 30.
It is possible to conclude that the stability of the drilled holes is more related to the maintenance of the tool in good condition than to the test material used. As mentioned in other works [9], the phenomenon of cutting during drilling is expressed by the spindle frequency (fs) and the tool meshing frequency (fm), whose values are calculated by formulas 1 and 2 and observed in Fig. 7, based on the spindle speed and the number of cutting tool teeth. A spike is also observed at 159.2 Hz, which corresponds to the second harmonic frequency (f2h) of tool meshing (formula 3). Concerning the spikes observed at 50 Hz, these may be associated with the frequency of electrical noise, although other unidentified effects may overlap. In the case of the vibration recorded for hole number 60 of tests B and D, no identified spike appears to be related to the deterioration of the tool, as in this case the tool has no appreciable deterioration. The waterfall graph for test D (Fig. 8) shows that during all holes performed in this test the vibration signal has no significant changes, since the spikes are always at the same frequencies, with no new spikes, although their intensity increases, probably associated with the uniform flank wear observed. On the other hand, in the case of test A, in addition to the spikes already mentioned (common spikes), new spikes are observed, which are attributed to the strong deterioration of the tool that occurs at hole number 3 (non-uniform chipping). The increase in deterioration disturbs the already existing spikes, resulting in an increased amplitude of the sidebands, which is proportional to the damage [10,11]. The sidebands tend to appear near the fundamental frequencies. A considerable increase in these frequencies suggests that the drill geometry is changing.
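As an illustration of how these characteristic frequencies relate to the measured spectrum, the sketch below computes the spindle, tool meshing and second-harmonic frequencies and a single-sided FFT amplitude spectrum of an accelerometer signal. The relations fs = n/60 and fm = z·fs are assumed standard forms (formulas 1-3 are not reproduced in the text above), and the spindle speed used is a hypothetical value chosen only so that the second harmonic lands near the reported 159.2 Hz spike.

```python
import numpy as np

def drilling_frequencies(spindle_rpm: float, n_teeth: int):
    """Characteristic drilling frequencies (assumed standard relations)."""
    fs = spindle_rpm / 60.0   # spindle frequency [Hz]
    fm = n_teeth * fs         # tool meshing frequency [Hz]
    f2h = 2.0 * fm            # second harmonic of tool meshing [Hz]
    return fs, fm, f2h

def amplitude_spectrum(signal: np.ndarray, sample_rate: float):
    """Single-sided FFT amplitude spectrum of an accelerometer signal."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs, spectrum

# Hypothetical example: 3-flute drill, spindle speed chosen so that the
# second harmonic lands near the 159.2 Hz spike reported in the text.
fs, fm, f2h = drilling_frequencies(spindle_rpm=1592, n_teeth=3)
print(f"fs = {fs:.1f} Hz, fm = {fm:.1f} Hz, f2h = {f2h:.1f} Hz")
```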
Conclusion
After carrying out this experimental work, it was possible to obtain several conclusions about the cutting performance of austenitic and duplex stainless steels when drills with three cutting edges are applied. Concerning tool life, it was possible to conclude that the most important factor in increasing the number of holes made is the use of high-pressure internal cooling. When external cooling is used, AISI 304 behaves worse than duplex stainless steel, due to its greater susceptibility to built-up-edge formation and work hardening. The tool deterioration is mainly non-uniform chipping for external cooling and flank wear for internal cooling.
For any type of cooling, the surface roughness is always lower for duplex stainless steel than for AISI 304, showing that the surface roughness is more related to the material used. A larger out-of-roundness (ovalization) of the hole occurs when the tool is in the worst condition, and the holes made in duplex stainless steel are straighter than those in AISI 304.
The vibration analysis with the Fast Fourier Transform is an effective method to identify and quantify various phenomena related to the drilling operation and tool life.
Acknowledgements

The authors would like to thank Palbit, Hard Tools Solutions Company (www.palbit.pt) for technical support and the supply of tools and stock materials, and the GENE HAAS Foundation for the scholarship granted.
|
2021-05-21T16:57:43.421Z
|
2021-04-01T00:00:00.000
|
{
"year": 2021,
"sha1": "80964c480a37ef408b0c94854a59d3cb3ed28570",
"oa_license": "CCBY",
"oa_url": "https://popups.uliege.be/esaform21/pdf.php?id=4284",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "110781422faac8c7474ee6b8af30bd58b3258cc9",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
221127927
|
pes2o/s2orc
|
v3-fos-license
|
Broad proteomic screen reveals shared serum proteomic signature in patients with psoriatic arthritis and psoriasis without arthritis
Abstract Objective To identify novel serum proteins involved in the pathogenesis of PsA as compared with healthy controls, psoriasis (Pso) and AS, and to explore which proteins best correlated to major clinical features of the disease. Methods A high-throughput serum biomarker platform (Olink) was used to assess the level of 951 unique proteins in serum of patients with PsA (n = 20), Pso (n = 18) and AS (n = 19), as well as healthy controls (HC, n = 20). Pso and PsA were matched for Psoriasis Area and Severity Index (PASI) and other clinical parameters. Results We found 68 differentially expressed proteins (DEPs) in PsA as compared with HC. Of those DEPs, 48 proteins (71%) were also dysregulated in Pso and/or AS. Strikingly, there were no DEPs when comparing PsA with Pso directly. On the contrary, hierarchical cluster analysis and multidimensional scaling revealed that HC clustered distinctly from all patients, and that PsA and Pso grouped together. The number of swollen joints had the strongest positive correlation to ICAM-1 (r = 0.81, P < 0.001) and CCL18 (0.76, P < 0.001). PASI score was best correlated to PI3 (r = 0.54, P < 0.001) and IL-17 receptor A (r = –0.51, P < 0.01). There were more proteins correlated to PASI score when analysing Pso and PsA patients separately, as compared with analysing Pso and PsA patients pooled together. Conclusion PsA and Pso patients share a serum proteomic signature, which supports the concept of a single psoriatic spectrum of disease. Future studies should target skin and synovial tissues to uncover differences in local factors driving arthritis development in Pso.
Introduction
Psoriasis (Pso) is a common autoimmune disease that causes excessive scaling, redness and itchiness of skin at prototypical sites of the body. Approximately 20% of patients with Pso will at some point in their life develop PsA [1]. A clinical diagnosis of PsA is typically made in a patient with Pso or psoriatic nail disease with concomitant arthritis. PsA is clinically heterogeneous and other manifestations include those of the SpA spectrum, such as enthesitis, dactylitis and SpA. Adding to this heterogeneity is that in $15% of the cases of PsA, arthritis manifests prior to Pso [1]. Both cutaneous and rheumatic manifestations of Pso negatively impact quality of life and should be treated appropriately [2].
Tremendous advances have been made in the treatment options available for Pso. The current and emerging therapeutics can almost completely reverse skin inflammation in a majority of patients, but their capacity to halt arthritis is less impressive [3]. This discrepancy is well-illustrated by examining the current gold standard of trial outcome measures: a 90% improvement for Pso disease severity (Psoriasis Area and Severity Index, PASI90), compared with a 20% improvement for arthritis severity (ACR20). Numerous factors could explain the trailing treatment response in arthritis, including drug bioavailability, the cellular target and cellular turnover at the target tissue, as well as (still unidentified) differences in tissue-specific drivers of pathogenesis [4][5][6].
It is unknown whether the immunologic drivers in Pso vs PsA patients are different [7,8]. This raises the question of whether these diseases are part of the same spectrum or distinct entities [8,9]. Pso is one of the strongest known clinical risk factors for the development of arthritis, thus providing a unique opportunity to better understand arthritis development and improve treatment. It has historically been difficult to identify early PsA in Pso patients in daily clinical practice and there are currently no serum diagnostic biomarkers used in care. This impedes clarification of the presence or absence of a window of opportunity for treating early PsA. To overcome these important open questions, Pso and PsA should be studied head-to-head to uncover potential differences in pathogenesis that could serve as therapeutic targets, as well as to identify possible biomarkers to be used in early diagnosis.
Genetic studies reveal vast overlap between Pso and PsA, in which the few differences found were variants related to chromatin marks on a subset of T lymphocytes and CD8 T cells, and to variants in the IL-23 receptor [10,11]. In comparative studies from peripheral blood mononuclear cells, Pso patients with PsA have higher expression of genes associated with the IFN signature in their monocytes [12,13], and their T cells more readily produce IL-2 and IL-22 upon re-stimulation [14,15]. Recent work has also shown that patients with PsA have higher levels of auto-antibodies directed against two previously identified putative auto-antigens of Pso, namely (carbamylated) LL37 and ADAMTSL5 [16,17]. So far, serum-based biomarker studies have revealed elevated levels of high-sensitivity CRP, pro-inflammatory cytokines (e.g. IL-6, IL-33, TNF-a), adipokines and changes in markers of bone/cartilage damage in the Pso patients with PsA [18][19][20][21][22][23][24][25][26][27].
Overall, there is a scarcity of head-to-head serum biomarker comparisons in well-defined cohorts of Pso and PsA. The current study measured serum biomarkers in the early stage of PsA as compared with Pso matched for skin disease severity. We used a novel highthroughput proteomic platform capable of screening over 950 proteins in a small volume of serum. Previously, this technology proved valuable in providing new mechanistic insights into the pathogenesis of immune-mediated diseases of skin [28,29], but results have not yet been reported in patients with rheumatic disease. The goal was to determine whether this biomarker platform could identify novel serum protein disturbances in PsA as compared with HC, Pso and AS (non-psoriatic reference group), and to specify which proteins best reflected major skin and joint manifestations.
Study design
This study was performed at the University Medical Centre Utrecht and conducted in compliance with the Helsinki principles. Ethical approval was obtained from the institutional review board and all patients signed written informed consent before participation. Clinical parameters and serum samples were collected from a cohort of patients with Pso, PsA and AS as part of larger prospective observational study performed at the outpatient clinic of the Department of Rheumatology and Clinical Immunology.
For this study 79 patients were recruited. The Pso cohort (Pso, n = 20) included patients with a dermatologist-confirmed diagnosis of Pso in whom concomitant PsA was clinically excluded by a rheumatologist (in training). Patients with PsA (n = 20) fulfilled ClASsification of Psoriatic ARthritis (CASPAR) criteria [30]. Patients with a clinical diagnosis of AS (n = 19), all without a history of Pso, were included as a non-psoriatic reference group. Serum samples were collected from healthy controls (HC, n = 20) from the University Medical Centre Utrecht.
Serum proteomic analysis
Serum samples were collected, centrifuged at 1700 g for 10 min at 4°C and stored directly at −80°C. Frozen serum aliquots were shipped on dry ice to the Olink Facility (Uppsala, Sweden) without prior thawing and measured according to the manufacturer's instructions as previously published [31]. The Olink high-throughput proteomic platform employs a proximity extension-assay technology, in which oligonucleotide-labelled antibody pairs bind to a protein target. DNA reporter molecules bind to these antibodies, and are amplified to provide relative protein concentrations. One serum aliquot of 250 µl was used to run 11 different Olink platform 'panels' encompassing 1012 proteins, some of which were run in more than one panel.
Rheumatology key messages
- PsA and psoriasis have a shared serum proteomic signature.
- Expression of ICAM-1 and CCL18 had the most significant correlation to joint disease activity.
- Expression of PI3 and IL-17 receptor A had the most significant correlation to skin disease activity.

Only data that passed Olink internal quality control were used for analysis. We removed samples entirely if they did not pass Olink internal quality control in >80% of the data. We removed proteins entirely if they were below the limit of assay detection in >40% of the samples. Some proteins were measured in multiple panels, in which case the protein data with the fewest missing values after quality control were used for analysis.
Statistical approach
For the analysis of clinical characteristics, contingency analysis of two groups was performed using Chi-squared tests for categorical variables, and independent-samples t-tests or Mann-Whitney U tests for continuous variables. Contingency analysis of more than two groups was conducted with one-way independent analysis of variance or Kruskal-Wallis tests for continuous variables, and with the χ² test for categorical variables. Spearman's rank correlation was used to correlate disease activity parameters to protein levels. Unless otherwise stated, a P-value of <0.05 was considered statistically significant.
The statistical analysis of proteomic data was performed on protein data received by Olink without further normalization (quantile normalization did not impact the overall results, data not shown). Olink protein data are expressed as an arbitrary unit (Normalized Protein eXpression, 'NPX') representing the relative protein concentration based on a log2 scale (i.e. absolute protein quantity cannot be compared across different proteins). Protein levels were compared between groups based on the likelihood ratio test and considered statistically significant at a false discovery rate (FDR)-corrected P-value of <0.05, referred to as differentially expressed proteins (DEPs). Analysis was performed to compare two groups (e.g. HC vs PsA) or to compare multiple groups (HC, Pso, PsA, AS), as specified in the text. Hierarchical cluster analysis was based on Ward's method to create heatmaps (R pheatmap package, version 1.0.12). Classical multidimensional scaling was performed with the R builtin 'stats' package (cmdscale function), using the Euclidean distance matrix between samples based on protein data. The hierarchical cluster analysis and multidimensional scaling were performed using DEPs between groups based on a nominal P-value <0.05. The protein data shown in figures of hierarchical cluster analysis underwent Z-score normalization for the sake of visualization in heatmaps. Venn diagrams were modified from web-based BioVenn tool [32]. Reactome pathway and Kyoto Encyclopaedia of Genes and Genomes (KEGG) pathway enrichment analysis for DEPs was performed based on hypergeometric test using ReactomePA package (version 1.28.0) and clusterProfiler package (version 3.12.0), respectively. Statistical analysis was performed in R (version 3.6) and SPSS (version 25, SPSS Inc., Chicago IL, USA).
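For readers who want to reproduce the general flow of this analysis outside R, the sketch below shows Z-score normalization, Ward hierarchical clustering and classical (Torgerson) multidimensional scaling on a matrix of NPX values in Python. The data are randomly generated placeholders, and the implementation approximates the workflow described above (Ward clustering for heatmaps and cmdscale-style MDS) rather than reproducing it exactly.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist, squareform

# Hypothetical NPX matrix: rows = samples, columns = proteins (log2 scale).
rng = np.random.default_rng(0)
npx = rng.normal(size=(77, 951))

# Z-score each protein across samples (as done for heatmap visualization).
z = (npx - npx.mean(axis=0)) / npx.std(axis=0)

# Ward hierarchical clustering of samples on Euclidean distances.
tree = linkage(z, method="ward", metric="euclidean")
print(tree.shape)  # (76, 4) linkage matrix

# Classical (Torgerson) multidimensional scaling on the Euclidean distance matrix.
d = squareform(pdist(npx, metric="euclidean"))
n = d.shape[0]
j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
b = -0.5 * j @ (d ** 2) @ j                  # double-centered squared distances
eigvals, eigvecs = np.linalg.eigh(b)
order = np.argsort(eigvals)[::-1]
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))
print(coords.shape)  # (77, 2) -> first two MDS dimensions per sample
```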
Cohort description
Clinical characteristics of the study participants are shown in Table 1. The Pso and PsA groups were matched for age, gender and PASI score. The PsA cohort was recruited early after disease onset, typically with <1 year of disease duration. Except for two patients with PsA, none of the study participants was being treated with DMARDs. Following quality control (see Methods), a total of 951 unique proteins and 77 samples (18 Pso,20 PsA, 19 AS, 20 HC) were retained for further analysis.
Major protein changes in PsA serum compared with HC serum

We first set out to specifically compare the serum of PsA to HC and found 68 differentially expressed proteins (DEPs) (FDR-corrected P < 0.05) (supplementary Table S1, available at Rheumatology online). Most of the top DEPs between PsA and HC have not previously been implicated in the pathogenesis of PsA, and include proteins such as ANXA1, ADAM23 and VIM (supplementary Fig. S1A, available at Rheumatology online). Hierarchical cluster analysis revealed that the serum proteomic profile of PsA patients could be clearly distinguished from the serum proteomic profile of HC (supplementary Fig. S1B, available at Rheumatology online).
Common and unique protein disturbances in serum of PsA
We first examined whether those serum proteins changes were unique to PsA, or if they were also dysregulated in Pso and/or AS. Of the 68 DEPs between PsA and HC, 48 proteins (71%) were also dysregulated in Pso and/or AS (Fig. 1A). The most significant DEPs between the groups were proteins that all had higher serum levels in patient groups as compared with HC (Fig. 1B). This list again included the proteins ANXA1, VIM and TOP2B. In total, 20 proteins (29%) were dysregulated in PsA as compared with HC, which were not dysregulated in AS or Pso as compared with HC (Fig. 1C). This list included proteins ADAM23, Neurogenic locus notch homologue protein 3 (Notch 3) and SLITRK6. Interestingly, many of the proteins in this list were lower in the serum of PsA as compared with that from HC.
We next compared patient groups directly. Importantly, there were no DEPs when directly comparing PsA with Pso based on FDR-corrected P < 0.05. An exploratory analysis (based on nominal P-value) comparing PsA with Pso can be found in supplementary Fig. S2A and B, available at Rheumatology online. We found that CLEC4A and SOD1 were the only proteins significantly different between patient groups, being elevated in AS (supplementary Fig. S3, available at Rheumatology online). Some specific proteins that have previously been implicated in the pathogenesis of these diseases are displayed in supplementary Fig. S4, available at Rheumatology online. The list of DEPs can be found in supplementary Tables S1-S4, available at Rheumatology online. Taken together, we identified 20 proteins uniquely dysregulated in PsA, while the majority of protein disturbances were also dysregulated in Pso and/or AS.
Overall serum proteomic signature is similar in PsA and Pso

Hierarchical cluster analysis showed that most patients, regardless of diagnosis, clustered separately from HC. The serum proteomic profile of PsA patients grouped closer to the Pso patients than to the AS patients (Fig. 2A). Using an alternative method of analysing the data, namely multidimensional scaling analysis, we also found that HC grouped separately from patients, and that PsA and Pso grouped close together (Fig. 2B). Finally, pathway enrichment analysis on the sets of DEPs between patient groups vs HC similarly revealed that very similar pathways were enriched in PsA and Pso (supplementary Fig. S5, available at Rheumatology online).
Proteins reflecting joint and skin disease activity
We next examined which serum proteomic changes best reflected the major disease manifestations with respect to joint and skin disease activity in patients with PsA and Pso. The number of swollen joints had the strongest positive correlation to Intercellular adhesion molecule 1 (ICAM-1; r = 0.81, P < 0.001), C-C motif chemokine 18 (CCL18; r = 0.76, P < 0.001) and dipeptidyl peptidase 4 (DPP4) (r = 0.75, P < 0.001), whereas the swollen joint count had the strongest negative correlation to VEGFD (r = −0.73, P < 0.001) (Fig. 3).
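A minimal sketch of the kind of correlation reported here is given below: Spearman's rank correlation between one protein's NPX values and the swollen joint count. The data are simulated placeholders used only to show the calculation.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical data: NPX level of one protein and swollen joint counts
# for 38 Pso/PsA patients (values are simulated, not study data).
rng = np.random.default_rng(1)
swollen_joints = rng.integers(0, 8, size=38)
protein_npx = 5.0 + 0.4 * swollen_joints + rng.normal(scale=0.5, size=38)

rho, p = spearmanr(protein_npx, swollen_joints)
print(f"Spearman rho = {rho:.2f}, P = {p:.3g}")
```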
When PsA and Pso patients were considered as one group (data pooled together), PASI scores had the strongest correlation to the proteins PI3 (r = 0.54, P < 0.001), IL-17 receptor A (r = −0.51, P < 0.01), MMP-1 (r = 0.47, P = 0.01) and SERPINB8 (r = 0.46, P < 0.01). Surprisingly, there were more proteins that correlated to PASI score when analysing the Pso and PsA cohorts separately as compared with analysing the Pso and PsA patients pooled together (Fig. 4). PASI score was correlated to Gal-4 (r = −0.72, P < 0.001) and IGFBPL1 (r = −0.65, P < 0.01), but only in patients with PsA. PASI score was correlated to PD-L2 (r = 0.68, P < 0.01) and MSR1 (r = 0.67, P < 0.01), but only in patients with Pso (Fig. 4).

Discussion

In this study we performed a broad serum proteomic screen in patients with early PsA and compared them with Pso patients in whom PsA was excluded. From over 950 proteins screened, we were able to narrow down specific proteins of interest correlating to the major clinical manifestations of these diseases. This is one of few head-to-head serum proteomic comparisons in a well-characterized cohort of patients with PsA and Pso. Our PsA cohort consisted of patients with early disease onset and was carefully matched to have similar clinical characteristics (including PASI score) to the Pso patients. From a clinical perspective, our results indicate that none of the evaluated serum proteins (singularly) is a likely candidate for a simple diagnostic biomarker capable of discriminating early PsA from Pso. In other words, a simple blood test to differentiate PsA from Pso may not be a feasible goal for daily clinical practice, at least not based on the proteins we evaluated. Instead, our results primarily contribute to the understanding of the pathogenesis of PsA, which includes specifying potential drug targets. From a pathophysiological perspective, our data support the 'two phenotypes of one disease' hypothesis [8,9].
Our study adds important insight into the question as to which type of tissue sample is best suited to unravel the pathogenesis of PsA. PsA and Pso fall within a spectrum of diseases with a shared genetic background and presumably shared immunologic drivers. From a clinician's point of view, however, they are distinct: some patients develop (poly)arthritis, which requires specific clinical intervention. Therefore, there must be specific drivers (local and/or systemic) within this overlapping psoriatic spectrum that enable the development of overt arthritis manifestations. Our broad analysis reveals that PsA and Pso are extremely difficult to discriminate based on serum proteomic changes, underscoring that other sites of the body, such as synovial tissue, should be an important target of future research. It will still be important to find methods of incorporating appropriate control groups, ideally Pso patients in whom PsA is excluded by a rheumatologist, even when studying tissue sites such as synovial tissue. Surprisingly, we found that many serum proteins were related to PASI score when dichotomizing the analysis for Pso only and PsA only. This may indicate there are different primary drivers of cutaneous inflammation and/or secondary systemic responses upon inflammation occurring in PsA compared with Pso. A comparison of the skin in PsA compared with Pso as tissue site has only been addressed in a small number of studies and therefore warrants specific tissue comparisons [33,34].
We here identified specific proteins strongly associated with joint disease activity. ICAM-1 is a molecule important for trans-endothelial migration of leucocytes via interaction with LFA-1. ICAM-1 has previously been identified in the pathogenesis of Pso and PsA [35,36]. In RA synovial tissue it was shown that ICAM-1 expression marked a specific myeloid synovial tissue phenotype [37]. Interestingly, previous attempts to target LFA-1 with mAbs for the treatment of Pso lead to the new-onset arthritis in many patients enrolling in the trials [38], supporting the notion that the balance of leucocyte extravasation mediated by ICAM-1 could be important in arthritis development. VEGFD is one of the members of the endothelial growth factors involved in angiogenesis and lymphangiogenesis in cancer, and while this specific family member has not been described in rheumatic disease [39], VEGF has been implicated in the pathogenesis of arthritis [40]. Considering that we performed a broad, unbiased serologic screening, our data again highlight the importance of angiogenesis in PsA, which is in agreement with existing histologic data in PsA showing increased angiogenesis to be an important feature of PsA synovial tissue [7,35,41]. Two additional proteins were strongly correlated to arthritis activity: CCL18 and DPP4. DPP4 is currently a target for type 2 diabetes mellitus, and the role of DPP4 in development of arthritis is still unclear [42]. CCL18 is expressed by endothelial cells in the synovial tissue of RA and has been identified as a disease activity marker in RA and other diseases [43].
A strength of our study is the broad set of protein panels we have measured. We hence observed that the strongest protein disturbances were not well-known cytokines and chemokines, but rather proteins not previously implicated in the pathogenesis of rheumatic disease, including ADAM23 and Notch 3. ADAM23 is a non-proteolytic member of the 'A disintegrin and metalloproteases' (ADAM) family known for high expression in brain and roles in neuronal differentiation, but also shown to inhibit cell adhesion and cell migration in cancer cells, possibly via interaction with integrin αvβ3 [44,45]. Notch 3 has very broad functions, is aberrantly expressed in psoriatic skin, and was shown to modulate Th cell phenotype and function [46,47]. Our patient cohorts have an expected overlapping pathogenic spectrum (Pso, PsA, AS). Future studies should consider including other rheumatic diseases with more distinct clinical features and pathogenesis (e.g. gout and OA) in order to further address the specificity of the protein changes. While the protein disturbances were not specific to PsA, this per se does not preclude their importance in pathogenesis or their role as potential therapeutic targets: many of the current therapeutics (e.g. TNF-α inhibitors) are effective across a range of distinct clinical entities considered to be driven by different pathways.
Some of the more familiar proteins changes included IL-6 and IL-17A, which are known drug targets for rheumatologic diseases. Studies in RA highlight that serum levels of cytokines are unlikely to predict clinical response to mAbs targeting that respective cytokine [48,49]. Nevertheless, we detected elevated levels of IL-6 in PsA and also found a positive correlation between IL-6 levels and joint disease activity measures, which supports current efforts examining IL-6 as a potential therapeutic target for patients with PsA.
Our study was designed to recruit PsA patients without DMARDs use and early after disease onset, resulting in PsA patients with mostly oligoarthritis. The serum proteomic results best represent the oligoarthritis pattern in PsA, but our cohort does not represent the entire spectrum of PsA patients, i.e. those with very severe polyarticular disease. Our choice to avoid patients with DMARDs is underscored by recent data using the same proteomic platform in Pso patients confirming that most proteins undergo vast changes upon initiation of immunomodulatory drugs [29].
A limitation of the current study is the relatively small cohort size, which means that we may have underestimated the number of proteins that are different between Pso and PsA groups due to stringent FDR-correction. Realistically, it is challenging to include large numbers of patients in basic science studies with very severe disease that are not (yet) treated with immunomodulatory drugs. Clearly, it will be necessary to (i) replicate the major protein disturbances identified by our screening and (ii) determine whether the proteins are downstream biomarkers of the disease or directly involved in the pathogenesis. Functional validation will be necessary to determine which of these specific factors or combination of factors contribute to the pathogenesis of PsA.
To overcome some of the aforementioned challenges we recommend that, similar to sharing gene expression data, these proteomic datasets can be publicly shared (e.g. repositories). Firstly, this provides additional scientific transparency of the results. Secondly, by sharing datasets the proteins can be compared across diseases (determine specificity) and allow for rapid validation and identification of those proteins worth pursuing for in vitro experiments. These collaborative efforts should maximize the yield of costly scientific endeavours, whilst ensuring acknowledgement of data in a competitive scientific landscape.
In summary, we have identified novel serum protein disturbances in PsA and furthermore establish that both Pso patients and PsA patients with oligoarthritis have an overall shared serum proteomic signature.

Disclosure statement: Ernesto Munoz-Elias and Samuel DePrimo are Janssen R&D LLC employees. The other authors have declared no conflicts of interest.
Supplementary data
Supplementary data are available at Rheumatology online.
|
2020-06-18T09:07:23.006Z
|
2020-08-13T00:00:00.000
|
{
"year": 2020,
"sha1": "204446c3f80b3198db3beb992190fa5ab88e85bf",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/rheumatology/article-pdf/60/2/751/36167961/keaa405.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5701e7c6eda2429d6fe9104ac8a60fbc46e78ec",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
83346682
|
pes2o/s2orc
|
v3-fos-license
|
Digestion kinetics of NDF in dairy cows fed silages from primary growth and regrowth of grass
Two grass silages from primary growth (PG) and two from regrowth (RG) were investigated in a 4×4 Latin square experiment with rumen-cannulated dairy cows. Digestion kinetics of NDF was determined by rumen evacuations and omasal canal flow measurements. Dry matter intake of PG was higher, rumen content of indigestible NDF (INDF) lower and passage rate of INDF higher in PG than in RG. Rumen contents of NDF and INDF increased with progressing growth stage of grass within harvest. It is concluded that intake of early cut PG and both RG silages was not limited by rumen fill alone.
INTRODUCTION
The intake of silage is markedly affected by digestibility. The decrease in silage dry matter (DM) intake has been on average 0.16 kg and in milk yield 0.3-0.5 kg when the D-value (digestible organic matter, g/kg DM) has decreased by 10 g per kg DM (Rinne et al., 2000). The quality and production potential of regrowth silages has not been investigated very widely. In our recent milk production experiment the intake of regrowth silages was smaller and the cows produced less milk than those consuming primary growth silage of comparable D-value (Kuoppala et al., unpublished).
The intake and digestibility of a feed depend on the rate and extent of neutral detergent fibre (NDF) digestion in the rumen, the rate of passage from the rumen and particle size reduction. Rinne et al. (2002) reported that a decrease in digestibility clearly affected rumen functions: the passage rate of NDF and indigestible NDF (INDF) and the rumen pool size of DM, NDF and INDF increased when the digestibility of primary growth silage decreased.
The objective of the present study was to investigate the differences in digestion kinetics of NDF in order to study the intake limiting factors when dairy cows consume silages prepared from primary growth or regrowth of grass.
MATERIAL AND METHODS
Two primary growth (PG) and two regrowth (RG) silages were made from a mixed timothy (Phleum pratense) - meadow fescue (Festuca pratensis) sward in 2002 in Jokioinen, Finland (61°N). PG silages were harvested on 5 June at an early (E) and on 17 June at a late (L) growth stage. RG silages were harvested on 29 July (LE; primary growth cut on 17 June) and on 12 August (EL; primary growth cut on 5 June). The grass was cut with a mower conditioner, wilted approximately 4 h and harvested with a precision chop harvester. The grass was preserved with a formic-acid based additive (5.4 l/t) in bunker silos. The four silages were fed with 8 kg/d of concentrates to four rumen-cannulated dairy cows in a 4×4 Latin square design. The omasal canal flow and digestibility of nutrients were measured by the triple-marker method as described by Ahvenjärvi et al. (2000). Digestion kinetics was determined by the rumen evacuation method as described by Rinne et al. (2002). Indigestible NDF was determined by 12 d ruminal incubation in dairy cows fed forage-rich diets, using nylon bags with a pore size of 17 µm (Huhtanen et al., 1994). Digestible NDF was calculated as NDF minus INDF. The in vivo D-value was determined with sheep fed at maintenance level by total collection of faeces.
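A minimal sketch of the pool-and-flux arithmetic behind rumen evacuation estimates of digestion kinetics is given below. The numbers are illustrative placeholders, and the relations used (passage rate of INDF as intake divided by rumen pool at steady state, and digestion rate of DNDF as its total disappearance rate minus an assumed common passage rate) are a simplified assumption rather than the exact calculation used in this experiment.

```python
# Hypothetical daily intakes [kg/d] and rumen pool sizes [kg]; values are illustrative.
indf_intake, indf_pool = 1.2, 2.4      # indigestible NDF
dndf_intake, dndf_pool = 6.0, 5.0      # digestible NDF (NDF minus INDF)

# Steady state: INDF is not digested, so its fractional passage rate equals intake/pool.
kp_indf = indf_intake / indf_pool      # /d

# For DNDF the total disappearance rate (intake/pool) is split into digestion and passage.
ki_dndf = dndf_intake / dndf_pool      # /d, rate of intake relative to pool
kd_dndf = ki_dndf - kp_indf            # /d, assuming DNDF passes at the INDF rate

print(f"kp(INDF) = {kp_indf:.2f}/d, kd(DNDF) = {kd_dndf:.2f}/d")
```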
RESULTS AND DISCUSSION
The fermentation quality of all silages was good (pH on average 4.1 and ammonium-N 56 g per kg total N). The content of NDF and INDF increased and the D-value of the silages decreased with advancing growth stage both in PG and RG silages (Table 1). RG values of NDF and INDF were intermediate to PG values.
DM intake of PG silages was higher (P<0.01) than that of RG silages despite the lower content of NDF in RG silages (Table 2). The higher D-value of LE (664 g/kg DM) compared to L (644 g/kg DM) did not increase the intake, which differed from earlier results in primary growth (Rinne, 2000). The reason for that may be the confounding effect of DM content, which was lower in LE than in L (228 vs 283 g/kg, respectively). The average rumen content of DM was lowest on E in spite of the highest DM intake on that diet. Decreased digestibility of PG induced a significant increase in the DM, OM, NDF and INDF contents of the rumen. With RG silages the increase was significant only in NDF and INDF content. The average rumen content of INDF was lower (P<0.01) in PG silages than in RG silages. The daily ruminal outflow of NDF was higher (P<0.05) in PG silages than in RG silages, and the progressing growth stage increased it in both harvests. Ruminal and total tract digestibility of NDF decreased with growth stage in both harvests. The difference between harvests was not significant.
There were no differences between PG and RG or within harvests in the passage rate of DNDF. Instead, its digestion rate decreased with progressing growth stage, with no differences between harvests. The passage rate of INDF was lower in RG silages (P<0.01), with no differences within harvests.
CONCLUSIONS
Based on rumen pool data, intake of early cut PG and both RG silages was not limited by rumen fill alone.
Table 1. Chemical composition of feeds.
Table 2. The effect of harvest and growth stage of grass silage on daily feed and nutrient intake, rumen contents and digestion kinetics of neutral detergent fibre (NDF) in dairy cows. Contrasts: C1 - primary growth vs regrowth silages, C2 - E vs L, C3 - LE vs EL; significance: *** P<0.001, ** P<0.01, * P<0.05, P<0.10; INDF - indigestible NDF; DNDF - digestible NDF; ki - rate of intake, kd - rate of digestion, kp - rate of passage.
|
2019-01-02T04:06:28.915Z
|
2004-08-30T00:00:00.000
|
{
"year": 2004,
"sha1": "ff92186dd0aff3de30ac16a97293ee2147ea5193",
"oa_license": "CCBY",
"oa_url": "http://www.jafs.com.pl/pdf-73757-10538?filename=Digestion%20kinetics%20of%20NDF.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ff92186dd0aff3de30ac16a97293ee2147ea5193",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
190180358
|
pes2o/s2orc
|
v3-fos-license
|
How Does Student's Engagement Build Consumer Green Behavior?
This research is motivated by the high level of environmental damage caused by environmentally unfriendly behaviour, a problem that requires serious effort to solve, especially through learning. The purpose of this study is to determine how the quality of student engagement in learning contributes to the development of consumer green behaviour. The research surveyed 554 students of Adiwiyata Junior High Schools in Bandung. The data were collected by observation, interview, and questionnaire. SEM-PLS analysis showed that consumer green behaviour can be built from student engagement in learning. Keywords—engagement; consumer; green behavior; student
I. INTRODUCTION
Good-quality learning can be measured by the quality of student engagement. An environmentally friendly consumer, or green consumer, is one who cares about the environment, consumes products that are environmentally friendly, and is able to manage waste so that it does not have a negative impact on the environment. Thus, green consumers have two objectives, namely meeting their needs while reducing the impact of environmental damage. In addition, a green consumer in this sense is characterized by an active attitude in consuming green products.
The environmentally friendly behaviour of students can be built through learning in which teachers and students are expected to develop their ecological intelligence. The approach to building ecological intelligence through learning is called eco-pedagogy. "Eco-pedagogy can be interpreted as an academic movement to make students aware of being an individual who has life understanding, awareness, and skills in harmony with the interests of nature conservation [1]," and it can be used to empower students. Consumer green behaviour is built through learning because school graduates will act as 1) agents of change in the community, namely agents who develop in people the knowledge, insight, attitudes, and behaviours that uphold sustainability; 2) agents who are aware of the limitations of natural resources and the existence of global warming issues; and 3) agents who can apply ecological intelligence, or eco-pedagogical learning, in everyday life [1].
One effort to achieve the goal of efficient learning is to improve the quality of engagement in learning. The quality of student engagement can improve meaningful learning [2]. According to Astin, "Meaningful student involvement involves the process of engaging students in every facet of the educational process for strengthening their commitment to education, community, and democracy [2]." Student engagement that is meaningful can foster a positive attitude in students [3][4][5][6][7]. The quality of engagement in learning is manifested in three ways: cognitive engagement, emotional engagement and behavioural engagement. Based on this, there is a close conceptual relationship between student involvement and engagement.
The attachment of students is the time allocated by students to educational activities that contribute to the desired results and accord with the expected quality. In this case, student attachment focuses on quantity, that is, the time allocated to educational activities. The engagement of students in learning influences attitudes through the knowledge possessed [8][9][10]. In addition, student engagement can influence behaviour; it is stated that the engagement of students will build behaviour if there are needs, linkages and support [11,12]. In this case, engagement with teachers, parents, and friends is a factor that influences behaviour [11], and teacher-student interactions, both direct and indirect, can predict behaviour [13]. Engagement in learning will build behaviour because engagement in learning is a social development process which increases active participation and encourages students to act. Based on this, this study aims to explain how the engagement of students can improve green consumer behaviour.
II. METHOD
This research was conducted through a survey of 554 students in Adiwiyata public middle schools in Bandung. Data were collected through interviews and questionnaires distributed to the students. The research instrument consisted of non-test instruments, namely questionnaires with a Likert scale. The data obtained were then processed using descriptive statistics, and the verification data were analysed with PLS-SEM because this research aims to develop theory and the data do not have multi-collinearity problems. Before testing the hypothesis, it was ascertained that the indicators used can measure the latent constructs through second-order confirmatory analysis.
III. RESULTS
The results of this study consist of descriptive and inferential results. The descriptive results describe the quality of engagement in learning and environmentally friendly consumer behaviour, while the inferential results describe the outcome of testing the hypothesis that answers the research question about the influence of the quality of engagement in learning on consumer green behaviour. Student engagement is relatively high, in order of behavioural, cognitive and emotional engagement. Behavioural engagement is shown by activeness in practicum and experiments; cognitive engagement is indicated by completing tasks and noting important matters; and emotional engagement is shown by enthusiasm and joyful feelings when taking lessons. The results of the study regarding student engagement and consumer green behaviour are as follows. Students' green behaviour is dominated by energy-saving behaviour in the use of fuel and energy, while the quality of student engagement is dominated by behavioural engagement. Students' behavioural engagement influences environmentally friendly usage behaviour; this shows that behavioural engagement, such as taking part in experiments or practicum, affects the ability to save energy. In contrast, weak emotional engagement has an impact on weak waste management behaviour, which means that management behaviour requires pleasure, awareness, and love for the environment.
Inferentially, the influence of the quality of student engagement in learning on consumer green behaviour was examined by testing the following hypotheses: Ho: ρ0 = 0, the quality of engagement in learning does not affect green consumer behaviour; H1: ρ1 ≠ 0, the quality of engagement in learning affects green consumer behaviour.
The path coefficient is 0.204, the coefficient of determination is 0.0416, the p-value is 0.00, and the calculated t-value (4.463) exceeds the table value of 1.97.
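The decision rule behind this test can be illustrated with the sketch below, which compares the reported t-value with the critical value of the t-distribution. The standard error shown is a hypothetical back-calculation from the reported coefficient and t-value, used only to make the example self-contained.

```python
from scipy import stats

# Reported values from the text; the standard error is a hypothetical
# back-calculation (se = coefficient / t) used only to illustrate the decision rule.
path_coefficient = 0.204
t_calculated = 4.463
n_samples = 554

se = path_coefficient / t_calculated
df = n_samples - 2
t_critical = stats.t.ppf(1 - 0.025, df)      # two-sided test at alpha = 0.05
p_value = 2 * stats.t.sf(t_calculated, df)   # two-sided p-value

reject_h0 = t_calculated > t_critical
print(f"se = {se:.4f}, t_crit = {t_critical:.2f}, p = {p_value:.2e}, reject H0: {reject_h0}")
```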
Based on the test results, Ho is rejected; thus it is concluded that the quality of engagement in learning influences consumer green behaviour. These results are consistent with the research of Wongwanich [11], Skinner and Belmont [12] and Ringwalt [13], which states that there is a relationship between student engagement and student achievement as indicated by behavioural changes. In this study, 26.7% of students' environmentally friendly behaviour is influenced by the quality of engagement in learning. The magnitude of this coefficient falls in the high category by the standard measure of the coefficient of determination in behavioural research.
IV. DISCUSSION
The quality of student engagement in learning consists of the dimensions of cognitive engagement, emotional engagement, and behavioural engagement, whereas consumer green behaviour consists of the dimensions of buying behaviour, usage behaviour and waste management behaviour. The quality of cognitive engagement is indicated by completing tasks and noting important matters relating to environmental issues; what is received from learning then affects everyday behaviour. Emotional engagement is shown by enthusiasm, joy, and interest when taking social studies lessons. Behavioural engagement is shown by active participation in experiments, which provide practical examples that are relatively easy for students to imitate. The quality of engagement in learning is shown by a relatively high intensity of engaging in enthusiastic discussions and making good observations of examples of behaviour. Consumer green behaviour is mainly indicated by buying behaviour, especially the intensity of observing the recycling code on plastic packaging; usage behaviour is indicated by a relatively high intensity of conserving electricity; and waste management behaviour is indicated by disposing of garbage in its place. In this case, the quality of engagement, in the form of a relatively high intensity of discussion and observation, is able to mobilize students to behave in an environmentally friendly way. This is possible if the discussions that occur in schools are able to give meaning to students so that students can immediately practise the behaviour.

In this study, the quality of engagement in learning has a significant effect on consumer green behaviour. Thus, teachers' efforts are needed to improve consumer green behaviour through learning. Consumer green behaviour is promoted both at the level of the school and within classroom learning. At the school level, consumer green behaviour is enhanced by academic culture, curriculum, infrastructure, and staff, whereas in the classroom it is enhanced through teacher competencies, learning methods, learning techniques, media, and teaching materials summarized in a learning design. A learning design consists of six components, namely learners, learning objectives (general and specific), learning analysis, learning strategies, teaching materials and learning assessment. Before designing the learning, it is necessary to pay attention to the competence of the teacher who will prepare the design. Teacher competency consists of pedagogic, personality, social and professional competencies. With regard to pedagogic competence, teachers must be able to understand differences in students' backgrounds and levels of knowledge of environmentally friendly behaviour. With regard to personality competence, the teacher should have a friendly, pleasant, intimate, accommodating personality and not be tense when interacting in class. A teacher's social competence can be seen in the ability to socialize with peers, students' parents and outside parties related to the school, while professional competence is the skill of planning, implementing and evaluating learning and managing classes so as to create a pleasant learning atmosphere with high student participation that is not rigid but still oriented towards the learning material.
The first component of the learning design is the learners. The choice of components such as methods, instructional media, and evaluation tools must refer to learner characteristics, including age, gender, academic ability, socioeconomic background, and other differing conditions. The second component is the learning objectives, consisting of general instructional objectives (TIU) and specific instructional objectives (TIK); objectives should be formulated with operational verbs aimed at increasing cognitive, emotional, and behavioural engagement. The third component is the analysis of learning material, which consists of knowledge of events, facts, and concepts about consumers' environmentally friendly behaviour. The fourth component is the learning method; examples of methods chosen to improve the quality of engagement, participation, and activeness of students are cooperative learning [14,15], inquiry, and Problem Based Instruction. Teaching materials that can increase student engagement are adaptive, meaning they contain current issues and technological developments; self-instructional, offering material, exercises, and assessment; and self-contained, covering all material and theory. Evaluation instruments for measuring the quality of cognitive, affective, and behavioural engagement in social studies learning are observations and questionnaires that assess students' responses to learning.
V. CONCLUSION
Based on this research, students' green behaviour is dominated by economizing behaviour in the use of fuel and energy, while the quality of student engagement is dominated by behavioural engagement. Students' behavioural engagement affects environmentally friendly usage behaviour, which shows that behavioural attachment, such as taking part in experiments or practicums, supports the ability to save energy. Weak emotional engagement, in contrast, leads to weak waste management behaviour, which suggests that such behaviour requires pleasure in, awareness of, and love for the environment. Consumers' green behaviour can be improved through learning, specifically by raising the quality of student engagement in learning; this is evidenced by the significant effect of the quality of engagement in learning on green consumer behaviour. In the context of learning, increasing consumers' green behaviour should be planned in advance by developing learning designs that can increase engagement in learning.
ACKNOWLEDGMENT I would like to express my deepest appreciation to my committee chairs, Prof. Agus Rahayu, MP., Prof. Disman, MS., and Prof. Nana Supriatna, M.Ed., who have the attitude and the substance of genius: they continually and convincingly conveyed a spirit of adventure in regard to research and scholarship, and an excitement in regard to teaching. Without their guidance and persistent help this research would not have been possible.
Intravitreal Ampicillin Sodium for Antibiotic-Resistant Endophthalmitis: Streptococcus uberis First Human Intraocular Infection Report
Purpose. To describe the clinical characteristics, diagnosis, and treatment with intravitreal ampicillin sodium of a postoperative endophthalmitis case due to Streptococcus uberis, an environmental pathogen commonly seen in mastitis cases of lactating cows. Methods. Case report. A 52-year-old Hispanic diabetic patient suddenly developed severe pain and severe loss of vision following vitrectomy. Results. The patient was diagnosed with postoperative endophthalmitis secondary to a highly resistant strain of Streptococcus uberis that did not respond to intravitreal antibiotics. He was treated with an air-fluid interchange, anterior chamber washout, intravitreal ampicillin sodium (5 mg/0.1 mL), and silicone oil tamponade (5,000 cSt). The eye was anatomically stabilized, though there was no functional recovery. Conclusion. Streptococcus uberis is an uncommon pathogen in the human eye with features that help the strain develop resistance to antibiotics. While treatment with intravitreal ampicillin is feasible, there are still concerns about its possible toxicity.
Introduction
Endophthalmitis is a rare postoperative complication which is potentially devastating to visual function and the structural integrity of the eye [1]. In the postoperative setting, infection generally occurs secondary to contamination with normal periocular flora. Occasionally, it develops from sources which are difficult to identify. Once detected postoperatively, the condition is treated with intravitreal antibiotics and vitrectomy and/or tap as per the recommendations of the Endophthalmitis Vitrectomy Study (EVS) [2].
In recent years, there has been an increase in the number of antibiotic-resistant bacterial strains, as well as strains that are not normally part of the traditional etiological spectrum of postoperative infection [3,4]. The following case report describes the diagnosis, treatment, and unfavorable evolution of a case of postoperative endophthalmitis secondary to Streptococcus uberis. This environmental pathogen is commonly responsible for a high proportion of cases of clinical (and subclinical) mastitis in lactating cows [5]. The organism is highly resistant to the majority of the latest-generation antibiotics commonly employed in the treatment of endophthalmitis. Precisely how this patient became exposed to this pathogen remains unclear.
Case Report
A 52-year-old Hispanic male presented to the retina department of our hospital complaining of a three-month history of progressive visual loss in his left eye. His past medical history was remarkable for diabetes mellitus (18 years) with poor metabolic control (last glucose level 167 mg/dL, with an HbA1c of 14.7%), high blood pressure, chronic renal failure (treated with peritoneal dialysis), and diabetic ischemic foot problems (previous amputation of three toes). The patient also had a history of previous abdominal surgeries (23 years earlier). As for the ophthalmologic background, the patient had a previous diagnosis of proliferative diabetic retinopathy, which had been treated with bilateral panretinal photocoagulation and vitrectomy OD, along with chronic open angle glaucoma OU.
The best corrected visual acuity was 20/40 in OD and counting fingers at 30 cm in OS, and the anterior chamber examination was unremarkable. Ocular motility and pupillary responses were normal. The lens in the left eye was cataractous (C2N3P2, according to the LOCS III classification), and intraocular pressure was 16 mmHg OU. Fundus examination revealed a dense vitreous hemorrhage in the left eye. Ultrasound examination of the left eye confirmed the presence of low-reflectivity mobile vitreous opacities consistent with vitreous hemorrhage, without evidence of traction retinal detachment.
Based on the existing evidence, we decided to offer the patient phacoemulsification surgery combined with a 23-gauge vitrectomy. The surgery was performed without complications shortly after the initial examination, leaving balanced saline solution in the vitreous cavity at the end of the procedure. Although the vitrectomy ports were self-sealing, we decided to place a suture (8-0 Vicryl, Ethicon, San Angelo, TX, USA) in all of them. We also placed a suture in the phacoemulsification incision (10-0 Nylon, Ethicon, San Angelo, TX, USA).
Twenty-four hours after surgery, the patient complained of severe ocular pain, along with a significant reduction of visual acuity (hand movements) and tearing. On ocular examination, we found severe conjunctival hyperemia, ciliary injection, a clear cornea, hypopyon in the anterior chamber (1.2 mm), and an intraocular pressure of 30 mmHg. The posterior pole was not visible. Ultrasound examination revealed areas of increased echogenicity corresponding to cellularity in the vitreous cavity, pseudomembrane formation, and choroidal thickening (Figure 1(a)). The diagnosis of postoperative endophthalmitis was evident, and we proceeded immediately to obtain aqueous and vitreous cavity samples for staining, cultures, and sensitivity tests. Intravitreal ceftazidime (2.25 mg/0.1 mL), vancomycin (1 mg/0.1 mL), and dexamethasone (0.4 mg/0.1 mL) were injected. The patient was admitted to the hospital, and treatment was started with topical moxifloxacin every hour (Vigamox, Alcon Lab, Fort Worth, TX) and oral moxifloxacin (400 mg). The following day, visual acuity decreased to no light perception, and the severe pain and hypopyon persisted. That same day, the microbiology department reported the presence of gram-positive cocci in the vitreous cavity sample, which was classified as Streptococcus uberis two days later (Figure 1(b)). The sensitivity test documented resistance to cephalothin, cefotaxime, ceftazidime, cefuroxime, dicloxacillin, vancomycin, azithromycin, clarithromycin, erythromycin, amikacin, gentamicin, netilmicin, tobramycin, clindamycin, polymyxin, ciprofloxacin, gatifloxacin, moxifloxacin, ofloxacin, pefloxacin, and tetracycline (Figures 1(c) and 1(d)). The only documented sensitivity was to ampicillin sodium. Given the patient's instability and the possibility of systemic dissemination of the bacteria, after the failure of the first intravitreal antibiotics we offered the patient an air-fluid exchange, silicone oil tamponade, anterior chamber washout, and intraocular lens removal. However, the patient refused to sign the informed consent form for the second surgery, delaying treatment for three days. Once the specific sensitivity of the microorganism was known, we added an intravitreal injection of ampicillin sodium 5 mg/0.1 mL to the original plan. Finally, after an extensive and exhaustive explanation, the patient agreed to the procedure.
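Intravitreal concentrations such as those above (ceftazidime 2.25 mg/0.1 mL, vancomycin 1 mg/0.1 mL) are usually reached by diluting a reconstituted vial. The Python sketch below shows only the underlying C1·V1 = C2·V2 arithmetic; the 50 mg/mL stock concentration is an assumption for illustration, not a compounding protocol.

```python
# Generic dilution arithmetic (C1*V1 = C2*V2); illustrative, not a compounding protocol.

def stock_volume_needed(stock_mg_per_ml: float,
                        target_mg_per_ml: float,
                        final_volume_ml: float) -> float:
    """Volume of stock (mL) to dilute to final_volume_ml at the target concentration."""
    return target_mg_per_ml * final_volume_ml / stock_mg_per_ml  # V1 = C2*V2/C1

# Vancomycin target: 1 mg/0.1 mL = 10 mg/mL; assume a 50 mg/mL reconstituted stock.
v1 = stock_volume_needed(stock_mg_per_ml=50.0, target_mg_per_ml=10.0, final_volume_ml=5.0)
print(f"Draw {v1:.1f} mL of stock and dilute to 5.0 mL")  # 1.0 mL stock + 4.0 mL diluent
```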
The next day, the patient reported decreased pain, and on examination the vision remained no light perception, though there was no evidence of hypopyon and only mild conjunctival hyperemia. The patient remained hospitalized for the next three days, and during that time ampicillin sodium was administered intravenously, at adjusted doses of 1000 mg bid according to creatinine clearance. After discharge, the patient continued treatment with maintenance doses of intramuscular ampicillin sodium for two weeks. The patient continued to improve. Four weeks later, the integrity of the eye was preserved but the vision remained no light perception (Figure 2).
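Renal dose adjustment of ampicillin, as described above, is commonly guided by an estimated creatinine clearance such as the Cockcroft-Gault formula. The sketch below uses hypothetical patient values (weight and serum creatinine are not given in the report) purely to illustrate the calculation.

```python
# Cockcroft-Gault estimate of creatinine clearance (mL/min); patient values are hypothetical.

def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool = False) -> float:
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Illustrative numbers for a 52-year-old male with chronic renal failure.
print(f"Estimated CrCl: {cockcroft_gault(52, 70, 4.0):.0f} mL/min")  # ~21 mL/min -> reduced dosing
```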
Discussion
Despite the advances in surgical techniques and the technology available to perform ocular surgery, the incidence of postoperative endophthalmitis appears to have been increasing over the last 10 years [6,7]. Factors that have possibly played a role in this development are the indiscriminate and inappropriate dosing of broad-spectrum antibiotics by doctors and patients' inadequate compliance with the full treatment duration. This has led to the emergence of new strains resistant to the latest generations of drugs [3,4,6]. Evidence of this can be seen in the results published by the Ocular Tracking Resistance in the U.S. Today (TRUST) program, which reported an increase of 12.1% in methicillin-resistant Staphylococcus aureus (MRSA) strains, with more than 80% of MRSA being resistant to fluoroquinolones. Despite the considerable increase in these numbers, however, it is important to note that the study is limited in that bacterial susceptibility to antibiotics was based on systemic drug-exposure breakpoints rather than on local concentrations (such as those achieved with an intravitreal injection) [6].
Streptococcus uberis is an environmental pathogen typically responsible for mastitis in lactating cows; it is also the predominant organism isolated from the mammary glands of cows during the nonlactating period. Although β-lactams are the treatment of choice, the bacterium possesses unique mechanisms for generating antibiotic resistance, such as the mph(B) gene for macrolide resistance and an SOS response-like DNA repair mechanism that may induce SOS-driven adaptive mutations (5,8). The uncommonly strong antibiotic resistance found in the strain cultured from the patient's vitreous samples could be the result of all these conditions. The reason and circumstances by which this microorganism was able to reach the eye remain unknown.
Since our patient showed no clinical improvement after the first intravitreal injection, and the isolated organism was resistant to practically all of the intravitreal antibiotics that are commonly employed, we decided to use the only antibiotic to which the organism appeared to be sensitive. Our use of 5 mg/0.1 mL of intravitreal ampicillin sodium was based on two previous reports in which intravitreal administration proved to be safe. Those reports were based on unpublished data from G. A. Peyman establishing that ampicillin sodium could be safely administered intraocularly up to a dose of 10 mg/0.1 mL. However, although the results were published in his book, the original study was never published (9,10).
The fact that almost all of the traditional pathogens responsible for endophthalmitis are beta-lactamase-producing strains limits the use of this antibiotic among the first-choice drugs for the treatment of postoperative endophthalmitis. The possibility of toxicity-induced damage due to ampicillin sodium is also a factor to be considered, although this patient's vision already showed no light perception prior to administration of the intravitreal ampicillin. In this case, the eye was anatomically salvaged with this treatment regimen, although without visual recovery.
Summary Statement
A paper about postoperative endophthalmitis due to an atypical bacterium, Streptococcus uberis, which had not previously been described as pathogenic to the human eye. This paper describes how the diagnosis was made, the treatment, the poor outcome, and briefly discusses the intravitreal ampicillin sodium treatment used in this case.
Iraqi Geological Journal
Introduction
Studies of karst in general, and limestone in particular, have been of interest to many geologists (Derek and Paul, 2007; Alobadi et al., 2021; Thinh et al., 2022; Institute of Geology and Minerals, 2005). Carbonate rocks crop out over about 20% of the territory of Vietnam, and the country's carbonate rock resources lie within these karst regions. The distribution of carbonate rocks is concentrated mainly in the north and north-central region; in the south they occur only at Ha Tien, Kien Giang province (Fig. 1).
Carbonate rocks in Vietnam are found mostly in the Dong Giao and Bac Son formations, of Triassic and Carboniferous-Permian age. There have been many karst studies in Vietnam, but they focus on topics such as karst geology (Tuan, 2004; Tuyet et al., 1998), karst hydrogeology (Tuyet et al., 1998), karst hazards (Tuan, 2009; Tuan, 2019), and the characteristics of karst poljes (Tuan, 2009; Tuan, 2019; Tuan, 2020). Only a few studies have addressed carbonate resources in the context of Vietnam's sustainable development (Khien, 2010; Tuan, 1999), and the exploitation and processing of carbonate resources has received little research attention. This article examines the waste of carbonate rock resources in Vietnam and argues for their rational use in the service of sustainable development.
Data Collection
We collected all available data on karst geology, karst hydrogeology, limestone mines, and the fields in which carbonate resources are used.
Field Investigation
We carried out field investigations across the entire territory of Vietnam. These provided the basis for delineating the limestone distribution areas (Figs. 3 and 4).
Sampling and Sample Analysis
Limestone samples were collected from the different formations across the territory of Vietnam (Table 1). Limestone was also sampled and analyzed at marble mines (Tables 2, 3 and 4). In addition, in the detailed study areas, samples were taken from limestone layers of different colors to determine their CaO content.
Carbonate Rock Formation Process
Limestones form in shallow, warm marine environments, accumulating as distinct layers that initially show horizontal bedding. Subsequent endogenous geological activity tilts or folds these layers, producing vertical bedding or folds. In some places the limestone is homogeneous from top to bottom and is termed massive limestone (Figs. 5 and 6).
Quality of Carbonate Stone
Limestone in Vietnam is distributed mainly in the north, from latitude 17°N northward (Fig. 1), and is present in many different stratigraphic units. Karst rocks in the territory of Vietnam are mainly limestone, marble, and dolomite. Limestone occurs in units of Proterozoic age, where it is thin and often metamorphosed, as well as in Cambrian and Silurian units and in most units of Devonian, Carboniferous-Permian, and Triassic age. The total thickness of limestone in Vietnam is about 5,000 m, of which the Middle Triassic limestone of the Dong Giao formation (T2a đg) and the Muon Trai formation (T2l mt) accounts for up to 2,300 m (Tuyet et al., 1998). Our studies show that carbonate rocks are distributed across 31 provinces and cities of Vietnam, most extensively in Ha Giang, Cao Bang, Lai Chau-Dien Bien, Son La, Quang Binh, Lang Son, Nghe An, Tuyen Quang, and Bac Kan provinces. The best-quality limestone is of Carboniferous-Permian and Triassic age (Table 1). Our field investigations and sampling at the limestone and marble quarries of Yen Bai, Nghe An, and Ha Tien provinces show that the quality of the marble and limestone there is very good; a total of 520 samples were taken from marble mines, limestone mines, and boreholes, and the summary chemical analyses are given in Tables 2, 3, and 4. Karst areas in Vietnam, however, are being exploited intensively, depleting resources and polluting the environment. The exploited limestone is used mainly as raw material for cement production and road paving; in many places, limestone with a CaCO3 content as high as 98% is not classified by grade and is still quarried as ordinary building material.
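Two routine calculations underlie quality assessments like those above: converting a measured CaO content to its equivalent CaCO3 content, and the theoretical quicklime yield of calcination (CaCO3 → CaO + CO2). The Python sketch below illustrates both; the molar masses are standard, while the example inputs are hypothetical.

```python
# Stoichiometry of limestone quality and calcination; example inputs are hypothetical.
M_CACO3 = 100.09  # g/mol
M_CAO = 56.08     # g/mol

def caco3_from_cao(cao_wt_pct: float) -> float:
    """Equivalent CaCO3 wt% implied by a measured CaO wt%."""
    return cao_wt_pct * M_CACO3 / M_CAO

def quicklime_yield_kg(limestone_kg: float, caco3_wt_pct: float) -> float:
    """Theoretical CaO (kg) from complete calcination of the CaCO3 fraction."""
    return limestone_kg * caco3_wt_pct / 100 * M_CAO / M_CACO3

print(f"{caco3_from_cao(54.9):.1f}% CaCO3")            # ~98%: chemical-grade stone
print(f"{quicklime_yield_kg(1000, 98.0):.0f} kg CaO")  # ~549 kg per tonne of 98% limestone
```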
Marble of Yen Binh, Yen Bai
According to incomplete statistics, there are to date hundreds of carbonate quarries licensed by the Ministry of Natural Resources and Environment and by the provinces. Currently the Ministry licenses 87 carbonate quarries, including 38 marble quarries; the marble mines are mostly in Yen Bai, Nghe An, Bac Kan, Tuyen Quang, and Ha Nam provinces (Figs. 11, 12, 13 and 14). Processing of carbonate stone is not widespread in Vietnam, where it is used mainly as raw material for the construction industry; a strategy for exploiting and processing limestone in the service of sustainable development is therefore needed first. Some provinces, including Yen Bai, have imported deep-processing equipment from China, India, Germany, the US, and Spain. In 2020, the People's Committee of Yen Bai province, in cooperation with the Vu Gia mineral exploitation and processing Joint Stock Company, began building the Vu Gia marble processing factory to produce carbonate stone products of different sizes, from raw to fine-grained and super-smooth, with an estimated output of 350,000 tons/year. The products are supplied to the domestic market and exported to foreign markets such as India, China, Italy, and some European countries.
According to the Department of Geology and Minerals of Vietnam (DGMVN), about 98 white limestone production and processing facilities are currently in operation (Table 5). White marble products include uncoated calcium carbonate (CaCO3), coated calcium carbonate (CaCO3), plastic filler, light CaCO3 powder, limestone powder (CaO), quicklime, etc. Carbonate rock thus supplies products for industries such as glass production, ceramics, plastics, rubber, detergent manufacturing, environmental treatment, and livestock farming.
Limestone products are used in many different industries. In the construction industry, limestone is used to produce cement. In the paint industry, calcium carbonate accounts for more than 60% of the formulation and also enhances corrosion resistance. In the medical industry, limestone is used as a calcium supplement and acts as a deoxidizer in pharmaceutical production. Limestone can also absorb gases such as NH3, H2S, and CO2, clean the water environment, whiten porcelain, and be made into chalk for school use, among other applications.
One of the best enterprises processing carbonate stone products in Vietnam is the Yabashi Holdings joint-stock company. From carbonate stone, Yabashi Holdings produces the following products (Table 6):

- Limestone / Crushed Stone: good-quality limestone mined by the bench-cut method, with stone dropped down a vertical shaft. Uses: raw material for quicklime and calcium carbonate / auxiliary raw material for iron manufacture / aggregate for civil engineering.
- White Limestone: manufactured by crushing, grinding, and screening high-whiteness limestone quarried in Vietnam; used in chemical applications such as papermaking, and attracting attention for its potential to increase solar power generation. Uses: papermaking / solar power / gardening.
- Heavy Calcium Carbonate (Granule): limestone dried and sorted into granule sizes of 0.3-1.0 mm, 1.0-3.0 mm, and 4.0-6.0 mm according to use. Uses: basicity adjustment for slag / fluorine gas adsorption / fluidised bed boilers / animal feed.
- Heavy Calcium Carbonate (Powder): manufactured by crushing and screening limestone into particle sizes from superfine to granular; mainly used for rubber, resin, and paint. Uses: papermaking / rubber and resin / paint / glass / asphalt / flue gas desulfurization / steel manufacture / fertilizer.
- Quicklime (Lump): very few impurities, with selectable grain size and reactivity; the company specializes in quicklime of high reactivity. Uses: pig iron and steel manufacture / paper and pulp / general chemical industry / civil engineering / desiccant / improvement of water and bottom-sediment quality / fertilizer.
- Quicklime (Powder): a white powder whose main component is calcium oxide (CaO), produced by washing and screening mined limestone and then burning it in a kiln at over 1000 °C. Uses: as for quicklime lump.
- Slaked Lime: used mainly to neutralize acids, for example in exhaust gas and wastewater treatment. Uses: garbage incinerators / exhaust desulfurization / wastewater treatment / fresh water and sewage treatment / civil engineering / fertilizer.
- Slaked Lime (Slurry): used to neutralize acidic waste solutions, with the concentration adjustable to customer needs. Uses: wastewater neutralization / general chemical industry / semiconductors.
- MICROSTAR T Slaked Lime: a high-purity fine slaked lime powder produced from carefully selected low-impurity limestone; it conforms to the JIS K8575 special-grade reagent and food additive standards, with a particle size of 325 mesh (45 μm) and no residue retained. Uses: reagent / industrial use.
- High-Reactivity Slaked Lime: slaked lime with specific surface area and pore volume increased beyond the JIS special grade to improve reactivity dramatically; its improved flowability reduces trouble in silos and transport systems. Uses: garbage incinerators.
- Precipitated Calcium Carbonate: synthesized by reacting slaked lime slurry with carbon dioxide gas; products of high whiteness or various particle shapes can be manufactured by varying the synthesis method, in both slurry and powder form, for fields such as papermaking. It is the only precipitated calcium carbonate plant in the Chubu region. Uses: papermaking / rubber and resin / paint / food additives.
- Precipitated Calcium Carbonate (Slurry): "Choral Bright", a fine precipitated calcium carbonate in high-concentration slurry form (high whiteness, high gloss, low viscosity), with maximum solids of 70%. Uses: papermaking / paint.
- CALSIP Lime-Based Desulfurizing Agent: manufactured from various quicklimes mixed with auxiliary raw materials (Al ash, SiC, Si, etc.) in powder, granular, and briquette types; various compositions are produced to meet customer needs, improving refining ability and shortening refining time, and in recent years it has been used for fluorine-free refining as a replacement for fluorite. Uses: steel manufacture / cast iron.
- THREAD LIME Lime-Based Fixation Agent: a fixation agent adapted to the nature of the target soil by mixing various limes with auxiliary raw materials; it is especially effective for improving soft ground and, being a natural material, is environment-friendly. Available as powder, dust-proof powder, and briquette products for the construction and civil engineering fields. Uses: soft-ground improvement / road works / residential land development / foundation work for railways, airports, and buildings / recycling of surplus soil.
- Geolime: Yabashi's original lime-based solidifying agent, added to the soil extracted during limestone mining so that the soil can be used as a recycled resource. Uses: landscaping material for factories, residential areas, and other facilities.
- Calsand: crushed limestone whose particle-size distribution has been adjusted for use as a paving material. Uses: paving material for grounds, parks, and other multi-purpose plazas.

The processing and use of carbonate stone in general, and marble in particular, in Vietnam have not achieved high efficiency, causing a waste of resources. A comparison with Fig. 15 (Foundations of Mendip, British Geological Survey: stone as a resource) shows that the potential role of carbonate resources is very large, but in Vietnam it has not been fully exploited.
Reasonable Use of Carbonate Stone Resources for Sustainable Development
To develop the karst areas of Vietnam, a detailed plan is needed that delineates the mining areas and the limestone and marble processing areas for socio-economic development, as well as prohibited areas set aside to preserve geological heritage, the environment, and the landscape, or to serve national security and defense.
Limestone is a nonrenewable resource. We must therefore use it economically and rationally in the service of national sustainable development. Some suggestions follow:
Exploiting Problem
First, it is necessary to plan and zone carbonate mining, including zoning by limestone quality. Mining technology must be improved to minimize resource loss and environmental pollution.
Processing Problem
Vietnam must import advanced carbonate stone processing technology, and businesses that fail to adopt it should be barred from operating. The goal is deep processing: creating products that are competitive in the market and increasing the number of carbonate stone processing factories to meet demand.
Environment Protection Problem
The environment in limestone exploitation and processing areas is polluted; the most visible forms of pollution are noise and dust (Figs. 16 and 17). Problems such as smoke, dust, noise, water pollution, and soil pollution in the exploitation and processing of carbonate rocks in Vietnam need to be addressed soon.
Conclusions
Carbonate resources are nonrenewable, and Vietnam needs plans that zone exploitation areas and prohibited areas for sustainable development. The use of raw, unprocessed material should be minimized, and deep processing of limestone and marble should be prioritized. The urgent requirement now is to change limestone exploitation and processing technology in Vietnam as soon as possible. Adopting advanced technology to exploit and process limestone would not only increase the value of processed limestone products but also contribute to environmental protection, resource preservation, and the sustainable development of the country.
Fig. 1. Map of karst regions in Vietnam (Institute of Geology and Minerals, 2005)
Fig. 2. Karst rock and non-karst rock determined on the satellite image
Fig. 3. Field investigation in Son La province
Fig. 4. Bedding of limestone, Dong Giao formation
Fig. 6. Massive limestones of Carboniferous-Permian age in Ha Long Bay, Quang Ninh province
Fig. 9. The limestone blocks are flattened
Fig. 10. The national scenic spot Kem Trong being exploited for construction stone
Fig. 11. Illegal quarrying of thousands of m3 of marble in Chau Loc commune, Quy Hop district, Nghe An province
Fig. 15. Foundations of Mendip (British Geological Survey): stone as a resource
Fig. 16. Limestone exploitation in Ha Nam province
Fig. 17. Limestone processing in Ha Nam province
Table 1. Results of the analysis of the chemical composition of limestone in Vietnam
Table 2. Synthesis of chemical analysis results of marble in Yen Binh, Yen Bai
Table 3. Synthesis of chemical analysis results of marble in Chau Cuong, Nghe An
Table 4. Synthesis of chemical analysis results of limestone of Ha Tien, Kien Giang
Table 5. White limestone processing facilities in Vietnam
Table 6. Products processed from carbonate stone by the Yabashi Holdings Joint Stock Company

Places recognized as geological heritage must be protected; these are prohibited zones. However, in Cam Pha district, Quang Ninh province, adjacent to Ha Long City, a UNESCO World Heritage area, limestone quarries are still being exploited (Figs. 7 and 8). Provinces with many active carbonate quarries include Ha Nam, Yen Bai, Nghe An, and Ninh Binh.
Evidence for Distinct Roles in Catalysis for Residues of the Serine-Serine-Lysine Catalytic Triad of Fatty Acid Amide Hydrolase*
Fatty acid amide hydrolase (FAAH) is a mammalian amidase signature enzyme that inactivates neuromodulatory fatty acid amides, including the endogenous cannabinoid anandamide and the sleep-inducing substance oleamide. The recent determination of the three-dimensional structures of FAAH and two distantly related bacterial amidase signature enzymes indicates that these enzymes employ an unusual serine-serine-lysine triad for catalysis (Ser-241/Ser-217/Lys-142 in FAAH). Mutagenesis of each of the triad residues in FAAH has been shown to severely reduce amidase activity; however, how these residues contribute, both individually and in cooperation, to catalysis remains unclear. Here, through a combination of site-directed mutagenesis, enzyme kinetics, and chemical labeling experiments, we provide evidence that each FAAH triad residue plays a distinct role in catalysis. In particular, the mutation of Lys-142 to alanine indicates that this residue functions as both a base involved in the activation of the Ser-241 nucleophile and an acid that participates in the protonation of the substrate leaving group. This latter property appears to support the unusual ability of FAAH to hydrolyze amides and esters at equivalent rates. Interestingly, although structural evidence indicates that the impact of Lys-142 on catalysis probably occurs through the bridging Ser-217, the mutation of this latter residue to alanine impaired catalytic activity but left the amide/ester hydrolysis ratios of FAAH intact. Collectively, these findings suggest that FAAH possesses a specialized active site structure dedicated to a mechanism for competitive amide and ester hydrolysis where nucleophile attack and leaving group protonation occur in a coordinated manner dependent on Lys-142.
Fatty acid amide hydrolase (FAAH) is an integral membrane enzyme that degrades members of the fatty acid amide class of neural signaling lipids, including the endogenous cannabinoid anandamide (1) and the sleep-inducing substance oleamide (2,3). Studies of FAAH(−/−) mice have confirmed that this enzyme is a key regulator of fatty acid amide signaling in vivo (4,5). For example, FAAH(−/−) mice possess elevated endogenous brain levels of anandamide and related fatty acid amides that correlate with enhanced cannabinoid receptor 1-dependent analgesia in these animals (4). Likewise, FAAH inhibitors produce analgesic and anxiolytic effects in rodents (6). These findings suggest that FAAH may represent an attractive therapeutic target for the treatment of pain and related neural disorders. Toward this end, a deeper understanding of the catalytic mechanism of FAAH may assist in the design of specific inhibitors of this enzyme.
FAAH belongs to a diverse group of alkyl and aryl amidases known as the amidase signature (AS) family whose members are characterized by a conserved serine- and glycine-rich stretch of ~130 amino acids (7,8). Proteins containing the AS sequence have been found in a broad range of organisms, including archaea (9), eubacteria (7,10-12), fungi (13), nematodes, plants, insects, birds (14), and mammals (2,3). Despite the evolutionary extent of AS enzymes, their catalytic mechanism has only recently been investigated. A series of mutagenesis and chemical labeling studies of FAAH has targeted residues conserved across the AS family and provided evidence that three of these residues, Ser-241, Lys-142, and Ser-217, are of primary importance for catalysis (15). In particular, these investigations have indicated roles for Ser-241 and Lys-142 as the catalytic nucleophile and a catalytic acid/base, respectively, with the latter residue contributing to the unusual ability of FAAH to hydrolyze structurally similar fatty acid amides and esters at equivalent rates (16,17). Nonetheless, a more detailed understanding of the catalytic mechanism of FAAH has to date been limited by a lack of structural information for this enzyme and the AS family as a whole.
With the recently solved x-ray crystal structures of FAAH (18) and the distantly related bacterial AS enzymes malonamidase E2 (MAE2) (19) and peptide amidase (PAM) (20), the central catalytic residues of the AS family have been revealed to form a novel serine-serine-lysine catalytic triad (Ser-241, Ser-217, and Lys-142 in FAAH). This unusual arrangement of catalytic residues raises intriguing questions regarding the specific roles played by the bridging serine and lysine residues in hydrolysis. For example, based on the structures of MAE2 and PAM, models have been proposed for catalysis in which the bridging serine of the triad plays a primary role in both the base-catalyzed activation of the neighboring serine nucleophile and the acid-catalyzed protonation of the substrate-leaving group, whereas the lysine residue purportedly has a more supportive function mainly in the latter event (19,20). Somewhat inconsistent with this mechanism, however, are previous mutagenesis analyses of FAAH that have indicated a more central participation of Lys-142 in catalysis (17). Here, through a combination of mutagenesis, substrate selectivity profiles, and chemical labeling experiments, we provide evidence that both Ser-217 and Lys-142 contribute to the base-catalyzed activation of the Ser-241 nucleophile of FAAH, whereas Lys-142 appears to play a uniquely important role in the acid-catalyzed protonation of the substrate-leaving group. This latter property of Lys-142, which is unaffected by mutation of Ser-217, may support the unusual ability of FAAH to hydrolyze fatty acid amides and esters at equivalent rates.
EXPERIMENTAL PROCEDURES
Construction of FAAH Mutants-For the studies described below, rat FAAH mutants were constructed in the prokaryotic expression vector pTrcHisA (Invitrogen), which contains an N-terminal His6 tag. Point mutants were generated using the QuikChange procedure (Stratagene). All of the enzymes contained an N-terminal truncation of amino acid residues 1-29, which constitute a predicted transmembrane domain. This deletion has been shown to have no effect on catalytic activity or membrane binding but does enhance expression and purification (21). For clarity, numbered residues refer to the full-length enzyme.
Expression and Purification of FAAH and Mutants-Enzymes were expressed in Escherichia coli strain BL21(DE3) and purified by sequential metal affinity, heparin-agarose, and gel-filtration chromatography as described previously (21). All of the enzymes yielded ~0.5 mg of protein/liter of culture volume.
Circular Dichroism Spectrometry-Protein samples at 0.5 mg/ml (7.75 μM) in 10 mM Tris, pH 8.0, 100 mM NaCl, and 0.015% lauryldimethylamine oxide were measured by far-UV circular dichroism at 25 °C in a 0.1-cm cell on an Aviv stopped-flow CD spectrometer.
Enzyme Assays-Enzyme assays were performed by following the conversion of [14C]oleamide and [14C]OME to oleic acid using a thin layer chromatography (TLC) assay as described previously (21). Reactions were conducted in a buffer of 50 mM Bis-Tris propane, 50 mM CAPS, 50 mM citrate, 150 mM NaCl, and 0.05% Triton X-100. The pH of the buffer was adjusted using HCl or NaOH. Reactions were quenched with 0.5 N HCl at 4 time points. Oleic acid was separated from oleamide by TLC in 65% ethyl acetate, 35% hexanes and from OME by TLC in 20% ethyl acetate, 80% hexanes. The radioactive compounds were quantified using a Cyclone PhosphorImager (PerkinElmer Life Sciences). FAAH enzymes exhibited Michaelis-Menten kinetics, and apparent Km and kcat values were calculated from Lineweaver-Burk plots of four substrate concentrations run in triplicate. Additional assays conducted with 0.01 and 0.25% Triton X-100 afforded kcat and Km values for FAAH-catalyzed oleamide and OME hydrolysis equivalent to the values obtained with 0.05% Triton X-100, indicating that over this concentration range of detergent (0.01-0.25% Triton X-100) the substrate/detergent ratio did not significantly impact the measurement of the catalytic parameters of FAAH. Enzyme concentrations were estimated assuming an absorbance at 280 nm of 0.8 absorbance unit for a 1 mg/ml solution of FAAH (21).
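As a sketch of the kinetic analysis described above, the Python fragment below recovers apparent Km and kcat from a Lineweaver-Burk fit of initial rates at four substrate concentrations. All numbers are synthetic placeholders, not measured FAAH data.

```python
# Double-reciprocal (Lineweaver-Burk) fit: 1/v = (Km/Vmax)*(1/[S]) + 1/Vmax.
import numpy as np

S = np.array([2.0, 5.0, 10.0, 20.0])        # [S], uM (hypothetical concentrations)
v = np.array([0.167, 0.333, 0.500, 0.667])  # initial rates, uM/min (synthetic)

slope, intercept = np.polyfit(1 / S, 1 / v, deg=1)
Vmax = 1 / intercept
Km = slope * Vmax
E_total = 0.01                               # enzyme concentration, uM (assumed)
print(f"Km = {Km:.1f} uM, kcat = {Vmax / E_total:.0f} min^-1")
```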
Fluorophosphonate Labeling-Labeling reactions of 80 nM enzyme and 1 μM fluorophosphonate-tetramethyl rhodamine (FP-rhodamine, Activx Biosciences) (22) were allowed to proceed at 25 °C for 5 and 20 min using the reaction buffer above and quenched with one volume of 2× SDS loading buffer. Control samples of wild type (WT) FAAH were labeled to completion and used as a reference for 100% reactivity. Quenched reactions were subsequently analyzed by SDS-PAGE with 1.2 pmol of protein/gel lane. The extent of FP labeling was visualized in-gel using a Hitachi FMBio IIe flatbed laser-induced fluorescence scanner and quantified by measuring the integrated fluorescence band intensities. The rates of FP reactivity of FAAH variants were calculated using Equation 1,

E = E0·e^(−kobs·t)    (Eq. 1)

where E is the amount of unlabeled enzyme at time point t, E0 is the total enzyme, and kobs is the calculated rate of labeling. The resulting rates are reported in Table IV.
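A minimal sketch of the pseudo-first-order analysis behind Equation 1: the unlabeled enzyme decays exponentially, so kobs can be extracted from the band intensities at the two quench points. The gel intensities below are synthetic placeholders.

```python
# Pseudo-first-order FP labeling: E = E0 * exp(-k_obs * t).
import numpy as np

E0 = 1.0                          # total enzyme, normalized to the 100% WT control
t = np.array([5.0, 20.0])         # quench times, min (as in the protocol above)
labeled = np.array([0.33, 0.80])  # hypothetical labeled fractions from band intensities
E = E0 * (1 - labeled)            # unlabeled enzyme remaining

# Through-origin least-squares fit of ln(E0/E) vs. t gives k_obs.
k_obs = np.sum(np.log(E0 / E) * t) / np.sum(t ** 2)  # min^-1
FP = 1.0e-6                       # 1 uM FP-rhodamine
print(f"k_obs/[I] = {k_obs / FP:.2e} M^-1 min^-1")
```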
Generation and Purification of FAAH Variants with Mutations in the Serine-Serine-Lysine Catalytic Triad-Previous studies of FAAH have provided strong evidence that Ser-241 represents the catalytic nucleophile of the enzyme (16), a finding substantiated by the three-dimensional structure of FAAH in which covalent modification of Ser-241 by a methoxyarachidonyl fluorophosphonate inhibitor was observed (18). The FAAH structure revealed that Ser-241 forms part of an unusual catalytic triad with Ser-217 and Lys-142 (Fig. 1), an active site architecture also observed in the structures of the bacterial AS enzymes MAE2 (19) and PAM (20). Given the complete conservation of this serine-serine-lysine triad among members of the AS family, a deeper investigation of the roles that its constituents play in catalysis is warranted. To examine the catalytic functions of Ser-217 and Lys-142, each residue was mutated to alanine, and the resulting FAAH variants were expressed and purified as described previously (21). Additionally, the double mutant K142A/S217A was generated and purified. All of the mutant enzymes were properly folded based on gel-filtration profiles (data not shown) and far-UV circular dichroism spectra that matched those of WT-FAAH (Supplementary Fig. 1).
Comparative Analysis of the Amidase Activities of FAAH Mutants-FAAH mutants were analyzed using [14C]oleamide as a substrate following previously described methods (21). Consistent with past findings (16,17), both the K142A and S217A FAAH mutants exhibited significant decreases in apparent kcat values for oleamide hydrolysis relative to WT-FAAH at pH 9.0 (~40,000- and 3,000-fold, respectively) (Table I). The K142A/S217A double mutant displayed an even greater catalytic deficiency at this pH (~70,000-fold). In contrast to both the K142A and S217A mutants, which showed wild type apparent Km values for oleamide, the K142A/S217A variant displayed a 3-fold increase in apparent Km for oleamide, suggesting that the mutation of both Lys-142 and Ser-217 had a modest effect on substrate binding. Both the K142A and K142A/S217A mutants exhibited a strong pH dependence on catalysis (Fig. 2A), although their severely reduced activities were difficult to measure in the lower pH range (hydrolysis rates below 1 × 10⁻⁵ s⁻¹ approached the sensitivity limit of the substrate assay). In contrast, the S217A mutant was found to display a relatively flat pH-rate profile for oleamide hydrolysis from pH 6.5 to 9.0 (Fig. 2A). The distinct pH-rate profiles of the FAAH mutants resulted in catalytic defects relative to WT-FAAH at pH 7.0 of 104,000- and 2,000-fold for the K142A and S217A variants, respectively.
Comparative Analysis of the Esterase Activities of FAAH Mutants-Typically, serine hydrolases hydrolyze ester substrates at much greater rates than structurally similar amides, reflecting the relative solvolytic potential of these compounds (23). FAAH represents a notable exception to this general principle, as this enzyme has been shown to hydrolyze fatty acid amides and esters at equivalent rates by an acylation rate-limiting mechanism (17). Interestingly, this special property of FAAH depends on Lys-142 because a K142A mutant has been shown to strongly prefer OME over oleamide (17). Consistent with these findings, at pH 9.0 the K142A mutant exhibited only a 50-fold decrease in apparent kcat for OME (Table II), hydrolyzing this substrate at a 320-fold faster rate than oleamide (Table III). In contrast, the S217A mutant was found to hydrolyze OME at an ~5-fold slower rate than oleamide (Table III). This preference for amide substrates over esters exhibited by the S217A mutant mirrored the substrate selectivity of WT-FAAH, which displayed a 2.5-fold greater activity with oleamide than OME (Table III). Thus, despite exhibiting considerable reductions in absolute catalytic activity with both oleamide and OME, the S217A mutant maintained wild type amide/ester hydrolysis ratios. The striking differences in the amide/ester hydrolysis ratios of the K142A and S217A mutants resulted in opposite relative substrate preferences for these enzymes, with the former enzyme being a 120-fold better esterase but a 13-fold worse amidase than the latter at pH 9.0 (Tables I and II).
The K142A/S217A mutant exhibited a mixture of catalytic features that individually resembled either single mutant but collectively produced a more complex picture. For example, both the K142A/S217A and S217A mutants hydrolyzed OME at approximately 6,000-fold slower rates than WT-FAAH at pH 9.0 (Table II). However, a steep pH dependence was observed for the esterase activity of the K142A/S217A mutant that resembled the pH-rate profile of the K142A mutant (Fig. 2). The catalytic activities of both the K142A and K142A/S217A mutants exhibited a linear dependence on solvent [OH⁻] with a slope of 0.7. In contrast, the S217A mutant showed a flattened pH-rate profile for OME hydrolysis similar to the pH-rate profile displayed by this enzyme for oleamide hydrolysis (Fig. 2B). The differences in the pH-rate profiles of the K142A/S217A and S217A mutants resulted in progressively slower relative rates of catalysis for the former enzyme as pH was lowered (Fig. 2A). Finally, the K142A/S217A mutant exhibited an ~5-fold preference for OME over oleamide (Table III), thus displaying an amide/ester selectivity ratio that fell between those displayed by either single mutant.
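The reported slope of 0.7 follows from a log-linear analysis: since log10[OH⁻] = pH - pKw, the slope of log10(kcat) versus pH equals the slope versus log10[OH⁻]. A small sketch with synthetic rates:

```python
# Slope of log10(rate) vs. pH equals the slope vs. log10[OH-]; rates are synthetic.
import numpy as np

pH = np.array([7.0, 7.5, 8.0, 8.5, 9.0])
kcat = np.array([1.0e-5, 2.2e-5, 5.0e-5, 1.1e-4, 2.5e-4])  # hypothetical s^-1 values

slope, _ = np.polyfit(pH, np.log10(kcat), deg=1)
print(f"slope = {slope:.2f}")  # ~0.7: partial rescue of activity by solvent hydroxide
```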
Comparative Analysis of the FP Reactivities of FAAH Mutants-Assuming that the K142A and S217A mutants, similar to WT-FAAH (17), both hydrolyze amide and ester substrates through an acylation rate-limiting mechanism (a premise that is supported by the wild type apparent Km values displayed by these enzymes for oleamide and OME), the reduced catalytic activities of these mutants suggest roles for each residue in nucleophile activation, leaving group protonation, or both. To distinguish the impact of mutation of Lys-142 and Ser-217 on nucleophile activation separate from leaving group protonation, the rates of reactivity of each mutant enzyme with a fluorophosphonate (FP-rhodamine) (22) were measured. Reduced rates of labeling of serine hydrolases by FPs are typically caused by mutations that 1) decrease the strength of the serine nucleophile (15,24) and/or 2) disrupt residues involved in transition state stabilization, such as those composing the oxyanion hole (25,26). Based on the FAAH crystal structure, neither Lys-142 nor Ser-217 appears to participate in the oxyanion hole, and therefore changes in the FP-labeling rates of the K142A, S217A, and K142A/S217A mutants were interpreted to reflect primarily alterations in the strength of the Ser-241 nucleophile. A direct comparison of the FP reactivity rates of WT-FAAH and FAAH mutants was made at pH 7.0, because above this pH value the rate of labeling of WT-FAAH was too fast to measure. At pH 7.0, the K142A and S217A mutants exhibited 6,100- and 2,900-fold reductions in FP reactivity rates, respectively, relative to WT-FAAH (Table IV). The K142A/S217A variant showed an even greater loss of FP reactivity (70,000-fold relative to WT-FAAH) that nonetheless appeared specific, as the rate of FP labeling of this double mutant was still over an order of magnitude faster than the background labeling of the S241A mutant (Table IV). Interestingly, as the pH of the reaction was raised, the FP reactivity of the K142A mutant increased to a greater extent than that of the S217A mutant (Fig. 3). Thus, by pH 8.5, the K142A mutant exhibited a 2-fold greater kobs/[I] value than the S217A mutant, which represented a 4-fold overall shift in relative FP reactivity compared with the rates observed at pH 7.0. Collectively, these findings indicate that at physiological pH, the nucleophilicity of Ser-241 depends to a similar extent on Lys-142 and Ser-217, with the role of the former residue being better compensated for by increasing concentrations of solvent hydroxide.
DISCUSSION
The recent determination of the three-dimensional structure of FAAH has demonstrated that this enzyme possesses a serine nucleophile (Ser-241) that forms an unusual catalytic triad with a second serine residue (Ser-217) and a lysine residue (Lys-142) (18). This organization of catalytic residues has also been observed in the structures of the distantly related bacterial AS enzymes, MAE2 (19) and PAM (20). These findings, in combination with the complete conservation of Ser-241, Ser-217, and Lys-142 among the >80 AS enzymes identified to date, argue that each of these residues plays an important role in catalysis. Nonetheless, many questions remain regarding the AS family serine-serine-lysine catalytic triad. For example, how does the Ser-217/Lys-142 portion of the FAAH triad promote the two key steps of acylation, namely the base-catalyzed activation of the Ser-241 nucleophile and the acid-catalyzed protonation of the substrate-leaving group? Additionally, how do these residues function either individually or in concert to impart upon FAAH its unusual ability to hydrolyze amides and esters at equivalent rates? Finally, do all of the AS enzymes invoke the same catalytic mechanism, or alternatively, might these enzymes exhibit differences in how they hydrolyze their respective substrates? To begin to address these important issues, we have conducted a comparative analysis of the roles played by Ser-217 and Lys-142 in catalysis using a combination of mutagenesis, substrate selectivity profiles, and chemical-labeling experiments.
Comparison of the Kinetic Properties of the K142A and S217A FAAH Mutants-The general defects in amidase and esterase activity observed for the S217A and K142A FAAH mutants support central roles in catalysis for both residues. Interestingly, however, the relative impact of each mutation on the amidase and esterase activities of FAAH was quite different. For example, at physiological pH, the S217A mutant exhibited nearly equivalent deficiencies for oleamide and OME hydrolysis (2,000- and 2,500-fold, respectively), whereas the K142A variant showed a much greater reduction in oleamide hydrolysis (104,000-fold) than OME hydrolysis (600-fold) (Tables I and II). The esterase activity of the K142A mutant showed very strong pH dependence (Fig. 2), suggesting that solvent hydroxide was capable of partially substituting for the loss of this lysine residue. This compensatory action of hydroxide resulted in the K142A mutant becoming a fairly efficient esterase at higher pH values (2% WT activity at pH 9.0). In contrast, the K142A mutant displayed a more modest increase in amidase activity at higher pH values, possibly reflecting the absolute requirement for protonation of the amine-leaving group of amide substrates. Unlike the K142A mutant, the S217A variant exhibited little pH dependence with either amide or ester substrates (Fig. 2). The flattened pH-rate profiles of the S217A mutant do not appear to reflect a physical rate-limiting step because this enzyme has been shown to exhibit a significant solvent deuterium isotope effect for oleamide hydrolysis (1.8-fold), indicating that a proton-transfer event probably occurs in the rate-limiting step (15). Although it remains unclear why the S217A mutant fails to show pH dependence, this property may reflect the competing effects of pH on different steps in the acylation reaction. For example, as pH is raised, solvent-driven, base-catalyzed activation of Ser-241 may be accelerated in the S217A mutant; however, this effect could be counterbalanced by a reduction in Lys-142-directed protonation of the substrate leaving group. For such a model to be correct, Lys-142 would have to exhibit an altered pKa value so that the protonation state of this residue was affected in the range of pH 7.0-9.0. Notably, the K142A/S217A double mutant exhibited a strong pH dependence for OME hydrolysis that mirrored the behavior of the K142A mutant (Fig. 2). These data are consistent with a model where the flattened pH-rate profiles of the S217A mutant are at least in part dependent on Lys-142.
Evidence That Both Lys-142 and Ser-217 Are Involved in Activation of the Ser-241 Nucleophile of FAAH-Several properties of the K142A mutant suggest that this residue participates in the base-catalyzed activation of the Ser-241 nucleophile. For example, the strong pH dependence of the K142A mutant resembles the behavior of serine protease mutants that lack their respective catalytic histidine bases (27-29). The greatly reduced FP reactivity of the K142A mutant also implicates this residue in nucleophile activation. The special properties of FPs (high electrophilicity, excellent leaving group) make these reagents useful mechanistic probes of nucleophile strength for members of the serine hydrolase family (15,17,24). At physiological pH, the K142A mutant was found to exhibit a 6,100-fold reduction in FP-labeling rate compared with WT-FAAH (Table IV). A similar defect was observed in the S217A mutant (2,900-fold), whereas the FP reactivity of the K142A/S217A double mutant was compromised to an even greater extent (70,000-fold). Collectively, these results indicate that both Lys-142 and Ser-217 participate in the activation of the Ser-241 nucleophile of FAAH, possibly through a mechanism outlined in Scheme 1. In the proposed catalytic mechanism shown in Scheme 1, an uncharged Lys-142 initiates catalysis by accepting a proton from Ser-217, which in turn deprotonates the Ser-241 nucleophile to facilitate attack on the substrate carbonyl. This route for formation of the tetrahedral intermediate contrasts with the mechanism proposed previously for the bacterial amidases MAE2 (19) and PAM (20). Based on structural data, the lysine residue of the catalytic triad in these enzymes has been suggested to exist primarily in a protonated or charged state, thus restricting the function of this residue to the acid-catalyzed, leaving group protonation step of acylation. Consequently, the bridging serine is proposed to act independently of the lysine residue as the catalytic base involved in nucleophile activation. Although biochemical data for or against this proposed model for bacterial AS enzyme catalysis remain sparse (neither PAM nor MAE2 has been subjected to extensive mutagenesis or kinetic analysis), it is interesting to note that PAM shows very weak reactivity with FPs and related nucleophile-directed labeling reagents (20). Indeed, in the structure of PAM, the aldehyde inhibitor chymostatin is bound to the enzyme active site but does not form a covalent adduct with the serine nucleophile. These findings suggest that the PAM nucleophile exists in a deactivated state, a feature that contrasts sharply with the serine nucleophile of FAAH, which exhibits rapid rates of labeling by FPs (1.4 × 10⁵ M⁻¹ s⁻¹ at pH 7.0) (Table IV). Collectively, these data suggest that individual AS enzymes may operate through different mechanisms, possibly depending on the initial protonation state of the lysine residue of the triad. For AS enzymes similar to PAM, a protonated lysine would be unable to participate as a base involved in nucleophile activation, resulting in a correspondingly weak serine nucleophile; however, for AS enzymes similar to FAAH, an unprotonated lysine would increase the basic character of the bridging serine residue, in turn leading to a strengthening of the serine nucleophile. Finally, it is worth noting that the K142A mutant showed a stronger pH dependence for FP reactivity than the S217A mutant (Fig. 3), a finding that paralleled the more dramatic pH dependence observed for the esterase activity of the former enzyme (Fig. 2). These data indicate that alkaline pH is more capable of compensating for the loss of Lys-142 than for Ser-217 and suggest further that, in the case of the K142A mutant, solvent hydroxide may activate Ser-217 to initiate the acylation reaction.
The K142A mutant was found to exhibit >100-fold lower rates of hydrolysis with oleamide than OME, a substrate selectivity profile that differed dramatically from WT-FAAH, which exhibited a slight preference for oleamide over OME (Tables I and II and Fig. 2). The greatly diminished amidase activity of the K142A mutant suggests that this lysine residue is important for protonating the substrate-leaving group. Indeed, disruption of this step of the acylation reaction may be expected to selectively affect amidase activity over esterase activity because amines are much poorer leaving groups than alkoxy substituents. Based on the structures of FAAH and the related AS enzymes MAE2 and PAM, the impact of Lys-142 on leaving group protonation, as well as on nucleophile strength, would be predicted to occur indirectly via the action of this residue on the bridging Ser-217 of the triad (Scheme 1). In all of these structures, the lysine residue of the triad is too far away (>4.5 Å) to make direct contacts with either the serine nucleophile or the predicted position of the substrate-leaving group. Thus, it was surprising to find that the S217A mutant, despite displaying defects in both catalytic activity and nucleophile strength, exhibited wild type amide/ester hydrolysis ratios. Although it remains unclear how the S217A mutant maintains a preference for amide over ester substrates, we speculate that water may substitute for the absence of Ser-217 in this enzyme. If so, then the bridging water molecule would appear more capable of transmitting the effects of Lys-142 on leaving group protonation than nucleophile activation. Regardless, the observation that in the absence of Ser-217 FAAH still hydrolyzes amides at a greater rate than esters reveals that for this enzyme substrate selectivity is not coupled to catalytic power. Consistent with this notion, a K142E mutant has been found to display equivalent rates of acylation with both oleamide and OME, despite showing greatly diminished catalytic activity with each substrate (17). Collectively, these findings suggest that FAAH possesses a specialized active site structure dedicated to normalizing the acylation rates of amide and ester substrates. Because disruption of this property of FAAH appears to require conversion of Lys-142 to a residue that is incapable of transferring a proton, we hypothesize that FAAH achieves its unusual ability to competitively hydrolyze amides and esters at equivalent rates by forcing protonation of the leaving group early in the transition state of acylation concomitant with nucleophilic attack on the substrate carbonyl (Scheme 1). As previously predicted by Fersht (23), a mechanism where acid-catalyzed leaving group protonation is tightly coupled to base-catalyzed nucleophile activation could result in nearly identical rates of hydrolysis for amide and ester substrates. Indeed, Komiyama and Bender (30) predicted such a mechanism for amide hydrolysis by serine proteases. However, an alternative mechanism of hydrolysis involving two discrete proton transfer events was proposed for ester substrates to account for their preferential hydrolysis by these enzymes. Thus, what may be special about FAAH is not how the enzyme hydrolyzes amides but rather that it steers esters through the same reaction pathway.
Conclusions and Physiological Implications-It is worth considering why FAAH may have evolved a catalytic mechanism that achieves the competitive hydrolysis of amide and ester substrates. In vivo, FAAH must negotiate the binding and hydrolysis of its fatty acid amide substrates in a background of high concentrations of endogenous fatty acid esters (e.g. monoacylglycerols). If FAAH had invoked a typical serine protease-like mechanism and hydrolyzed ester substrates in a deacylation rate-limiting manner with acylation rates that exceeded those of amides by 2-3 orders of magnitude (31-34), the enzyme might have been saturated by lipid esters and failed to function as an amidase in vivo. Thus, we speculate that FAAH has evolved a mechanism that accomplishes the competitive degradation of amide and ester substrates to allow the enzyme to control the magnitude and duration of signals communicated by endogenous fatty acid amides within a complex milieu of structurally similar lipid natural products.

SCHEME 1. Proposed mechanism for the acylation step of amide and ester hydrolysis catalyzed by FAAH (shown for amides). Lys-142, initially in a deprotonated state (A), abstracts a proton from Ser-217, which in turn abstracts a proton from the Ser-241 nucleophile (B). Attack of the nucleophile on the substrate carbonyl is proposed to occur in a coupled manner with proton donation from Ser-217 to the nitrogen atom of the amide substrate (C). This latter step requires the coincidental donation of a proton from Lys-142 to Ser-217, resulting in the formation of an acyl enzyme intermediate where both Lys-142 and Ser-217 have returned to their initial protonation states (D). In this mechanism, because nucleophilic attack and leaving group protonation take place nearly simultaneously, the acylation rates of amide and ester substrates would be normalized.
|
2015-03-21T17:44:09.000Z
|
2003-09-26T00:00:00.000
|
{
"year": 2003,
"sha1": "c95034f5c3c0df9e724d286de9357b324661918d",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/278/39/37393.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "22ec86dc7ac567de7e12823c625d742f6371315d",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
3034722
|
pes2o/s2orc
|
v3-fos-license
|
Discreet monilethrix: De novo mutation on the example of Polish families
Sir,
We describe an atypical form of monilethrix, with discreet symptoms of the disease and keratosis pilaris, observed in a girl at the age of 6 years after a fever.
A 6-year-old girl was born of a nonconsanguineous marriage. Her hair showed easy fragmentation and thinning [Figure 1]. After a fever, hair loss in large quantities was noticed. She initially had normal hair, which was later replaced with short and sparse hair.
Macroscopic examination of the scalp showed thinning, more marked on the left side and the crown of the head, but the hair was easily breakable over the entire scalp. The hair was short, sparse, and dry. On cutaneous examination, the hair was described as stubby, with multiple keratotic hyperpigmented papules all over the scalp. Hairs were bent regularly at multiple locations and had a tendency to fracture at constriction sites.
We found keratosis pilaris on the extensor arm.
Hair pull test was positive.
Trichoscopy examination revealed the presence of a few empty follicles, small broken hairs, black dots, and hairs with uniform nodal dilatations with intermittent constrictions at which there was shaft breakage [Figure 2]. The hairs were of varying lengths; many were broken. Hairs with normal morphology were seen interspersed within the beaded hair. The beaded hairs showed bending in different directions with a tendency to break at the internodes (regularly bent ribbon sign). Follicular keratotic papules were observed on the scalp. These findings were consistent with monilethrix.
On light microscopic examination, the hair [Figures 3a-d] revealed elliptical nodes resulting in a beaded appearance of the hair shafts: the characteristic alternating fusiform or spindle-shaped swellings (nodes) and constrictions (internodes).
The trichogram showed 81% of hairs in the anagen phase.
Eyebrows and eyelashes were without evidence of disease. There was no nail, dental, or sweat gland abnormality found after thorough physical examination.
Routine laboratory screenings were within the normal range. Her mental and physical growth was normal. Her parents, brother, and sister were examined, with no findings of hair abnormalities.
The child was prescribed a compounded pharmacological "made mix" (hydrocortisone, pilocarpine, tinctura Capsici, and tinctura Chinae). After 2 months, the mother and we observed an improvement in hair density, but the symptoms of monilethrix still persisted, also at subsequent visits. The diagnosis of monilethrix is based on clinical, microscopic, trichoscopic, and genetic or histologic studies.
Several genetic studies have suggested that monilethrix is caused by a hair keratin mutation. Mutations in the human hair basic keratins hHb1 and hHb6 have been suggested in this disorder. The most common mutation is the E413K mutation in hHb6. [1,2] Mutations in the keratin genes KRT81, KRT83, and KRT86 lead to autosomal dominant monilethrix, whereas mutations in the desmoglein 4 gene cause an autosomal recessive form. [1,3] The hair defect may occur in isolation but is usually associated with keratosis pilaris presenting as keratotic papules. In our case, there were also keratotic papules on the extensor arm. [4] We presented a 6-year-old girl with monilethrix with a de novo mutation. The parents noticed the excessive fragility and hair loss after an episode of fever when the child was 6 years old. Moreover, because only a small number of hair follicles were involved, trichoscopy was a decisive diagnostic tool.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
|
2018-04-03T03:03:26.841Z
|
2017-04-01T00:00:00.000
|
{
"year": 2017,
"sha1": "7e061adb2a4693f65a1d6394d8bcb1e362cee709",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "WoltersKluwer",
"pdf_hash": "7e061adb2a4693f65a1d6394d8bcb1e362cee709",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
224803778
|
pes2o/s2orc
|
v3-fos-license
|
Do Online Trolling Strategies Differ in Political and Interest Forums: Early Results
This study compares the effectiveness of different trolling strategies in two online contexts: politically oriented forums that address issues like global warming, and interest-based forums that deal with people’s personal interests. Based on previous research, we consider trolling as context-bound and suggest that relevance theory and common grounding theory can explain why people may attend and react to certain types of troll posts in one forum, but pay scant attention to them in another. We postulate two hypotheses on how successful (i.e., disruptive) trolling varies according to context: that trolls’ messaging strategies appear in different frequencies in political and interest forums (H1), and that context-matching strategies also produce longer futile conversations (H2). Using Hardaker’s categorization of trolling strategies on a covert–overt continuum, our statistical analysis on a dataset of 49 online conversations verified H1: in political forums covert strategies were more common than overt ones; in interest forums the opposite was the case. Regarding H2 our results were inconclusive. However, the results motivate further research on this phenomenon with larger datasets.
Introduction
Online discussion platforms, such as online forums and news articles' comment sections, connect millions of people daily. There are platforms and topics for everyone, hosting discussions ranging from seeking advice for personal trouble to heated debates on political matters. Many discussion platforms are vulnerable to malicious and disruptive behavior, which wreaks havoc in conversations and causes emotional distress to the people involved. Although online trolling is a diverse phenomenon, and perceptions towards it vary [9, pp. 65-89], the consensus is that it is ubiquitous and mainly disruptive, particularly because of the recent developments in using trolls to amplify polarization and political agendas, as well as to disrupt unwanted conversations and to spread disinformation [1,5].
Considering the widespread agreement that Internet trolling can cause significant societal harm, it is surprising how little is known about the conversational strategies that trolls use. Evidence suggests, though, that trolling may manifest differently across contexts [9,25]. Therefore, the trolling strategies used commonly in interest-oriented discussion forums may differ from the ones used in political debates. The most effective trolls may even be able to adapt their trolling strategies when they switch from one forum or discussion topic to another. Being aware of such differences in trolling strategies would be important in order to combat the ways by which trolls destroy civic conversations. This paper's findings come from a research project that has been launched to address the problem of trolling. In the course of our research, we have made an initial observation that trolls seem to use different trolling strategies in political and interest discussions. Using a small dataset of 68 online discussions around political or societal themes (climate change, Brexit) and interest themes (cats, fitness), all of which included successful (i.e., response-inducing) trolling, we tested two hypotheses: that successful trolling strategies would indeed be applied with different frequencies depending on the topic of discussion (H1), and that the reply chains to trolls would also differ in their length, depending on the strategy used by the troll (H2). To distinguish different trolling activities, we utilized the already well-established categorization by Hardaker [15] that describes six different trolling strategies along a covert-overt continuum.
The amount of data is so far limited, but our analysis suggests that H1 holds. We found a statistically significant difference between successful trolling strategies in political vs. interest discussions: in political discussions trolls apply covert strategies (i.e., subtle and non-apparent) more often than in interest discussions, where, by contrast, the strategies are predominantly overt (i.e., noticeable and direct). On the other hand, we could not confirm H2 about reply chain lengths. The limited amount of data, however, pointed in the direction predicted by the hypothesis: that covert trolling would lead to longer derailed discussions in political discussions, while overt strategies would do the same for interest discussions. The lack of confirmation of H2 notwithstanding, our findings have both academic and real-life implications, which we will cover in the Discussion.
Theory
Our hypotheses did not result from serendipitous discoveries but had a theoretical backing that sensitized us to pay attention to their possible existence.
Trolls take advantage of the ambiguities of computer-mediated communication and the vulnerabilities of internet discussion communities to lure others into fruitless, frustrating or circular discussions and to waste their time [16]. Trolling involves a process of learning the social practices of a community, assimilating to them, and then violating these practices to create disruption [8,25]. Trolling behaviors and perceptions of trolling are context-bound: they differ according to platform and community [9,16,25]. The motivations for trolling are similarly heterogeneous, including both amusement and political influence [3,16]. Therefore, also the most common strategies used to successfully troll other participants on a discussion forum are context-dependent.
Previous studies have illustrated various types of trolling. They have often oriented to analyzing and understanding one type of trolling at a time, such as memorial page trolling [22], signalling of in-group/out-group membership [11], LOL trolling [17], and political trolling [1,9]. In more generalizing depictions, differences between trolling styles have been illustrated e.g. by distinguishing between light or humorous trolling vs. (malevolent) serious trolling or ideological trolling [9,10]. Community norms [19], platform, conversational style, motivations, and enabling factors all have an effect on the differences in trolling behaviors, as well as how they are interpreted by community members [9]. Therefore, considering the context-bound nature of trolling, it makes sense to study how trolling strategies vary according to context, and whether trolls behave differently in light conversations as opposed to more serious political conversations. While many of the above-listed studies have not presented typologies of different trolling strategies or styles, Hardaker's [15] categorization of six comparable categories (Table 1) does that, and places different strategies onto a continuum ranging from covert trolling strategies to more overt ones. In our study, we adopt this categorization to classify our data, and to analyze the differences in trolling styles on political and interest forums. Table 1 lists the six strategies, ordered from most covert to most overt:
- Digression: Luring others into off-topic discussions by spamming, partaking in cascades or introducing tangential topics (e.g., as in [16]).
- (Hypo)criticism: Excessive criticism of others, e.g. on their punctuation, while possibly committing the same errors oneself.
- Antipathy: Creation of a sensitive or antagonistic context through purposeful provocation, in order to manipulate others to produce emotional responses.
- Endangering: Giving out poor advice under an innocent guise, so that others are compelled to respond in order to protect others.
- Shocking: Posting about taboos or sensitive subjects, such as religion, death or human rights.
- Aggression: Deliberate and open aggressing of others into retaliating (e.g., by name-calling or foul language).
Hypothesis 1: The Frequencies of Trolling Strategies Are Different in Political and Interest Forums
The relevance of a comment in an online forum depends on the content that has started the conversation. For example, a discussion in an online newspaper's comment section happens in the context of the related news article. Similarly, in Reddit (a popular online news aggregator and discussion forum) a message is visible in relation to a "subreddit" (a discussion section) and an original post within it. Therefore the boundaries for the discussions that unfold are set to a specific topic that also sets the conversational context [18,20]. This affects the expectations people have about the discussion and its style, and thus they tend to accommodate their posts to this context [29].
Relevance theory [26], which builds on Gricean maxims [12,13], may help to illustrate why some posts on these forums manage to attract people's attention far better than others. A post's relevance is determined by not only its relevance to the assigned topic and the on-going conversation, but also its understandability. Relevance theory states that human cognitive mechanisms have a universal tendency to select the most potentially relevant stimuli out of a variety, and to maximize the relevance of processed inputs, therein using the available processing resources most efficiently [26]. The cognitive principle of relevance deems some messages more appealing or understandable than others, also making them more relevant [26]. We argue that along with contextual norms assigned by the discussion topic, relevance also dictates the conversation's flow - in particular what type of posts (and thus trolling strategies) are deemed more relevant, and which posts incite more subthreads.
Compared to other less serious arenas, political forums discussing larger societal issues orientate more strongly toward more serious deliberative discourse or debate, and exhibit higher levels of interactivity and topical coherence [28]. They are to some extent similar to content-based and knowledge-based discussions on social media [18], and show fewer off-topic posts, as users' contributions to the discussions are more likely to address previous posts in a manner befitting a real debate [28, pp. 15-17]. News discussion is largely opinion-based, and so participants also expect to be communicating with people coming from varying or opposing viewpoints [18,27]. Thus, the general style of political forum discussion is different compared to interest topics. Consequently, we believe that political forum discussions are more vulnerable to covert trolling attempts by being more neutral, information-centered and less personal.
Contrarily to political arenas, interest forums serve as spaces for bonding with people with similar interests, beliefs or hobbies [4,21]. Central motivations for joining these communities include information exchange, social support, and most of all friendship [24]. Essential for many such groups is creating an environment of camaraderie and supportive solidarity to enhance fun and a sense of belonging, which is why insults are taboo and confrontation minimized [4]. In general, interest forums invite contemplation on personal experiences, friendly exchange of feelings and anecdotes, and supportive information-sharing about the hobby or interest with other enthusiasts [4,14,20,24]. We argue that due to the high relevancy of posts containing friendly support or personal experiences in this context, posts violating its taboos (e.g., insulting others) are also more cognitively relevant. This is because resolving and condemning such posts contributes to maintaining the key elements of the forum, such as a safe and friendly environment. Of course, conversations on online newspapers' comment sections under interest-related articles do not necessarily form even a loose community. However, we consider it likely that these conversational arenas maintain some similar functional features as more close-knit communities like r/cats on Reddit. This is why we maintain that interest forums match with overt strategies, i.e. they are more vulnerable to more personal and visible overt trolling attempts like direct insults. Therefore, in summary, we hypothesize that:
H1:
The frequencies of covert and overt trolling strategies are different in political and interest forums.
In particular, we hypothesize that covert trolling is common in political discussion while overt trolling is common in interest forums.
Hypothesis 2: Trolls Can Derail Others into Longer Futile Discussions by Choosing Trolling Strategies According to the Type of the Forum
Our second hypothesis is derived from the first one. If trolls match their trolling strategy to the type of the online forum, this may be because they know (consciously or sub-consciously) it will be more effective. One method for measuring the effectiveness of trolling is to measure the amount of engagement that a message manages to garner from others in the discussion. Along with relevance theory, the theory of common grounding [6,7] provides a theoretical justification for why trolls succeed in capturing other people into long unfruitful discussions. In well-intended communication, conversational parties engage in common grounding -a 'collective process by which the participants try to reach a mutual belief that they have understood what each other meant' [6, p. 223]. Following the premises of this psycholinguistics-derived theory, all contributions to a conversation need to be grounded, i.e. turned into mutual knowledge, by providing evidence that the message has been understood [6,7]. All participants in the conversation are also expected to engage in resolving breakdowns in the case of possible misunderstandings. An unintelligible action thus calls for an explanation from its performer. This requirement for providing an explanation, in turn, is highly amenable for exploitation if one wishes to act as a troll. By resisting the norms of common grounding and accountability, a troll can prolong the time their posts attract attention.
As mentioned, contextual differences require learning the conversational conventions of a given online forum in order to gain access to the type of interaction others on the forum usually deem relevant [8,9,26]. Similarly, we state that relevant posts are seen as worth the collaborative efforts of grounding in case of breakdowns; in an asynchronous discussion space with a multitude of overlapping posts only discussion-relevant breakdowns are attended to. Consequently, we argue that participants on political forums are more prone to engaging in long grounding efforts when the conversation breaks down due to issues matching with the functions of the discussion space: misunderstandings or view point differences in informational content or correctness. On the other hand, we claim that people on interest forums are more inclined to engage in long conversations on personal experiences and issues related to the individual participant, which is why more collaborative effort will be expended on resolving the matching overt trolling attempts like unintelligible actions or attacks against a participant's person. Therefore, our hypothesis H2 is, as already stated in the section's title: H2: The quantity of replies to trolls will vary in different types of forums depending on the employed trolling strategy.
In particular, covert strategies would incite longer conversations on political forums, whereas overt strategies would have the same effect on interest forums.
Data
Through selective sampling of online forums, we have manually acquired a corpus of conversations containing trolling. Keeping in mind our two hypotheses, we have selected several differing platforms to increase the heterogeneity of conversational and trolling styles. The corpus covers several discussion areas on Reddit and comment sections on English language online newspapers, including the Telegraph, the Guardian and the Washington Post. Having a large readership, these are influential media platforms that are likely to be targeted by trolls.
Considering our interest in both political and interest online discussions (see Sect. 2), our corpus includes two kinds of conversation topics: one around political issues (climate change and Brexit) and the other around interest discussions (cats and fitness). Important political topics, especially climate issues and Brexit , are likely to attract serious or ideological trolls wishing to disrupt or polarize the dialogue (e.g., [2,3,23]) Interest topics, in turn, such as apolitical and more everyday hobby-related discussions, may be vulnerable to "light" trolls if the topic is dear to the community (e.g., horses [14] or soap operas [4]).
In this data collection process, we have continued browsing the above-listed forums and their topic-specific discussion spaces until we have identified 2-5 conversation threads for each topic on each platform. We have particularly looked for activity-rich discussions in order to find successful trolling that has managed to elicit a lot of responses. Here successful trolling has referred to managing to formulate posts and/or responses to others' posts that provoke others into responding directly or indirectly. Comments like 'Don't answer him, he's a troll.' and troll-triggered off-topic arguments among other participants have also qualified as responses. For the online newspaper comment sections, successful trolling has typically meant 8-15 response posts in a thread triggered by the troll, while on Reddit the range has been 15-20 replies. The differing numbers are due to the average number of replies having been smaller in newspaper comment sections as compared to Reddit, and the need for context-sensitivity as some topics inspired more replies in general than others, even within the same platform.
Finally, we have tagged all the trolling content in this dataset following Hardaker's [15] six-category typology (see Table 1) where the trolling strategies can be located on an covert-overt continuum. We have used both conversationalist and researcher intuition to recognize what would have qualified as trolling in Hardaker's study, labeling instances of trolling according to her categorization to gain a comprehensive dataset [14,15].
Results
Most trolling styles in Hardaker [15] could be found in each of the selected topics, with Brexit and climate change on the political axis, and fitness and cats on the interest axis. Table 2 presents examples, including the following:
- Political (climate change): "Makes me wonder what flat earthers think since the flat earth is surrounded by ice walls." (AccelHunter, Reddit, April 2019)
- Hypocriticism, political (Brexit): "@Peter Wayde Peter, if you can't even punctuate a sentence "why should we take notice you?" (heavy sarcasm) PS, "the causes will be the causes" is terrible syntax."
- Political (climate change): "It's comments like this that make me realize how ignorant the Western left really is. To you, the two sides are "the side I agree with personally" and "the side that is inherently wrong and evil". There's no middle ground. Everything is black and white and that's that."
Are the Frequencies of Covert and Overt Trolling Strategies Different in Political and Interest Forums (H1)?
Our first hypothesis (H1), more specifically, was that trolls would be more likely to use covert trolling strategies (digression, (hypo)criticism or antipathy) in political discussions and overt strategies (endangering, shocking or aggression) in interest forums. To evaluate this hypothesis, we counted the frequency of each trolling strategy used in each discussion in our sample. We created two larger groups of trolling (covert and overt) by pooling together the frequencies of the three first and the three last strategies. This resulted in a 2 × 2 frequency matrix whose values are presented in the sub-totals in Table 3.
In the preparation of this table, we removed the following cases that would have confounded our analysis. First, 13 discussions could be classified both as covert and overt trolling. After their removal, each discussion represented exclusively either covert or overt trolling. Second, there were 4 trolls (identified by their nickname) that appeared several times in our data (in 9 discussions altogether). To remove the possibility that their behaviors would be over-represented and would thus skew our data, we used a random number generator to sample only one discussion from each troll in our analysis. In one case, both confoundments were present within the same discussion. As a result, altogether we removed 19 discussions from the analysis. Table 3's content is what remained after these preparations. (Table 3 notes: (a) the count sums to 19 instead of 20 because one discussion exhibited both hypocriticism and antipathy, which was counted as one discussion only in the total; (b) the count sums to 20 instead of 21 because one discussion exhibited both endangering and aggression, which was counted as one discussion only in the total.)
Already with a plain visual inspection of the frequencies, our hypothesis seemed to be true: there were more discussions in the political-covert quadrant than in the political-overt quadrant (19 vs. 5), and the inverse held in the interest-covert and interest-overt (5 vs. 20) quadrants. We confirmed the hypothesis by comparing frequencies between categories using a Chi-square contingency table analysis: in political discussions, covert trolling was more frequent while the opposite was true for interest discussions (p < .0001). Thus H1 was confirmed : trolls appear to use more covert trolling styles to (successfully) disrupt political conversations, whereas for invading interest conversations they use more overt styles.
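For readers who want to reproduce this test, the 2 × 2 counts reported in Table 3 can be passed to a standard chi-square routine. The sketch below is illustrative only; the paper does not state whether a continuity correction was applied, so none is used here.

```python
# Chi-square test on the Table 3 counts (rows: political, interest; columns: covert, overt).
# correction=False is an assumption; the paper does not say whether Yates' correction was used.
from scipy.stats import chi2_contingency

counts = [[19, 5],   # political discussions: covert, overt
          [5, 20]]   # interest discussions:  covert, overt
chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.1e}")  # p falls well below .0001
```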
We also studied how the removal of the fore-mentioned 13 discussions (where trolls had applied both overt and covert trolling strategies) had possibly skewed our findings. We included the removed discussions in our analysis by assigning them either to an overt or covert category. We implemented the assignment so that the frequencies between the categories would come as close to each other as possible, thus making it maximally difficult to find differences in a statistical test. Out of the 13 discussions 8 were political, 1 of which included a troll who had also appeared in another discussion in our data. We assigned the resulting 7 discussions to the overt trolling category, resulting in a 19 vs. 12 comparison between covert and overt strategies in political discussions (instead of 19 vs. 5; see Table 3). The remaining 5 discussions that had been removed were interestbased discussions, where covert strategies had been rare. We assigned all the 5 discussions to the covert group, thus yielding a 10 vs. 20 comparison (instead of the earlier 5 vs. 20). We finally repeated our test for frequency differences, and again found a statistically significant difference (p < .05), thus further confirming H1.
A closer look at Table 3 suggests that covert digression and antipathy strategies were particularly common in politically oriented discussions. Aggress trolling was also found in some cases (see Table 3), but the proportional amount of aggress trolling behavior was smaller than in interest conversations. In interest topics, in turn, successful trolls seemed to commonly exploit overt aggress and endanger strategies, attacking others directly or feigning concern about endangering issues like steroid use. It must be noted that in fitness discussions the difference between covert and overt strategies was very small, arguably because trolling instances were harder to find. With a larger dataset the above-stated possibilities may be studied further.
Can Trolls Derail Others into Longer Futile Discussions
Choosing Trolling Strategies According to the Type of the Forum (H2)?
As a follow-up for hypothesis H1, we specifically predicted in hypothesis H2 that the matching pairs of trolling strategy and discussion type (i.e., covert-political, overt-interest) would not only be more frequent but also, from the troll's point of view, more "successful" in luring others into longer arguments. The success could be measured by the number of replies that others would post to the troll's messages. Long chains of replies would best serve the trolls' interest of creating havoc and destroying civic discussion in online spaces. The length of individual posts was not considered due to the fact that it may vary in online discussions for several reasons which cannot be controlled here.
To evaluate hypothesis H2, we counted the number of replies that others had posted to the discussion thread after the trolls' original message. If the trolls themselves engaged in these subsequent discussions, we excluded their messages from these counts. We then compared the lengths of the reply chains in the 2 × 2 quadrants consisting of covert vs. overt trolling and political vs. interest discussions. For this comparison, we used ANOVA, which is a method suited for analyzing differences in scalar values between categories. Table 4 presents the data used in the analysis. As with H1, also here a visual inspection suggests that the hypothesis could indeed hold: the covert-political and overt-interest matches have longer reply chains than the other pairs. However, this time we could not confirm this impression statistically: in a one-way ANOVA on political discussions, covert trolling did not lead to longer chains than overt trolling (p = .279). In the same analysis on interest discussions, overt trolling did not lead to longer chains than covert trolling (p = .284). We also carried out a two-way ANOVA with the strategy type (covert/overt) and the theme (political/interest) as factors, with an interest in the test's interaction term that could test if the length variable's relationship is inverted when analyzing the two different discussion topics. The interaction term was closer to statistical significance, but not sufficient for any conclusions (p = .129). Correcting the length variable distributions' skewness by square root transformation, or using non-parametric U tests did not yield significant results either. Thus, H2 was not confirmed. The reason for this failure becomes apparent when one inspects the numbers of cases in each quadrant. The earlier-presented Table 3 shows that the data contained only 5 cases of mismatching strategy-discussion pairs (i.e., political-overt and interest-covert). Statistically significant findings were not attainable with such a small dataset size.
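As a rough illustration of this analysis, the sketch below sets up the same two-way ANOVA with the interaction term. The reply-count values are invented placeholders rather than the actual Table 4 data, and the variable names are ours.

```python
# Two-way ANOVA on reply-chain lengths with strategy (covert/overt) and theme
# (political/interest) as factors. The 'length' values are placeholders only;
# the real per-discussion counts appear in Table 4 of the paper.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "strategy": ["covert", "covert", "overt", "overt"] * 3,
    "theme":    ["political", "interest", "political", "interest"] * 3,
    "length":   [14, 6, 8, 17, 12, 5, 9, 19, 16, 7, 6, 15],  # placeholder reply counts
})
model = ols("length ~ C(strategy) * C(theme)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # the C(strategy):C(theme) row tests the H2 interaction
```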
Discussion
To recap, our first hypothesis was that commonly used successful trolling strategies differ according to the conversational context of the forum: political-covert or interest-overt. It was validated by a Chi-square analysis, which encourages further studies on the phenomenon with larger datasets. The second hypothesis was that covert strategies produce longer futile conversations in political arenas, whereas overt strategies drag on longer arguments in interest conversations. This claim was not supported by our statistical analyses at this point, but the data suggest it is plausible that larger datasets would yield better results.
A better dataset would include a larger number of conversations, ranging through a greater variety of topics on the political and interest axes, including also unsuccessful troll posts. It would also allow for a more specific analysis of different trolling strategies, like the ones that Hardaker [15] identified. Our data are, of course, insufficient at the moment due to their size and the limitations of sampling trolling based on conversation-inherent dynamics. For the moment, classification into a category of trolling strategies per Hardaker [15, p. 68] requires several posts from the troll to determine whether the poster could be trolling others. This requirement means that our analysis addresses only successful trolling attempts where even the smallest attempt has led to a desired effect (from the troll's point of view). Sampling and analyzing unsuccessful trolling as well is a problem to be resolved in future research, and will allow more conclusive findings.
We also have other considerations that future research needs to address. First, how exactly the nature of the conversational space and its norms (as theorized by Kirman et al. [19]) affects communicational breakdowns. Now, the results of this study already implicate that transgression of contextual norms involves using a matching trolling strategy: trolls create posts that have high cognitive relevance in the discussion space. They also show that trolling style is not bound to individual and unique situations only; there are more general patterns in trolling that transcend forum and topic boundaries (e.g. Brexit), and certain types of forums can be expected to be vulnerable to matching trolling strategies. In political discussions, this means assimilating to the fact-based style, seeming (superficially) well-informed and topically coherent, citing (pseudo-)scientific sources and referring to field specific terminology, while baiting others for instance with antagonistic interpretations of related information, epistemological controversy or incoherence. In contrast, the interest context seems to give focus to trolling that attacks the friendly and supportive discussion's main functions: here successful trolls do not require fact-based or topic-related expertise, high topical coherence or objectivity, but can instead overtly violate contextual boundaries by striking an emotional chord within the community. Thus, in the constant and multi-sided flow of posts with different and possibly overlapping agendas, the cognitive principle of relevance seems to dictate that posts matching with the functions of the discussion space gain most attention and manage to launch further discussions. The relatedness of more general contextual features and (successful) trolling strategies needs to be addressed more carefully in further research.
This also gives rise to further considerations beyond those that we put forward in our hypotheses. In particular, we find it worthwhile to consider relevance theory more broadly in the context of analyzing trolling. A relevance theoretical approach helps to further explicate the relationship between trolling and expecta-tions of context-specific posts. Firstly, why some troll posts are noticed in the discussion while others receive very little attention, and secondly, why people engage in selected communicational breakdowns, despite their redundancy, provocativeness and frustrating effects. In interest discussions, for instance, participants seem to pay attention to overt troll posts because they seek to resolve normviolations in order to reach common grounding and to maintain the friendly atmosphere. Arguably, participants on political forums put emphasis on factual correctness and enjoy sharing knowledge, which is why they are more inclined to be baited by epistemic incoherence or challenges against information they have provided. Thus, another possible course for future studies could involve deepening our understanding on how exactly discussion spaces give higher cognitive relevance to certain trolling strategies than others, e.g. why exactly certain posts are relevant to the people partaking in given discussions.
An issue to be aware of is that the results of the research presented in this paper, and in more extensive studies in the future, might be used for malicious purposes by aspiring trolls and bodies who are interested in large-scale misinformation campaigns. However, we believe that the results we presented here are mostly known to trolls already, whereas other discussants on online forums are probably less informed about trolling strategies. This makes them more vulnerable, which is why publishing these results should have a positive effect by raising awareness.
Assuming that the finding from H1 survives the test with a larger dataset, and H2 can eventually be proved, the implications are that we can expect certain types of online forums to be vulnerable to specific types of trolling strategies. The findings of this study already take us a step closer to identifying a given forum's weak spots that enable trolling behaviors, thus helping in predicting and detecting trolling attempts. Developing awareness of the type of lures trolls use to attack different conversational groups would arguably also improve conversants' resistance to trolls' harassment. Future studies with larger sets of data will likely enhance the opportunities for identifying trolling patterns out of larger collections of online conversations, and therefore take us closer to more accurate automatizations of trolling detection and prevention, and moderation practices. Considering the recent developments in organized trolling of political discussions, detecting trolling patterns in these arenas on a larger scale would help in battling trolling used in information operations and to ensure democratic public spaces for online civic discussion. On the other hand, this would also help in ensuring that minority groups, for instance, will have safe spaces for meeting others with similar experiences, not having to be terrorized by trolls who seek only to amuse themselves or to oppress others.
|
2020-10-21T05:04:33.829Z
|
2020-10-19T00:00:00.000
|
{
"year": 2020,
"sha1": "af13346b1082ac70e8d55ae358dfbf0461a17618",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-61841-4_13.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "af13346b1082ac70e8d55ae358dfbf0461a17618",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
219587256
|
pes2o/s2orc
|
v3-fos-license
|
Mature Retina Compensates Functionally for Partial Loss of Rod Photoreceptors
SUMMARY

Loss of primary neuronal inputs inevitably strikes every neural circuit. The deafferented circuit could propagate, amplify, or mitigate input loss, thus affecting the circuit's output. How the deafferented circuit contributes to the effect on the output is poorly understood because of lack of control over loss of and access to circuit elements. Here, we control the timing and degree of rod photoreceptor ablation in mature mouse retina and uncover compensation. Following loss of half of the rods, rod bipolar cells mitigate the loss by preserving voltage output. Such mitigation allows partial recovery of ganglion cell responses. We conclude that rod death is compensated for in the circuit because ganglion cell responses to stimulation of half of the rods in an unperturbed circuit are weaker than responses after death of half of the rods. The dominant mechanism of such compensation includes homeostatic regulation of inhibition to balance the loss of excitation.
INTRODUCTION
Degenerative diseases, injury, and normal aging can cause the death of primary neurons. Understanding the changes that happen in the resulting deafferented neural circuits is critical for diagnostic and therapeutic efforts to preserve and rescue function. Input loss may be propagated through a deafferented circuit, resulting in a decrease in output proportional to the decrease in input. Input loss may also be exacerbated, for example through degeneration of initially unaffected neurons, leading to a decrease in output more severe than the decrease in input. Alternatively, input loss may be compensated for within a deafferented circuit, resulting in a full or partial recovery of the output signal. To differentiate between these possibilities, we must investigate a circuit with known, controllable inputs and highly stereotypic outputs. To pinpoint the origin of compensation within a deafferented circuit, we use the retina, a system with accessible and identifiable neurons.
Previous studies of input loss in mature retina have performed focal lesions in primary sensory neurons and have demonstrated that the ganglion cell's spatial receptive fields fill in the resulting scotoma (Sher et al., 2013;Beier et al., 2017). Our study of partial cone loss further demonstrated that the ganglion cell's receptive field expanded its inhibitory surround yet maintained center-surround organization (Care et al., 2019)-a fundamental function of ganglion cell processing (Kuffler, 1953;Barlow, 1953;Atick and Redlich, 1990). A recent study using the same partial cone ablation suggested that specific bipolar cell types can regain photoreceptor contacts (Shen et al., 2020). These findings suggest the mature retina may compensate for input loss, but the mechanisms and extent of such compensation remain unclear.
Here, in mature retina, we induce death in half of rods, which in the mouse comprise half of the entire population of primary sensory neurons, and measure function throughout the partially deafferented circuit to identify potential sites of compensation. We record the output of the retina from alpha ON sustained ganglion cells (A ON-S GCs) (Margolis and Detwiler, 2007;van Wyk et al., 2009;Krieger et al., 2017). These cells are arguably the most well-characterized and sensitive ganglion cells in mouse retina and would therefore reflect changes in the circuit at low light levels dominated by rod input (Murphy and Rieke, 2006;Margolis and Detwiler, 2007;van Wyk et al., 2009;Krieger et al., 2017). Light responses initiated by rods proceed via synaptic transmission to rod bipolar cells (RBCs) and then to AII amacrine cells, which are electrically coupled to ON cone bipolar cells (CBCs) ( Figure 1A). The rod and cone pathways converge in the ON cone bipolar cells' axon terminals, which synapse onto ganglion cell dendrites. Similarly, we record from alpha OFF transient ganglion cells (A OFF-T GCs), which rely on a pathway that diverges at the AII amacrine cellto-OFF cone bipolar cell synapse, to generalize our results. We use these well-defined pathways to examine the consequences of partial rod death on the deafferented circuit.
We show that by the retinal output, the ganglion cells have largely compensated for half rod loss in their rod-mediated spikes and excitatory input currents. We localize compensation at the level of the rod bipolar cell, where reduced excitatory input currents are compensated for by reduced inhibitory currents resulting in recovered voltage outputs. Intriguingly, in the same ganglion cells that show recovered rod-mediated light responses, cone-mediated light responses are enhanced. These changes in cone-mediated, but not rod-mediated, responses are recapitulated by half stimulation of control retina, allowing us to differentiate reduced rod input from subsequent compensatory changes within the partially deafferented circuit.
Selective Ablation of Half of Rod Photoreceptors in Mature Mouse Retina
To induce partial rod loss, we injected diphtheria toxin (DT) into mice expressing the diphtheria toxin receptor (DTR) under the rhodopsin promoter at postnatal day 30 (Rho-DTR) ( Figure 1A). In this system, half of the rods are ablated upon examination 1 month after DT injection ( Figure 1B; STAR Methods). Control mice, either DTR-positive ornegative, were injected with saline. Rod death was confirmed in cross-sections of retina by quantifying the rows of somata present in the outer nuclear layer (ONL), which is composed of 97.2% rods (Jeon et al., 1998; Figure 1C). Two injections of DT consistently reduced the rod population by 50%-60% ( Figure 1D, ONL) (Control, 10.3 ± 0.44, n = 10; DTR, 4.1 ± 0.22 rows of somas, n = 7; median ± interquartile range [IQR], p = 4.57e-05, rank sum). To examine off-target effects, we quantified the rows of somata present in the inner nuclear layer (INL), which is composed of bipolar and amacrine cell somas, and found no change after rod death ( Figure 1D, INL) (control, 4.47 ± 0.27, n = 10; DTR, 4.33 ± 0.20 rows of somas, n = 7; median ± IQR, p = 0.371, rank sum). Quantification of cones, horizontal cells, rod bipolar cells, starburst amacrine cells, ganglion cells, and microglia immunostained in flat mount retina revealed no reduction ( Figure S1; Table S1). Furthermore, when the Rho-DTR mouse line was crossed to a fluorescent reporter mouse line, fluorescence was confined to rods ( Figure S2). With this system that selectively ablates rods, we aimed to understand how the mature retina reacts to input loss. The retinal reaction to input loss may be understood functionally either as propagation, exacerbation, or compensation of such loss ( Figure 1E). For instance, the functional effect of loss of 50% of the rod input may be propagated through the circuit, resulting in a loss of 50% of the retinal output. This would be evident as smaller responses ( Figure 1E, curve 1) and/or maximum amplitude ( Figure 1E, curve 2). If, on the other hand, the functional effect in the circuitry is exacerbated, then down-stream neurons will likely perform worse than input loss alone predicts. Alternatively, the functional effect of loss may be compensated for within the circuit, i.e., by an increase in gain, resulting in a restoration of the retinal output. Compensation would be evident as a response equal to ( Figure 1E, curve 3) or even greater than that of control ( Figure 1E, curve 4). We discriminate among these possibilities by using the well-defined retinal pathways to ganglion cells.
Rod-Mediated Charge and Spiking Output Recover Partially in Ganglion Cells after Rod Loss
To understand how rod pathways in mature retina react to the loss of half of their inputs, we measured the output of the retina by recording rod-mediated spikes from A ON-S ganglion cells. To stimulate rods, we presented flashes (10 ms) of blue light (470 nm) doubling in intensity from darkness. We used this stimulus in all experiments in which rods were preferentially stimulated. We recorded rod-mediated spike responses from control and Rho-DTR retina in cell-attached patch-clamp recordings ( Figure 2A) and quantified responses by plotting the total number of spikes elicited by each flash intensity ( Figure 2B). We fit these data with the Hill equation and used the fit parameters to compare responses from control and Rho-DTR retina ( Figure 2C; Table S2). After the loss of 60% of rods, the rod-mediated spike response of A ON-S ganglion cells showed decreases in the maximum response (R max ) and in the light intensity at half the maximum response (I½). This indicates that, after partial rod loss, the rod-mediated response of A ON-S ganglion cells has fewer spikes but responds at lower light levels. Such results could be consistent with the propagation of reduced rod input through the circuit. However, the average loss of rod-mediated spikes (R max reduced by 22%) is less than the average loss of rods (60%), suggesting that compensatory mechanisms in the mature retinal circuit act to mitigate the functional effects of rod loss. To understand the extent of this effect, we measured rod-mediated spikes in A OFF-T ganglion cells and found a similar mitigation of the effects of rod loss ( Figure S3; Table S3). Furthermore, the increase in sensitivity of the rod-mediated spikes suggests that gain of function can occur after partial rod loss.
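The exact parameterization of the Hill fit is not spelled out in this excerpt; a common form consistent with the reported parameters (a maximum response Rmax, a half-maximal intensity I½, and a Hill coefficient n) is shown below.

```latex
% Hill-type intensity-response function fit to the flash series.
% R_max, I_{1/2}, and the Hill coefficient n are the free parameters compared across cells.
R(I) = R_{\max}\,\frac{I^{\,n}}{I^{\,n} + I_{1/2}^{\,n}}
```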
To understand how these spike responses are generated, we recorded the rod-mediated input currents onto A ON-S ganglion cells (Figures 2D and 2E). Excitatory current amplitude was unchanged, but the R max of the integrated rod-mediated excitatory currents (charge transfer) was reduced after partial rod loss ( Figure 2F; Table S2). This indicates that the amplitude or duration of rod responses are diminished. The reduced charge may explain the reduction in rod-mediated spikes that we observed. The charge transfer of rod-mediated inhibitory currents was not significantly different in the fit parameters between control and Rho-DTR retina (Figures 2G-2I; Table S2). This indicates that rod-mediated inhibitory currents onto A ON-S ganglion cells are unaffected by or have recovered from partial rod loss.
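Here "charge transfer" denotes the time integral of the light-evoked synaptic current; written out below, with the integration limits standing for whatever response window the authors used (not specified in this excerpt).

```latex
% Charge transfer: time integral of the evoked synaptic current over the response window.
Q = \int_{t_0}^{t_1} I(t)\,\mathrm{d}t
```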
At Rod Light Levels, Intrinsic Excitability Is Maintained in A ON-S Ganglion Cells after Rod Loss
One possible cause of a decrease in rod-mediated spikes is a change in the current-to-spike transformation in the ganglion cell, i.e., intrinsic excitability. The transformation from currents to spikes includes voltage-gated conductances that are eliminated in the voltageclamp recordings described above. Thus, to measure the current-to-spike gain in the cells for which intensity-response relationships were recorded for both spikes and currents, we calculated the ratio of the number of spikes to the peak charge elicited at each flash intensity ( Figure 3A). A change in this ratio between control and Rho-DTR conditions would indicate that changes in voltage-gated conductances contribute to the observed decrease in spikes. We found no significant difference between the current-to-spike gain of cells from control and Rho-DTR retina at any of the flash intensities tested. This provides one line of evidence that compensation for input loss is not due to changes in the intrinsic excitability of A ON-S ganglion cells.
To further confirm that compensation occurs prior to the ganglion cell intrinsic excitability, we directly measured the current-to-spike transformation by injecting current into A ON-S ganglion cells in perforated patch-clamp configuration. This technique enables the simultaneous injection of a fluctuating white-noise current and measurement of the cell's spiking response (Kim and Rieke, 2001) ( Figure 3B). To capture the current-to-spike transformation, we estimated the linear filter and nonlinearity that generated the spike response from the input current for each cell ( Figure 3B, box). We found no significant differences in the linear filters or nonlinearities between cells from control and Rho-DTR retina in darkness, the condition that best simulates rod levels (linear filter: control versus Rho-DTR, p = 0.522; nonlinearity: control versus Rho-DTR, p = 0.667; control [Rho-DTR], n = 19 [20], permutation test). Both the current-to-spike ratio calculation and current injection support the conclusion that intrinsic excitability is maintained in A ON-S ganglion cells after rod loss. Thus, the site(s) of compensation are prior to the ganglion cell.
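A minimal sketch of this kind of linear-nonlinear estimation from a white-noise current injection is given below. The reverse-correlation approach, variable names, and analysis parameters (filter length, binning) are generic assumptions on our part, not the authors' analysis code.

```python
# Sketch: estimate a linear filter (spike-triggered average) and a static nonlinearity
# from an injected white-noise current and the recorded spike train.
import numpy as np

def estimate_ln_model(current, spikes, filter_len=50, n_bins=20):
    """current: injected current samples; spikes: 0/1 spike indicator of equal length."""
    current = np.asarray(current, dtype=float)
    spikes = np.asarray(spikes, dtype=float)
    spike_idx = np.nonzero(spikes)[0]
    spike_idx = spike_idx[spike_idx >= filter_len]
    # Linear filter: average current segment preceding each spike (spike-triggered average).
    sta = np.mean([current[i - filter_len:i] for i in spike_idx], axis=0)
    # Generator signal: projection of each preceding current segment onto the filter.
    gen = np.array([np.dot(current[i - filter_len:i], sta)
                    for i in range(filter_len, len(current))])
    spk = spikes[filter_len:]
    # Static nonlinearity: mean spike probability within each generator-signal bin.
    edges = np.quantile(gen, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.digitize(gen, edges[1:-1]), 0, n_bins - 1)
    nonlinearity = np.array([spk[bins == b].mean() if np.any(bins == b) else np.nan
                             for b in range(n_bins)])
    return sta, nonlinearity
```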
The Rod Bipolar Cell Is a Site of Compensation
Next, we examined other potential site(s) of compensation within the retinal circuit, upstream of A ON-S ganglion cells. For a population readout of photoreceptor and bipolar cell responses, we measured the electroretinogram (ERG) in vivo in control and Rho-DTR mice under dark-adapted (rod-mediated) conditions ( Figure 4A). In Rho-DTR mice, we found a significant reduction in the a-wave amplitude of the dark-adapted ERG, a measure proportional to the rod dark current. This finding indicates an overall decreased rod response in Rho-DTR retina ( Figure 4B). In contrast, the amplitude of the b-wave, which is a measure proportional to the overall rod bipolar cell and Müller glia cell responses, had recovered at these same light levels 1 month after DT injection. The ratio between the a- and b-waves at a subsaturating light level (9.73 photons/μm^2/s) was significantly different in control and Rho-DTR mice ≥1 month after DT injection (1 month control [Rho-DTR]: 0.349 ± 0.091 [0.173 ± 0.073], mean ± SD, p = 0.009, t test; ≥ 4 months: 0.363 ± 0.075 [0.255 ± 0.090], p = 0.036, t test; Figure S6), demonstrating that the b-wave amplitude was greater than predicted from simple propagation of the a-wave amplitude loss. In contrast, this ratio between the a- and b-waves was not significantly different between control and Rho-DTR mice 3 days after DT injection, before rod death was complete (3 days: 0.276 ± 0.064 [0.242 ± 0.066], mean ± SD, p = 0.363, t test; Figure S6; STAR Methods), demonstrating that the reduction in b-wave amplitude was propagated from the reduction in the a-wave amplitude in the Rho-DTR mice. This indicates that, after 1 month, rod bipolar cell output can be maintained despite a decrease in rod input ( Figure 4C). This finding suggests that compensation for rod loss occurs between the inner segment of the rods and the voltage output of the rod bipolar cells, which could include rod synaptic release, rod bipolar cell postsynaptic sites, and the current-to-voltage transformation within the rod bipolar cell.
The same mice were also stimulated under light-adapted (cone-mediated) conditions. In light-adapted conditions, in which the a-wave reflects cone activity and the b-wave reflects primarily ON cone bipolar cell responses, we observed no reduction in the amplitude of the a-or b-waves, indicating that population responses of cone photoreceptors and ON cone bipolar cells are not affected after 60% rod death ( Figures 4D-4F).
Decreased Excitatory and Inhibitory Inputs to Rod Bipolar Cells Yield a Recovered Voltage Response
To further investigate whether compensation for rod loss occurs between the rod inner segments and the rod bipolar cell voltage output, we recorded directly from rod bipolar cells. We measured rod-mediated responses from rod bipolar cells in whole-cell current-clamp and voltage-clamp configurations ( Figure 5). Recordings were made in the slice preparations and confirmed in whole-mount retina. Rod bipolar cells were identified by their location within the inner nuclear layer and ON light response in combination with polarity reversal at the reversal potential for excitatory currents. In contrast, ON cone bipolar cells had light responses that could not be reversed because of gap junctions (Veruki and Hartveit, 2002). The peak amplitude of the rod-mediated response was used to construct intensity-response relationships for individual rod bipolar cells. As described for the ganglion cells, these data were fit with the Hill equation and the parameters for the best-fit curves were used to compare across cells. After partial rod loss, we found no change in the peak amplitude of the voltage response of rod bipolar cells (Figures 5A-5C; Table S4). This aligns with results from the ERG and indicates that full compensation for the decreased rod input is achieved before the rod-mediated signal leaves the rod bipolar cell.
To understand how this voltage response is generated, we measured the excitatory (Figures 5D-5F) and inhibitory currents (Figures 5G-5I) onto rod bipolar cells under voltage clamp.
Following partial rod loss, we found a significant decrease in the R max for both excitatory and inhibitory currents (Table S4). Excitatory currents were reduced on average by 53%, which reflects the percentage of rod loss. Inhibitory currents were reduced on average by 94%. The nearly complete loss of inhibition indicates that the effect of rod loss on inhibition is greater than the loss of excitatory input.
One explanation for the enhanced voltage response is that a compensatory mechanism is engaged to reduce inhibition in order to balance reduced excitation. Alternatively, these data may suggest that inhibition is stimulated in an all-or-nothing manner. In addition to a loss of input, partial rod death could also change the rod bipolar cell response to remaining rods. Such a change would be evident as an increase or decrease in sensitivity, represented by the I½ parameter. We found no change in the sensitivity of excitatory currents, which suggests that the remaining rod-to-rod bipolar cell synapses are unchanged, and that the site of compensation is the inhibitory currents. Anatomically, quantifications of rod bipolar cell synaptic ribbons and inhibitory postsynaptic puncta densities show no significant difference between control and Rho-DTR. Thus, physiological adaptive changes are not matched by anatomical synaptic changes in this case ( Figure S4). Preliminary results show that the number of dendritic tips in rod bipolar cells is halved, suggesting the total membrane area of the rod bipolar cell has decreased and the input resistance has increased ( Figure S5; Anastassov et al., 2019). An increase in resistance in Rho-DTR rod bipolar cells could contribute to the recovery of voltage output with diminished current input. Thus, we interpret that a reduction in inhibition potentially balances the reduction in excitation due to rod loss, allowing rod bipolar cells in Rho-DTR retina to generate voltage outputs comparable to control retina.
To summarize thus far, after 50%-60% rod death, the output of the rod population is reduced, and both excitation and inhibition onto rod bipolar cells are reduced. Consistent with the larger reduction in inhibition than in excitation, the voltage output of rod bipolar cells is maintained. The excitatory input onto A ON-S ganglion cells is thus partially recovered, which generates partially recovered spike responses to rod stimuli.
Cone-Mediated Charge and Spiking Output Increase in A ON-S Ganglion Cells after Rod Loss
In the primary rod pathway, rod-mediated signals reach ganglion cells via the axon terminals of ON cone bipolar cells (Figure 1A). Therefore, another possible site for signal amplification through the primary rod pathway is at the cone bipolar-to-ganglion cell synapse. To isolate this section of the circuit, we used a short-wavelength (S)-cone-preferring stimulus composed of flashes (10 ms) from a short-wavelength LED (370 nm) doubling in intensity on a blue mean to adapt rods. To understand whether signaling through the cone pathway is affected by partial rod loss, we recorded the cone-mediated spike response from A ON-S ganglion cells (Figures 6A-6C). We found significant increases in R max and I½, as well as a decrease in the exponent in Rho-DTR retina (Table S5). This finding indicates that the cone-mediated spike response is increased in amplitude and decreased in sensitivity after rod loss, and more generally, that the loss of rods affects signaling through the cone pathway. To further investigate the source of this increased spiking, we measured the cone-mediated excitatory (Figures 6D-6F) and inhibitory (Figures 6G-6I) currents onto A ON-S ganglion cells. We found that cone-mediated excitatory currents, similar to the cone-mediated spike output, showed an increase in R max and decrease in the exponent (Table S5). Cone-mediated inhibitory currents showed no change after rod loss. This suggests that the increased spiking in response to cone-preferring stimuli may be driven by increased excitatory input from the ON cone bipolar to the ganglion cell. Alternatively, the loss of rods might directly affect cone signals (see Discussion).
We had previously eliminated the possibility that changes in intrinsic excitability in the A ON-S ganglion cells underlie changes in rod-mediated spikes. Here, we consider the possibility that retinal neurons are in a different light-adaptation state at cone light levels, thus explaining the increased spiking in A ON-S ganglion cells. We compared the ratio of cone-mediated spikes to excitatory current responses and found no significant differences between control and Rho-DTR conditions ( Figure 6J). Furthermore, we injected white noise current with a rod-adapting mean to measure the intrinsic excitability of A ON-S ganglion cells for cone-mediated signals and found no change in either the linear filter or the nonlinearity after rod loss ( Figure 6K; linear filter: control versus Rho-DTR, p = 0.125; nonlinearity: control versus Rho-DTR, p = 0.410; control [Rho-DTR] n = 12 [12]; permutation test). Both experiments demonstrate that the increased spiking does not arise from increased intrinsic excitability within the ganglion cell itself.
Partial Stimulation of Rods in Control Retina Does Not Mimic Rod-Mediated Light Responses after Partial Rod Death
We next aimed to understand whether the changes in A ON-S ganglion cell light responses after 50%-60% rod death were attributable to the propagation of lost input through the retinal circuit, mechanisms existing in control retina, and/or active compensation for lost input. To answer this question, we designed an experiment to measure the retinal response to 50% of inputs without the contribution of any circuitry changes, e.g., caused by cell death or prolonged deficit of input. Control cells were stimulated either fully with a spot of light or partially with only half of the spot on the cell's receptive field. The response to the half stimulus is a direct readout of 50% of inputs, thus providing a benchmark for what the light response in Rho-DTR retina might be if no compensatory mechanisms were active after the death of 50% of rods. Comparison of responses to the full and half stimulation against control and Rho-DTR retina reveals how the remaining partially deafferented circuit in Rho-DTR retina differs from control retina.
In the rod-mediated spike response to the half stimulus, we found a significant reduction in R max , indicating a decrease in response amplitude, as well as an increase in I½, indicating a decrease in sensitivity ( Figures 7A and 7B). R max decreased on average by 49% and the sensitivity decreased on average by 52% (Table S6), suggesting that stimulating half of rods generates a proportional decrease in response amplitude and sensitivity. In contrast, in Rho-DTR retina, R max of the rod-mediated spike response decreased by only 22%, suggesting that compensatory mechanisms have partially recovered the response after rod death. Furthermore, the sensitivity of the spike response decreased with half stimulation but increased in Rho-DTR retina, a further indication that the Rho-DTR light response is not simply passive propagation of reduced rod stimulation. Since these rod-mediated responses to half stimulation of control retina differ from those in the Rho-DTR retina, these data demonstrate that stimulating half of the rods is functionally distinct from ablating half of the rods ( Figure 7C).
Partial Stimulation of Cones Mimics Cone-Mediated Light Responses after Partial Rod Death
To understand whether the changes we observed in the cone-mediated light responses in Rho-DTR retina were attributable to existing mechanisms in control retina, rod death, and/or subsequent compensation, we presented cone-preferring stimuli in the full versus half conditions. In response to half stimulation, R max and I½ of the cone-mediated spike response in control retina increased (Figures 7D and 7E; Table S6). This finding mimics the results from Rho-DTR retina, and therefore indicates that this increase in cone-mediated spiking is generated by a mechanism that is present in control retina and not a result of rod death. The half stimulation does not replicate the condition of 50% rod stimulation and 100% cone stimulation, which occurs with Rho-DTR retina, because cones can only be selectively stimulated with the rod-adapting background. Instead, the half stimulation achieves 50% rod stimulation and 50% cone stimulation. Despite this partial stimulation of cones, the results from half stimulation match the results from Rho-DTR retina (Figure 7F). In erring on the side of less cone stimulation than occurs in Rho-DTR retina, we are unable to draw conclusions about the magnitude of this result, but we are able to conclude that cone-mediated signaling increases in the half stimulation condition. Results show that cone-mediated spiking in A ON-S ganglion cells is enhanced with partial stimulation, similarly to partial rod death. These findings demonstrate that mechanisms that enhance cone-mediated signaling exist in control retina and are independent of compensation within the rod pathway after partial rod death.
DISCUSSION
To understand the functional impact of primary neuron death on the mature circuit requires control over the timing and extent of death. In this study, we induced death of approximately half of the rods after development ( Figure 1) and recorded light responses throughout the circuit. At the output of the retina in ganglion cells, we found partial recovery of excitatory current charge and the number of spikes elicited by rod stimuli (Figure 2), which was not accounted for by the intrinsic excitability of these cells (Figure 3). While output of rods is reduced by a degree consistent with rod loss, rod bipolar cell output is reduced less than predicted by the reduction in rod responses (Figure 4). These results suggest that recovery happens between the rod and rod bipolar cell output. Direct recordings from rod bipolar cells indicated that decreased excitatory and inhibitory currents may balance to generate recovered voltage responses ( Figure 5). To probe the circuitry components that are part of both the primary rod pathway and the cone pathway, we measured ganglion cell light responses to cone stimuli and found an increase in cone-mediated spiking, driven by increased excitatory current charge and not by amplification in intrinsic excitability ( Figure 6). Finally, half stimulation of control retina revealed circuit changes different from those observed after half ablation of rods. We demonstrated that the changes in cone-mediated light responses after death of half of the rods in Rho-DTR retina are similar to those that occur with stimulation of half of photoreceptors in control retina, indicating that the cone pathway withstands partial rod death. In contrast, we demonstrated that rod-mediated light responses in Rho-DTR retina differ from those that occur with half stimulation of control retina, indicating that after rod death the mature retina engages de novo mechanisms to restore functional output (Figure 7).
Effects of Cell Death on the Resting Activity in the Deafferented Circuit
One question is whether rod death in the mature retina could be comparable to raising the background light level in control retina because, presumably, the overall resting glutamate release has decreased following partial rod loss. In one respect, in Rho-DTR retina, ganglion cells exhibit faster responses consistent with light-adapted retina; however, in another respect, their sensitivity is increased rather than decreased, inconsistent with light-adapted retina. This increase in sensitivity can be accounted for by decreased inhibition onto rod bipolar cells, which acts as a compensatory mechanism for rod loss. Taken together, these results strongly suggest that the compensation mechanism after rod death is distinct from those responsible for adaptation. Below, we discuss these effects.
An important feature of circuit function is the resting neurotransmitter release, which sets the state of the circuit. Ablation can change this state. In partial ablation of the vestibular system, resting activity was reduced after deafferentation (Shimazu and Precht, 1966; Hoshino and Pompeiano, 1977). Similarly, in our study, we speculate that resting glutamate release by photoreceptors, which contributes to the retina's adaptation state, is disrupted after partial rod ablation. The half stimulation experiment can distinguish between some of the effects of partial stimulation and partial ablation; however, we acknowledge that the effects of stimulation and ablation may be different. The half stimulation of control retina and half rod ablation in the Rho-DTR have distinct consequences that allow us to draw conclusions about the partially deafferented circuit. We compare three conditions: (condition 1) full stimulation of control retina, (condition 2) half stimulation of control retina, and (condition 3) full stimulation of Rho-DTR retina. We consider rod-mediated, then cone-mediated light responses.
In signaling rod-mediated light responses on a dark background, Rho-DTR is missing rods that would otherwise convey resting activity, i.e., signaling darkness, to the retina. As a prediction, the Rho-DTR (condition 3) would be light-adapted compared to half stimulation of control retina (condition 2), because only half the resting activity is being conveyed by the remaining rods. As signatures of a light-adapted retina, the Rho-DTR (condition 3) neurons would be expected to have faster and less sensitive responses. Indeed, ganglion cells in Rho-DTR have rod-mediated responses faster than in control retinas, indicative of a light-adapted retina. However, Rho-DTR ganglion cells have rod-mediated responses that are more, rather than less, sensitive, indicating that compensatory mechanisms other than light adaptation are engaged.
For cone-mediated light responses on a mean background, full stimulation of control retina has the full complement of rods stimulated by the mean background (condition 1), i.e., most light-adapted. Half stimulation of control retina has half of rods stimulated by the mean background, while the other half of rods signals darkness (condition 2), i.e., least light-adapted. In between these extremes, Rho-DTR retina is missing rods that would signal the mean background, i.e., the retina signals less light than is presented (condition 3). Indeed, ganglion cells in the full stimulation (condition 1) have lower sensitivities and faster responses, both signatures of light-adapted retina. In contrast, ganglion cells in both the half stimulation (condition 2) and Rho-DTR (condition 3) have greater sensitivities and qualitatively slower responses, both signatures of dark-adapted retina. In addition, ganglion cell responses in Rho-DTR (condition 3) show greater sensitivity than those in half stimulation (condition 2), despite the expectation that Rho-DTR is more light-adapted, providing further evidence for compensatory mechanisms that are invoked by ablation but not by half stimulation.
Influence of Input Loss on Sites of Compensation
Classic studies of the deafferented circuit have been done in the lesioned vestibular system and found compensation in vestibulo-ocular and vestibulo-spinal functions for input lost after removal of one vestibular labyrinth (Precht, 1986). Following input loss, inhibition is decreased in the vestibular system (Shimazu and Precht, 1966;Markham et al., 1977). Previous work in visual cortex has also demonstrated that inhibitory circuits exhibit more structural plasticity than excitatory circuits after monocular deprivation (Villa et al., 2016). Similarly, our findings localize the site of compensation for input loss to the inhibitory currents onto the rod bipolar cell. Rod bipolar cells have excitatory currents reduced by 53%, and inhibitory currents reduced by 94%. The recovery of voltage responses could result from the rod bipolar cell losing part of its dendritic tree, becoming more electrically compact, increasing its input resistance, and thus enabling a smaller current to produce a voltage response comparable to control conditions. The end result is that more distal sites of compensation in the rod pathway are obviated by recovery at the rod bipolar cell voltage.
Our previous study on partial cone loss revealed that inhibitory surrounds of A ON-S ganglion cells expanded, while excitatory centers of these same cells shrank, indicating not only that the inhibitory surround is affected by partial cone loss, but that it may also be a site of compensation for input loss (Care et al., 2019). In contrast to these findings, the present study demonstrates that the inhibitory circuits onto ganglion cells remain relatively stable. The difference between our previous study with cone ablation and the present study with rod ablation could be the relatively greater photoreceptor loss with rods, i.e., 50% of total photoreceptors, compared with cones, i.e., 1.5% of total photoreceptors. We speculate that the degree of input loss may determine the site of compensation. With less input loss, as was the case after partial cone ablation, the site of compensation is late in the circuit because such mild deficits require greater convergence between the input and postsynaptic cell to detect, i.e., greater spatial integration. With greater input loss, as demonstrated in this study with partial ablation of rods, the site of compensation can be earlier in the circuit because major deficits require less convergence between the input and postsynaptic cell to detect, i.e., less spatial integration. Multiple sites of compensation harken back to our understanding of multiple sites of gain control that operate under different light levels: at dim light levels, gain control occurs later in the circuit between secondary and tertiary neurons; for brighter light levels, gain control occurs earlier in the circuit at primary sensory receptors (Dunn et al., 2006). Perhaps similar distributed compensatory mechanisms at multiple sites are engaged after varying degrees of input loss.
Implications for Neurodegenerative Diseases, Acute Loss of Sensory Input, and Earlier Diagnosis
Acute ablation of rod photoreceptors in mature retina can mimic the degree of photoreceptor loss at a specific time point in genetic diseases broadly classified as retinitis pigmentosa in which rods die for various reasons; however, our method of ablation did not mimic the degenerative aspects of these diseases. At intervals of ≥4 months since ablation, the remaining rod population remained stable and initial rod death did not induce further rod loss (Table S7). In this regard, acute rod ablation cannot mimic progressive degeneration, yet perhaps we can garner insight from results reported here. First, the relative stability of secondary neurons upon ablation of primary sensory receptors in a mature circuit is consistent with observations in the auditory system upon ablation of mature hair cells (Tong et al., 2015;Kurioka et al., 2016). Such insight provides promise for therapeutics that capitalize on existing circuits, e.g., electrical stimulation of or genetically engineering light sensitivity into the surviving neurons (reviewed in Wood et al., 2019). Second, as discussed below, the physiological state of a circuit with half its sensory receptors appears distinct in ways that diagnostic tests could capitalize upon.
Our work provides evidence for the independence of rod and cone pathways despite convergence at the cone bipolar-to-ganglion cell synapse. When rods are ablated, cone-mediated responses in ganglion cells can be mimicked by half stimulation of control retina, i.e., the cone pathway remains intact. Such findings, alongside evidence for compensation within the deafferented circuit, may explain why photoreceptor degeneration evades detection both by the patient reporting vision loss and by diagnostics of visual sensitivity and acuity (Ratnam et al., 2013). Greater than half of the cones must be missing before visual deficits start to present clinically. Commonly used tests for visual sensitivity and acuity are conducted at a single background light level. In our simulation of photoreceptor loss by partial stimulation, we have uncovered how the deafferented retinal circuit, while generally functional, differs from that of an unperturbed retina. One prediction is that at a single background, changes in kinetics and sensitivity following photoreceptor loss may be subtle enough to be mistaken for normal. However, if threshold detection is measured at multiple backgrounds, photoreceptor loss may present as kinetic or sensitivity changes that are consistent with a more light-adapted state than expected in unperturbed retina. Consistent with this prediction, the electrical response filter, i.e., spike-triggered average, of ganglion cell responses to electrical stimulation in rd10 retina, a model of rod degeneration, is faster as if light-adapted compared to the control ganglion cells (Sekhar et al., 2017). Whether the light-adapted state can be used as a diagnostic tool requires that the rest of the visual system has not masked changes at the level of the retina or that the method of testing isolates retinal responses, e.g., electroretinogram. The present study has the potential to link mechanistic insight gained from mouse retina to clinically relevant efforts to create diagnostic tests for earlier detection of photoreceptor loss.
STAR★METHODS

RESOURCE AVAILABILITY
Lead Contact-Further information and requests for resources and reagents should be directed to and will be fulfilled by the Lead Contact, Felice Dunn (Felice.Dunn@ucsf.edu).
Materials Availability-Unique reagents generated in this study are available upon request with a completed Materials Transfer Agreement.
Data and Code Availability-The code generated during this study for image analysis is available at https://lucadellasantina.github.io/ObjectFinder/. The code generated during this study for physiology analysis is available upon request.
The datasets supporting the current study are available upon request.
EXPERIMENTAL MODEL AND SUBJECT DETAILS
Mice-All procedures were done in accordance with the University of California, San Francisco Institutional Animal Care and Use protocols. The following transgenic mouse lines were crossed: Rho-iCre for Cre-recombinase expression in rods and Rosa26-loxP-stop-loxP-DTR (Buch et al., 2005) for Cre-dependent expression of the simian diphtheria toxin receptor. When crossed to a fluorescent reporter line Ai6 (Madisen et al., 2010), the Rho-iCre revealed high specificity to the rod population, with no cone pedicles and extremely rare cell bodies in the inner nuclear layer and ganglion cell layer labeled (Figure S2). These transgenic mice were back-crossed into the C57BL/6J background. Male and female mice were used for experiments. Diphtheria toxin injections were done between P30-40 at dosages of 100 ng/g for 2 injections administered 7 days apart (Care et al., 2019). Mice injected with an equivalent volume of saline were used as littermate controls. Quantification of rod loss demonstrates uniform 50-60% rod loss across each section. All experiments in the main figures were conducted 1 month after DT injection. Two additional cohorts of mice were examined: (1) one at 3 days after the second DT injection revealed <10% rod death by this time point (number of rows of cell bodies in the ONL was significantly lower after DT injection: control [DTR] = 11 ± 0.67; number of observations and animals: 12 and 6 [10 ± 2; number of observations and animals: 14 and 7], median ± IQR, p = 0.0037, rank sum test; but not significantly different in the number of rows in the INL: control [DTR] = 4.67 ± 0.33 [4.33 ± 0.67], median ± IQR, p = 0.101, rank sum test; data not shown), and (2) another at ≥4 months after DT injection that revealed rod death remains at 50-60%, but there are further changes to the bipolar cell responses (Figures S6 and S7; Table S7). In this ablation system, rod death reaches an asymptote between 3 and 30 days following DT injection.
METHOD DETAILS
Tissue preparation for immunostaining-Immunostaining protocols were identical to those described previously (Care et al., 2019). Reagents are listed in the Key Resources Table.

Quantification of cell death-To count photoreceptor and interneuron cell bodies, sample preparations were identical to those described previously (Care et al., 2019).

Electrophysiology tissue preparation-Procedures for the flat mount preparation for recording from alpha ON-sustained ganglion cells (abbr. A ON-S ganglion cells) (Bleckert et al., 2014) and from rod bipolar cells were identical to those described previously (Care et al., 2019). Recordings were done in ventral-nasal retina where the largest of the A ON-S ganglion cells reside and where short (S)-wavelength sensitive opsin dominates.
Procedures for slice preparation for recording from rod bipolar cells were similar to those described previously (Dunn et al., 2006). Briefly, the isolated retina was embedded in 3% low-melting agar in oxygenated HEPES-buffered Ames and sliced into 200 μm sections on a Vibratome 1200S (Leica). Slices were chosen based on the accessibility of the rod bipolar cells and the intactness of the entire section.
Patch-clamp recordings-Patch-clamp recordings from ganglion cells were identical to those described previously (Care et al., 2019). Patch-clamp recordings from rod bipolar cells were made with electrodes pulled from borosilicate glass (Sutter Instruments) on a DMZ puller (Zeitz) to 10-15 MOhm resistance. The electrode internal solution was either cesium methane sulfonate (Care et al., 2019) or potassium aspartate containing (in mM): 125 potassium aspartate, 1 MgCl2, 10 KCl, 1 CaCl2, 10 HEPES, 2 EGTA, 4 ATP, 0.5 GTP, adjusted to pH 7.2 with KOH, adjusted to 273-279 mOsm with potassium aspartate, and 0.04% Lucifer Yellow dye. For perforated patch-clamp recordings in Figures 3 and 6, amphotericin-B (0.05 mg/ml) was added to the potassium aspartate internal solution in the back portion, but not the front tip, of the electrode.
Cell identification-Identification of A ON-S ganglion cells included a sustained spiking response to a 500 ms light step and immunolabeling as described previously (Care et al., 2019).
Rod bipolar cells were targeted by the soma location in the outermost layer of the inner nuclear layer, next to the outer plexiform layer. Identification of rod bipolar cells included ON light responses that could be reversed at positive holding potentials and immunolabeling that revealed large axon terminals in the innermost layer of the inner plexiform layer and colocalization with protein kinase C alpha (PKCalpha).
Light stimuli-Light stimuli were generated by three LEDs with single peaks at 390 nm, 405 nm, and 470 nm. For rod-mediated stimuli, a 10 ms flash of the 470 nm LED was presented on a dark background. For cone-mediated stimuli, a 10 ms flash of the 370 nm or 405 nm LEDs was presented on a mean background of 4000 rod isomerizations per rod per second (Rh*/rod/sec) with the 470 nm LED to adapt the rods. The stimuli were presented through a circular aperture 900 μm in diameter. For partial stimulation, this spot was displaced so that approximately half of the spot was on the ganglion cell's receptive field.
Electroretinogram recordings-Procedures for the electroretinograms (ERGs) were identical to those described previously (Care et al., 2019), using equipment from Diagnosys LLC (Lowell, MA), with the following differences. The b-wave amplitude was measured from the peak of the a-wave to the second-highest positive peak because oscillatory potentials occur during the b-wave. The a-wave to b-wave ratio was measured as the ratio between the amplitude of the a-wave and the amplitude of the b-wave in response to a flash of 9.73 photons*μm−2*s−1, at which the b-wave amplitude of the dark-adapted ERG approaches saturation (Perlman, 1983).
QUANTIFICATION AND STATISTICAL ANALYSIS
Quantification of intensity response relationships-Responses of each cell were measured 5-10 times at each light level. Analysis parameters (number of spikes, peak current amplitude, charge) were measured for each individual response and averaged at each light level within each cell. For each cell, these averages were plotted against the light intensity that elicited the response, and the resulting plot was fit with the Hill equation in Igor Pro:

$$R(I) = \mathrm{base} + R_{max}\,\frac{I^{Exp}}{I^{Exp} + I_{1/2}^{Exp}}$$

The parameters of this fit, the baseline (base), the maximum (referred to as R max ), the exponent (referred to as Exp), and the intensity at half maximum (referred to as I½), are shown as histograms in Figures 2, 5, and 6. The intensity at half maximum is referred to as "sensitivity," though we only accounted for the average, and not the variability, in the responses.
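A minimal Python equivalent of this fit (the study used Igor Pro) is sketched below, assuming the Hill form given above; the intensity and response values are hypothetical.

```python
# Sketch of the Hill-equation fit to an intensity-response relationship.
import numpy as np
from scipy.optimize import curve_fit

def hill(I, base, r_max, i_half, exp):
    return base + r_max * I**exp / (I**exp + i_half**exp)

intensity = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])        # e.g., R*/rod
response = np.array([1, 2, 5, 11, 19, 26, 29, 30], float)  # mean per cell

popt, _ = curve_fit(hill, intensity, response,
                    p0=[0.0, response.max(), np.median(intensity), 1.0],
                    maxfev=10_000)
base, r_max, i_half, exp = popt
print(f"Rmax = {r_max:.1f}, I1/2 = {i_half:.2f}, Exp = {exp:.2f}")
```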
To construct the average intensity response plots in panels B, E, and H of these figures, as well as the left side of Figures 7B and 7E, the average intensity response plots generated for each cell were averaged before the Hill equation fit. Points at light intensities within 30% of each other were combined (shown with horizontal error bars which are smaller than the point marker in most cases).
Linear-Nonlinear Filters-To determine the intrinsic excitability of A ON-S ganglion cells, we made perforated current-clamp recordings as described above. After establishing access, either the background was kept dark (Figure 3) or rods were adapted down with a constant blue mean at 4000 Rh*/rod/sec (Figure 6). White noise current was injected with a 1000 Hz frequency cutoff and 500 pA standard deviation with an upper and lower limit of ±200 pA. The standard deviation was determined empirically to obtain a full input-output function. The mean was at 0 pA unless holding current was required to keep spontaneous spiking less than 1 Hz and to keep the resting membrane potential at approximately −60 mV.
To calculate the linear filter we followed Baccus and Meister (Baccus and Meister, 2002). Briefly, the linear filter, F(t), was the correlation of the stimulus, s(t), and the response, r(t), normalized by the autocorrelation of the stimulus. To then calculate the nonlinear response function, N(g), we convolved the stimulus with the linear filter to get the generator potential g(t), which was then plotted against r(t), averaging values of r over bins of g containing an equal number of points. The nonlinearity was fit with a sigmoid function (Figures 3 and 6).
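The filter and nonlinearity estimation just described can be sketched as follows; this is an illustrative Python version of the Baccus and Meister (2002) procedure under a circular-boundary assumption, with hypothetical stimulus and response arrays.

```python
# Sketch of the linear-nonlinear (LN) estimation: the filter is the
# stimulus-response cross-correlation normalized by the stimulus
# autocorrelation (computed here in the Fourier domain), and the nonlinearity
# is the binned relation between the generator potential g(t) and r(t).
import numpy as np

def ln_model(stim, resp, n_bins=20):
    S, R = np.fft.rfft(stim), np.fft.rfft(resp)
    F = np.fft.irfft(np.conj(S) * R / (np.abs(S) ** 2 + 1e-12), n=len(stim))
    # Generator potential: stimulus convolved with the filter (circular).
    g = np.fft.irfft(np.fft.rfft(stim) * np.fft.rfft(F), n=len(stim))
    # Nonlinearity: average r in bins of g holding equal numbers of points.
    order = np.argsort(g)
    g_bins = [b.mean() for b in np.array_split(g[order], n_bins)]
    r_bins = [b.mean() for b in np.array_split(resp[order], n_bins)]
    return F, np.array(g_bins), np.array(r_bins)

rng = np.random.default_rng(0)
stim = rng.normal(size=4000)                   # white-noise "current"
resp = np.clip(np.convolve(stim, np.exp(-np.arange(50) / 10), "same"), 0, None)
F, g, N = ln_model(stim, resp)
```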
Statistical analysis-In the histograms, medians are indicated by arrowheads. To identify significant differences between conditions, a Wilcoxon rank sum test (abbr. rank sum) was used for Figures 1-6, and a Wilcoxon sign rank test (abbr. sign rank) was used for paired data (Figure 7). A permutation test was used to compare linear-nonlinear filters (Figures 3 and 6). The permutation test took the root mean squared difference between the averages of the two populations. This difference was compared to random chance by permuting the categories of cells to form two populations 10,000 times and calculating the root mean squared difference for each permutation. The p values were determined by comparing the difference from the actual populations to those from the permuted populations.
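A sketch of this permutation test in Python (inputs are hypothetical; each group array holds one filter or nonlinearity per cell):

```python
# Permutation test on the RMS difference between group-averaged filters.
import numpy as np

def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    rms = lambda a, b: np.sqrt(np.mean((a.mean(0) - b.mean(0)) ** 2))
    observed = rms(group_a, group_b)
    pooled, n_a = np.vstack([group_a, group_b]), len(group_a)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))     # shuffle cell labels
        exceed += rms(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= observed
    return exceed / n_perm                     # permutation p value
```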
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
Highlights

• Mature retina recovers functionally from loss of half of rod primary sensory neurons

• Reduced inhibition at secondary neurons functionally compensates for excitation loss

• Compensation from rod loss is not recapitulated by half stimulation of control retina

• Although using the same output neuron, the cone pathway withstands loss of half of rods

[Figure 1 legend residue: "...(curves 1 or 2). If, however, the effects of rod ablation are partially or fully compensated by postsynaptic neurons, then light responses postsynaptic to the rod could exhibit partial or full recovery, e.g., a response that is greater than predicted based on the degree of rod loss (curves 3 or 4). See also Figures S1, S2, and S7, and Tables S1 and S7."]

[Figure 2 legend residue: "(I) As described above for fits to the inhibitory charge for individual cells. See also Figure S3 and Tables S2 and S3."]
Figure 3. Intrinsic Excitability of A ON-S Ganglion Cells Is Maintained at Rod Light Levels after Rod Loss
(A) Cartoons and example traces of the recordings used (left) in the calculation of the ratio of rod-mediated spike count to charge for each A ON-S ganglion cell in which both measurements were made sequentially in the same cell (right). Points are mean ± SEM. (B) Test of intrinsic excitability. Example of current injected through the patch pipette (left) and the resulting spikes (right) recorded in perforated patch configuration. Background was kept dark during the duration of the current injections, which were 40-s epochs for six or more repeats. (Box) Time-reversed spike-triggered average (left in box) and the nonlinearity for the example cell (right in box). Nonlinearity fit with a sigmoid function (gray).
(C) Time-reversed spike-triggered average (left) and average nonlinearity (right) of the linear-nonlinear model calculated from spike responses to white noise current injections (mean ± SEM). For the nonlinearity, the abscissa represents the convolution between the spike-triggered average and the stimulus in units of standard deviation, i.e., the linear prediction or generator potential. The ordinate represents the spike rate. The nonlinearity for each cell was interpolated and smoothed with a spline function. Permutation test shows that neither the linear filter nor nonlinearity are significantly different between control and Rho-DTR conditions.
Figure 4. Rod-Mediated Responses Compromised but Postsynaptic and Cone-Mediated Responses Preserved in the Electroretinogram
(A) Example in vivo electroretinogram of control (black) and Rho-DTR (red) mice taken in the dark-adapted, rod-mediated condition at 2.919 photons*μm−2*s−1. Amplitude of the a-wave was measured from baseline to the trough of the first negative peak. Amplitude of the b-wave was measured from the trough of the first negative peak to the second-highest positive peak. (B) Average amplitude of the dark-adapted a-wave, which is the rod-mediated voltage response in the waveform, as a function of light intensity. Points are mean ± SEM. Significant differences between response amplitudes at each light intensity are denoted by asterisks above each pair of points (t test). Light intensities (p value): 0.973 photons*μm−2*s−1 (0.030); 1.946 (0.0173); 2.919 (0.0043); 9.73 (0.0043); 29.19 (0.0087); and 97.3 (0.0043).
(C) Average amplitude of the dark-adapted b-wave, which is the rod bipolar cell-mediated voltage response in the waveform, as a function of light intensity. Points are mean ± SEM.
Impact of Socioeconomic Inequalities on Dental Caries Status in Sardinian Children
Background: The association between oral health of schoolchildren living in the North Sardinia area and socioeconomic deprivation was assessed to evaluate a potential spatial correlation. Methods: A total of 10,947 subjects were examined (5281 aged 3-5 years and 5666 aged 6-11 years). The WHO dmft index score was calculated following clinical examination by calibrated examiners. The Sardinian Deprivation Index (IDMS) of the children's municipalities was also considered. Descriptive, bivariate and multinomial data analysis was conducted to assess the association between clinical data and socioeconomic deprivation. The presence of systematic spatial variation regarding caries experience (dmft) and deprivation status was investigated using a spatial autoregressive analysis. Results: Caries figures were statistically different in the two age groups (dmf > 0, 13.79% in the younger group vs. dmf > 0, 34.20% in the older one, p < 0.01). In a multinomial logistic regression model for caries experience, all the covariates were statistically significantly associated (p < 0.01) in comparison with the base outcome "caries-free". Linear regression analysis showed a dependence of dmft on IDMS (p < 0.01). Based on this equation, the dmft of the 39 municipalities that did not participate in the survey was estimated. IDMS was statistically significantly associated (p < 0.01) with caries prevalence in the spatial regression model. Conclusions: The deprivation index significantly increased the risk of caries for all categories of caries experience and prevalence compared to caries-free. The relationship between IDMS and caries data was also confirmed by spatial analysis.
Introduction
Dental caries result from a years-long chain of events in which clinical, microbiological, behavioral, and social factors play a role. In developed countries, over the past 40 years there has been a substantial decrease in the prevalence of dental caries in both children and adults [1,2]. Nevertheless, dental caries remains the most common oral disease and a major public oral health problem [3].
Several studies on health inequalities have shown that social deprivation and living environments play a major role in childhood dental caries [4][5][6][7].
In this regard, there is consensus across the literature that a child's health status is affected by social determinants such as income, education, health care, the built environment (including housing), home neighborhood, community and family circumstances [8][9][10][11][12][13]. Health-related disadvantages, in fact, are transferred from one generation to the next, and the relationships between socioeconomic status and health can be observed from a very young age, with their effects manifesting even in the prenatal stage [14,15].
Moreover, parents' limited awareness of the importance of oral hygiene, lack of knowledge about the transmission of oral pathogens, diet, and oral hygiene habits are also relevant in the etiology of dental caries in children, and are significant in the prevention of common oral diseases. It has been shown that infants whose mothers have poor oral hygiene and high levels of cariogenic bacteria have an increased risk of infection, and may therefore be more susceptible to developing caries in early childhood. Similarly, behavioral patterns assumed by parents, such as tooth brushing after meals, regular visits to the dentist, and a hypoglycemic diet, are factors positively associated with their children's dental and oral health [10,14,15].
The provision of dental care for children differs greatly from country to country and within the same geographic area, and is largely absent in many developing countries and in regions defined as high deprivation index [16][17][18].
This association has been previously described in Italy, where several deprivation indicators have been developed as tools for a health planning framework by helping to describe the characteristics of the Italian population, with particular attention paid to the differences among local areas [10][11][12][13][19][20][21].
In this regard, in Italy's territory there is a significant divide between regions in northern Italy and those in southern Italy. Sardinia, an island located in southern Italy, is currently seeing a sharp increase in the number of socially marginalized people; this is also attributable to the increase in absolute and relative poverty [22,23], partly exacerbated by the COVID-19 pandemic.
To assess socioeconomic differences at the area level, an Index of Multiple Deprivation was developed in the region (Indice di Deprivazione Multipla della Sardegna-IDMS) using data from the 2011 Sardinian General Population Census [22]. Taking elementary variables as a starting point, it was possible to adequately describe the characteristics of the Sardinian population, focusing on the differences between local situations. In fact, the index was constructed from socioeconomic variables, namely income disadvantage (absolute poverty), level of education, presence of primary services (e.g., schools, pharmacies, post offices, family doctors, etc.), unemployment rate, mortality, environmental pollution, and crime rate [22,23].
The IDMS has previously been used to assess the relationship, if any, between socioeconomic deprivation and influenza vaccination coverage in the over-65s for the specific urban and peri-urban area of the Municipality of Sassari, confirming the relationship between deprivation status and health outcomes (vaccination adherence) in the elderly population [24]. Given the distinctive features that define the island (elderly population, low birth rate, and low per capita income), placing it among the regions considered to be at high risk of socio-health inequalities, it is essential to follow a Health Equity Assessment process. Such a process would include investigating the possible relationship between deprivation and other health outcomes (e.g., dental caries) concerning at-risk population cohorts such as the childhood demographic target [25].
Based on these premises, the present study was designed as a cross-sectional observational survey of the oral health status of schoolchildren in northern Sardinia. In addition, the study aimed to describe the data on caries and its ecological association with socioeconomic deprivation and to assess a potential spatial correlation between the two. The method of measuring caries lesions chosen by the authors, in line with standardized diagnostic methods for the purpose of enabling comparison of caries status in different populations and/or countries around the world, was the dmft/DMFT (decayed (D), missing (M), and filled (F) teeth (T), where upper case denotes permanent dentition and lower case primary dentition), developed in 1938 [26] and adopted by the WHO with various advantages and disadvantages [27][28][29][30][31][32][33].
Study Setting
The study was carried out during the 2018/2019 school year (from September 2018 to June 2019) in Sardinia, Italy. Sardinia is the second-largest Mediterranean island, and the regional territory is divided into 5 provinces (Metropolitan City of Cagliari; Sassari; Nuoro; Oristano; South Sardinia) and 377 municipalities. The resident population during the survey period consisted of 1,611,621 individuals; the target population (3-11 years) consisted of 103,238 children, and the study area contained 35,076 children, of whom 17,028 (48.50%) were female.
Overall, 544 primary schools are present in the region and 157 in the study area [34]. The survey concerned the province of Sassari, which comprises the northern territory of the island covering an area of 7692 square kilometers (the largest province in Italy) (Figure 1). Ninety-two municipalities were involved (Table S1).
Study Design and Sample Size
The present study was designed as a cross-sectional observational survey and, in line with Italian legislation, the study proposal was submitted to the Ethical Committee. The experimental protocols could start after 60 days, even in the absence of a reply, as the study did not require ethical approval according to Italian law [35]. The local section of the Italian Medical Association sponsored the survey, which was designed by the University of Sassari in agreement with the Italian Dental Association.
Sample size was assessed using the freeware online application OpenEpi (http://www.openepi.com, Version 3 (accessed on 30 June 2018)), taking into account the Italian caries prevalence reported in the literature [10], with a population size equal to 35,076, a hypothesized frequency of 46.5%, and a confidence level of 97%. The returned number was 8783 subjects, which was further increased by 10% to compensate for any unforeseen problems (i.e., excluding expected dropouts, incomplete or invalid records, students not present at the time of visits, etc.), for a total of 9961 subjects to be examined in the classroom. This strategy provided a sample that was self-weighting. Each child's parents/caregivers received a leaflet explaining the study's aim and requesting the child's participation. Only children with the consent form signed by parents/caregivers were examined.
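The calculation was done with the OpenEpi web tool; an equivalent Python sketch using the standard finite-population formula for estimating a proportion is shown below. The absolute precision d is not stated in the text, but d = 0.01 reproduces the reported figure of 8783.

```python
# Sketch of the sample-size calculation for a proportion in a finite
# population: n = N*z^2*p*(1-p) / (d^2*(N-1) + z^2*p*(1-p)).
from math import ceil
from scipy.stats import norm

def sample_size(N, p, confidence, d):
    z = norm.ppf(1 - (1 - confidence) / 2)
    pq = p * (1 - p)
    return ceil(N * z**2 * pq / (d**2 * (N - 1) + z**2 * pq))

n = sample_size(N=35_076, p=0.465, confidence=0.97, d=0.01)
print(n)   # 8783; the survey then added ~10% (reported final total: 9961)
```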
In total, 35,076 children were recruited and a total of 10,947 were examined; children without parental consent or those not present in the classroom at the time of the examination were excluded. The study sample thus represented 36.29% of the total study area population aged 3-11 years.
The survey was carried out after a theoretical calibration process with caries lesions detected in images. The calibration method involved two calibration levels: the first (L1) involved inter-examiner agreement between four main investigators (MD, AA, PC, GC), who acted as group leaders (GLs) at the following level; the second level (L2) involved four groups of 32 pediatric dentists and inter-examiner agreement assessment according to the GLs in each group. A total of 128 dentists acted as examiners. The strength of agreement associated with kappa statistics was labelled as described in the literature [36].
The survey was designed following the WHO methodology for oral health surveys [27]. Oral examinations were undertaken in school rooms using a dental mirror, a probe, and a headset light. The following characteristics of the primary dentition status were recorded: decayed (d), missing (m), and filled (f) teeth, and the dmft (d + m + f) index was then calculated for each subject.
The IDMS was used as the deprivation index, with a range from 0 (minimum value of deprivation) to 1 (maximum value of deprivation). This indicator had already been successfully used by the research team for similar purposes [24,34].
The variables IDMS and age were grouped into categories. The IDMS continuous variable was grouped into 4 categories of deprivation status: very low (0-0.14); low (0.15-0.23); medium (0.27-0.38); and high (0.39-1.00). Age was grouped into 2 categories: 3-5 year old subjects with only primary teeth, and 6-11 year old subjects with mixed dentition. Caries experience was converted into a dichotomous variable based on the dmft index (0 = caries-free; 1 = at least one tooth with a history of caries disease, regardless of active lesion, tooth extracted for caries, or filled). Caries prevalence (d subgroup) was also transformed into a dichotomous variable (0 = not decayed; 1 = decayed). Caries experience was also coded as follows: caries-free subjects, subjects with an experience of 1-2 teeth, subjects with an experience of 3-4 teeth, and subjects with an experience of more than 4 teeth. Caries prevalence as regards d lesions was also coded with the same intervals [11,37,38].
Qualitative variables were described with absolute and relative frequencies. Associations between categorical variables were tested with Pearson's chi-square. A nonparametric test for trend (linear-by-linear trend test) across exposure categories was calculated. Quantitative variables were represented by measures of position and variability. One-way ANOVA was used to evaluate the differences between parametric variables.
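For illustration, a minimal Python sketch of the chi-square association test and the linear-by-linear (Mantel-Haenszel) trend test, M² = (n − 1)r², applied to a hypothetical caries-by-deprivation table:

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Hypothetical counts: rows = caries-free / with caries,
# columns = deprivation very low, low, medium, high.
table = np.array([[420, 180, 90, 60],
                  [210, 150, 110, 95]])

chi2_stat, p_assoc, dof, _ = chi2_contingency(table)

# Linear-by-linear trend: Pearson correlation of integer row/column scores.
rows, cols = np.indices(table.shape)
counts = table.ravel()
r = np.corrcoef(np.repeat(rows.ravel(), counts),
                np.repeat(cols.ravel(), counts))[0, 1]
m2 = (table.sum() - 1) * r**2
p_trend = chi2.sf(m2, df=1)
print(f"association p = {p_assoc:.3g}, trend p = {p_trend:.3g}")
```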
Multinomial (polytomous) logistic regression models were run to assess the relationships between dependent (caries experience and caries prevalence categories) and independent variables. For this analysis, the base outcome was "caries-free" subjects both for caries experience and prevalence.
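A minimal sketch of such a model in Python, using statsmodels' MNLogit with category 0 ("caries-free") as the base outcome; the data frame, column names, and values are hypothetical placeholders, and the study's own software for this step is not specified in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "caries_cat": rng.integers(0, 4, 500),  # 0=free, 1=1-2, 2=3-4, 3=>4 teeth
    "female": rng.integers(0, 2, 500),
    "age_group": rng.integers(0, 2, 500),   # 0: 3-5 y, 1: 6-11 y
    "idms_cat": rng.integers(0, 4, 500),    # deprivation: very low..high
})
X = sm.add_constant(df[["female", "age_group", "idms_cat"]])
fit = sm.MNLogit(df["caries_cat"], X).fit(disp=0)
print(np.exp(fit.params))                   # coefficients as relative ratios
```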
The dependence between dmft and IDMS was evaluated by linear regression analysis. For municipalities with missing data, dmft was estimated through the same linear equation, using the continuous values of the variables.
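A sketch of this estimation step (values are hypothetical): mean municipal dmft is regressed on the continuous IDMS, and estimates for non-participating municipalities are read off the fitted line.

```python
import numpy as np

# Surveyed municipalities: mean dmft and IDMS (hypothetical values).
idms_surveyed = np.array([0.10, 0.18, 0.25, 0.33, 0.41])
dmft_surveyed = np.array([0.55, 0.72, 0.90, 1.10, 1.30])

slope, intercept = np.polyfit(idms_surveyed, dmft_surveyed, deg=1)

# Estimate dmft for municipalities that did not take part in the survey.
idms_missing = np.array([0.20, 0.37])
print(intercept + slope * idms_missing)
```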
All 92 municipalities were then inserted into a spatial autoregressive model, evaluating the spatial correlation between dmft and IDMS.
The statistical significance level was set at p < 0.05 for all the analyses.
Autoregressive Analysis
Spatial Autoregressive Models (SAR) were run using datasets that contain observations on geographical areas, and spatial autoregressive analysis was performed to describe the presence of systematic spatial variation in the study area regarding caries experience (dmft) and deprivation status.
The Sardinian map was retrieved from GeoportaleSardegna [39], and ArcGIS (version 10.8.2, Redlands, CA, USA) was used for geographic mapping and shapefile elaboration. After that, the shapefile was imported into Stata 17.0® statistical software and analyzed using the "spregress" command for spatial relationships.
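An equivalent spatial-lag model can be sketched in Python with PySAL (the study used Stata's spregress); the shapefile path and column names below are hypothetical.

```python
# Sketch of a maximum-likelihood spatial-lag model, y = rho*W*y + X*beta + e.
import geopandas as gpd
from libpysal.weights import Queen
from spreg import ML_Lag

gdf = gpd.read_file("sassari_municipalities.shp")  # hypothetical path
w = Queen.from_dataframe(gdf)                      # contiguity-based weights
w.transform = "r"                                  # row-standardize W

y = gdf[["dmft"]].values                           # hypothetical columns
X = gdf[["idms"]].values
model = ML_Lag(y, X, w=w, name_y="dmft", name_x=["idms"])
print(model.summary)
```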
Results
The strength of agreement, expressed as kappa categories, both within raters and between raters and the benchmark, was 'substantial' (0.81-0.90).
Overall, 54 municipalities out of 92 (58.70%) and 124 schools out of 157 (78.98%) in the area (see Figure 1) agreed to take part in the survey.
In total, 10,947 children were examined: 5281 in the younger group (3-5 years) and 5666 in the older group (6-11 years) (Table 1). No statistically significant differences were observed regarding sex distribution between age groups (p = 0.65). Caries indicators were different in the two age groups, both for caries experience (dmf > 0 = 13.79% in the younger group vs. dmf > 0 = 34.20% in the older group, p < 0.01) and for caries prevalence (d > 0 = 12.65% in the younger group vs. d > 0 = 28.64% in the older group, p < 0.01) (Table 1). Caries scores, dmft, and subgroups were statistically significantly higher in the older group (Table 2). In particular, dmft = 1.07 ± 2.02 in the older group vs. dmft = 0.45 ± 1.55, p < 0.01, in the younger group. The classification of the dmft and d subgroup were seen to be associated (Table 3) with sex, age groups, and deprivation (p < 0.01). Moreover, a linear trend across exposure categories was noted (p < 0.01). For caries experience classification, the estimated coefficients of the multinomial (polytomous) logistic regression transformed to relative ratios were statistically significantly associated (p < 0.01) in comparison with the base outcome "caries-free" (Table 4). Being female had a protective effect on caries prevalence classification. Very similar features were also present if caries prevalence classification (d) was used as a dependent variable; in subjects having more than four active caries lesions, the sex protective effect (being female) was not statistically significant (p = 0.07) (Table 4).
Based on this equation, the dmft of the 39 municipalities that did not participate in the survey was estimated.
The autoregressive analysis showed the spatial relationship among the 92 municipalities (Figure 2). The IDMS was statistically significantly associated (p < 0.01) with caries prevalence in the spatial regression model (Table 5).
Discussion
The role of socioeconomic status in the epidemiology of dental caries is quite noteworthy. Determinants such as educational level, lack of knowledge about oral health and eating habits, parents' income levels, and limited access to dental services influence the oral health status of children and adolescents [40,41]. Moreover, it is well known that children from socioeconomically deprived families show a higher prevalence of caries associated with more severe clinical manifestations than those from less deprived settings [10,42,43].
In Italy, few studies have focused on the specific health needs of defined administrative areas ripe for intervention. Caries in Italian preschool children is statistically significantly linked to various socioeconomic indicators (e.g., GNI, GINI, unemployment rate) [10].
The present study focuses on the municipal level, using a purpose-built indicator [22]. It aimed to investigate the experience and prevalence of primary tooth caries (dmft) among preschool (3-5 years) and school-age (6-11 years) children residing in northern Sardinia by describing the geographical association of caries data with socioeconomic indicators. A secondary objective was to assess the potential spatial autoregressive pattern between the recorded dmft and the Sardinian deprivation index in the micro areas of interest (municipalities in the province of Sassari).
The study area (Sardinia), due to its insularity and territorial contiguity, which preserve the region from outside interference, is an optimal setting for investigating social and epidemiological dynamics [44].
In the past, studies on single towns in Sardinia have shown a link between caries risk and socioeconomic factors, although geographical variations outside urban areas were never assessed. Thus, there was a need for a new comprehensive cross-sectional epidemiological study to understand the current trend of caries in Sardinian children.
The sample enrolled in the present survey was homogeneous regarding gender distribution across the age range. Caries experience (dmft) was statistically significantly higher in children aged 6-11 years than in younger ones, as expected, since caries is a developmental and cumulative event; greater caries experience tends to be observed as the years progress [45].
The caries data from this survey underline that in the study area, the prevalence of caries is higher than in other Italian regions.
The largest contribution to the dmft index comes from the 'decayed' component (d), underlining the continuing need for intervention within the study area, even though studies show a general improvement in the population's oral health conditions over time [41].
The sub-category analysis of caries experience showed a statistically significant association with sex (with a higher prevalence of caries-free subjects among females) and age (with dmft values ≥ 1 almost trebling in school-age subjects). In older subjects, the need for treatment is more than twice as high as in younger subjects. The protective effect of sex (being female) is more evident in the logistic regression analysis, except for active caries counts above 4, in which case the effect does not reach significance. This can be attributed to the low number of females with a high count of caries lesions.
To the authors' knowledge, this is the first study to consider the role of a precise deprivation index (IDMS) in the epidemiological study of dental caries across several municipalities in Sardinia. Previous analyses have indicated that socioeconomically disadvantaged individuals are greatly affected by oral health problems, and determinants such as low socioeconomic status, minority status, and unemployment are associated with low levels of preventive dental care and high rates of dental disease [4,41,45,46]. The findings of this paper emphasize that deprivation scores are significantly associated with caries [11,27].
Furthermore, logistic regression analysis revealed that the IDMS significantly increased the risk for all categories of caries experience and prevalence compared to caries-free subjects. From this perspective, deprivation indices are useful to identify small areas with high levels of need for dental care and increased oral health promotion and prevention services [12,21].
The spatial relationship between caries experience and IDMS returned a statistically significant association, as mentioned above. Spatial autocorrelation analysis enabled observation of the phenomenon's distribution throughout the entire study area, albeit in the absence of a clustering effect [46].
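For readers wanting to reproduce this kind of check, below is a minimal numpy sketch of global Moran's I, the usual spatial autocorrelation statistic; it illustrates the general technique, not the authors' pipeline, and the dmft vector and contiguity matrix are random placeholders.

import numpy as np

def morans_i(y, W):
    # Global Moran's I: I = (n / S0) * (z' W z) / (z' z),
    # where z are mean-centred values and S0 is the sum of all weights.
    y = np.asarray(y, dtype=float)
    z = y - y.mean()
    s0 = W.sum()
    return (len(y) / s0) * (z @ W @ z) / (z @ z)

# Placeholder inputs: one dmft value per municipality and a symmetric
# 0/1 contiguity matrix over the 92 municipalities.
rng = np.random.default_rng(0)
dmft = rng.uniform(0.0, 3.0, size=92)
W = (rng.random((92, 92)) < 0.05).astype(float)
W = np.triu(W, 1)
W = W + W.T  # symmetric, zero diagonal
print(morans_i(dmft, W))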
The study's main limitations can be ascribed to the typical characteristics of an observational study. The study design is not suitable for testing etiological hypotheses but only for formulating them, as it does not allow an assessment of the cause-effect relationship (i.e., whether the supposed factor causes the disease or vice versa). The present survey aimed (i) to describe the distribution of caries at a population level, and (ii) to associate the data with certain risk factors. The large number of subjects enrolled and examined enabled us to infer the outcomes for the whole population.
The deprivation values were significantly associated with the caries data. In particular, logistic regression analysis showed that the deprivation index significantly increased the risk for all categories of caries experience and prevalence compared to caries-free subjects. The relationship between IDMS and caries data was further confirmed by the spatial analysis.
This study adds further emphasis to the pressing need for educational interventions, particularly in schools where, due to compulsory education, the population is more easily reached. In addition, the study has made it possible to identify those areas in North Sardinia in greatest need of intervention, which can make equity-based primary prevention strategies more efficient.
Figure 1. Map of Italy, Sardinia, and municipalities involved in the study.
Figure 2. Spatial analysis among the 92 municipalities by dmft range.
Table 2. Caries index dmf and subgroups (means and standard deviations) in the two age groups.
Table 3. Association among caries experience (dmf) and prevalence (d) classifications and sex, age groups, and deprivation index.
Table 4. Multinomial (polytomous) logistic regression model using the caries experience (dmf) and prevalence (d) classifications as dependent variables, with sex, age groups, and deprivation index as covariates.
Probing the Stau-Neutralino Coannihilation Region at the LHC with a soft tau lepton and an ISR jet
We present a feasibility study, to search for dark matter at the LHC, in events with one soft hadronically decaying tau lepton and missing transverse energy recoiling against a hard $p_{T}$ jet from initial state radiation. This methodology allows the search for Supersymmetry in compressed mass spectra regions, where the mass difference between the lightest neutralino, $\tilde\chi_1^0$, and the stau (the tau superpartner), $\tilde{\tau}$, is small. Several theoretical models predict a direct connection between thermal Bino dark matter and staus within this scenario. We show that compressed regions, not excluded by ATLAS nor CMS experiments, are opened up with the increase in experimental sensitivity reached with the proposed methodology. The requirement of a hard jet from initial state radiation combined with a soft tau lepton is effective in reducing Standard Model backgrounds, providing expected significances greater than 3$\sigma$ for $\tilde{\chi}_{1}^{\pm}$ masses up to 300 GeV and $\tilde{\tau}$-$\tilde{\chi}_{1}^{0}$ mass gaps below 25 GeV with only 30 fb$^{-1}$ of 13 TeV data from the LHC.
I. INTRODUCTION
The identity of Dark Matter (DM) is one of the most interesting and relevant topics in particle physics today. Currently, there are several direct and indirect searches for DM performed by different experiments, such as SuperCDMS [1], LZ [2], AMS2 [3], ATLAS [4], and CMS [5], among others. These experiments are trying to find evidence of the existence of DM particles, motivated by hypothetical models in some cases or by indirect cosmological observations. Nevertheless, there is no conclusive evidence thus far that could shed some light on the particle nature of DM.
At the CERN LHC accelerator, the ATLAS and CMS experiments have an extensive physics program to search for DM, especially in new physics models of Supersymmetry (SUSY) [6-10], which resolves many problems inherent in the Standard Model (SM) and naturally provides a DM candidate in the form of the lightest neutralino ($\tilde\chi_1^0$). A broad set of final states has been used to probe the $\tilde\chi_1^0$ using cascade decays of heavier colored and electroweak SUSY particles [11-15]. The production of these DM candidates has been excluded, by both experiments, for $\tilde\chi_1^0$ masses that range from 100 GeV to roughly 800 GeV, depending on the final state studied and on the physics model used to interpret the data. Nevertheless, compressed mass spectra regions, where the mass difference $\Delta m$ between the heavier SUSY particles and the $\tilde\chi_1^0$ is small, are very difficult to probe at the LHC, due to constraints driven by the ability to trigger, with a low enough rate, on events containing low-$p_T$ objects, in addition to the experimental difficulties involved in identifying them with high enough efficiency amongst the large hadronic activity associated with a proton-proton collider. For example, searches for chargino ($\tilde\chi_i^\pm$) and neutralino ($\tilde\chi_j^0$) production in final states with one or more leptons and missing transverse momentum exhibit limited sensitivity to models with SUSY particles that decay predominantly to $\tau$ leptons, with exclusion limits of $\approx 100$ GeV for $\Delta m < 50$ GeV, due to the larger backgrounds associated with $\tau$ lepton reconstruction.
The main focus of this letter is to propose a new search at the LHC targeting compressed mass spectra regions in the electroweak sector, in models which predominantly produce $\tau$ leptons, where the current experimental sensitivity is limited. The study of compressed $\tilde\tau$'s is of special interest in thermal Bino DM cosmology models considering $\tilde\tau$-$\tilde\chi_1^0$ co-annihilation, as proposed in several papers [16,17], in order to obtain the correct relic DM density observed today.
The use of Vector Boson Fusion (VBF) topologies to target difficult compressed mass spectra scenarios for the production of SUSY with $\tilde\tau$'s has been proposed as a new experimental handle at the LHC [18]. Such a search has recently been published by CMS [19], showing better sensitivity in very compressed regions with respect to previous searches by ATLAS and CMS [20,21]. Although VBF is a good tool to probe compressed spectra and DM [22], with better signal-to-background ratios due to its rejection power for QCD processes, the small VBF signal cross sections motivate us to find a complementary method with a higher production rate, which translates into less luminosity needed for a potential discovery in the short term. We propose a complementary handle to target compressed staus, searching for the production of one hadronically decaying $\tau$ lepton ($\tau_h$) and at least one high-$p_T$ jet from initial state radiation (ISR).
The SUSY $\tilde\tau$'s can be produced directly in pairs or through cascade decays of the lightest chargino, $\tilde\chi_1^\pm$, and the next-to-lightest neutralino, $\tilde\chi_2^0$, in processes such as $\tilde\chi_1^\pm\tilde\chi_2^0 \to \tilde\tau\nu_\tau\,\tilde\tau\tau$ and $\tilde\chi_1^\pm\tilde\chi_1^0 \to \tilde\tau\nu_\tau\,\tilde\chi_1^0$. Hadronic decays of $\tau$ leptons have the largest branching fraction, and thus final states with a $\tau_h$ provide the best experimental sensitivity.
While the above processes result in final states with multiple $\tau$ leptons, the compressed mass spectra scenario of interest in this paper results in low-$p_T$ visible decay products, making it difficult to reconstruct and identify multiple $\tau$ leptons. Furthermore, semi-leptonic decays of $\tau$ leptons result in a lower average $p_T$ than hadronic decays, while also being largely indistinguishable from prompt production of electrons and muons. Therefore, the above characteristics motivate us to focus on events with one $\tau_h$ candidate. Similar to the monojet searches, the use of a high-$p_T$ ISR jet in the event topology is expected to create a recoil effect that facilitates both the detection of missing transverse momentum in the event ($p_T^{miss}$) and the identification of the soft $\tau_h$, due to the natural kinematic boost. Additionally, the inclusion of a high-$p_T$ jet in the event topology provides an experimental handle to trigger on these types of events with soft $\tau_h$ candidates.
II. SAMPLES AND SIMULATION
Signal and background samples were simulated using an interface between MadGraph (v2.2.3) [23] for event generation, PYTHIA (v6.416) [24] for the hadronization process, and Delphes (v3.3.2) [25] to include detector effects. The main background sources come from the production of Z and W vector bosons with associated jets, referred to as Z+jets and W+jets. Background events with up to two associated jets were generated. Jet merging and matching was performed based on the MLM algorithm [26]. This algorithm requires the optimization of two variables related to the jet definition, qcut and xqcut. The xqcut is defined as the minimal distance required among partons at MadGraph level. The qcut is a measure of the minimum energy spread for a clustered jet in PYTHIA. The optimization is performed by studying the differential jet rate distribution until a smooth transition is obtained between events with zero and one jet, and between events with one and two jets. The optimal values found for our simulations were an xqcut of 15 for both backgrounds, and a qcut of 35 GeV for Z+jets and 30 GeV for W+jets. At generation level, leptons were required to have $p_T(\ell) > 10$ GeV and $|\eta(\ell)| < 2.5$, while jets were required to have a minimum $p_T$ threshold of 20 GeV and $|\eta| < 5.0$. For Z+jets events, an additional constraint on the reconstructed mass of the two leptons was applied, in order to suppress events with masses below 50 GeV.
The signal samples were produced considering two cases in the context of the R-parity-conserving Minimal Supersymmetric Standard Model (MSSM). The first case considered direct production of $\tilde\tau$ pairs and an ISR jet, and the second case included additional production of $\tilde\tau$ events through cascade decays of $\tilde\chi_1^\pm$ or $\tilde\chi_2^0$. The benchmark signal samples were produced under the three following assumptions. First, the $\tilde\chi_1^\pm$ and the $\tilde\chi_2^0$ are wino-like and mass degenerate, while the $\tilde\chi_1^0$ is mostly Bino. Second, we considered only scenarios where the mass difference between the $\tilde\tau$ and the $\tilde\chi_1^0$ is always less than or equal to 25 GeV, aimed at the $\tilde\tau$-$\tilde\chi_1^0$ co-annihilation region, and with mass equal to $m(\tilde\tau) = 0.5\,m(\tilde\chi_1^\pm) + 0.5\,m(\tilde\chi_1^0)$. Finally, we studied regions where the mass difference between the $\tilde\chi_1^0$ and the $\tilde\chi_1^\pm$ is below 50 GeV, in order to study areas of the SUSY phase space where the ATLAS and CMS searches have limited experimental sensitivity. We scanned the regions of interest using $\tilde\chi_1^\pm$ masses ranging from 100 GeV to 400 GeV, in steps of 100 GeV, and $\Delta m(\tilde\tau, \tilde\chi_1^0)$ from 5 GeV to 25 GeV, in steps of 5 GeV.
III. EVENT SELECTION CRITERIA
The event selection criteria used in the analysis are summarized in Table I (cuts used to select events with one $\tau_h$ and at least one high-$p_T$ ISR jet; the highest-$p_T$ jet is tagged as the ISR jet). The $p_T$ threshold for the highest-$p_T$ jet ($p_T^{lead}(jet)$) was defined through an optimization process, based on the $S/\sqrt{S+B}$ figure of merit, to estimate the signal significance. The $p_T^{lead}(jet)$ selection was also chosen to provide an experimental handle to trigger on these types of events at ATLAS and CMS. In order to focus on events where the ISR jet can naturally boost the $p_T^{miss}$ in the opposite direction, jets are constrained to be within the tracker acceptance region, $|\eta_{jets}| < 2.5$. We selected the highest-$p_T$ jet in the event as the ISR jet; this choice correctly identifies the ISR jet with greater than 95% accuracy. Events containing an isolated electron or muon with $p_T > 20$ GeV are removed in order to suppress the contribution from the W+jets, Z+jets, and $t\bar{t}$ backgrounds. The contribution from di-boson events is heavily suppressed after vetoing events with two or more leptons. Events with top quarks become negligible after vetoing jets tagged as bottom quarks with $p_T > 20$ GeV and $|\eta| < 2.5$. Events are required to have exactly one $\tau_h$ with $15 < p_T(\tau_h) < 35$ GeV and $|\eta(\tau_h)| < 2.3$. The selection criterion on the pseudo-rapidity of the $\tau_h$, $|\eta(\tau_h)| < 2.3$, is motivated by the geometric acceptance of the tracker sub-detectors in both experiments and by the isolation cones placed around the $\tau_h$ candidates, commonly used to reject jets from QCD processes that can mimic the signature of a $\tau_h$. Jets and $\tau_h$ candidates passing the outlined selection criteria are required to be well separated in $\eta$-$\phi$ space by a cut of $\Delta R(\tau_h, jet) = \sqrt{\Delta\phi^2 + \Delta\eta^2} > 0.4$. The $p_T(\tau_h)$ and $p_T^{miss}$ thresholds were optimized in a two-dimensional plane, after applying the selection criteria described above, allowing us to find the most suitable combination of the two variables. The signal benchmark sample with $m(\tilde\chi_1^0) = 150$ GeV, $m(\tilde\chi_1^\pm) = 200$ GeV, and $m(\tilde\tau) = 175$ GeV was used for the optimization. The best significance is achieved when $p_T(\tau_h)$ is within the range $15 < p_T(\tau_h) < 35$ GeV, with a $p_T^{miss}$ requirement above 230 GeV. After requiring a $p_T^{miss}$ threshold of 230 GeV, the contribution from QCD events becomes negligible. Figure 1 shows the results of the $p_T^{max}(\tau_h)$ vs. $p_T^{miss}$ optimization process, using events selected with $p_T^{lead}(jet) > 100$ GeV, $p_T(\tau_h) > 15$ GeV, and satisfying the extra-lepton and b-jet vetoes. The increase in signal significance due to the requirement of a soft $\tau_h$, seen in Figure 1, highlights the importance of having good $\tau_h$ identification at low $p_T$. On average, a 20% improvement in the overall signal significance for very compressed $\tilde\tau$-$\tilde\chi_1^0$ scenarios is observed by lowering the $p_T(\tau_h)$ threshold from 20 GeV to 15 GeV.
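As an illustration of this kind of two-dimensional cut optimization, here is a minimal Python sketch that scans an upper $p_T(\tau_h)$ cut against a $p_T^{miss}$ threshold and picks the pair maximizing $S/\sqrt{S+B}$; the event samples and normalization weights are random placeholders, not the simulated samples of this paper.

import numpy as np

rng = np.random.default_rng(1)
# Placeholder (pT(tau_h), pTmiss) pairs standing in for simulated signal and
# background events that already pass the baseline selection.
sig = rng.normal(loc=[25.0, 260.0], scale=[8.0, 60.0], size=(10_000, 2))
bkg = rng.normal(loc=[40.0, 150.0], scale=[15.0, 70.0], size=(500_000, 2))

def yield_after_cuts(events, pt_max, met_min, weight):
    # Count events in the soft-tau window 15 < pT(tau_h) < pt_max with
    # pTmiss above met_min; 'weight' is a placeholder normalization.
    pt, met = events[:, 0], events[:, 1]
    return weight * np.count_nonzero((pt > 15.0) & (pt < pt_max) & (met > met_min))

def significance(pt_max, met_min):
    s = yield_after_cuts(sig, pt_max, met_min, weight=0.01)
    b = yield_after_cuts(bkg, pt_max, met_min, weight=0.10)
    return s / np.sqrt(s + b) if s + b > 0 else 0.0

best = max(
    ((pt_max, met_min, significance(pt_max, met_min))
     for pt_max in range(25, 60, 5)
     for met_min in range(150, 310, 20)),
    key=lambda t: t[2],
)
print("best pT(tau_h) upper cut, pTmiss cut, S/sqrt(S+B):", best)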
Other sets of topological variables were analyzed, such as different combinations of variables related to the angular difference in the $\phi$ plane between the highest-$p_T$ jet, the $\tau_h$, and the $\vec{p}_T^{\,miss}$. The transverse mass of the $\tau_h$-$p_T^{miss}$ system is defined as

$m_T(\tau_h, p_T^{miss}) = \sqrt{2\, p_T(\tau_h)\, p_T^{miss}\, (1 - \cos\Delta\phi(\tau_h, \vec{p}_T^{\,miss}))}$   (1)

Figure 2 shows the $m_T(\tau_h, p_T^{miss})$ distribution for the main backgrounds and two different signal points, after applying all the event selection criteria outlined in Table I. The backgrounds are stacked on top of each other, while the signal is overlaid on the expected background yields. The bulk of the background distribution resides at low $m_T$, while the signal begins to dominate in the tails of the distribution (e.g., $m_T \sim 175$ GeV for the benchmark signal samples shown in Figure 2).
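As a quick numeric check of Eq. (1), the following sketch computes the transverse mass; the 30 GeV / 250 GeV example values are illustrative and land near the ~175 GeV region mentioned above.

import numpy as np

def transverse_mass(pt_tau, met, dphi):
    # m_T of the tau_h-pTmiss system, Eq. (1); dphi is the azimuthal angle
    # between the tau_h candidate and the missing transverse momentum vector.
    return np.sqrt(2.0 * pt_tau * met * (1.0 - np.cos(dphi)))

# A 30 GeV tau_h back-to-back with 250 GeV of missing pT:
print(transverse_mass(30.0, 250.0, np.pi))  # ~173 GeV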
IV. RESULTS
The proposed shape-based analysis of the $m_T$ distribution is performed using a binned likelihood, following the test statistic based on the profile likelihood ratio, using the RooFit [28] toolkit. As can be seen from Figure 2, the signal sensitivity with the integrated luminosity considered is dominated by the signal and background yields in the tails of the $m_T$ distribution, where statistical uncertainties are expected to be more important than systematic uncertainties. However, since the proposed search strategy entails a fit of the entire $m_T$ distribution, it is appropriate to consider reasonable experimental systematic uncertainties when calculating the projected significance, as this fitting procedure can have important correlations with the background and signal uncertainties at low $m_T$, where statistical uncertainties are small. The dominant sources of systematics are expected to be the uncertainty on $\tau_h$ identification (6%, based on [29]), the $p_T^{miss}$ trigger efficiency (1%, based on [30]), the modeling of ISR (5%, based on [30]), pileup effects, and the uncertainty on the transfer factors used to estimate the backgrounds. While it is beyond the scope of this paper to perform studies on background estimation methods, we refer to the monojet searches with 8 TeV data [30] as a reasonable choice for the uncertainty on the transfer factors used to estimate backgrounds ($\sim 5.1\%$ for $p_T^{miss} > 250$ GeV). Therefore, a 10% total systematic uncertainty on the signal and background yields is a reasonable choice. In our studies, the systematic uncertainties are incorporated via nuisance parameters following the frequentist approach. A local p-value is calculated as the probability under a background-only hypothesis to obtain a value of the test statistic as large as that obtained with a signal-plus-background hypothesis. The significance $z$ is then determined as the value at which the integral of a Gaussian between $z$ and $\infty$ results in a value equal to the local p-value. Figure 3 shows the expected signal significance without considering any systematic effects. Figure 4 shows the expected signal significance after considering a flat 10% systematic effect, completely correlated across $m_T$ bins, in the signal and background yields. The proposed methodology can provide $5\sigma$ ($3\sigma$) significance for $\tilde\chi_1^\pm$ masses up to approximately 250 GeV (300 GeV) with $m(\tilde\tau) - m(\tilde\chi_1^0) < 25$ GeV, allowing the ATLAS and CMS experiments to probe previously unreachable parts of the $\tilde\tau$-$\tilde\chi_1^0$ co-annihilation phase space, important to the connection between particle physics and cosmology.
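The p-value-to-significance conversion described here is, in effect, the inverse survival function of a standard Gaussian; a short sketch with SciPy (the numeric p-value is just an example):

from scipy.stats import norm

def z_from_pvalue(p):
    # Significance z such that the one-sided Gaussian tail beyond z equals p.
    return norm.isf(p)

print(z_from_pvalue(2.87e-7))  # ~5.0, i.e. a 5-sigma local p-value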
The assumption of a completely correlated systematic uncertainty with respect to $m_T$ is based on the belief that the $\tau_h$ identification and $p_T^{miss}$ trigger efficiencies do not depend on the value of $m_T$. This assumption depends on the performance of the improved and updated reconstruction algorithms of the CMS and ATLAS experiments under future running conditions, which is outside the scope of this paper. However, for the luminosity considered, the conclusions have been tested to be independent of the assumption of a completely correlated systematic uncertainty with $m_T$.
Although the benchmark signal samples considered thus far focus on the case where the $\tilde\chi_1^\pm/\tilde\chi_2^0$ is mostly Wino and the LSP is mostly Bino (when co-annihilation can give rise to the correct LSP DM relic density), a study is also performed on the impact of the Wino and Bino compositions of the $\tilde\chi_1^\pm/\tilde\chi_2^0$ and LSP, respectively, on the signal sensitivity. This allows for a more general overview of the impact of the proposed search on compressed SUSY, independent of the connection to cosmological DM. For this purpose, signal samples were produced by fixing the $\tilde\chi_1^\pm/\tilde\chi_2^0$ and LSP masses and varying the $\mu$ parameter, which controls the gaugino mixing. For example, for $m(\tilde\chi_1^\pm) = 100$ GeV and $m(\tilde\chi_1^0) = 50$ GeV, the $\mu$ parameter was decreased to produce LSP Bino compositions ranging from 50% to 97%. Decreasing the $\mu$ parameter in order to decrease the LSP Bino composition makes the Higgsinos more important and thus simultaneously decreases the Wino composition of the $\tilde\chi_1^\pm/\tilde\chi_2^0$ (i.e., they are no longer mostly wino-like); the Wino compositions of the $\tilde\chi_1^\pm/\tilde\chi_2^0$ range from $\approx 40\%$ to 99%. Figures 5 and 6 show the expected signal significance, using an integrated luminosity of 30 fb$^{-1}$, as a function of $m(\tilde\chi_1^\pm)$ and LSP Bino composition for fixed $\Delta m$ of 5 GeV and 25 GeV, respectively. For a fixed set of masses, the predicted signal yields decrease as the LSP Bino and $\tilde\chi_1^\pm/\tilde\chi_2^0$ Wino compositions decrease, resulting in an $\approx 55\%$ decrease in signal significance for an LSP Bino composition of 50%. The signal significances shown in Figures 5 and 6 were calculated using the same statistical procedure outlined above, similarly considering a 10% systematic uncertainty.
V. DISCUSSION
The main result of this paper is that the $\tilde\tau$-$\tilde\chi_1^0$ co-annihilation region with $\Delta m < 50$ GeV, where the experimental sensitivity of current searches performed at the LHC is limited, can be probed using a search strategy of one soft hadronically decaying tau lepton and large missing transverse energy recoiling against a hard $p_T$ jet from initial state radiation. These regions of SUSY also play a decisive role in thermal Bino DM cosmology models, which require $\tilde\tau$-$\tilde\chi_1^0$ co-annihilation to obtain the correct relic DM density observed today. A major highlight of the proposed search strategy is the ability to select low-$p_T$ hadronic tau decays, facilitated by the use of $p_T^{miss}$ triggers exploiting the boost effect of the high-$p_T$ ISR jet, in order to maximize signal acceptance in these compressed scenarios while simultaneously providing a large reduction of SM backgrounds. The ability of the ATLAS and CMS experiments to provide good $\tau_h$ identification at low $p_T$ is a key ingredient. We find that for $m(\tilde\tau) - m(\tilde\chi_1^0) < 25$ GeV, gaugino masses up to 300 GeV (250 GeV) can be probed at the $3\sigma$ ($5\sigma$) level with 30 fb$^{-1}$ of 13 TeV data from the LHC. We emphasize that the experimental constraints on the SUSY parameter space with $m(\tilde\tau) - m(\tilde\chi_1^0) < 25$ GeV from ATLAS and CMS data to date do not exceed those of the LEP experiments, and thus the proposed new search can nicely complement the current analyses performed at the LHC.
A method of marks and indices for linear modal logic
In this paper a method to check termination of history-free proof search for the linear modal logic S4.3 is proposed. This method improves the method proposed by the authors for the modal logic S4. As for S4, instead of a history we use marks and indices, which allow us to eliminate loop checking. The method proposed in this paper specifies a kind of formulas that allows us to check termination of derivations in a more effective way in comparison with S4.
Introduction
In [3] the notion of a history, used to ensure termination of derivations in some non-classical logics, was introduced. The history allows us to achieve efficient loop checking by means of information about previous parts of a derivation. The history-based method is nowadays widely used for constructing derivations in non-classical logics. In [4] a method called the marks and indices method (denoted M&I) for the modal logic S4 was proposed. In M&I, instead of a history, marks and indices that allow us to eliminate loop checking are used. In the present paper an improved version of the M&I method for the linear modal logic S4.3 is described. The logic S4.3 is obtained from the modal logic S4 by adding the linearity axiom □(□A ⊃ B) ∨ □(□B ⊃ A). The logic S4.3 is interpreted as a discrete linear time logic. The aim of this paper is to construct an invertible sequent calculus for the modal logic S4.3 without loop checking, changing and extending the technique from [4].
Invertible calculus with specialized reflexivity rule
Formulas in the considered calculus are constructed in the traditional way from propositional symbols using the classical logical connectives and the necessity modality □. Along with the modality □, a marked modality □* is introduced. This marked modality has the same semantical meaning as the non-marked modality □ and serves as a device to restrict backward applications of the reflexivity rule. A formula of the shape □A is called a modal one. The language considered does not contain the modality ♦, assuming that ♦A = ¬□¬A. We consider sequents, i.e., formal expressions of the shape Γ → Δ, where Γ and Δ are multisets of formulas. For simplicity we consider sequents not containing branching formulas (see, e.g., [2]).
A sequent S is a primary one if (1) S has the shape Γ1, Γ* → Γ2, Γ□, where Γi (i ∈ {1, 2}) is empty or consists of propositional symbols, Γ* is empty or consists of formulas of the shape □*M, and Γ□ is empty or consists of formulas of the shape □M, and (2) the antecedent and/or succedent of the sequent does not contain several occurrences of the same formula.
The cut-free sequent calculus with specialized reflexivity rule GS4.3 for the modal logic S4.3 is defined by the following postulates (see, e.g., [1]). Axiom: Γ, P → Δ, P, where the multisets Γ, Δ are permitted to contain some formulas of the shape □*B, i.e., the modality □ can be marked.
Modal rules:
where in the conclusion of the rule the outermost occurrence of the modality □ in the main formula □A is not marked, but some of the modal formulas from Γ can be marked. The rule (□* →) is called the reflexivity rule because it corresponds to the reflexivity axiom □A ⊃ A.
where the conclusion of the rule is a primary sequent such that Γ1 ∩ Γ2 is empty. The rule (□) is called the linearity rule because it corresponds to the linearity axiom □(□A ⊃ B) ∨ □(□B ⊃ A). From [1] it follows that the calculus GS4.3 is sound and complete. Using traditional proof-theoretical methods, we get that each rule of the calculus GS4.3 is invertible in GS4.3.
Loop-check-free calculus for S4.3
To construct a backward proof search without loop checking, a notion of an indexed modality is introduced and sequents containing occurrences of indexed modalities are considered. Let us introduce the following indexation technique.
A positive occurrence of the modality □ in a sequent S is a special one if it occurs within the scope of a negative occurrence of the modality □ in S. A special occurrence α of the modality □ in a sequent is an isolated one if within the scope of α there is a negative occurrence of the modality □. We distinguish two sorts of isolated occurrences of the modality. An isolated occurrence α of the modality □ in a sequent is strongly special if within the scope of α there are no isolated occurrences of □. A special occurrence of □ which is not strongly special is simply special.
Let us introduce two sorts of indices, used only for special occurrences of the modality: an index i, where i ∈ {1, . . . , n} and n is the number of simply special occurrences of the modality □ in a sequent, and an index •k, where k ∈ {1, . . . , m} and m is the number of strongly special occurrences of the modality □ in a sequent. The modality □σ (σ ∈ {i, •k}) is an indexed modality.
For example, let S = ¬□(¬□Q ∨ □(□¬□¬P ∨ □□¬□¬□P)) →; then S_ind has the shape ¬□1(¬□Q ∨ □(□•1¬□¬P ∨ □2□•2¬□¬□3P)) →; the occurrences of □1, □2, □•1, □•2 in S_ind are isolated ones, and the occurrence of □3 in S_ind is not an isolated one; there are no isolated occurrences of the modality within the scope of the occurrences of □•1, □•2, but there are isolated occurrences of the modality within the scope of the occurrences of □1, □2.
Along with the marked modality □* (introduced in the previous section and used only for negative occurrences of the modality), let us introduce one more marked modality, namely □+. The marked modality □+ serves as a stopping device for backward applications of the linearity rules. The marked and indexed modalities have the same semantical meaning as the non-marked and non-indexed modality □. Let A be a formula from a sequent S; then an indexed formula A_ind is a formula obtained from A by replacing every simply (strongly) special occurrence of □ in A by the indexed modality □i (□•k, correspondingly), in such a way that different special occurrences of □ get different indices. Let S be a sequent; then an indexed sequent S_ind is a sequent obtained from S by replacing every formula in S by the appropriate indexed formula, in such a way that different special occurrences of □ in the indexed sequent S_ind get different indices.
A simply special occurrence of the modality □, i.e., an indexed modality of the shape □i, in S_ind is dependent if within the scope of □i there is at least one occurrence of some indexed modality □σ (σ ∈ {i, •k}). In the opposite case the occurrence of □i in S_ind is independent.
For example, let S = ¬□□(□P ∨ □□¬□P) →; then S_ind has the shape ¬□1□2(□3P ∨ □4□•1¬□P) →; the occurrence of □3 in S_ind is an independent one, and the occurrences of □1, □2, □4 in S_ind are dependent ones.
Let us introduce an operation σ+ (σ ∈ {i, •k}). Let A be any indexed formula from an indexed sequent S_ind. The application of the operation σ+ to A is denoted by A(σ+), and the result of this application is the formula obtained from A by replacing the occurrence of □σ in A by the marked modality □+. If A does not contain occurrences of □σ, then A(σ+) = A. The notation Γ(σ+) means A1(σ+), . . . , Ak(σ+), where k ≥ 1 and Γ is a sequence of indexed formulas A1, . . . , Ak.
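To make the σ+ operation concrete, here is a minimal Python sketch, assuming an ad hoc encoding of indexed formulas as nested tuples (('box', tag, body) for a modality with decoration tag, ('or', A, B), ('atom', name)); this encoding is illustrative, not the paper's notation.

def sigma_plus(formula, sigma):
    # Replace the occurrence of the indexed modality box_sigma by the marked
    # modality box_plus; all other connectives are rebuilt unchanged.
    # (Indices are unique within a sequent, so at most one box carries sigma.)
    op = formula[0]
    if op == 'atom':
        return formula
    if op == 'box':
        tag, body = formula[1], formula[2]
        new_tag = '+' if tag == sigma else tag
        return ('box', new_tag, sigma_plus(body, sigma))
    return (op,) + tuple(sigma_plus(sub, sigma) for sub in formula[1:])

# Example: applying (3+) to box_3(P v Q) yields box_+(P v Q).
A = ('box', 3, ('or', ('atom', 'P'), ('atom', 'Q')))
print(sigma_plus(A, 3))  # ('box', '+', ('or', ('atom', 'P'), ('atom', 'Q')))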
Let us note that only positive occurrences of the modality □ may get indices or the mark +, and only negative occurrences of □ may get the mark *.
Let us introduce the following notions, which allow us to check termination of derivations in a more effective way in comparison with the check for S4 described in [4].
Let B be a formula entering a sequent S. A subformula of B is a modal one if it has the shape □μM, where μ ∈ {∅, i, •k, +, *} (∅ denoting a plain, undecorated □).
A modal formula B is a passive formula if:
• B occurs in a sequent S positively and has the shape □i1 . . . □in M (n ≥ 1), where M is a formula containing at least one occurrence of the index-free modality (possibly marked) and not containing any occurrences of indexed modalities; B is called a passive formula of the first type;
• B occurs in a sequent S positively and has the shape □τ1 . . . □τn M (n ≥ 1), where τj ∈ {i, +}, j ∈ {1, . . . , n}, and there exists j such that τj = +; B is called a passive formula of the second type;
• B occurs in a sequent S negatively and has the shape □*□ . . . □M (with m ≥ 0 plain boxes after □*), where M is a formula composed of passive formulas of the first and/or second type using logical symbols; B is called a passive formula of the third type.
Any modal formula that is not a passive one is an active formula. For example, let S be the sequent □*¬□1P, □*¬□2□+Q → □1P, □2□+Q, □4(□+P1 ∨ □+Q1), □R. Then the formula □4(□+P1 ∨ □+Q1) is a passive formula of the first type; the formula □2□+Q is a passive formula of the second type; the formula □*¬□2□+Q is a passive formula of the third type. The formulas □*¬□1P, □1P, and □R are active formulas. An indexed sequent S is a primary one if (1) S has the shape Γ1, Γ* → Γ2, Γ̃, where Γi (i ∈ {1, 2}) is empty or consists of propositional symbols, Γ* is empty or consists of formulas of the shape □*M, and Γ̃ is empty or consists of formulas of the shape □μM (μ ∈ {∅, i, •k, +}) such that, for any formulas A and B, if □σA ∈ Γ̃ (σ ∈ {i, •k}) and A ≠ B, then □σB ∉ Γ̃, and (2) the antecedent and/or succedent of the sequent S does not contain several occurrences of the same formula.
Taking into account the introduced notions of active and passive formulas, let us specify the shape of the succedent part of a primary sequent. Namely, the part Γ̃ of a primary sequent has the shape ∇, Λ, Π+, where ∇ is empty or consists of active non-indexed formulas, Λ is empty or consists of active indexed formulas, and Π+ is empty or consists of passive formulas of the first and/or second type.
Let G1S4.3 be the calculus obtained from the calculus GS4.3 by replacing the rule (□) with the linearity rule (□σ_p), whose conclusion is a primary sequent such that Γ1 ∩ Γ2 is empty; Π+ is empty or consists of passive formulas of the first or second type; □σ1A1, . . . , □σjAj, . . . , □σnAn, where σj ∈ {∅, i, •k} (1 ≤ j ≤ n), consists of active formulas; and σ in the notation of the rule (□σ_p) denotes the sequence σ*1, . . . , σ*j, . . . , σ*n, where σ*j ∈ {∅, σj, σj+}. For every j (j ∈ {1, . . . , n}), the shape of the j-th premise of this rule and the meaning of σ*j in σ depend on the shape of the j-th main formula □σjAj in the conclusion of the rule. For the sake of simplicity, we can imagine that each premise of the rule (□σ_p) is obtained by applying one of the three following rules, depending on the shape of the main formula. Non-indexed rule: applied when the main formula is non-indexed. Weak indexed rule: applied when, in the conclusion of the rule, Γ contains a dependent occurrence of □i or contains an occurrence of □•k for some k. Strong indexed rules: applied for λ ∈ {i, •k}; if λ = i, then in the conclusion of the rule Γ contains an independent occurrence of □i and does not contain an occurrence of □•k for any k, i.e., the conditions indicated in the rule (□i_p) do not hold. It is important that, as follows from the shape of the linearity rule (□σ_p), this rule satisfies the following conditions:
• a passive formula cannot be the main formula of the linearity rule (□σ_p), and passive formulas entering the conclusion of the rule are not preserved in any premise;
• if the j-th main formula of the linearity rule (□σ_p) is an indexed formula □σjAj such that σj = i (but not σj = •i), and in the conclusion of this rule Γ contains a dependent occurrence of □i or contains an occurrence of □•k for some k, then in the premise Sj the operation σj+ is not applied to the formulas in Γ*.
Example 1. (a) Let S be an indexed sequent of the appropriate primary shape. Backward applying the rule (□σ_p) to S, we get two premises: the sequent S1 is the weak indexed premise, and S2 is the non-indexed premise.
(b) Let S be an indexed sequent in which □*¬□+(R ⊃ R) is the passive formula of the third type and □1□+(P ∨ Q) is the passive formula of the second type. Backward applying (□3+_p) to S, we get the strong indexed premise ¬□+(R ⊃ R), □*¬□+(R ⊃ R) → (R ⊃ R).
From the shape of the linearity rule (□σ_p) it follows that there is only one way to construct the premises of this rule. From this fact we get that the rule (□σ_p) is invertible. A primary sequent of the shape Γ1, Γ* → Γ2, Π+, where Γ1 ∩ Γ2 is empty and Γ* (Π+) is empty or consists of formulas of the shape □*M (passive formulas of the first and/or second type, correspondingly), is a final one. It is impossible to apply any rule to a final sequent.
A derivation V of a sequent S in the calculus G1S4.3 is a successful one if each branch of V ends with an axiom. In this case the sequent S is derivable in G1S4.3. A derivation V of S in the calculus G1S4.3 is an unsuccessful one if V contains a branch ending with a final sequent. In this case the sequent S is non-derivable.
Let us note that in the calculus G1S4.3 a derivation of an indexed sequent is constructed, and the indexed end-sequent S_ind of a derivation is obtained from an arbitrary sequent S which does not contain any indices or marks. Thus, the end-sequent S_ind does not contain the marked modalities □*, □+.
Since, using the invertibility of the rules of G1S4.3 and the technique from [4], we can prove that the calculi GS4.3 and G1S4.3 are equivalent, we get:
THEOREM 1. The calculus G1S4.3 is sound and complete.
Analogously as in [4], we can show that the complexity of each sequent decreases while constructing a backward derivation of any indexed sequent S in G1S4.3. Thus, backward proof search in G1S4.3 terminates.
Development and Evaluation of an Inter-professional Education Course at a Medical School in Korea
Background: Interprofessional collaborative practice (IPCP) is emphasized in medical care for patient safety. As patient care is provided by teams, interprofessional competence is required to ensure the quality and safety of care and should be taught as early as possible. In this study, we introduced a 2-week interprofessional education (IPE) curriculum and attempted to describe and evaluate its effectiveness among medical students. Methods: We developed a 2-week IPE course and gave it to third- or fourth-year medical students (n = 166) from 2018 to 2019. The curriculum was composed of interactive lectures, discussions, small-group discussions, and simulation and was given to diverse medical students. Students were asked to report their satisfaction with the IPE program, write a reflection paper, and complete the readiness for interprofessional learning scale (RIPLS) questionnaire before, immediately after, and 4 months after the curriculum. We also obtained 360° evaluations of the students by other health professionals 1 year after the training. Results: The IPE program changed students' attitudes about interprofessional learning from less favorable to more favorable. The 360° evaluation by nurses revealed that, compared with medical interns before IPE training, students became more favored as teammates (overall satisfaction with them as teammates increased from 3.1/5 to 3.4/5), and complaints from nurses about medical interns were significantly less frequent 1 year after the training. Conclusion: The IPE program was effective in preparing medical students for team-based collaborative practice, even though it was short and delivered only once in the curriculum. Further extension to other medical schools is recommended.
INTRODUCTION
The importance of interprofessional collaborative practice (IPCP) in health care is now being emphasized more than ever. IPCP has been shown to improve patient outcomes such as blood pressure, glucose, and cholesterol control, and thus to reduce mortality. This change in health care delivery accelerated the introduction of interprofessional education (IPE) courses for physicians, nurses, pharmacists, and social workers.1-5 IPE is a collaborative approach for teaching and learning that fosters teamwork among two or more health-care professionals from different educational backgrounds.6-8 If students from different health-care professions learn together, they might be prepared to collaborate more efficiently and effectively in practice.9-11 However, many undergraduate students of medicine, nursing, and other professions still have only limited opportunities to learn how to collaborate with other members of the health-care team and continue to be educated in isolation.12,13 Recently, in countries and regions such as Canada, Australia, the United States, and Europe, many schools for the health professions and academic health centers have made great efforts to implement IPE. The accreditation bodies for health science programs in Canada and the United States require the inclusion of IPE in curricula.14-16 Although a growing body of research has described various types of IPE programs, with diverse curriculum content, course durations, participating groups, and outcomes, and studies have provided evidence of a long-term impact of IPE on IPCP,17-19 there is limited experience with IPE programs in Korea.
In fact, although an IPE program with active interactions among various health professionals would be ideal, implementing one faces various practical limitations. This study thus aimed to develop an IPE program that can be implemented in medical schools that hesitate to introduce IPE due to such obstacles. In this study, we describe the process of developing and implementing an IPE program and evaluate the IPE curriculum based on Kirkpatrick's model.
METHODS
We developed and implemented an IPE curriculum for third- and fourth-year medical students before they started clinical practice.
The curriculum development and implementation
In early 2017, we developed an IPE curriculum to prepare health professionals for IPCP. We focused on third- or fourth-year medical students because we believe that students should learn interprofessional collaboration and communication during their clinical clerkships and before working as medical interns, as team members providing health care.
We invited experts from various fields, including medical education, internal medicine, emergency medicine, and nursing, to constitute an organizing committee for the IPE curriculum. Through consecutive meetings, discussions, and a workshop, we came to an agreement on the core values and desired course outcomes for IPE (Supplementary Data 1). The 2-week IPE curriculum was designed based on the core competencies for IPCP guidelines (Tables 1 and 2).20 These topics were taught using appropriate teaching methods such as interactive lectures, discussions, shadowing, small-group discussions, simulation, and role playing (Tables 1 and 2).
To assess student performance, we used multiple approaches including essays, performance in role play, video clips, and attitude toward participation.
Subjects
The subjects of the study were third- or fourth-year medical students at the Chung-Ang University College of Medicine (Republic of Korea) from 2018 to 2019. In its first year, we launched the IPE program for fourth-year medical students; we subsequently switched it to the third year, thinking that earlier exposure to IPE is needed before entering clinical clerkship. In the first year of implementing IPE, the main curriculum participants were medical students only (fourth-year medical students, n = 72). In 2019, the second year, we added simulation sessions for medical and nursing students: third-year medical students (n = 94) and fourth-year nursing students. Seventy-five fourth-year nursing students from the Sung-Shin University College of Nursing participated in this class.
Outcome measures
For the IPE course evaluation, we used Kirkpatrick's educational outcome model, evaluating the participants' reactions, modification of attitudes and perceptions, acquisition of knowledge and skills, behavioral change, and change in organizational practice. Level 1 - Reaction: After the training, we asked all students to describe their satisfaction with the IPE program. We asked each of them to write a reflection paper and to make a video clip describing their experiences, lessons learned, and feelings during the program.
Level 2 - Modification of attitudes/perceptions and acquisition of knowledge/skills: To analyze students' perceptual changes related to readiness for interprofessional learning, we asked all students to complete the 19-item readiness for interprofessional learning scale (RIPLS) questionnaire (see Table 3 for items) before, immediately after, and 4 months after the training. The RIPLS is a commonly used tool for evaluating students' attitudes and perceptions regarding IPE that was developed by Parsell and Bligh,21 and it consists of 19 questions and 4 subscales: teamwork and collaboration, negative professional identity, positive professional identity, and roles and responsibilities.22-24 In order to help students understand it, the questionnaire was written in both Korean and English. Students gave responses to the statements using a five-point scale (5 = strongly agree; 1 = strongly disagree).
Levels 3 and 4 - Behavioral change, change in organizational practice, and benefit to patients:
By the end of the medical internships of the medical graduates who first participated in the IPE program, we obtained 360° evaluations by diverse health professionals one year after the training. During the 1 year of their internship, we repeatedly received feedback from senior doctors and nurses. At the end of each year, nurses scored their satisfaction with interns as team members using a 5-point Likert scale (5 = very satisfied, 1 = very dissatisfied), and we compared satisfaction scores before and after IPE training. As for improvement of patient outcomes and benefits to patients, we are planning a long-term follow-up comparative investigation to see if there was any change in patient outcomes after the IPE training.
Data analysis
Statistical analysis was performed using the SPSS (version 23) statistical package (SPSS, Chicago, IL, USA). Changes in RIPLS scores after the IPE program were analyzed using a paired t-test, and overall satisfaction scores with working with interns before and after the training were also compared. P values of < 0.05 were taken to indicate statistically significant differences.
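As a sketch of the paired comparison described above (not the SPSS workflow used by the authors), the same test can be run in Python with SciPy; the per-student scores below are placeholders, not the study's data.

import numpy as np
from scipy.stats import ttest_rel

# Placeholder per-student RIPLS scores before and after the program.
before = np.array([3.8, 4.0, 3.5, 4.2, 3.9, 3.6])
after = np.array([4.1, 4.3, 3.9, 4.4, 4.0, 3.8])

t, p = ttest_rel(after, before)
print(f"t = {t:.2f}, p = {p:.4f}")  # significant if p < 0.05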
Ethics statement
The Seoul National University College of Medicine Institutional Review Board (IRB) provided study approval and waived the requirement for written informed consent (IRB No. 2003-055-1108).
RESULTS

Participant satisfaction with the education program
A total of 166 medical students in 2018 and 2019 (72 and 94, respectively) and 75 nursing students in 2019 participated in the curriculum. The reflection papers showed that students had learned and felt a lot during the training. The video clips that the students made in small groups showed that they also had a chance to think deeply, considering different perspectives.
At the end of the program, students were asked to give descriptive feedback about the curriculum. Most participants answered that they were fully satisfied with the program. They said that shadowing and interaction with other professionals helped them to understand and acknowledge other health professionals in the hospital and that they could understand common situations of conflict among diverse health professionals. Comments about the educational program included the following:
- I'm glad I could participate in this curriculum before my clinical clerkship.
- I was surprised to see that there are so many different jobs in the hospital that I had not heard of before.
- It was only a few days' program, but as I toured various facilities in wards and hospitals, I realized that hospitals could never be run by doctors alone.
- It was a good chance for me to see the system of the hospital as a whole before my clinical clerkship.
- I really enjoyed shadowing. After visiting various departments and experiencing other jobs indirectly through friends' presentations, I could feel that so many people were working together in the hospital. When I asked my seniors who are currently in practice, they said they didn't know there was such a department in the hospital after a year of practice.
- I now understand that the hospital needs a wide variety of other jobs besides doctor or nurse to function.
- It was an opportunity to feel how hard nurses were working and realize that patient care is not something that doctors can do without their cooperation.
- The simulation session with nursing students was so fun, and I couldn't help but admire the professionalism of the nursing students.
Reliability of the bilingual English-Korean version of the RIPLS
The internal consistency of the RIPLS was overall good (α = 0.849). Cronbach's α values estimating the internal consistency of the four factors 'teamwork and collaboration,' 'negative professional identity,' 'positive professional identity,' and 'roles and responsibilities' were 0.819, 0.710, 0.828, and 0.371, respectively.
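Cronbach's α as reported here can be computed directly from the item-score matrix; a minimal numpy sketch follows, on placeholder data (random, independent 1-5 answers, so the printed value will be near 0, unlike real scale data).

import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, n_items) matrix of scale scores.
    # alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Placeholder: 19 RIPLS items for 166 respondents on a 1-5 scale.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(166, 19))
print(cronbach_alpha(scores))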
Students' perceptual change following the IPE program
We used the RIPLS to survey students to find out whether the IPE course had influenced their attitudes or perceptions about interprofessional learning and collaborative care. All 166 students (72 in 2018 and 94 in 2019) answered the RIPLS questionnaire on the first day of the course, at the completion of the planned curriculum, and again 4 months later.
In 2018, there was no statistically significant change in RIPLS scores immediately after the training (Table 3). But 4 months after the IPE program, students' RIPLS scores had changed to be significantly more favorable to interprofessional learning (IPL). In 2019, by contrast, all RIPLS scores changed to be significantly more favorable toward IPL immediately after the students completed the IPE program, and the changed scores remained valid 4 months later (Table 3).
When we compared the 2018 with the 2019 RIPLS scores, there were no differences in the initial scores (before the IPE program). Immediately after the program, the average RIPLS score in 2018 was significantly less favorable to IPL compared with 2019, but 4 months after the program, most of these gaps had diminished (Supplementary Table 1).
The curriculum in 2019 was not significantly different from 2018 in terms of learning objectives or learning activities, but the order of lectures and practice was changed slightly and more participatory practice was reinforced. Above all, the biggest difference between the 2018 and 2019 curricula was the participation of nursing students and interaction with them during a simulation. The direct interaction with nursing students might have influenced medical students' attitudes toward IPL. Considering that there were no obvious differences between 2018 and 2019 in the responses before the training, direct interaction between medical and nursing students might have caused the difference.
Behavioral change one year after the IPE curriculum
The next outcome measurement was behavioral change, the third level of Kirkpatrick's model. We followed up on the IPCP-related performance of students who had participated in the IPE curriculum in 2018 and worked as medical interns in 2019. An assessment by nurses was carried out as part of a 360° evaluation. The number of complaints related to interns decreased significantly compared with the prior 2 consecutive years, from 34 cases per year to 17 cases per year. And the overall satisfaction score with working with interns improved significantly after IPE training, from 3.1 out of 5 points (799 of 975 nurses answered) to 3.4 out of 5 points (778 of 982 nurses answered) (P < 0.05).
DISCUSSION
In this study, we introduced an IPE curriculum for training prior to the medical internship and found that students' attitudes toward IPCP improved after participating in the 2-week IPE program. Their more positive attitudes toward IPL persisted at 4 months and at 1 year after the training.
In spite of the significance of IPCP and the recognized need for IPE, there are some obstacles to developing and sustaining IPE. Common obstacles include institutional leadership, physical distance between different disciplines' institutions, student diversity, and preexisting diverse curricula for different health professionals. The IPE concept itself and the lack of an accredited, efficient teaching method are also common barriers.25-27 We intended to create an interactive IPE program in which students in diverse health professions would learn about, from, and with each other, but it was difficult to gather them due to the geographically distant locations of their different disciplines' institutions and their diverse undergraduate curricula. As the curriculum for undergraduate students was already overwhelming, making space for IPE was not easy. In spite of these difficulties, we did start a 2-week IPE program and were able to add a simulation session with nursing students in the second year of the program.
As there is no gold standard for teaching interprofessional skills, we made an effort to match learning activities to each objective. A recently published review paper reported that didactics, small-group discussion, patient case analysis, simulation, and shadowing are the major educational strategies currently available for IPE.28 We adopted various educational tools for IPE and focused on experiential learning because we thought IPE skills could be acquired through experience. In our curriculum, we minimized one-way teaching such as lectures and maximized interactive learning, self-directed learning, and diverse experiences through which students could discover the importance of IPE and of interaction with other professionals. Among the diverse learning activities, a majority of students rated the simulation session with nursing students as the most satisfactory class, and the nursing students were also satisfied with it. We concluded that real interaction with diverse health professionals is the most effective way to learn IPCP. The more favorable change in students' attitudes toward IPCP at the completion of the training in the second year compared with the first year might have been caused by their direct interaction with nursing students in the second year. Besides adding the simulation session with nursing students, in that year we also added various learning activities such as a team-building game and a communication game, while small-group discussions were the main activity in the first year. Insufficient interprofessional interaction in 2018 might be the cause of the insignificant RIPLS score change right after the IPE course, although the 2-week IPE training would still have influenced students' attitudes, resulting in changed RIPLS scores after 4 months.
A favorable view of IPE by hospital leadership is necessary for IPE implementation, not just for implementing IPCP in the hospital, but also for introducing an IPE curriculum in medical school. We need support from other health professionals and cooperative hospital leadership for shadowing programs and creating a collaborative culture in hospitals. Thus, successfully implementing IPE is a kind of IPCP process itself. However, IPE program leadership is the most important factor in IPE implementation and maintenance. Persuading the health care community of the importance of IPCP, appealing to other professions and hospital leadership to elicit cooperation, and operating IPE courses using various educational methods are possible only when the IPE program leader has a strong will and drive.
Interprofessional learning is known to help prepare students for team-based practice, but the appropriate time for the intervention is not well understood. In our study, we introduced IPE in the third or fourth year of medical school, before the medical internship. Traditionally, curricula for doctors and medical students have focused more on clinical skills such as the diagnosis or treatment of disease and paid little attention to teamwork, communication, or cooperation with other health professionals. Nowadays, however, these skills are emphasized even more than clinical skills and are known to be proven, effective ways of improving patient outcomes. Therefore, all undergraduate medical students should acquire the competence needed to be good team players.29-33 The critical period for IPE might be before students work with other professionals, because teams do not function well without practice. An IPE program at the prelicensure level has been shown to produce positive outcomes in patient satisfaction, collaborative team behavior, and reduced medical error.34 Hence, gradual implementation of IPE is recommended for undergraduate students.
IPE's learning outcomes are not simple knowledge or skills, but contextual knowledge and performance skills in collaboration among different professions, so the curriculum should be designed not only at the individual level but also at the organizational level, and an interorganizational learning process should be arranged. Our results support these lessons. We included interactions with other professionals through shadowing in the first year of IPE, when medical students were the only participants in the program, and then added a multiprofession simulation session in the second year. Students' satisfaction increased in the second year, when direct interaction was present.
Despite debate about the validity and reliability of the RIPLS, a number of studies have reported that RIPLS is a reliable and valid tool. 35,36 Our results revealed weak internal consistency (0.371) in the roles and responsibilities subscale, which has already been reported. 22,37 We used a bilingual version of the RIPLS questionnaire because we did not have previous data from a Korean version of the RIPLS and because our medical students are proficient in English.
Our study has several limitations. First, we started the IPE program at a small institution and the number of participants was limited, so it is hard to generalize our results to other institutions. Second, nursing students participated only in the simulation session and were not included in the rest of the IPE training. The curriculum was designed mainly by medical doctors for medical students. Because we still do not have a curriculum that involves multiple professions from the start, we need to develop and apply an IPE curriculum that includes diverse professions throughout the program. Third, the validity of measuring students' attitudinal changes using the RIPLS is still debated, and multi-faceted evaluation using various tools could be a better option for assessing students' attitudinal change. Fourth, as the internship period began one year after the IPE course and various factors besides IPE training could affect an intern's performance, it is difficult to conclude that the improvement in interns' performance as teammates was achieved by IPE training alone. And last, although the final outcome measure for the IPE program is improvement in patient outcomes and benefits to patients, the fourth level of Kirkpatrick's model, we do not have data regarding any change in patient outcomes or benefits to patients. A long-term follow-up study could provide such data.
In conclusion, the IPE curriculum was satisfactory in that it changed students' attitudes to be more favorable to IPCP and these changes were maintained until the end of the medical internship. This study shows that even a single, short IPE program with limited interaction with other health professionals can change medical students' attitudes about IPCP. We also found that students enjoy interaction with students in other health care professions and that this interaction increases IPE effectiveness. The program should be extended to more diverse groups of health professionals and students.
Curcumin anti‐tumor effects on endometrial cancer with focus on its molecular targets
Curcumin is extracted from turmeric and shows a variety of properties that make it a useful agent for treating diseases and targeting different biological mechanisms, including apoptosis, angiogenesis, inflammation, and oxidative stress. This phenolic compound is safe even at high doses but has poor bioavailability. Endometrial cancer (EC) is one of the most prevalent gynecological malignancies, and its incidence rates are increasing. Meanwhile, the onset age of EC has decreased in the past few years. Besides, EC does not show a favorable prognosis, particularly at advanced stages. Based on this information, discovering new approaches or enhancing the available ones is required to provide better care for EC patients. In this review, we cover studies concerned with the anti-tumor effects of curcumin on EC. We focus on molecular mechanisms that are targeted by curcumin treatment in different processes of cancer development and progression, such as apoptosis, inflammation, and migration. Furthermore, we present the role of curcumin in targeting some microRNAs (miRNAs) that may play a role in EC.
Introduction
Curcumin is a phenolic antioxidant extracted from turmeric, which is frequently used as a spice and has a yellow color [1,2]. The rhizome of the herb Curcuma longa is the origin of turmeric, which contains the protein turmerin as well as the curcumin analogs demethoxycurcumin and bisdemethoxycurcumin. The chemical name of curcumin is 1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione and its empirical formula is C21H20O6 [3,4]. Since curcumin and its two analogs have the same molecular and biological characteristics, it is suggested that bisdemethoxycurcumin converts to demethoxycurcumin, which in turn transforms into curcumin. While curcumin plays a variety of beneficial roles, studies on animals and humans have concluded that it is a safe agent even at high doses [5]. However, poor bioavailability has been attributed to curcumin [6,7]. Its peak plasma concentration is observed 1 to 2 h after consumption of a single oral dose of 4000 mg or higher [6]. Curcumin acts as an anti-oxidative, antimicrobial, anti-malarial, anti-HIV, and anti-angiogenic agent. Furthermore, it can be used in the treatment of inflammation, skin wounds, and neurodegenerative diseases [8][9][10].
Endometrial cancer (EC) is one of the most prevalent gynecological malignancies worldwide, and its incidence rates are increasing [11,12]. The mortality rate of this disease is growing more in older women than in young women [11]. Moreover, the onset age of EC has decreased compared to past years [11]. EC can be categorized into two types: Type I and Type II [13]. Each of these types exhibits unique mutational profiles and clinical features. Primarily, tumors with endometrioid histological characteristics are considered Type I tumors, whereas Type II tumors show non-endometrioid histological features. Loss of PTEN has been reported in 83% and 11% of Type I and Type II tumors, respectively. TP53 mutations have been reported in 10-20% and 90% of Type I and Type II tumors, respectively [14]. Mutations of PIK3CA have been found in 20-40% of Type I tumors, which is higher than the 20% found in Type II tumors. However, amplifications of this gene occur in 15% of Type I tumors, which is lower than the 50% found in Type II tumors. Furthermore, PIK3R1 mutations are more common in Type I tumors (43%) than in Type II tumors (12%) [15].
The majority of patients with EC have abnormal uterine bleeding at the onset of their disease. Endometrial biopsy or operative dilation and curettage (D&C) is used for diagnosis, and in 99% of cases these methods lead to a histopathologic diagnosis [16]. Transvaginal ultrasound is another diagnostic tool [17]. EC prognosis is associated with multiple factors, such as histological subtype, grade, and disease stage [18]. Besides, patient survival and lymph node metastasis are significantly correlated with histological grade and the depth of myometrial invasion [19]. Studies have also identified several biomarkers that can be useful in predicting the outcome of the disease, including serum amyloid A, CA-125, CA 15-3, CA 19-9, survivin, c-erbB2, cyclooxygenase, and L1 cell adhesion molecule [20]. Older age, family history of EC, early menarche, late menopause, obesity, exposure to radiation, and infertility are some of the risk factors for EC [21]. The chance of developing EC is lower in African or Asian women than in Caucasians [21]. However, white women have a better prognosis than black women at the same stage of the disease [22]. Although more than 70% of cases are diagnosed at an early stage, 28% of EC cases show regional or distant metastasis [23]. Particularly at advanced stages, this disease does not show a favorable prognosis [23]. There are different treatment options for advanced and recurrent EC, such as chemotherapy, radiation, hormone therapy, and surgery. However, common treatments are not capable of enhancing overall survival rates [24]. Also, none of the therapeutic methods is helpful in the 15% of women with an aggressive phenotype of EC [25]. Thus, discovering new approaches or enhancing the available ones is required to provide better care for EC patients. In this review, we cover studies concerned with the anti-tumor effects of curcumin on EC. We focus on molecular mechanisms that are targeted by curcumin treatment in different processes of cancer development and progression, such as apoptosis, inflammation, and migration. Furthermore, we present the role of curcumin in targeting some microRNAs (miRNAs) that may play a role in EC.
Curcumin and cancer
Since curcumin interacts with several intracellular and extracellular molecules that are involved in various cancers, it is a potential candidate for suppressing cancer progression [26]. Curcumin acts on processes involved in cancer, including apoptosis, angiogenesis, inflammation, and oxidative stress. Thus, it can serve as a beneficial agent in the prevention, treatment, and symptom control of cancers such as breast cancer, colorectal cancer, prostate cancer, melanoma, and lung cancer [26][27][28][29][30][31][32]. The complicated chemistry of curcumin is one reason for its diverse effects. Besides, curcumin is capable of modulating several signaling pathways of survival, cellular protection, metastasis, and angiogenesis [33]. While curcumin possibly enhances chemotherapeutic and chemo-preventive impacts on cancer cells, it is safe and shows almost no side effects [34]. This selectivity arises because curcumin interacts differently with normal tissues and cancer cells [34]. Higher cellular uptake, lower glutathione levels, and active NF-κB expression in cancer cells are the reasons curcumin affects them differently from normal cells [34].
Several studies have addressed the role of curcumin with a focus on its molecular targets and clinical features. For instance, curcumin's effects on gastric cancer have been investigated, and it was found that curcumin treatment leads to apoptosis and autophagy of the cells as well as inhibition of the PI3K/Akt/mTOR signaling pathway [35]. In an open-label phase I trial, the effects of co-treatment with curcumin and docetaxel, a chemotherapeutic agent, in breast cancer patients were evaluated to determine the maximal tolerable dose of the combination of dose-escalating curcumin and the standard dose of docetaxel [36]. In another study conducted on EC, curcumin treatment was found to suppress Bcl-2 expression [37]. Co-treatment with letrozole and curcumin was reported to cause an increased inhibitory effect on tumor progression [37]. Moreover, both letrozole and curcumin can induce apoptosis [37]. Sun et al. [38] also concluded that curcumin can downregulate MMP-2 besides inhibiting the proliferation of EC cells.
Curcumin induces apoptosis in EC cells
Apoptosis, or programmed cell death, is an energy-dependent mechanism whose deregulation is a hallmark of cancer [39,40], even though apoptosis is necessary for vital functions of the body, including the turnover of normal cells, hormone-dependent atrophy, and chemical-induced cell death [40]. Changes and abnormalities in apoptosis may cause tumor resistance to treatments, in addition to their undeniable roles in tumor progression and development [39]. Moreover, a large number of anticancer drugs act by targeting apoptotic signaling pathways to initiate cancer cell death [39].
One of the processes by which curcumin exerts its anti-tumor activity is apoptosis. Curcumin leads to apoptosis in multiple cancers through various mechanisms. For instance, it induces apoptosis in castration-resistant prostate cancer by iron chelation [41]. In melanoma cancer cells, producing reactive oxygen species is another way that curcumin induces apoptosis [42]. Feng et al. [43] observed that curcumin led to lower expression levels of the androgen receptor and beta-catenin in EC cell lines. Curcumin engages the Wnt signaling pathway to downregulate the androgen receptor, which results in the induction of apoptosis and inhibition of proliferation of EC cells [43]. In human endometrial adenocarcinoma HEC-1-A cells, it was demonstrated that high expression levels of Ets-1, a proto-oncogene, led to an increase in an anti-apoptotic protein (Bcl-2), and this up-regulation was reduced by the administration of curcumin [44]. Moreover, curcumin induced apoptosis and DNA degradation in this cell line [44].
An investigation using curcumin encapsulated in liposomes found that it causes alterations in nuclear morphology in EC cell lines (Ishikawa and HEC-1), including greater apoptotic chromatin condensation and DNA fragmentation [45]. Furthermore, the results showed that liposomal curcumin considerably induces apoptosis and inhibits cell proliferation, as well as inhibiting the expression of NF-κB, caspase-3, and MMP-9 [45]. Kumar et al. [46] reported that curcumin-loaded mixed micelles increased the apoptotic population to 67.97%, from 32.56% with free curcumin. After treatment with curcumin, chromatin condensation, pyknosis of the nucleolus, and apoptotic bodies, which are typical characteristics of apoptosis, were observed [46]. With curcumin-loaded mixed micelles, expression levels of the anti-apoptotic factors survivin, Bcl-2, PARP, and Mdr showed a significant reduction [46]. Furthermore, Kumar et al. [46] indicated that curcumin-loaded mixed micelles can cause cell cycle arrest at the G0/G1 phase and modulate the levels of TNF-α, IL-6, and IL-10. Another study found that treating EC cell lines (Ishikawa and RL95-2) with curcumin (40-50 mM) leads to a 60-80% reduction in cell viability [47]. While the inactive caspases of cancer cells require protein cleavage in order to become active and take part in apoptosis, cells treated with curcumin demonstrated cleavage to active caspase-3 [47,48]. Curcumin-treated cells expressed less IL-6, which induces phosphorylation of STAT-3 [47]. Reduced STAT-3 phosphorylation was linked with reduced cell viability and enhanced caspase-3 cleavage [47]. Also, curcumin treatment resulted in the inhibition of JAK-STAT signaling as well as increased SOCS-3, leading to reduced STAT-3 phosphorylation and cell viability [47].
Curcumin inhibitory effect on cell migration and invasion
In vivo, cancer cells migrate through progressive degradation of the surrounding extracellular matrix, creating migration tracks for themselves [49]. Cell migration is a critical process in cancer metastasis [50]. Several studies have shown that two closely linked processes, invasive growth and metastasis, are principal signs of tumor progression [51]. Severe organ failure is a result of massive metastatic lesions and is possibly followed by the patient's death [51]. Remarkably, a significant proportion of solid tumor mortality occurs because of cancer metastases and the inability to treat them [49].
Curcumin has been shown to reduce the movement and invasion of EC cell lines (HEC-1B and Ishikawa) [52]. Matrix metalloproteinases (MMPs) have an undeniable function in a variety of tumor processes such as growth, invasion, and metastasis, as well as events occurring in early carcinogenesis [53]. MMP-2, MMP-9, and proteinase activity are decreased by curcumin treatment [52]. Western blot assays have shown that curcumin causes a significant reduction in phosphorylated extracellular signal-regulated kinase (ERK) 1/2 [52]. Also, co-treatment of HEC-1B cells with curcumin and the ERK inhibitor U0126 leads to suppression of cell invasiveness as well as an enhanced decrease in the expression of MMP-2 and MMP-9 [52]. Sirohi et al. [54] found that curcumin inhibits cancer cell proliferation and tumor growth in Ishikawa cells both in vivo and in vitro. A scratch wound assay showed that curcumin inhibits the migration of Ishikawa and HEC-1B cells [54]. Besides inducing apoptosis mediated by reactive oxygen species, curcumin up-regulates Slit-2 expression in Ishikawa, HEC-1B, and primary endometrial cancer cells [54]. Meanwhile, it down-regulates stromal cell-derived factor-1 (SDF-1) and CXCR4, which inhibits the expression of MMP-2 and MMP-9; therefore, curcumin reduces cell migration [54].
Effects of curcumin on miRNAs involved in EC
MicroRNAs (miRNAs), which are small, single-stranded, non-coding RNAs, are present in the majority of eukaryotes, including humans [55]. Studies suggest that miRNAs regulate at least 30% of protein-coding genes [55]. By binding to target mRNA, a miRNA inhibits the production of proteins [55]. Since associations between different human diseases and miRNAs are gradually being uncovered, the development of new therapeutics is focusing on targeting miRNAs directly [56]. Significant alterations in miRNA expression have been observed in various tumor tissues and cancer cell lines, related to multiple biological aspects such as proliferation, differentiation, and survival [57]. In several disorders and medical conditions, including cancer, dysregulated miRNA serves as a biomarker and acts as an oncogene or a tumor suppressor [57].
MiR-34a is one of the miRNAs that are involved in EC and exert several roles there [58][59][60][61]. For instance, miR-34a is down-regulated in EC in comparison with normal tissue, and this down-regulation is linked with a poorer prognosis [60]. MiR-34a modulates the expression of MMSET, which is suggested to be a prometastatic agent; thus, it reduces the invasion of EC cells [60]. Through down-regulating Notch1, miR-34a inhibits the proliferation, migration, invasion, and EMT-associated phenotypes of EC cells [61]. The effects of curcumin on miR-34a have been investigated in multiple cancers, including gastric cancer, colorectal cancer, prostate cancer, and breast cancer [62][63][64][65]. While curcumin has been shown to induce apoptosis and inhibit proliferation in gastric cancer, it is suggested that these effects may be associated with its ability to increase the expression level of miR-34a, which can affect Bcl-2, CDK4, and cyclin D1 [63]. In another study, curcumin led to an increase in miR-34a expression as well as down-regulation of β-catenin and c-myc [65]. Furthermore, curcumin's anti-proliferative effects were suppressed and the β-catenin/c-myc axis was activated by inhibiting miR-34a [65].
In EC, miR-21 expression is upregulated, and a relation has been found between this miRNA and maspin, a tumor suppressor gene [66]. In endometrioid EC cells, upregulation of miR-21 has been observed to result in a significant reduction in the expression level of phosphatase and tensin homolog deleted from chromosome 10 (PTEN), a tumor-suppressor protein [67]. Curcumin can reduce miR-21 [68]. Furthermore, it is suggested that curcumin exerts multiple anti-tumor effects through miR-21, including effects on proliferation, apoptosis, metastasis, and resistance to anti-cancer drugs [68]. Data show that curcumin treatment decreases both the activity and the expression of the miR-21 promoter by suppressing the binding of activator protein 1 [69]. Besides, curcumin induces a target of miR-21, the tumor suppressor programmed cell death protein 4 (Pdcd4) [69].
Over-expression of forkhead box protein O1 (FOXO1), a down-regulated tumor suppressor in EC, has been observed to inhibit the proliferation of Ishikawa cells as well as to suppress cell migration and invasion [70,71]. It has been shown that FOXO1 is significantly decreased by several miRNAs in HEC-1B cells, including miR-9, miR-27, miR-96, miR-153, miR-182, miR-183, and miR-186 [71]. One investigation found that curcumin suppressed cell proliferation through miR-9 up-regulation and inhibited Wnt/β-catenin signaling in oral squamous cell carcinoma [72]. In a study on an ovarian cancer cell line, curcumin treatment led to a significant increase in miR-9 [73]. Overexpression of miR-9 increased caspase-3 cleavage and enhanced apoptosis [73]. Moreover, Akt and FOXO1 phosphorylation was reduced by both curcumin and overexpression of miR-9 [73]. Therefore, it was demonstrated that curcumin's anti-tumor effects on this cancer are exerted mostly through miR-9 up-regulation [73]. Also, another study suggests that curcumin may affect expression levels of miR-183 [74].
Curcumin anti-inflammatory roles in EC
Local and chronic inflammation can be a predisposing factor for the development of cancer, since it leads to the generation of free radicals as well as increased COX-2 and PGE2; therefore, it may cause DNA damage and cell proliferation [75]. Furthermore, chronic inflammation may disrupt regulation of the NF-κB pathway, resulting in apoptosis suppression, inhibition of cell cycle arrest, and induction of pro-inflammatory cytokines [75]. Events that occur in the menstrual cycle are similar to inflammation mechanisms [75]. On the other hand, one of the processes linking obesity to a higher risk of EC is inflammation [76]. Adipose tissue secretes several pro- and anti-inflammatory cytokines, including tumor necrosis factor (TNF)-α, leptin, interleukin (IL)-6, and C-reactive protein (CRP) on the pro-inflammatory side, and adiponectin on the anti-inflammatory side [76]. Besides, obesity leads to higher pro-inflammatory markers and lower anti-inflammatory markers and enhances a state of chronic low-grade inflammation [76].
Curcumin has multiple effects on inflammation and obesity-associated inflammatory conditions that could be useful in treating EC. Curcumin modulates TNF-α expression by affecting the methylation status of the TNF-α promoter [77]. Curcumin mitigates toxic effects in adipocytes, since it decreases the secretion of inflammatory cytokines, which leads to a protective impact under hypoxia [77]. Also, the TNF-α, COX-2, STAT, cyclin D1, and NF-κB signaling pathways can be inhibited by curcumin [77]. Curcumin has been found to suppress obesity-associated inflammation, besides its beneficial effects on systemic inflammation, hyperglycemia, and insulin resistance [78]. In obesity, this anti-inflammatory agent acts on white adipose tissue (WAT) and regulates different targets, including inhibiting low-grade chronic inflammation, increasing anti-oxidant responses, and decreasing the formation of adipose tissue [78].
Conclusions
The mortality rates of EC are increasing in older patients, and its incidence is growing in the general population. A considerable number of patients show regional or distant metastasis, even though more than 70% of cases are diagnosed at early stages. Therefore, identifying potential therapeutic targets for treating EC is a critical step to enhance patients' survival and quality of life. Curcumin has complex chemistry and is capable of targeting several signaling pathways. Moreover, it can interact with several intracellular and extracellular molecules. These features underlie curcumin's anti-tumor effects on various cancer cells and make it useful at different stages, including the prevention, treatment, and symptom control of cancers. Several studies have addressed the anti-tumor effects of curcumin in the treatment of EC (Fig. 1). Curcumin plays these roles by engaging various targets, such as signaling pathways, proteins, genes, and RNAs. Induction of apoptosis, reduced inflammation, and inhibition of cell migration are the results of curcumin treatment. Furthermore, there are some miRNAs whose effects on EC have been identified and on which curcumin has been observed to act, albeit in other cancers. However, to the best of our knowledge, studies on curcumin's effects on EC, especially at the clinical level, are limited. Altogether, curcumin should be considered as a therapeutic candidate in EC, and its anti-tumor effects on this cancer deserve further exploration.
Abbreviations: EC: Endometrial cancer; miRNA: MicroRNA; MMP: Matrix metalloproteinase.
Fig. 1 Schematic representation of curcumin targets that are useful for treating EC. As shown in this figure, curcumin targets a variety of molecules and signaling pathways that lead to its anti-tumor effects, including induction of apoptosis, suppression of inflammation, and prevention of migration. Furthermore, some microRNAs that are targeted by curcumin may be useful in the treatment of EC.
Practice of Behaviour Modification Techniques by Pre-Service Teacher Interns of Colleges of Education in Ghana
Since the primary aim of every teacher is to create and maintain a positive learning environment for learners, it is logical to admit that the teacher must have in-depth knowledge and skills to address disciplinary issues and educational needs, especially in the classroom. It was on account of this that this study was undertaken to assess the level of practice of behaviour modification techniques in the classroom by pre-service teacher interns, using a cross-sectional survey design. Data collected from 360 respondents at Colleges of Education in Ghana using a three-point Likert-type scale questionnaire reveal that the pre-service teacher interns do not regularly practice most of the behaviour modification techniques expected to be used for effective classroom management during the internship programme. The findings further showed no significant differences between male and female respondents in their practice of behaviour modification techniques. In like manner, it was largely evident that the programme of study of the pre-service teacher interns had no significant effect on their practice of behaviour modification techniques. It is in the light of the above findings that this study makes helpful recommendations to strengthen teacher training institutions in addressing such deficiencies in the formation programme in Ghana.
Introduction
One fundamental issue in the field of education is the preparation and training of teachers, who are described "as the most significant resource in schools" and "are central to school improvement efforts" (OECD, 2005). For Vegas, Ganimian, and Jaimovich (2012), an education system is only as good as its teachers. At the same time, evidence from different education systems around the world also shows that the most important factor in determining how well children perform is the quality of teachers and teaching (The Schools White Paper, 2010). Way (2001) stressed that teachers are regarded as the agents of change for students and for schools. According to Way, one key factor in improving schools is fostering teacher development, which hones their craft, shapes school practices, and builds learning communities. Since teacher preparation programmes bear greatly on the teacher's ability to provide effective and efficient instructional services and to manage a class, pre-service teacher training programmes must focus on providing the requisite theoretical knowledge and practical experiences that would enable them to create the environment conducive to teaching and learning and also help them become successful teachers. This is particularly necessary because, all through their career, classroom discipline ranks foremost among the many and frequent issues teachers may have to confront.
Discipline, therefore, comes to the forefront with two broad objectives in the school environment: firstly, to ensure the safety of staff and students and, secondly, to create an atmosphere conducive to learning (Gaustard, 2005). To many students, discipline connotes punishment, pain and fear. Yet it possesses far greater merit than this perception suggests; it has more to do with the correction of undesirable behaviour at home, in the school or at any place (Narebe, 2013). In McIntosh (2013), The Cambridge Advanced Learner's Dictionary defines discipline as training that makes people more willing to obey or helps them to control themselves, often in the form of rules and regulations which, when broken or not adhered to, result in negative consequences in the form of punishment. Alhassan (2000) explains the concept of discipline as training that ensures that an individual develops orderly conduct, self-control as well as self-direction. For Were (2006), discipline is a system of guiding the individual to make rational decisions sensibly. It is also an action taken by grown-ups to help a child amend his or her behaviour. Discipline, therefore, forms part of moral education, which is significant in the development of the child's character (Were, 2006). Perkins (1969), on his part, defines discipline as the task of helping students to utilize their abilities, energies, and talents in ways that promote their development and learning. All of this may concord with the earlier educational doctrine of Beery (1917), who held that discipline was not punishment but the training of every power to the end that it may be controlled and used for personal good and social service. In the light of this appreciation, the disciplinarian was seen as one who helped each individual for whom they were responsible to bring those powers under control and to use them in such a way that they should become a useful member of society (Beery, 1917). Generally, traditional views of discipline applied to the classroom emphasize that teacher control of pupil behaviour is essential for learning (Neill, 1978, cited by Kohut & Dale, 1979). Nonetheless, global evidence shows that there is indiscipline in schools (Pro-Teacher, 2005; Reid, 2000), and according to Ngwokabuenui (2015), students' indiscipline constitutes a menace in all parts of the world where children's affairs are concerned. In the study of Curwin and Mendler (1988), 15% of students break classroom rules regularly, and if sufficient structure is not put in place to arrest the situation, these misguided individuals can disrupt the learning process of other students. In addition, a study of a sample of 479 teachers from preschool to eighth grade established that 48% of the teachers reported having three or more students in their classrooms exhibiting serious behavioural difficulties (ProTeacher, 2005); Shin and Koh (2008) also point out that 32% of 116 public high school teachers indicated that 25-50% of their students have behavioural challenges and are difficult to teach. Since indiscipline impinges on learning activities in the school environment, better management and solutions should be rolled out to arrest it. Accordingly, in recent years, behaviour modification has gained the attention of researchers in the field of education owing to its significant effects in improving children's behaviour through increasing desired behaviour and decreasing undesired behaviour (Eshun, 2016).
According to Alkhateeb and Alhadidi, who are cited in Al-Bustanji, Almakanin, Beirat and Bdour (2018), behaviour modification has important implications for teaching strategies and techniques when used with children, especially those with special needs, regardless of their disabilities. Indeed, while a well-managed classroom can provide an exciting and dynamic learning experience for everyone, its opposite can overwhelm teachers, rendering them "powerless" in dealing with behavioural issues in the classroom environment. Canter, as cited in Kakkad (2012), explains that, in the past, a simple stern look or warning was enough to shape up a classroom. Therefore, it is important to find a behaviour modification approach that suits the needs of both the teacher and the student. This directs attention to the content of teacher training programmes and their effectiveness in the classroom.
Although much research has been done on pre-service teachers' exposure to effective instructional management in their training, there still exists a significant gap between students' effective instructional management knowledge and the requirements necessary for teacher training. As a result, many pre-service teachers, even upon completion of a teacher education programme, are inadequately prepared to effectively manage student behaviour due to their lack of exposure to classroom management content (Shamina & Mumthas, 2018). Indeed, not much research has gone into this area of teacher formation, but a few available studies have examined the extent to which behaviour modification management content has been included in pre-service teacher preparation programmes (Stough, 2006). In their study of 19 tertiary institutions in the north-eastern USA, Wesley and Vocke (1992) underscored the fact that the majority of the programmes included instruction in classroom discipline. In the work of Blum (1994), over half of the programmes had units on classroom management, even though the units were not mandatory for 43% of enrolled students.
Certainly, there is the general belief that the training pre-service teachers receive contains insufficient classroom behaviour management content to equip them with the knowledge and skills needed to make them successful teachers. There is, therefore, a huge gap between pre-service teachers' training and their practice in the field (Al-Bustanji, Almakanin, Beirat, & Bdour, 2018).
Studies on the effect of teacher characteristics, such as gender and programme of study, on beliefs about classroom discipline suggest that there is no significant difference between males and females in their practice of behaviour modification, even though males are generally considered to do better than females (Bukhari, 2016). Other studies that considered the effect of programme of study on the practice of behaviour modification techniques by pre-service teachers indicated no general disparity in how students of colleges of education applied their knowledge of behaviour modification (Brace, 2017; Muller, 2015).
In Ghana, little research has gone into the practice of behaviour modification techniques by pre-service teachers. Available literature focuses on behaviour modification techniques used by teachers in service. Notable authors include Aponsem (2015), Eshun (2016), Ahiapko (2016) and Narebe (2013). Aponsem, for instance, studied the relationship between behaviour modification practices of teachers and pupils' attendance in the Eastern Region of Ghana and Eshun investigated behaviour modification strategies adopted by teachers in some selected schools in Ashanti Region. Also, Ahiapko carried out a study which looked into behaviour modification techniques adopted by Senior High School teachers in five districts in the Volta Region and Narebe, on his part, assessed the knowledge of teachers on behaviour modification strategies in Tamale, Ghana. However, this current work examined the practice of behaviour modification strategies by pre-service teachers in the classroom, seeking to determine the effect of gender and programme of study on the practice of behaviour modification strategies used by final year pre-service teachers on internship. It was on the basis of this that research questions were formulated.
Research Questions
This study was guided by the following research questions:
1) How do pre-service teacher interns of Colleges of Education rate their practice of behaviour modification techniques in the classroom?
2) How does gender affect the ratings of pre-service teacher interns' practice of behaviour modification techniques in the classroom?
3) How does programme of study affect the ratings of pre-service teacher interns' practice of behaviour modification techniques in the classroom?
Research Design
A cross-sectional survey design was used for this study. According to Neuman (2000), cross-sectional surveys are appropriate for situations where the data to be collected are about self-reported beliefs or behaviour. This design helps to collect data to make inferences about a population of interest at one point in time. Besides, it enables the researcher to collect data and compare many different variables at the same time without manipulations.
Population and Sample Size
Colleges of Education in the Ashanti Region of Ghana were considered for the study. This is because the Ashanti Region has nine (9) Colleges of Education.
Research Instrument
A Behaviour Modification Questionnaire (BMQ) developed by the researchers was used for the study. The questionnaire consisted of two (2) parts. The first part consisted of 5 items that dealt with the demographic data of the respondents, namely gender, programme of study and the name of the college of education. The second part elicited information to measure the pre-service teachers' level of practice of behaviour modification techniques in the classroom. It consisted of 16 items constructed on a three-point Likert scale with the responses: most of the time, some of the time and never. A pilot study was conducted to assess the validity and reliability (internal consistency) of the questionnaire to enhance its accuracy for the data collection. Participants for the pilot study were selected from Bechem College of Education in the Ahafo Region of Ghana. Cronbach's alpha, which is a measure of the reliability (internal consistency) of the instrument, was calculated as 0.70, which is considered acceptable in most social science research applications.
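For readers who wish to verify such a coefficient, the following minimal Python sketch computes Cronbach's alpha from an (n respondents x k items) matrix of Likert ratings. The pilot responses below are simulated stand-ins, since the original data are not published; with real, correlated responses the result should approach the reported 0.70.

import numpy as np

def cronbach_alpha(scores):
    # scores: (n_respondents, n_items) matrix of item ratings
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated pilot responses on the 16-item, three-point scale
# (1 = never, 2 = some of the time, 3 = most of the time)
rng = np.random.default_rng(0)
pilot_scores = rng.integers(1, 4, size=(40, 16))
print(round(cronbach_alpha(pilot_scores), 2))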
Data Collection Procedure
The data for the study were collected using the three-point Likert scale questionnaire at a single point in time, as indicated earlier. The questionnaires were administered by the researchers directly to the participants in March 2019. A total of 370 questionnaires were distributed, of which 360 were successfully completed and returned, representing a return rate of about 97%. The on-the-spot method of administration and retrieval was used to improve the return rate. All required ethical procedures were followed. Participants were made to indicate their willingness to participate in the study, and directives on the questionnaires ensured respondents' anonymity and confidentiality.
Data Analysis
The data were analysed using descriptive and inferential statistics. The statistical software used for the analyses was the Statistical Package for the Social Sciences (SPSS). The means and standard deviations of the ratings for each of the items were computed, and the means were compared to the theoretical mean rating (assuming a normal distribution of responses) to ascertain the respondents' perceptions of the indicators considered. Additionally, the effects of gender and programme of study on respondents' practice of classroom behaviour modification techniques were determined. An item-by-item t-test and analysis of variance (ANOVA) at the 5% level of significance were performed to establish possible significant differences in the respondents' ratings of the indicators of this study. p-values lower than 0.05 were deemed significant.
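A hedged sketch of this analysis pipeline is given below using Python's scipy.stats in place of SPSS. The rating vectors are simulated placeholders rather than the study's data, but the tests (an independent-samples t-test for gender and a one-way ANOVA across the five programmes, each at the 5% level) mirror the procedure described above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated ratings for one questionnaire item (1 = never ... 3 = most of the time)
male = rng.integers(1, 4, size=180)
female = rng.integers(1, 4, size=180)

# Item-by-item independent-samples t-test for a gender effect
t_stat, p_gender = stats.ttest_ind(male, female)
print("gender:", round(t_stat, 2), "p =", round(p_gender, 3),
      "significant:", p_gender < 0.05)

# One-way ANOVA across the five programmes of study
programmes = [rng.integers(1, 4, size=72) for _ in range(5)]
f_stat, p_prog = stats.f_oneway(*programmes)
print("programme:", round(f_stat, 2), "p =", round(p_prog, 3),
      "significant:", p_prog < 0.05)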
Ratings of Pre-Service Teacher Interns on Practices of Behaviour Modification Techniques
Effective teaching and learning cannot take place in poorly managed classrooms (Jones & Jones, 2012; Van de Grift, Van der Wal, & Torenbeek, 2011). Therefore, teachers are required to select and use the most appropriate classroom management strategies that will support and facilitate effective teaching and learning. This result indicates the ratings of pre-service teacher interns on their practice of behaviour modification techniques in the classroom after their internship programme. The means and standard deviations indicated in Table 1 were computed from the ratings of the respondents on the various indicators of their practice of behaviour modification techniques in the classroom, using a three-point Likert-scale questionnaire. The respondents rated the items as "most of the time = 3", "some of the time = 2" and "never = 1". The theoretical mean was 2. Thus, ratings above 2 were deemed to indicate a high extent of practice of behaviour modification techniques in the classroom during the internship programme. On the contrary, ratings below 2 were deemed to indicate a low extent of practice of behaviour modification techniques in the classroom during the internship programme. The mean ratings of the 360 respondents on their practice of behaviour modification techniques in the classroom ranged from 1.31 (SD = 1.51) to 2.43 (SD = 0.75). Further, for 7 out of the 16 items, the mean ratings of the respondents were greater than 2, which suggests that the pre-service teachers very often practised those behaviour modification techniques during their internship programme. However, for 9 of the items, the mean ratings of the respondents were less than 2, suggesting that they did not regularly practise those techniques during their internship programme. The results (in Table 1) also indicated that the overall mean rating of the respondents was 1.85, suggesting that, generally, the pre-service teacher interns seldom practised the behaviour modification techniques used in the classroom. Items that were rated below 2 were: "I motivate my students when I want to strengthen a behaviour", "I use a lot of reinforcement strategies so that students will enjoy my lessons", "I commend students for putting up good behaviour", "I do not cane students; I employ other forms of behaviour modification", "I demonstrate the positive behaviour that I want students to practice", "I reward good behaviour with tangible items", "I reward good behaviour with praise", "I give special privileges to students for good behaviour", and "I verbally reprimand students for inappropriate behaviour". In this study, the pre-service teacher interns indicated that they did cane learners who put up disruptive behaviour (mean rating 2.17), even though corporal punishment is prohibited by the instructional norms of the Ghana Education Service. Boakye (2001) and Edumadze (2004) hold that Ghanaian teachers still use corporal and other forms of subversive punishment because these facilitate teaching and learning. Additionally, the pre-service teacher interns reported using time-out to manage disruptive behaviour in the classroom; they judged it a non-violent measure of disciplining unruly behaviour with minimal damaging effects on children. This is in line with Main and Hammond's study (2008), which also underscored the fact that pre-service teachers predominantly reported observing time-out, both in class and out of class, as the most frequently used approach.
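The high/low classification described above reduces to comparing each item's mean rating with the theoretical mean of 2. A minimal sketch, again on simulated stand-in data rather than the study's 360 x 16 rating matrix, follows:

import numpy as np

THEORETICAL_MEAN = 2.0
rng = np.random.default_rng(2)
ratings = rng.integers(1, 4, size=(360, 16))  # placeholder for the survey matrix

means = ratings.mean(axis=0)
sds = ratings.std(axis=0, ddof=1)
for item, (m, sd) in enumerate(zip(means, sds), start=1):
    extent = "high" if m > THEORETICAL_MEAN else "low"
    print(f"item {item:2d}: mean = {m:.2f} (SD = {sd:.2f}) -> {extent} extent of practice")
print(f"overall mean rating: {means.mean():.2f}")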
On another level, this study also revealed that respondents did not often reward good behaviour with tangible items due to a lack of funds to sustain such a practice. Some of the pre-service teacher interns admitted to not punishing students for disruptive behaviour in order to encourage them to be punctual at school. This is similar to a study by Kalagho (2014), in which 78% of the respondents admitted to having ignored disruptive behaviour of students, in view of the fact that some pupils put up disruptive behaviour to provoke and seek attention. If they receive the attention they crave, there is the likelihood that the unruly behaviour will repeat itself, but if attention is denied them, the misbehaviour dissipates on its own (Kalagho, 2014).
Effects of Gender on Practice of Behaviour Modification Techniques by Pre-Service Teacher Interns
Table 2 presents the results of the mean ratings of respondents by gender and a t-test analysis to assess the effects of gender on the practice of behaviour modification by pre-service teacher interns during the internship programme. The item-by-item mean ratings of the male pre-service teacher interns ranged between 1.33 and 2.31, while those of the females ranged between 1.29 and 2.51. Out of the 16 items, the male respondents largely employed 6 for class management, while their female counterparts indicated their use of 8 during the internship programme. This suggests that the female teacher interns used more behaviour modification techniques for classroom management than their male colleagues. This could be grounded in the belief that female teachers are more custodial, stick to classroom ground rules and are more persistent in controlling disruptive behaviour compared to males, in the view of Hakan and Esergül (2015). An item-by-item comparison of means (Table 2) to assess the effect of gender on the respondents' ratings of their practice of behaviour modification techniques during the internship programme indicated that, at the 5% level of significance, thirteen (13) out of the sixteen (16) items showed no significant effects of gender on the respondents' practice of behaviour modification techniques.
On the contrary, for three (3) of the sixteen (16) items, at the 5% level of significance there was a significant difference in the ratings of the respondents by gender, with the females' ratings being significantly higher than the males'. For item number 5 (reinforcement strategies), the mean ratings of both the male and female respondents suggest that they did not usually use this technique for managing their classrooms. In this study, the finding that teachers generally do not differ statistically significantly in their practice of behaviour modification techniques by gender is consistent with Okafor (2015) and Bukhari (2016), who maintain that gender does not significantly affect the practice of behaviour modification techniques. This may be grounded in the fact that teachers, male or female, are trained with the same curriculum and, as a result, express little difference in the practice of their acquired knowledge.
Effect of Programme of Study on Practices of Pre-Service Teachers on Behaviour Modification Techniques
Table 3 shows the result of a one-way ANOVA to determine the effect of programme of study on the participants' practice of behaviour modification techniques. It also indicates the mean ratings of the pre-service teacher interns studying Mathematics, Technical Education, Early Childhood, General Education and French on their practice of classroom behaviour modification techniques. The range of the mean ratings for the five programmes was as follows: Mathematics = 1.26 (0.526) - 2.15 (0.718); Technical Education = 1.29 (0.468) - 2.29. The result generally indicates that the item-by-item mean ratings of the respondents according to programme of study were lower than or close to the theoretical mean of 2.0. Therefore, the level of practice of classroom behaviour modification techniques by the pre-service teacher interns of these five (5) programmes was low. The result (in Table 3) also indicates that at the 5% level of significance there was no significant difference between the mean ratings of the respondents according to their programme of study (p-value > 0.05). Exceptions to this were three (3) items: numbers 13 (send student out of the classroom), 14 (take away privileges) and 15 (time out), which indicated a significant difference in the ratings of the participants across the various programmes. The results therefore suggest that the participants, who were from different academic programmes, had similar practices of behaviour modification in classroom management; this coincides with aspects of Skinner's programmed instruction, as cited in Mukadam, Vyas and Nayak (2014), which suggests that learners of similar ages and characteristics should receive the same treatment with regard to instructional management. In the light of this, trainers of teachers have largely and consistently ensured that students are uniformly taught the same content across the board. It stands to reason, therefore, that the practice of behaviour modification in classroom management is similar among participants from the different programmes of learning, because pre-service teachers of different programmes of study are taken through the same content with respect to the management of lessons.
The finding of this study is consistent with Brace's (2017), which indicated in a cross-cultural study that there were no significant differences in how students of Colleges of Education apply their knowledge of behaviour modification techniques irrespective of their programme of study. This is again in tune with an earlier study by Muller (2015), involving final-year education students at the University of Cologne (Germany), which suggested that the programme of study had no significant effect on the use of behaviour modification strategies by education students.
Conclusion and Recommendations
Implementing effective correcting behaviour measures is important for successful teaching and learning. In classrooms where teachers are able to effectively manage their classes, students gain a lot from their lessons and are proud of their teachers. This study assessed the practice of pre-service teachers' classroom behaviour modification techniques with particular reference to pre-service teacher interns of Colleges of Education in Ghana. Based on the findings of the study, it can be concluded that the pre-service teacher interns did not regularly practice most of the behaviour modification techniques expected to be used for effective classroom management during the internship programme. This can adversely affect their classroom management performance as teachers. Therefore, teacher training programmes in Ghana need to be strengthened to address this deficiency. First and foremost, a special attention should be given to the field of Behaviour Modification in the training of teachers. Pre-service teachers must be exposed to both theory and practice by increasing the number of courses pertinent to Behaviour Modification. In so doing, these courses would allow students to practice the skills they are introduced to in their teacher education programme, and having been equipped with the necessary tools, they will have the confidence to use them during both the teaching practice phase and throughout their teaching career.
Accordingly, Colleges of Education and other institutions in charge of teacher formation programmes should lay special emphasis on teaching practice which increases and enriches students' practical experiences as they observe and adopt strategies for successful instruction, engagement, and management of students' disruptive behaviour. Finally, in much the same way, pre-service teachers should be exposed to Behaviour Modification at the very beginning of their preparation, providing them with every possible opportunity to broadly relate Behaviour Modification principles and strategies to other course areas and affording them a better understanding and competencies of Behaviour Modification. With improved behaviour modification techniques, it is expected that teaching-learning situation in the classroom would significantly improve.
Identifying the Accuracy and Comprehensibility of Students’ English Word By Word Pronunciation at North Lombok Senior High School
Pronunciation is an important component of language learning that must be well mastered in order to support the development of students' competence in language skills, particularly speaking skills. This descriptive research aims to evaluate the accuracy and comprehensibility of English pronunciation among 30 tenth-grade IPS students at SMAN 1 Bayan in North Lombok. Qualitative methods were employed, using student voice recordings as primary data and relevant documents as secondary data. The data analysis technique in this study uses the concept of qualitative data analysis developed by Miles and Huberman (1994), including data reduction, data display and conclusion drawing. The results showed that while some students demonstrated excellent proficiency in English pronunciation, others faced significant challenges in accurately articulating vowel and consonant sounds. These findings highlight the importance of targeted interventions to support struggling students and improve the pronunciation skills of all students. The research questions focus on the accuracy and comprehensibility of English pronunciation among students in North Lombok. Through our evaluation, we found that the level of accuracy in common words varied among students, with some struggling to pronounce even basic words accurately.
INTRODUCTION
It is clear that the mistakes students make while learning a language in the classroom have grown to be a major source of concern for language teachers. These mistakes may occur as a result of the usual challenges students face in learning a second or target language (Marzuki, 2021). Their inability to pronounce words in foreign languages appropriately is one of these learning challenges. Pronunciation is an important component of language learning that must be well mastered in order to support the development of students' competence in language skills, particularly speaking skills. Cook, in Gilakjani (2019), defines pronunciation as the production of English sounds. Anggraini (2022) conveys that pronunciation is a fundamental ability crucial to the growth of listening and speaking in English. By learning pronunciation, one learns how to pronounce a word correctly (Cakmak, 2019). Meanwhile, Dalton and Seidlhofer (2001) state that pronunciation is defined as the production of significant sounds. They believe that sound is significant in two ways. Based on these three definitions, we can conclude that pronunciation is the production of sound, and learners who want to sound acceptable in English must be mindful of their pronunciation.
Pronunciation plays a vital role in enabling learners to comprehend what native speakers say. This is supported by Yudar, Aditomo and Silalahi (2020), who say that good pronunciation increases learners' ability to interact with others, particularly native English speakers, because it helps them comprehend native speakers. However, pronunciation is a frequent issue for students who are learning English, and often they are unaware that their English pronunciation is lacking. Aulia (2018) states that if students are unable to pronounce structures or words correctly, it hinders them from communicating effectively in English. In reality, a lack of pronunciation awareness is regarded as a minor issue in English. According to Aarsleff (1989), pronunciation is the Cinderella of foreign-language teaching: because Western linguists have studied vocabulary and grammar for much longer than pronunciation, grammar and vocabulary are much better understood by English learners than pronunciation. Also, Arafiq, Yusra, and Saputra (2020) state that the process of teaching English to EFL students must always be tough, because it calls not just for a commitment to learn but also for an understanding of the phonological distinctions between the students' native tongue and English as the target language.
Because of this, most students have poor pronunciation and are overly concerned with vocabulary and grammar. Such errors in pronunciation can lead to misinterpretation and potentially to speakers' failure in oral communication, despite a fairly good stock of vocabulary and grammatical structures. According to Tuan (2010), students may understand the rules of proper word pronunciation, but it is still difficult for them to pronounce words orally because some English sounds do not exist in their mother tongue.
There are certain research findings associated with the study. The first one was the research, conducted by Nirani (2019), which examined the pronunciation accuracy of receptive vocabularies among 7th-grade students at SMP Kemala Bhayangkari 1 Surabaya. The research revealed that the majority of students demonstrated accurate pronunciation of the receptive vocabulary. However, certain sounds, namely /t/, /d/, /θ/, and /r/, were commonly mispronounced, particularly when these sounds occurred at the end of words or in the initial position.
The second was a study conducted by Akis, Asriati and Muhsin (2020), which focused on investigating pronunciation errors made by eleventh-grade students learning English as a foreign language at SMA Muhammadiyah 1 Makassar. The study specifically examined errors in vowels based on a conversation text, comparing the students' pronunciation with the correct pronunciation provided by the Cambridge dictionary. The findings revealed that the students made ten errors in vowel pronunciation, while four words were pronounced correctly by all students: middle, sea, sail, and much. Additionally, the study found that the students' first language influenced their English pronunciation. The results emphasize the importance of considering the impact of students' first language when teaching English as a foreign language, suggesting the need for effective teaching strategies to improve pronunciation skills.
Building on those studies, the present study aims to identify the accuracy and comprehensibility of students' English pronunciation in North Lombok, specifically at SMAN 1 Bayan, through the evaluation of students' proficiency in both vowel and consonant pronunciation.
METHODS
This research is descriptive research using qualitative methods. According to Moleong (2018: 4), "the qualitative research method is research that produces descriptive data in the form of written and oral words from the people who are the subject or object observed." The location of the study is Senior High School (SMAN) 1 Bayan, particularly the tenth grade of IPS, which consists of 30 students. This location was chosen because of its diverse student population, which includes students from various ethnic backgrounds and with varied linguistic abilities. The researcher used student voice recordings as primary data and relevant documents as secondary data. During the study, the researcher observed the learning activity while the teacher asked the students to pronounce and speak together in the classroom. Data collection involved three main steps. First, the 15 general words that students find most challenging were determined at the first meeting. At the second meeting, the researcher observed while the English teacher asked the students to pronounce the words together, showing the words on an LCD projector and playing the audio through a speaker. In the last step, the researcher recorded the students' production of the vocabulary in three sessions, with ten (10) students participating in each session. Their production of the vocabulary was recorded with an audio recorder and transcribed. The data were analyzed qualitatively using the data analysis procedures developed by Miles and Huberman (1994), comprising data reduction, data display and conclusion drawing.
Findings
Based on the evaluations conducted by the researcher and an English language teacher at the school, the average accuracy of the 30 students who were able to properly articulate vowel sounds is 19.3 out of a possible score of 40, while the average accuracy score of the students who struggled with proper vowel pronunciation was 20.7. The highest accuracy score obtained by one of the students was 31; the highest score obtained by a student who struggled with vowel pronunciation was 30. The minimum accuracy score obtained by a student was 10, whereas the minimum score obtained by a student who had difficulty with vowel pronunciation was 9. For consonants, based on the same evaluations, the average accuracy level of the 30 students who were able to properly articulate consonant sounds is 23.03 out of a possible score of 40, while the average accuracy score of the students who struggled with proper consonant pronunciation was 24.96. The highest accuracy score obtained by one of the students was 39; the highest score obtained by a student who struggled with consonant pronunciation was 38. The minimum accuracy score obtained by a student was 10, whereas the minimum score obtained by a student who had difficulty with consonant pronunciation was 9. Based on the results, none of the students was rated as having very poor or excellent vowel pronunciation: 4 students were rated as very good, 6 as good, 16 as average, and 4 as poor, so the most common rating was average (16 students). For consonants, none of the students was rated as having very poor consonant pronunciation: 4 students were rated as having excellent consonant pronunciation, 5 as very good, 9 as good, 9 as average, and 3 as poor. Overall, this assessment highlights varying degrees of proficiency in vowel and consonant pronunciation among the 30 students.
Discussion
The study evaluated the accuracy of 30 students in properly articulating vowel sounds, and the results showed that the average accuracy score for those who were able to articulate vowel sounds properly was 19.3 out of a possible score of 40. This suggests that there is room for improvement in the pronunciation skills of these students. One possible reason for this average score could be that the students did not have enough exposure to the English language, particularly in terms of listening and speaking, or that their native language affected their ability to produce accurate vowel sounds. On the other hand, the average accuracy score for students who struggled with proper vowel pronunciation was 20.7. This suggests that while these students had difficulties with vowel sounds, they were not significantly worse off than those who were proficient in vowel pronunciation. One possible reason for this score could be that these students had better exposure to the English language, particularly through reading and writing, or that they had received some form of support or instruction to improve their pronunciation.
The highest accuracy score was obtained by one of the students at 31, indicating that some students were able to perform exceptionally well in this area. This could be due to several factors, such as their interest in the English language, their exposure to it through media and other sources, or their ability to focus and practice consistently. On the other hand, a student who had trouble pronouncing vowels received a maximum score of 30, indicating that not all students who struggle with pronunciation are necessarily performing worse than those who are proficient in this area. This could be due to the fact that the student had a good understanding of the English language, despite struggling with vowel sounds, or that they had received some form of support or instruction to improve their performance.
However, the study discovered that a student's minimum accuracy score was 10, indicating a significant variation in performance among students. This could be due to several factors, such as a lack of exposure to the English language, difficulties with auditory discrimination, or other factors that affect their ability to produce accurate vowel sounds. The lowest score among students who struggled with vowel pronunciation was 9, indicating that some of these students may require additional support to improve their performance. This could be due to factors such as a lack of exposure to the English language. Overall, these findings suggest that while there may not be a significant difference in accuracy scores between students who are proficient in vowel pronunciation and those who struggle with it, there is still a need for additional support and instruction to improve the pronunciation skills of all students.
Furthermore, we turn to the findings of the evaluation conducted by the researcher and an English language teacher at the school regarding the accuracy of consonant pronunciation among the 30 students. The average accuracy score of the students who were able to articulate consonant sounds properly was 23.03 out of a possible score of 40. This score indicates that, on average, the students have a moderate level of proficiency in pronouncing consonant sounds, with scope for improvement. On the other hand, the average accuracy score of students who had difficulty with consonant pronunciation was 24.96, which is slightly higher than the average score of the students who were able to articulate consonant sounds properly. This finding may suggest that some students who struggle with consonant pronunciation have developed coping mechanisms to compensate for their difficulties, which has led to a slightly higher accuracy score.
One of the students achieved the highest accuracy score of 39, which demonstrates exceptional consonant pronunciation proficiency. This score implies that some students have an exceptional ability to articulate consonant sounds and may have developed advanced skills through extensive practice or exposure to English language environments. On the other hand, a student who had trouble pronouncing consonant sounds received a maximum score of 38, which is just one point less than the highest score received by a student who was able to do so. This finding suggests that some students who struggle with consonant pronunciation may have developed coping mechanisms that have enabled them to perform relatively well in this area. However, it is also important to note that the minimum accuracy score a student could receive was 10, which is significantly lower than the average score. This finding highlights the need for targeted interventions to help students who are struggling with consonant pronunciation. Likewise, the minimum score among students who had trouble pronouncing consonant sounds was 9, which is one point lower than the minimum score among students who were able to articulate consonant sounds correctly. This finding suggests that some students may require more specialized support to improve their ability to pronounce consonant sounds accurately. Overall, the results of this evaluation suggest that while some students have developed advanced skills in consonant pronunciation, there is significant variation in the level of proficiency among the 30 students evaluated. The findings also highlight the need for targeted interventions to support students who are struggling with consonant pronunciation, which could potentially improve their overall English language proficiency.
The rating rubric adapted from Mallapiang (2015) describes these bands, among others, as follows:
Good (3.1-4.0): Pronunciation is still moderately influenced by the mother tongue, but there are no serious phonological errors. A few grammatical and lexical errors, but only one or two major errors cause confusion.
Average (2.1-3.0): Pronunciation is influenced by the mother tongue, but there are only a few serious phonological errors. Several grammatical and lexical errors, some of which cause confusion.
Poor (1.1-2.0): Pronunciation is seriously influenced by the mother tongue, with errors causing a breakdown in communication. Many "basic" grammatical and lexical errors.
Very poor (0-1.0): Serious pronunciation errors as well as many "basic" grammatical and lexical errors. No evidence of having mastered any of the language skills and areas practiced in the course.
According to Mallapiang's (2015) concept of the level of comprehension of English pronunciation, especially in vowels, the sample of 30 students had varying degrees of proficiency in English pronunciation. The rating scale ranged from 1 to 6, where 6 was considered excellent and 1 was considered very poor. To determine each student's comprehension criterion, the comparison ratio (a1/a2 = b1/b2) is used to convert the students' vowel and consonant pronunciation scores into the pronunciation assessment scale based on Mallapiang (2015). Once the scores have been converted, determining which classification each student falls into is straightforward.
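To make the conversion concrete, the sketch below illustrates one way the comparison ratio (a1/a2 = b1/b2) can map a raw score out of 40 onto the 0-6 assessment scale and then assign a rating band. The band boundaries and the sample raw scores are illustrative assumptions for this sketch, not values reported by the study.

def to_scale(raw_score, max_raw=40.0, max_scale=6.0):
    # Comparison ratio: raw_score / max_raw = scaled / max_scale.
    return raw_score * max_scale / max_raw

def classify(scaled):
    # Band edges assumed from the visible part of the rubric (0-1.0 very poor, ..., 5.1-6.0 excellent).
    bands = [(1.0, "very poor"), (2.0, "poor"), (3.0, "average"),
             (4.0, "good"), (5.0, "very good"), (6.0, "excellent")]
    for upper, label in bands:
        if scaled <= upper:
            return label
    return "excellent"

for raw in (10, 19.3, 31, 39):  # hypothetical raw scores out of 40
    scaled = to_scale(raw)
    print(f"raw={raw} -> scaled={scaled:.2f} -> {classify(scaled)}")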
The scores provided by the students reflect their level of proficiency in English pronunciation, particularly in vowels. The students who rated their vowel pronunciation as excellent (score of 6) were able to articulate the sounds accurately, with minimal influence from their mother tongue. The small number of grammatical and lexical errors they made further supported their proficiency in English pronunciation. Similarly, the students who rated their vowel pronunciation as very good (score of 5) had a minor impact from their mother tongue on their pronunciation, with only a few minor grammatical and lexical errors. Their proficiency in English pronunciation was slightly lower than those who scored a 6, but they were still able to articulate the sounds accurately with minimal confusion caused by their grammatical and lexical errors.
The students who rated their vowel pronunciation as good (score of 4) had no significant phonological errors, but their mother tongue influence on vowel pronunciation was still noticeable. They had a few grammatical and lexical errors, with only one or two major errors causing confusion. While their proficiency in English pronunciation was good, they still faced some challenges in articulating the sounds accurately. The students who rated their vowel pronunciation as average (score of 3) struggled with a few serious phonological errors, indicating that their mother tongue had an impact on their vowel pronunciation. They had several grammatical and lexical errors, some of which caused confusion. Their proficiency in English pronunciation was lower than those who scored higher, as they faced difficulties in articulating the sounds accurately, resulting in confusion for their listeners. Finally, the students who rated their vowel pronunciation as poor (score of 2) faced significant challenges in articulating the sounds accurately due to the influence of their mother tongue. Their pronunciation was seriously impacted, causing a breakdown in communication. Their proficiency in English pronunciation was the lowest among the students who participated in the study. In summary, the scores provided by the students reflect their level of proficiency in English pronunciation, with the majority struggling to articulate vowel sounds accurately due to the influence of their mother tongue. The varying degrees of proficiency highlight the need for effective strategies for addressing the challenges posed by the influence of the mother tongue on English pronunciation. These findings have important implications for language teaching and learning, particularly in the area of pronunciation.
The study evaluated the consonant pronunciation proficiency of a sample of 30 students using a rating scale ranging from 1 (very poor) to 6 (excellent) based on Mallapiang's (2015) concept of comprehension of English pronunciation. None of the students was rated as having very poor English consonant pronunciation, which is a positive finding. The four students who were rated as having excellent consonant pronunciation had a minimal impact from their mother tongue on their pronunciation. They made only two or three grammatical and lexical errors. This suggests that these students may have had some exposure to English pronunciation outside the classroom or have a natural ability to acquire new sounds. The five students who were rated as having very good consonant pronunciation had a minor impact from their mother tongue on their pronunciation, with only a few minor grammatical and lexical errors. This indicates that these students may have had some previous exposure to English or have a good ear for distinguishing between English sounds. Nine students were rated as having good consonant pronunciation, indicating that their mother tongue had some influence on their pronunciation. However, they made only a few grammatical and lexical errors, with one or two major errors that caused confusion. This suggests that these students may have some awareness of the differences between English and their mother tongue's sounds and are making efforts to improve their pronunciation.
Nine students were rated as having average consonant pronunciation, which meant they had a few serious phonological errors, and their mother tongue had an impact on their consonant pronunciation. They made several grammatical and lexical errors, some of which caused confusion. This suggests that these students may require more focused attention on their pronunciation to improve their proficiency in consonant sounds. Finally, three students were rated as having poor consonant pronunciation, indicating that their pronunciation was seriously influenced by their mother tongue, and their errors caused a breakdown in communication. These students require significant attention to improve their pronunciation skills, possibly through focused interventions and targeted practice. Overall, this assessment highlights varying degrees of proficiency in consonant pronunciation among the 30 students. The findings have important implications for language teaching and learning, particularly in the area of pronunciation, as it suggests the need for targeted instruction and interventions to improve the students' English pronunciation skills. Teachers can use this information to develop individualized instruction plans to help each student improve their English pronunciation based on their specific need.
CONCLUSION
The results showed that while some students demonstrated excellent proficiency in English pronunciation, others faced significant challenges in accurately articulating vowel and consonant sounds. These findings highlight the importance of targeted interventions to support struggling students and improve the pronunciation skills of all students. Our research questions were focused on the accuracy and comprehensibility of English pronunciation among students in North Lombok. Through our evaluation, we found that the level of accuracy in common words varied among students, with some struggling to pronounce even basic words accurately. This suggests that there is a need for targeted instruction to improve phonological awareness and auditory discrimination. In terms of comprehensibility, we found that students with higher proficiency in English pronunciation were more easily understood by native speakers of the language. This underscores the importance of developing accurate pronunciation skills to improve communication and overall language proficiency. Overall, our study has important implications for language teaching and learning in North Lombok. Teachers can use the findings to develop individualized instruction plans to support students with different levels of proficiency and address the challenges posed by the influence of the mother tongue on English pronunciation. Furthermore, the study highlights the need for further research to support effective interventions and improve the accuracy and comprehensibility of English pronunciation among students in this region.
ACKNOWLEDGMENT
I would like to thank my first and second advisor lecturers at FKIP, University of Mataram, who have provided the opportunity for me as a researcher to conduct research in the English Language Education Study Programme. I also want to thank all the other parties who contributed to the completion of this research but whom I am unable to name individually. Criticism and suggestions are highly welcomed for the improvement of future research.
|
2023-07-22T15:39:49.659Z
|
2023-05-30T00:00:00.000
|
{
"year": 2023,
"sha1": "5d7caf3040cb45c518e6e394bda71c7ecacd820a",
"oa_license": "CCBY",
"oa_url": "http://jipp.unram.ac.id/index.php/jipp/article/download/1433/869",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d4eec9a5902c78a09fd81d1e9e69bc1196f37534",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
}
|
199099691
|
pes2o/s2orc
|
v3-fos-license
|
Coastal Environment and Social Environment Affecting the Vulnerability of Communities to Malaria Events in the Coastal Area of Syiah Kuala Sub-district of Banda Aceh
Banda Aceh city is dominated by coastal areas, which are vulnerable as malaria vector breeding sites. Syiah Kuala Sub-district is one of the sub-districts located in the coastal area, with an Annual Malaria Incidence of 133 cases in 2017. This study aims to analyze coastal environmental and social environmental factors in relation to the incidence of malaria. The research is a survey using the Concurrent Embedded mixed-method model, with a population of all 5,628 households in 6 villages and a sample of 112 respondents determined using the proportion formula. Data collection was done through questionnaires, observations, distance measurements and in-depth interviews with 6 informants, and the data were analyzed bivariately using the chi-square test. The results showed that lagoons, fish ponds, activities at certain times and knowledge influence the incidence of malaria. Maintenance of malaria elimination can be optimized with coordinated, integrated planning and implementation efforts in the malaria control forum. Partnerships should be built and enhanced with various programs, sectors, NGOs, religious organizations, professional organizations, international organizations, donor institutions, business institutions and all levels of society by conducting migration surveys, investigating at-risk residents and monitoring the mobility of people who travel to and from endemic areas in order to prevent malaria cases from re-entering. Malaria vector control targeting larvae and larva-positive focus locations should involve the community in supporting malaria elimination maintenance programs, especially at potential mosquito breeding sites such as lagoons and unproductive fish ponds, by keeping tilapia and blue panchax.
Efforts to prevent infectious diseases are a shared responsibility of the government, regional governments and society. [14] Indicators of malaria prevention are sleeping under mosquito nets, using mosquito coils / spray / electric repellents / mosquito repellent lotion, installing wire netting and wearing closed clothing. [15] Syiah Kuala Sub-district has the second highest number of suspected malaria cases among the 9 sub-districts in Banda Aceh, with an Annual Malaria Incidence (AMI) of 133 cases, so its coastal communities are suspected of being as vulnerable as other coastal communities. The researcher focused on 6 of the 10 villages in Syiah Kuala Sub-district: 5 of them are directly adjacent to the coast, there are lagoons at several points of settlement, and global climate change, with its unstable weather variability, has left some community fish ponds unproductive; there is also 1 village with swamps at several points, and some people work as fishermen and fish pond farmers who are accustomed to activities at night and do not wear closed clothing. In its breeding cycle, the Anopheles mosquito needs breeding sites to lay eggs. The breeding site is important in the life process of the mosquito, in which the larva develops into a pupa [16] and the pupa then becomes an adult mosquito in the air. Only breeding sites that meet certain criteria can become places for Anopheles mosquitoes to breed. Therefore, mosquito breeding sites are one of the keys to analyzing the incidence of malaria. The purpose of this study was to analyze the vulnerability factors of the community, including environmental vulnerability (lagoons, fish ponds) and social vulnerability (working at certain times, knowledge), in relation to the incidence of malaria in the coastal area of Syiah Kuala sub-district, Banda Aceh.
Methodology
This type of research uses the mixed-method Concurrent Embedded model. [17] In this case the author uses quantitative data as the main data and qualitative data as a complement. The population in this study comprised the 5,628 households in 6 villages in Syiah Kuala sub-district, with a sample of 112 respondents determined using the proportion formula. Data collection was done through questionnaires, observations, measurements of the distance between houses and malaria mosquito habitats (related to the flying ability of Anopheles mosquitoes, which ranges from 0.5 to 2,000 meters), and in-depth interviews with 6 informants; the data were analyzed bivariately using the chi-square test. For the coastal environment vulnerability variables (lagoons and fish ponds), the distance from the respondents' houses to lagoons and fish ponds was measured using a digital rolling distance meter as the research instrument, with the category "not vulnerable" if the distance from the respondent's house to the lagoon or fish pond is more than 2,000 meters and "vulnerable" if it is less than 2,000 meters. For the social vulnerability variables (working / activity at a certain time and knowledge), working / activity at a certain time was recorded with a checklist and categorized as "not vulnerable" if the respondent works before 18:00 and "vulnerable" if the respondent works / is active after 18:00.
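As an illustration of the bivariate analysis described above, the sketch below categorizes households by distance to a breeding site and runs a chi-square test on a 2x2 table of exposure category against malaria status. The counts in the table are invented for illustration only and are not the study's data.

from scipy.stats import chi2_contingency

def categorize_distance(distance_m, cutoff_m=2000.0):
    # Households closer than the cutoff to a lagoon or fish pond are treated as vulnerable.
    return "vulnerable" if distance_m < cutoff_m else "not vulnerable"

# Rows: vulnerable / not vulnerable; columns: malaria case / no case (hypothetical counts).
table = [[11, 2],
         [20, 79]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}, dof = {dof}")
if p_value < 0.05:
    print("The exposure category is significantly associated with malaria incidence.")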
Knowledge was assessed using a structured questionnaire of 15 questions covering knowledge about malaria, ways malaria is transmitted, efforts to prevent malaria, the biting habits of Anopheles mosquitoes, people's habits at night, clinical symptoms caused by Anopheles mosquito bites, Anopheles mosquito habitats, how to control malaria, the use of vector breeding places, and how to treat malaria properly. Observations aimed to obtain data on social phenomena related to the control and utilization of lagoons, fish ponds and abandoned fish ponds in the focus locations suspected of influencing clinical malaria events. Furthermore, in-depth interviews were conducted with 6 informants: 2 main informants, namely fish pond owners who suffer from clinical malaria, and 4 supporting informants, namely the head of the Banda Aceh City Health Office, the malaria program coordinator of the Banda Aceh Health Office, the head of Kopelma Health Center and the head of Jeulingke Health Center in Syiah Kuala Sub-district. The study was conducted from April to August 2018 in 6 villages (Lamgugop, Rukoh, Jeulingke, Tibang, Deah Raya and Aleu Naga) of Syiah Kuala Sub-district, Banda Aceh City. The sampling unit is the household, a part (subset) of the population chosen in a certain way so that it is considered able to represent the population, and the Lemeshow formula was used to determine the sample size with the following parameters: P0 = proportion of annual malaria incidence (AMI) in Syiah Kuala sub-district = 23% (0.23); Pa = estimated population proportion = 60% (0.6); Pa - P0 = estimated difference between the studied proportion and the population proportion = 37% (0.37). Based on this sample calculation, the sample size used in the study is 112 heads of household. To determine the number of samples in each village of Syiah Kuala Sub-district, the author used proportionate sampling, because the number of subjects in each region is not equal, allocating the sample to each village in proportion to its number of households.
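For readers who want to reproduce the sample-size step, the sketch below uses the standard Lemeshow hypothesis-test formula for a single proportion together with proportional allocation across villages. The significance level and power are assumed values, since they are not stated in the text, so the result does not necessarily reproduce the study's n = 112; the per-village household counts are also hypothetical, apart from the total of 5,628.

from math import sqrt, ceil
from scipy.stats import norm

def lemeshow_n(p0, pa, alpha=0.05, power=0.80):
    # Standard Lemeshow sample-size formula for testing a single proportion (assumed form).
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    numerator = (z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(pa * (1 - pa))) ** 2
    return ceil(numerator / (pa - p0) ** 2)

def proportional_allocation(households_per_village, total_sample):
    # Allocate the sample to each village in proportion to its number of households.
    total = sum(households_per_village.values())
    return {v: round(n / total * total_sample) for v, n in households_per_village.items()}

print("Lemeshow n:", lemeshow_n(p0=0.23, pa=0.60))

# Hypothetical household counts per village; only the 5,628 total comes from the text.
villages = {"Lamgugop": 900, "Rukoh": 1500, "Jeulingke": 1500,
            "Tibang": 700, "Deah Raya": 400, "Alue Naga": 628}
print(proportional_allocation(villages, total_sample=112))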
Head of Household Characteristics in Coastal Area
The results showed that, of the 112 respondents, 41 (36.6%) heads of household had finished high school, the smallest group of 3 (2.7%) heads of household had not gone to school or had only finished elementary school, and the remaining 68 (60.7%) heads of household had finished junior high school, an associate degree or an undergraduate degree. The largest age group, 36-45 years, comprised 41 (36.6%) heads of household. Regarding gender, 93 (83%) respondents were male, and regarding occupation, 46 (41.1%) heads of household worked as entrepreneurs.
Distribution of Lagoon
Of the 6 villages, the focus of the research is only Aleu Naga Village, which still has one lagoon with an area of 3 hectares. [19] The number of lagoons and area can be seen in Table 2 below:
Distribution of Fish Ponds
Of the 6 villages, the research focused on Aleu Naga and Tibang Villages, which have the widest fish ponds: 145 hectares in Aleu Naga Village and 125 hectares in Tibang Village, while the smallest fish ponds, 3 hectares, were in Jeulingke Village. The existence and area of the fish ponds can be seen in Table 3 below:
Distribution of Anopheles mosquitoes
Of the 6 villages, the research focused on 5 villages in which Anopheles mosquitoes were found: An. vagus in the fish ponds and An. subpictus in the lagoon. [20] The presence of Anopheles can be seen in Table 4 below:
Distribution of clinical malaria
Of the 6 villages, the study found that the largest number of clinical malaria cases, 46, was in Rukoh Village and the smallest, 10 cases, was in Lamgugop Village, with 29 cases in Alue Naga Village and a total of 133 cases. [21] The number of clinical malaria cases can be seen in Table 5 below.
Effect of Lagoon on Malaria Events
Based on the results of the statistical analysis using the chi-square test, a p value of 0.000 was obtained, meaning that the lagoon factor affects the incidence of malaria (p < 0.05) in the coastal area of Syiah Kuala Sub-district, as can be seen in Table 6 below. From the observations and secondary data for Syiah Kuala Sub-district, Aleu Naga Village still has a large lagoon with a 50-meter boundary to the sea, bordered by sand. The profile data of Syiah Kuala Sub-district in 2018 state that there are around 3 hectares of lagoon on the border between Aleu Naga Village and Bait Kaju Village, Aceh Besar. In 2 of the 6 coastal villages, namely Aleu Naga and part of Tibang, houses / residences are close to the lagoon. Respondents in this study were aged 26-65 years, covering the categories from early adulthood to late elderly. Most of the respondents whose houses / dwellings are close to the lagoon, 9 (8%), were aged 36-45 years, the late adult age group, and the smallest group, 4 (3.6%) respondents with houses / dwellings close to the lagoon, were aged 45-55 years (early elderly); these respondents were dominated by male heads of family who are used to fishing. From the measurements of the distance between the respondents' houses / residences and the lagoon for the 112 respondents, 13 households (11.6%) fell into the vulnerable category. A bivariate chi-square test showed that the lagoon variable is associated with the incidence of malaria (p < 0.05), p = 0.000. In detail, of the 9 households in Aleu Naga Village, 6 (66.67%) are 200-500 meters away and 3 (33.33%) are 600-1,500 meters away; of the 4 households in Tibang Village, 3 (75%) are 1,000-1,300 meters away and 1 (25%) is 1,400-2,000 meters away. Female Anopheles mosquitoes usually bite humans at night, from dusk to dawn, and their flight distance is generally no more than 0.5-2 kilometers from the breeding place. [22] Based on in-depth interviews with a head of family who serves as head of the hamlet, works as a fish pond farmer in Aleu Naga Village and suffers from clinical malaria, it was explained that in the past 6 months health workers had visited their village to check for / catch malaria larvae / parasites (Plasmodium) in the lagoon. They did not know that the lagoon was a breeding ground for malaria mosquitoes, and for the past 3 months there had been no mutual-aid work to drain the lagoon water into the sea. Some people use the lagoon for fishing, which is dominated by men from early to late adulthood, and for collecting oysters, which is dominated by women from late adulthood to the elderly. None of them use the lagoon to keep larva-predator fish such as tin-headed fish, tilapia and blue panchax.
Malaria is an infectious disease that is a public health problem throughout the world. A high malaria burden can have a broad impact on quality of life, the economy and poverty. The WHO target in the 2016-2030 Global Technical Strategy (GTS) for malaria, through the Sustainable Development Goals (SDGs), is the eradication effort contained in the third goal of ensuring healthy lives and promoting well-being for everyone by 2030. In-depth interviews with the Banda Aceh City Health Office, represented by the field of prevention and control of infectious diseases (P2P), found that the Health Office has so far carried out programs / activities for infectious diseases, especially those related to mosquitoes and in particular malaria, in the malaria maintenance phase: laboratory reading of the results of microscopic blood tests; field surveys when there are case reports from the community; examination of thick and peripheral blood smears in patients suspected of clinical malaria; and IRS spraying in 1,000 houses in case locations and then in 20 surrounding houses in the working area of Kopelma Health Center. Until now there has been no special activity at potential mosquito breeding sites to monitor and collect Plasmodium larvae in the lagoon.
There has been no dissemination of information to the coastal communities of Syiah Kuala Sub-district, Banda Aceh, especially regarding the control and utilization of lagoons. This is worsened by the lack of attention and cooperation from other sectors, such as the Fisheries Service, the Forestry Service, the Public Works Agency and others, in controlling environments such as lagoons, productive fish ponds and unproductive fish ponds. In fact, utilizing the lagoon to keep larva-predator fish such as tin-headed fish, tilapia and blue panchax can add income, reduce vulnerability and prevent clinical malaria in the coastal areas of Syiah Kuala Sub-district, Banda Aceh.
The Influence of Fishponds on Malaria Incidence
Based on the measurements of the distance from the respondents' houses to the fishponds, 85 respondents (75.9%) are in the vulnerable category and 27 respondents (24.1%) are in the not vulnerable category. The results of statistical analysis using the chi-squared test showed p < 0.003, meaning that there is a significant influence of fishponds on the incidence of malaria (p < 0.05) in the coastal area of Syiah Kuala, as can be seen in Table 7 below. [23] Malaria cases increased because there was a lack of support from related sectors. The community considers that malaria is not a serious problem, so people affected by malaria do not immediately get checked by health service officials, which leads to higher malaria transmission. This is worsened by the lack of attention from other related sectors such as the Fisheries Agency, the Forestry Service, the Public Works Agency and other related agencies or services.
From the observations and secondary data for Syiah Kuala Sub-district, 5 of the 6 coastal villages have both productive and unproductive fishponds. According to the Syiah Kuala Sub-district data for 2018, there are around 356 hectares of fishponds scattered across 5 coastal villages in Syiah Kuala. The largest fishponds are located in Aleu Naga Village with 145 hectares and the second largest are in Tibang with 125 hectares, while the smallest fishponds are in Jeulingke Village with 3 hectares. Activities for handling abandoned fishponds in Sidodadi Village, Padang Cermin sub-district, Pesawaran Regency, consisted of the removal of moss and the release of predatory tilapia, involving malaria cadres. [24] The Malaria Care Forum (an NGO) was in charge of the moss-removal activity, while the fish release involved malaria cadres and the community, with the Pesawaran District Fisheries Service in charge. The release of 10,000 fish into abandoned fishponds in Sidodadi Village was a cross-sector program between the Fisheries Service and the Public Health Office. After the program to remove moss and algae and to release fish, cases of clinical malaria and positive plasmodium appeared to decrease. In addition to handling abandoned fishponds, the other malaria control programs were Indoor Residual Spraying (IRS), mass blood surveys, larva catching, mangrove planting, and the clean Friday movement. Cross-sector support for malaria treatment is a necessity. The phenomenon of malaria is a continuous sequence: first, a person falls ill due to contact with the environment, then an agent reacts in the body to fight the disease, which may end in a condition of being sick or healthy. This continuous phenomenon occurs around the world and includes infectious diseases such as the incidence of malaria on the coast of Syiah Kuala Sub-district, Banda Aceh.
Related to respondents' occupation, 46 heads of household are entrepreneurs (41.1%), 24 have other occupations (21.4%), 15 are civil servants (13.4%), 10 are housewives (8.9%), 9 are fishpond farmers (8%) and the smallest group is 8 fishermen (7.1%). Regarding the measured distance from the respondents' houses to the fishponds: of 9 heads of household in Aleu Naga Village, 7 (77.8%) are at a distance of 200-500 meters and 2 (22.2%) at 600-1,500 meters; of 9 heads of household in Tibang Village, 6 (66.7%) are at 100-500 meters and 3 (33.3%) at 600-1,000 meters; of 5 heads of household in Deah Raya Village, 4 (80%) are at 100-500 meters and 1 at 600-1,000 meters; of 31 heads of household in Rukoh Village, 10 are at 100-500 meters and 21 at 600-1,000 meters; and of 31 heads of household in Jeulingke Village, 8 (25.8%) are at 100-500 meters and 23 (74.2%) at 600-1,000 meters.
Region-based disease management must be carried out in an integrated manner, from planning and implementation to financing and monitoring. Likewise, the management of abandoned fishponds must be carried out in an integrated manner at all stages of malaria control activities. For example, at the prevention stage, integration can be applied to extension programs carried out by the Public Health Office and other agencies: from the health aspect, besides explaining the symptoms and treatment of malaria, it is also necessary to explain potential mosquito breeding sites and their dangers; from the Forestry Service, it is necessary to explain the effects of mangrove removal and the benefits of mangrove preservation; and from the legal aspect, it is necessary to explain the relevant rules and regulations, including the importance of spatial planning of the coastal area within regency/city spatial planning. [25] Likewise, spatial planning of the coastal area of Syiah Kuala needs to be carried out.
At the provincial spatial planning level as well, the extension program will succeed only if there is integration of planning, implementation, financing and monitoring. Managing abandoned fishponds so that they do not become mosquito breeding places is a cross-sector program involving various agencies, and it also requires a large amount of funds. Because many parties are involved and large funds are required, the government's political will is crucial. Without the support and involvement of all parties, the management of abandoned fishponds will not be sustainable and will remain only a temporary activity. Community empowerment in handling abandoned fishponds and active participation from the community are also highly expected in programs to manage mosquito breeding in abandoned fishponds. With the involvement of the community, the sustainability of the activities is better guaranteed, because the community can participate in monitoring the activities as well as implementing them. Making the fishponds productive again is not easy; many factors influence it, including the availability of funds, human resources and technology.
Community empowerment programs for releasing predatory fish such as red tilapia into unproductive fishponds face constraints that are not easy to overcome. In terms of human resources, the active involvement of the community is not a problem, but in terms of technology, farming fish such as red tilapia and milkfish is not easy and requires technology to manage water flow in the fishponds. This technology was initially provided with assistance from the Fisheries Service, but during implementation there were obstacles that the community could not immediately resolve. The communities often act passively and make no effort to solve the problem on their own.
Malaria is a complex problem, so malaria eradication must be carried out in an integrated manner by all related components and must become an integral part of national development for the realization of healthy communities, gradually eliminating malaria transmission by 2030. In-depth interviews with householders in Deah Raya Village who work as fishpond farmers and suffer from clinical malaria revealed that in the last 6 months health workers have never visited their village to check for malaria and catch parasite larvae (Plasmodium) in the fishponds. In the past year, moss has never been cleaned from the fishponds, because the fishponds were only partially in use, with barriers placed across them, as harvests had declined in the last two years and there were not enough funds to clean the fishponds. Other fishpond farmers experienced the same thing: there was a crop failure due to pests, and until now the village financial assistance has not been disbursed. They keep only a few fish seedlings and have much time to go fishing, sometimes also fishing in neglected fishponds for additional income.
In the last three months, they fished or sailed without wearing long-sleeved clothes and did not always use mosquito repellent lotion; they only used lotion when going fishing, because there are lots of mosquitoes there. They use the neglected fishponds just to fish, as a hobby that can give them extra income. Malaria is a serious problem, and the handling of abandoned fishponds is necessary to break the chain of malaria transmission. Besides reviving abandoned fishponds with shrimp farming, they can also be used for the cultivation of blue panchax, parrot fish and tilapia, which require smaller funds than shrimp farming. Cross-sector, integrated and sustainable cooperation is the groundwork for handling abandoned fishponds.
Based on the in-depth interview with a section chief of infectious disease prevention in the Public Health Office, there were funds originating from health operational costs to monitor mosquito breeding vectors, but now the funds are only used to monitor larvae around houses, such as in bathtubs, used bottles and trenches in rural areas, which mainly amounts to monitoring Aedes aegypti larvae (DHF).
In unproductive or neglected fishponds, however, larva catching has never been conducted. Until now there is no mutual cooperation program to clean moss from the fishponds, and the community itself, either individually or in groups, does not clean the moss because the cost of cleaning is high while the harvests are small (crop failure); there has also been no socialization to inform coastal communities, both in Syiah Kuala sub-district and in the other sub-districts of Banda Aceh, about the use of abandoned fishponds. The lack of attention from other related sectors, such as the Fisheries Service, to environmental control has exacerbated the situation. In fact, using fishponds to keep blue panchax, parrot fish and tilapia can add income, reduce vulnerability and prevent clinical and positive malaria on the coast of Syiah Kuala Sub-district, Banda Aceh.
Working / Activity Factors at a Specific Time with Malaria Events
Based on the results of statistical analysis using the chi-square test, a value of p = 0.000 was obtained, meaning that activity / work at a certain time has an effect on the incidence of malaria (p < 0.05) in the coastal area of Syiah Kuala, as can be seen in Table 8 below. Based on the interviews and observations of the 112 households in the coastal area of Syiah Kuala Sub-district, 49 households (43.75%) fell into the vulnerable category for work / activities at certain times. A bivariate chi-square test showed that the variable of work / activity at a certain time has an influence on the incidence of malaria (p < 0.05), p = 0.000. Observations found that malaria associated with work / activity outside the home at night is related to the habits of some mosquito species that are exophagic at night. Exophagic mosquitoes are mosquitoes that mostly bite outside the home but can enter the house if humans are the preferred main host. This is also related to the number of respondents who work / do activities outside the home, such as household members in early and late adulthood who fish in the lagoon and in neglected fish ponds, and some women who look for oysters in the lagoon until late at night on the coast of Syiah Kuala Sub-district. Based on in-depth interviews with a head of household who serves as head of the hamlet, works as a fish pond farmer in Aleu Naga Village and suffers from clinical malaria, the fish pond had not been cleaned for the past 2 years and many mosquitoes were found there; he usually goes to the fish pond wearing short-sleeved clothes and without mosquito repellent lotion, only burning dried tree litter to repel mosquitoes. Malaria prevention efforts can be carried out by increasing awareness of the risk of malaria, preventing mosquito bites, controlling vectors / suspected vectors and chemoprophylaxis. Prevention of mosquito bites can be done by using mosquito nets, mosquito repellent lotion / spray / coils / electric mosquito repellent, and those who are accustomed to working / doing activities after 18:00 should wear closed clothing, long sleeves and trousers to avoid mosquito bites. [26]
Knowledge with Malaria Events in the Coastal of Syiah Kuala Sub-district Banda Aceh
Based on the results of statistical analysis using the chi-square test, a value of p = 0.002 was obtained, meaning that there is a significant influence of knowledge on the incidence of malaria (p < 0.05) in the coastal area of Syiah Kuala, Banda Aceh, as can be seen in Table 9 below. The knowledge questions covered the understanding of malaria, its clinical symptoms, modes of transmission, prevention methods, mosquito breeding places and how to seek the right treatment. Based on the results of the statistical analysis, it can be explained that a person's knowledge influences the incidence of malaria on the coast of Syiah Kuala Sub-district, Banda Aceh. This is supported by Bloom's theory, which states that knowledge is knowing what is done and how to do it; knowledge is the result of a person's knowing about an object through the senses and is influenced by the intensity of attention and perception toward the object. [27] In this study, 41 households (36.6%) did not know that malaria is transmitted by parasites through Anopheles mosquitoes; 63 households (56.3%) did not know whether malaria is transmitted through direct contact with malaria sufferers; 47 households (42.0%) did not know that one effort to prevent malaria is to use mosquito nets while sleeping at night; 63 households (56.3%) did not know whether malaria mosquitoes actively bite in the morning, afternoon or night; 57 households (50.9%) did not know that the habit of doing activities outside the home at night carries a risk of being bitten by malaria mosquitoes; 55 households (49.1%) did not know whether malaria will heal on its own; 52 households (46.4%) did not know whether malaria can affect all age groups except toddlers; 62 households (55.4%) did not know whether the clinical symptoms of malaria include spots on the arms and body; 56 households (50.0%) did not know whether malaria mosquitoes can breed in bathtubs and buckets filled with water inside the house; 51 households (45.5%) did not know whether malaria mosquitoes can breed in unproductive lagoons / fish ponds and swamps; and 63 households (56.3%) did not know about the need to promptly fill in swamps and puddles where malaria mosquitoes can breed.
In addition, 60 households (53.6%) were unaware that a channel is needed to drain lagoon water into the sea; 55 households (49.1%) did not know that keeping tilapia and blue panchax can reduce vulnerability, prevent malaria and increase income; 48 households (42.9%) did not know that malaria can cause death if not treated immediately; and 57 households (50.9%) did not know that someone with fever and shivering (malaria) has to go directly to the public health center. This is in line with the results of chi-square testing (using Fisher's exact test where required), which show p = 0.02, with a continuity-corrected p value of 0.002; Ho is therefore rejected at α = 0.05, meaning that the p value is smaller than the predetermined level of error, so there is a relationship between the level of knowledge and the incidence of malaria in the Kasongan Public Health Center area, Katingan Hilir Sub-district, Katingan Regency. [28] This is reinforced by research in which statistical analysis obtained a calculated X² (33.885) greater than the table X² (3.841) and p (0.000) < α (0.05), meaning that there is a relationship between knowledge and the incidence of malaria in the Koeloda Health Center area, Golewa Sub-district, Ngada Regency. [29] Knowledge determines a person's behavior, for example preventive measures (health prevention behavior) for malaria, i.e. every action taken by individuals to prevent malaria, among others sleeping under mosquito nets, using anti-mosquito products, installing mosquito screens, wearing long-sleeved clothes when working / doing activities after 18:00, channeling water from the lagoon or creating a coastal barrier, cleaning the fish ponds of moss, closing / filling in marshes, and using unproductive lagoons and ponds to keep predators of mosquito larvae such as blue panchax, nila and tilapia. Efforts to increase public knowledge about the dangers of malaria have been carried out by the Banda Aceh City Health Office through the malaria eradication and prevention program, in the form of counseling and promotion such as the morning patrol conducted by Kopelma Public Health Center. However, providing information about lagoons, productive ponds and unproductive ponds as places with the potential for and susceptibility to malaria has never been done. Likewise, lagoons and unproductive fish ponds can be managed / utilized to keep tilapia and blue panchax as predatory fish that eat mosquito larvae, which can reduce the malaria vector and provide additional income.
Based on the results of interviews with the Head of the Health Office represented by the prevention, control of infectious diseases and the Malaria Program Coordinator, the Banda Aceh City Health Office said that counseling was carried out individually when the community was sick with
|
2019-08-02T20:42:50.380Z
|
2019-07-16T00:00:00.000
|
{
"year": 2019,
"sha1": "685d5f2d9e1ff0b7adc46d302fa42348d31550e0",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/273/1/012054",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "a905425f02f38b22f5514e0aa3b9665c37db24af",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Physics",
"Geography"
]
}
|
229160621
|
pes2o/s2orc
|
v3-fos-license
|
Optical Properties of Artemisinin and Its Derivatives
Artemisinin and its derivatives are of great research value in biology. In this work, we study their chiral and optical properties. The multidimensional multifunction analysis method is used to analyze the linear and nonlinear optical processes (one-photon and two-photon absorption: OPA and TPA), electronic circular dichroism (ECD), and Raman optical activity (ROA) mechanisms under light excitation. Transition dipole moments (TDMs) and charge difference density (CDD) are used to describe the electromagnetic interaction between ECD and ROA when a substance is excited by light. The theoretical research results of the study show that the dioxygen atoms provide an intermediary for the transfer between charges and also enhance the role of the TDMs. This generalized chiral theory can not only explain the traditional sources of chirality but also distinguish whether the molecule has chirality when the chiral center is not obvious. By analyzing ROA and different vibration modes, we can clearly observe that each part of the molecule responds differently when excited.
INTRODUCTION
Artemisinin (ART) is regarded as the first natural peroxide extracted from the Chinese herbal medicine Artemisia annua. Artemisinin and its derivatives are all sesquiterpene lactones. They are widely used to treat malaria due to their high antimalarial activity and low toxicity. 1 With a safety record established in millions of malaria patients, the anti-tumor activity of artemisinin was reported for the first time in 1993. 2−4 Since then, numerous derivatives of artemisinin such as esters, 5 ethers, 6 dimers, trimers, and tetramers 7 have been researched and are expected to become anti-tumor drug candidates. We selected artemisinin and two of its derivatives to study their optical properties, as shown in Figure 1. The chemical structural formulas and atomic schematic diagrams of molecules 1, 2, and 3 are shown in Figure 1a−c, respectively. First, all of them are sesquiterpene lactones, but they have their own characteristics. By comparing molecule 1 (R1) and molecule 2 (R2), it is observed that the structure of R1 contains a ring bearing the dioxygen atoms, while the dioxygen atoms of R2 are on a branch. By comparing R2 and molecule 3 (R3), it is observed that R2 has one dioxygen-atom structure, while R3 has two dioxygen-atom structures. Since there are no special atoms other than oxygen in the studied molecules, it is very helpful for us to control the atomic variables. It is precisely because they have both similarities and differences that we chose these three molecules for research.
The dioxygen atoms have a special configuration, and there are no other special atoms in the chosen molecules, which makes it convenient for us to study the characteristics of the oxygen atoms and eliminate the interference of other atoms. The two-dimensional (2D) visualization method shows us which part of the atoms of the molecule responds to the light, and the direction of charge transfer is represented by the three-dimensional (3D) diagram.
When molecules are excited by light, charge transfer occurs. This phenomenon exists in many systems. What we want to study is the charge transfer characteristics in OPA and TPA. Compared with TPA, in OPA the molecule transitions directly from the ground state to the final excited state, and the light absorption intensity is weaker. 8 Hence, we also studied the TPA process. TPA, a third-order nonlinear optical process, was first proposed by Goppert-Mayer. 9 TPA can be analyzed through the absorption cross section. Compared with OPA, TPA only needs to absorb one-half of the energy per photon, so its electronic transition ability is stronger. In our research, we regard the transition between the ground state and the final excited state in TPA as two processes. The intermediate transition state can be obtained directly by quantum chemical calculation. 10−13 TDMs in TPA usually involve two modes: a three-state model produced by the intermediate transition and a two-state model produced by the direct transition. 14 TPA also has a wide range of applications in microscopy, 15−17 solar cells, 18,19 non-destructive imaging of biological tissues, 20,21 and nanodevice manufacturing. 22 Usually the chirality of a system can be probed with the help of electronic circular dichroism (ECD). 23,24 As we know, ECD is an asymmetric electromagnetic response, and it is closely related to the transition magnetic dipole moment (TMDM) and the transition electric dipole moment (TEDM). Theoretically, the ECD intensity is calculated by eq 1, 25 where μ_e and μ_m are the TEDM and TMDM, respectively. Raman optical activity (ROA) spectroscopy, a way to express the molecular vibrational optical spectrum, has a high resolution in the frequency domain, and its intensity is determined by the molecular structure (eq 2). 26,27 Unlike ECD, the tensor product of the transition electric quadrupole moment (TEQM, θ_e) and the TEDM also determines the ROA intensity, and ω_ij is the gap between two energy levels. The first and second terms of eq 2 are the Raman activity and the ROA intensity, respectively. ROA is related to the product of the TEDM and TMDM and to the tensor product of the TEQM and TEDM. Through the 2D color maps, we realized the visual representation of the TEDM and TMDM and their tensor product. Similarly, the tensor product of the TEQM and TEDM can also be represented by a 2D color map. This paper focuses on the comparative analysis of the response of oxygen atoms at different positions in the molecules to light excitation and confirms that the source of molecular chirality is not only the chiral center but also the asymmetric electromagnetic interaction in the whole system.
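For orientation, a standard textbook expression for the rotatory strength that governs the ECD intensity of a transition from the ground state |0> to an excited state |n> is given below; this is an assumed reference form rather than a reproduction of the paper's own eq 1, but it matches the statement above that the ECD signal is determined by the interplay of the TEDM and TMDM:

R_{0n} = \operatorname{Im}\left( \langle 0 | \hat{\mu}_{e} | n \rangle \cdot \langle n | \hat{\mu}_{m} | 0 \rangle \right)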
RESULTS AND DISCUSSION
2.1. OPA and TPA. Figure 2 displays the OPA and TPA spectra of the three molecules; the blue, red, and black lines represent R1, R2, and R3, respectively. As shown in Figure 2a, after inspecting the OPA spectrum we selected the first excited state (S 1 ) of R1, S 1 of R2, and S 1 and S 4 of R3 for study. The wavelengths of the first excited states of molecules 1, 2, and 3 are 233, 226, and 244 nm, respectively, and S 4 of R3 lies at 215 nm. For S 1 , the molar absorption coefficient of R1 is very small; that of R2 is still small but larger than that of R1; and that of R3 is much larger than those of the other molecules. This indicates that R3 has a strong response to OPA. Figure 2b shows the TPA spectra of the three molecules. The TPA transition generally requires two steps, i.e., an intermediate state acting as a transfer station between the ground and final states. However, the TPA state studied here is the first excited state, which has no intermediate state; it therefore transfers directly from the ground state to S 1 , with the same transition character as in OPA. Consequently, the excited states studied in TPA are the same as in OPA, and the analysis results are also the same.
By drawing the TDM and CDD diagrams of OPA, we can see more intuitively the position and direction of the electronic transition between excited states when molecules are excited by light, as shown in Figure 3. The positions of hole density and electron density are shown in green and red, respectively. Through the combination of 2D and 3D visualization methods, we can get a lot of information. Figure 3a is the transition density matrix of S 1 of R1, which shows that weak charge transfer and strong local absorption are the main characteristics of the transition. This conclusion can obviously be confirmed by Figure 3c, which shows the charge difference density of S 1 in R1. The localized excitation characteristics of the charge are obvious and are concentrated around the oxygen atoms. For S 1 of R2, the charge transfer characteristic is localized excitation, as shown in Figure 3b. Figure 3d shows the atom positions where charge redistribution processes are concentrated on the oxygen atoms of the intermediate part. For S 1 of R3, Figure 3e shows that the charge redistribution characteristic is localized excitation with weak charge transfer. As shown in Figure 3g, the localized excitation occurs on the benzene ring and the oxygen next to it. Moreover, for S 4 of R3, Figure 3f shows that the charge transfer characteristic is localized excitation with weak charge transfer, which is confirmed in Figure 3h, and the occurrence of localized excitation and charge transfer are observed. The atoms are concentrated on the ring containing the oxygen atoms and the benzene ring. By comparing the structure of the three molecules, we found that all of them have a dioxygen bridge structure, and it is the dioxygen bridge part that can be used as an intermediate for charge transfer.
2.2. ECD. Figure 4 shows the ECD spectra of the three molecules; we analyze the first excited state of each. The blue, red, and black curves represent R1, R2, and R3, respectively. From the molecular structure diagrams we can clearly see that R1 and R2 are chiral, whereas the chirality of R3 is not obvious from the structure alone. From the ECD spectrum, however, it can be clearly concluded that R3 is also chiral. To confirm which part of the molecule contributes to the charge−hole interaction, we plotted the three-dimensional densities of TEDM and TMDM, as well as the two-dimensional TEDM and TMDM maps and their tensor product.
The TEDM and TMDM component isosurface density maps of R1 are shown in Figure 5a. The first row represents the TEDM density, where pink represents holes and blue represents electrons; the second row represents the TMDM density, where yellow represents holes and purple represents electrons. By comparison, the TMDM density and the TEDM density of each component are basically the same. For R1, the atoms that contribute most to the electron−hole distribution after photoexcitation are still concentrated around the dioxygen bridge. The TEDM, TMDM, and their tensor product for R1 at different states are shown in Figure 5b. The first column represents TEDM, the second column represents TMDM, and the third column is the tensor product of TEDM and TMDM. The last column of Figure 5 shows the contribution of each atom to the ECD intensity, which is determined by the tensor product itself, i.e., |⟨j|μ e |i⟩⟨j|μ m |i⟩| 2 . The density matrices indicate that the TEDM density is greater than the TMDM density.
The 3D TEDM and TMDM density maps of R2 are shown in Figure 6a. The TEDM and TMDM densities are mainly concentrated, with large values, on the dioxygen bridge part of R2. The TEDM density in the X component is slightly larger than the TMDM density, whereas in the Y and Z components the TMDM density is greater than the TEDM density; overall, TEDM and TMDM have similar densities in the density matrix. Figure 6b shows the matrix filling diagrams of TEDM, TMDM, and their tensor product. The TEDM density is mainly concentrated on the dioxygen bridge and its connected ring, and the difference between the intensities of TEDM and TMDM is small. We now analyze the ECD of R3. Figure 7a shows that the TEDM and TMDM densities of R3 are concentrated, with large values, at the benzene ring and the oxygen atoms connected to it. As revealed in Figure 7b, the intensity of TEDM is much greater than that of TMDM, differing by an order of magnitude, and the TEDM density is indeed concentrated at the benzene ring and its connected oxygen atom. The 2D and 3D visualizations make it convenient to study ECD; this approach can be called a generalized chirality theory. Using it, we successfully identified the chirality of R3 and found that the chirality of the system depends not only on the chiral center of the molecule but also on the magnetic transitions of the entire system. 27

2.3. Raman Spectroscopy and ROA. The Raman spectra of the molecules were analyzed (see Figure 8). Figure 8a shows the resonance Raman spectrum of R1 with a strong Raman peak at a wavenumber of 915 cm −1 , although its Raman activity is not high. Figure 8b shows the resonance Raman spectrum of R2 with a strong peak at 898 cm −1 , whose Raman activity is two orders of magnitude larger than that of R1. A strong Raman peak of R3 appears at 1653 cm −1 , with a Raman activity reaching 10 3 , as shown in Figure 8c. At the same incident wavelength, the resonance Raman activities are not the same, which indicates that, under the same incident light, different vibration modes of the molecules give different Raman spectral responses.
The second term of eq 2 indicates that the ROA spectrum is related to the TEDM, TMDM, and TEQM of the molecules. We have also studied the Raman optical activity (ICPu/SCPu (180)) of the three molecules, again for the first excited state. Figure 9a−c shows the ROA spectra of molecules 1, 2, and 3, and the molecular vibration modes corresponding to these peaks are shown in Figure 9d−f. Simultaneous analysis of the ROA spectrum and the vibration modes at different frequencies allows us to identify the response of each group of the molecule to light. As can be seen in Figure 9a, the ROA of R1 is also very strong. As shown in Figure 9d, the vibration of R1 mainly involves the ring containing the oxygen atoms, with many vibrating positions; the main reason for this might be the lone-pair electrons of oxygen, which play an important role in the electromagnetic interactions. 28 In Figure 9b, we analyze a strong peak of R2 at a wavenumber of 1653 cm −1 ; in Figure 9e the light response of R2 is mostly concentrated on the dioxygen bridge and its connected ring. The results show that the oxygen atoms play an important role in TEDM and TMDM. ROA has different sensitivities to different incident light frequencies, which causes the relative intensity of the Raman peaks to change with the Raman shift. At a wavenumber of 1653 cm −1 , R3 reaches its maximum absolute value of Raman optical activity, as shown in Figure 9c; the corresponding vibration of R3 (Figure 9f) is mainly concentrated on the benzene ring and its connected oxygen atoms. The strength of ROA is related to the tensor product of TEDM and TMDM and to the tensor product of TEQM and TEDM, and Figure 10 shows the calculated tensor products. As shown in Figure 10a, the tensor product of R1 at 233 nm is mainly contributed by the oxygen-containing ring and the dioxygen bridge. In Figure 10b, the ROA of R2 is mostly contributed by the dioxygen bridge and its connected ring; here the oxygen atoms can act as intermediaries in the charge-transfer process and promote the interaction between the transition dipole moments. As shown in Figure 10c, the tensor-product contribution of R3 mainly comes from the benzene ring. Consistent with eq 2, the product of two tensors determines the strength of ROA: one is the tensor product of TEDM and TMDM, and the other is the tensor product of TEDM and TEQM. Figure 10 clearly shows that, for all three molecules, the influence of the second term is greater than that of the first.
CONCLUSIONS
In this paper, 2D and 3D visualization methods were used to analyze the physical mechanisms of artemisinin and its derivatives under light excitation, including OPA, TPA, ECD, and ROA. For the three chosen molecules, the atoms that play an important role in charge redistribution in OPA and TPA are generally the same as those in ECD and ROA, namely the oxygen-containing ring and the dioxygen bridge. It can be concluded that the dioxygen atoms act as a connection for charge transfer and can also strengthen this effect. Using this method, we can not only explain the mechanism of conventional molecular chirality but also determine whether molecules without a marked chiral center are chiral. Even when the wavelength of the incident light is the same, the response of the resonance Raman spectrum to light excitation differs between vibration modes, and the ring structure containing the dioxygen bridge can also enhance the transition dipole moments induced by light. The methods and conclusions of this paper provide theoretical support for studying the electromagnetic interactions and physical principles of chiral molecules under light excitation, and the approach is applicable to systems at different scales.
METHODS
4.1. Calculation Details. The quantum chemical calculations were performed with the Gaussian 16 software. 29 Within the framework of density functional theory (DFT), 30 we combined the B3LYP functional 31 with the 6-31(G) basis set. 32 In quantum chemistry, a diffuse function is a basis function with a small exponent and a wide spatial distribution. Based on a large body of theoretical work and practical experience, diffuse functions are necessary for calculating dipole moments, polarizabilities, hyperpolarizabilities, Rydberg excited states, anionic systems, and electron affinities; since none of these quantities are discussed in this article, diffuse functions were not considered. CAM-B3LYP was used to calculate and analyze the transition processes and the resulting spectra. 33 The TEDM density, the TMDM density, the electron−hole pair distributions, and the TDM density matrices were obtained with the Multiwfn 3.6 program, 34 and the VMD software was used to render the isosurfaces of the TEDM and TMDM densities in 3D space. 35

4.2. TPA. There are two transition pathways in TPA: a two-step transition through an intermediate excited state, and a direct (symmetry-broken) transition when the energy mismatch is large. 10 The TPA cross section can be written in the form

σ_TPA ∝ (a_0^5 α / c_0) (ω^2 / Γ_f) g(ω) δ_TPA   (3)

where the prefactor is built from the Bohr radius (a 0 ), the speed of light (c 0 ), and the fine-structure constant (α); the second factor is controlled by the frequency of the light (ω) and the excited-state linewidth (Γ f ); and g(ω) describes the profile of the spectral line. The TPA probability δ_TPA involves the ground state (|g⟩), the final state (|f⟩), and any intermediate state |j⟩; μ and ω j are the TEDM and the energy of the corresponding excited state, and the difference between the permanent dipole moments is Δμ fg = ⟨f|μ|f⟩ − ⟨g|μ|g⟩. θ j is the angle between the Dirac brackets ⟨f|μ|j⟩ and ⟨j|μ|g⟩, and ϕ is the angle between Δμ fg and ⟨f|μ|g⟩. Equation 4 confirms that the TPA probability is determined by the product of the TDMs of the two transition steps. Compared with other methods, the results obtained in this paper agree well with experiment. 8 The TDM in the atomic-orbital basis is

P^tran_{μν} = Σ_{i→j} w_{i→j} C_{μi} C_{νj}   (5)

where P^tran_{μν} is the TDM, w is the allocation coefficient from the occupied orbital to the virtual orbital, C_{μi} and C_{νj} are the linear combination coefficients of the molecular orbitals, and μ indexes the basis functions χ μ . From eq 5 the contribution of each atom to the TEDM can be calculated; the TMDM is obtained in a similar way. 37
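A minimal sketch of eq 5 is shown below; the dimensions and coefficient matrices are random placeholders, and the contraction simply implements P^tran = C_occ W C_vir^T. This is an illustration of the bookkeeping only, not the authors' analysis code.

```python
import numpy as np

# hypothetical sizes: nbas basis functions, nocc occupied and nvir virtual MOs
nbas, nocc, nvir = 10, 3, 4
rng = np.random.default_rng(0)
C_occ = rng.normal(size=(nbas, nocc))   # hypothetical occupied MO coefficients C_{mu i}
C_vir = rng.normal(size=(nbas, nvir))   # hypothetical virtual MO coefficients C_{nu j}
W = rng.normal(size=(nocc, nvir))       # hypothetical excitation weights w_{i->j}

# eq 5: P^tran_{mu nu} = sum_{i->j} w_{i->j} C_{mu i} C_{nu j}
P_tran = np.einsum("mi,ij,nj->mn", C_occ, W, C_vir)
print(P_tran.shape)   # (nbas, nbas) transition density matrix in the AO basis
```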
Competing Vortex Topologies in Iron-based Superconductors
In this work, we establish a new theoretical paradigm for vortex Majorana physics in the recently discovered topological iron-based superconductors (tFeSCs). While tFeSCs are widely accepted as an exemplar of topological insulators (TIs) with intrinsic $s$-wave superconductivity, our theory implies that such common belief could be oversimplified. Our main finding is that the normal-state bulk Dirac nodes, usually ignored in TI-based vortex Majorana theories for tFeSCs, will play a key role of determining the vortex state topology. In particular, the interplay between TI and Dirac nodal bands will lead to multiple competing topological phases for a superconducting vortex line in tFeSCs, including an unprecedented hybrid topological vortex state that carries both Majorana bound states and a gapless dispersion. Remarkably, this exotic hybrid vortex phase generally exists in the vortex phase diagram for our minimal model for tFeSCs and is directly relevant to tFeSC candidates such as LiFeAs. When the four-fold rotation symmetry is broken by vortex-line tilting or curving, the hybrid vortex gets topologically trivialized and becomes Majorana-free, which could explain the puzzle of ubiquitous trivial vortices observed in LiFeAs. The origin of the Majorana signal in other tFeSC candidates such as FeTe$_x$Se$_{1-x}$ and CaKFe$_4$As$_4$ is also interpreted within our theory framework. Our theory sheds new light on theoretically understanding and experimentally engineering Majorana physics in high-temperature iron-based systems.
The tFeSCs, however, are far from being well understood. For example, unlike other tFeSC candidates, LiFeAs possesses no vMBS signal in any of its free vortices, even though the Fermi level is around the TI gap [34]. This counter-intuitive vortex physics clearly deviates from the naive expectation from the Fu-Kane paradigm, thus calling for a new theoretical interpretation. Meanwhile, most tFeSCs additionally host a pair of massless bulk Dirac nodes in their normal states [25,63], which are energetically above the TI bands with an energy separation dubbed δ so . It has been predicted that a Dirac semimetal (DSM), if going superconducting, would feature gapless, vMBS-free magnetic vortices [64,65], in contrast to the vMBS physics of a superconducting TI [62, 66–74]. Notably, existing studies on tFeSCs generally adopt the presumption of an infinite δ so limit, so that only TI or DSM bands are independently studied for simplicity [75]. However, such a presumption remains unjustified for some tFeSC candidates (e.g., LiFeAs) where δ so can be as small as 10 meV [25,76]. This raises an important open question for the topological nature of vortices in tFeSCs, especially when both TI and DSM bands are highly entangled.
In this work, we provide a new theoretical paradigm for understanding vortex topological physics in general tFeSCs. To fully incorporate both TI and DSM physics, a minimal 6-band model is constructed to capture the key topological ingredients of general tFeSCs [25,51]. For the first time, we have identified the emergence of four competing and topologically distinct vortex states in the vortex phase diagram of tFeSCs, as shown in Fig. 1. Remarkably, a new exotic "hybrid topological" vortex phase manifests as the most probable vortex state for small δ so systems, which features both well-defined vMBS and a one-dimensional (1D) nodal band structure along its k z dispersion. The stability of the hybrid vortex phase relies on the protection of four-fold rotation symmetry C 4 , and upon C 4 breaking, the hybrid vortex can be easily trivialized to become vMBS-free. This offers a natural explanation for the observed missing-Majorana puzzle in LiFeAs [34]. Applications of our theory to other tFeSCs and new experimental signatures are also discussed.
Notably, h^(1/2)(k) by itself manifests as a standard Hamiltonian for a 3D time-reversal invariant TI [77,78]. Besides, as shown in Fig. 2 (a), a second band inversion between |p−, ±1/2⟩ and |d+, ±3/2⟩ generates an additional 3D Dirac semimetal phase with a pair of four-fold-degenerate bulk Dirac nodes [79,80]. The energy separation between the TI and DSM bands is controlled by δ so , the spin-orbit splitting among the d-orbital bands. Remarkably, the robustness of the bulk Dirac points is guaranteed by the combination of C 4 , P, and Θ. In Fig. 2 (b), we exploit the iterative Green's function method to map out the energy spectrum of the (001) surface in a semi-infinite geometry. This clearly reveals the coexistence of a 2D Dirac surface state and the 3D bulk Dirac nodes, a common topological feature shared by most tFeSCs.
To understand possible vortex topologies in tFeSCs, it is suggestive to start with the δ so → ∞ limit, where the normal-state TI phase and DSM phase can be viewed as two independent systems [75]. When the Fermi level lies around the TI gap, the system enters the "TI limit" and its vortex physics is well captured by the Fu-Kane model [62,66]. Specifically, the lowest-energy k_z-dispersing vortex line modes carry an angular momentum l z = 0, and there exist two critical chemical potentials μ = μ c0,± where the vortex modes close their energy gap at k z = 0 or π. Such gap closing signals a change of the 1D vortex topology and thus serves as the phase boundary between two topologically distinct phases: (i) a Majorana-free trivial phase and (ii) a gapped topological phase with end-localized vMBS (dubbed the "Kitaev vortex"), which lives within μ ∈ (μ c0,− , μ c0,+ ). Meanwhile, a "DSM limit" with δ so → ∞ is reached when the Fermi level is near the bulk Dirac nodes. Similarly, there exist two critical chemical potentials μ c1,± where the vortex gap vanishes at k z = 0 or π. For μ ∈ (μ c1,− , μ c1,+ ), however, the lowest-energy vortex modes necessarily carry l z = ±1 and further form a pair of C 4 -protected band crossings at zero energy along k z . While such a nodal vortex is NOT vMBS-carrying, it can be turned into a gapped Kitaev vortex with vMBS by simply spoiling the protecting C 4 symmetry, as we will show later.
In realistic FeSC systems, δ so can be small enough such that neither the TI limit nor the DSM limit applies. In this case, the aforementioned phase diagrams for both TI and DSM limits will now mix and interact with each other. Nonetheless, the notions of µ c0,± and µ c1,± remain well-defined and thus still decide the vortex topological phase boundaries even for a small δ so system. Remarkably, as we show analytically in the SM [83], the energy range for each vortex phase, i.e., ∆µ c0/c1 = µ c0/c1,+ − µ c0/c1,− , will get significantly enhanced by reducing the value of δ so . This fact is crucial for small-δ so systems, where the Kitaev vortex phase and nodal vortex phase tend to have a finite overlap around µ = 0 in the phase diagram, leading to a new hybrid topological vortex phase. This hybrid vortex state inherits two key topological features from its parent vortex states: (i) it features a C 4 -protected nodal dispersion along k z ; (ii) it hosts vMBS with a finite localization length. Notably, the 1D nodal bands and the vMBSs are living in different C 4 symmetry sectors and thus will not hybridize with each other. Similar Majorana-carrying gapless topological phase has also been reported in certain 1D Luttingerliquid systems [84,85].
We now proceed to numerically map out the vortex topological phase diagram (VTPD) for our six-band tFeSC model in Eq. (3) as a function of both μ and δ so . The topological phase boundaries in Fig. 1 are determined by μ c0,± (orange line) and μ c1,± (purple line) for a fixed δ so , which can be extracted by calculating the vortex mode spectrum in a cylinder geometry. The resulting regimes include (II) a nodal vortex for μ c0,+ < μ < μ c1,+ ; (III) a hybrid vortex for μ c1,− < μ < μ c0,+ ; and (IV) a Kitaev vortex for μ c0,− < μ < μ c1,− . Further varying the value of δ so , we eventually obtain the complete μ-δ so VTPD in Fig. 1. Just as we expect, the hybrid vortex is indeed the dominating phase for clean tFeSCs with small μ and δ so . The hybrid vortex physics is captured by a minimal four-band effective Hamiltonian whose decoupled 2 × 2 blocks, h Kitaev = (m 0 + m 1 k z 2 )τ z + m 2 k z τ x and h nodal = (m̃ 0 + m̃ 1 k z 2 )τ z , correspond to the gapped Kitaev vortex part with l z = 0 and the gapless nodal vortex part with l z = ±1, respectively. While h hybrid can easily be constructed from symmetry principles, we also provide an analytical derivation in the SM [83]. The band parameters for h hybrid can be extracted numerically; for example, with μ = 0 and δ so = 0.5, we find m 0 = 0.018, m 1 = −0.0022, m 2 = 0.032 and m̃ 0 = 0.025, m̃ 1 = −0.011 for the above model. The sign reversal between m 0 (m̃ 0 ) and m 1 (m̃ 1 ) signals the topological band inversion of h Kitaev (h nodal ).
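For readers who want to reproduce the qualitative band picture, a minimal numerical sketch of h_hybrid is given below. It assumes only the 2 × 2 block forms and the parameter values quoted above; the block-diagonal layout (Kitaev block ⊕ nodal block) is our own bookkeeping, not code from the authors.

```python
import numpy as np

# Pauli matrices acting on each 2x2 vortex block
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# parameter values quoted in the text for mu = 0, delta_so = 0.5
m0, m1, m2 = 0.018, -0.0022, 0.032   # Kitaev (l_z = 0) block
mt0, mt1 = 0.025, -0.011             # nodal (l_z = +/-1) block

def h_hybrid(kz):
    h_kitaev = (m0 + m1 * kz**2) * sz + m2 * kz * sx
    h_nodal = (mt0 + mt1 * kz**2) * sz
    H = np.zeros((4, 4), dtype=complex)
    H[:2, :2] = h_kitaev
    H[2:, 2:] = h_nodal
    return H

# the nodal block closes its gap where mt0 + mt1*kz^2 = 0, while the Kitaev block stays gapped
for kz in np.linspace(0.0, np.pi, 7):
    E = np.linalg.eigvalsh(h_hybrid(kz))
    print(f"kz = {kz:.2f}: E = {np.round(E, 4)}")
```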
The topological nature of the hybrid vortex state relies delicately on the around-axis C 4 symmetry. As schematically shown in Fig. 4 (a), spoiling C 4 converts the nodal component of the hybrid vortex into additional Kitaev vortex degrees of freedom, which then interact with the original Kitaev vortex component and trivialize the state as a whole. Therefore, a C 4 -broken "hybrid vortex" is essentially a trivial vortex state with no Majorana physics. In practice, local C 4 breaking at the nanometer scale appears generally unavoidable and can arise from a plethora of mechanisms in tFeSCs. These scenarios include (i) the applied magnetic field tilting away from the ẑ axis [87–89]; (ii) bulk impurities bending the vortex line. In the following, we focus on the effect of vortex-line tilting in tFeSCs; a detailed discussion of vortex-line bending can be found in the SM [83].
Vortex Line Tilting — For a small tilting angle φ ≪ π/2 [as defined in Fig. 4 (b)], we can adopt second-order perturbation theory to analytically rederive the hybrid vortex Hamiltonian h φ (k z ), where k z is aligned with the vortex-line orientation. Details of the perturbation theory can be found in the SM [83]. We find that, formally, h φ (k z ) = h hybrid (k z , φ) + h SB (k z , φ). Here h hybrid (k z , φ) resembles the original C 4 -preserving hybrid vortex Hamiltonian in Eq. 4, but with a set of renormalized parameters m 0,1 → m 0,1 + m 5,6 φ 2 and m̃ 0,1 → m̃ 0,1 + m̃ 5,6 φ 2 . We thus expect turning on φ to quantitatively change our VTPD (i.e., μ c0,± and μ c1,± ). Meanwhile, h SB (k z , φ) describes the geometry-induced C 4 -breaking terms, and its effect on the vortex-state topology is two-fold. First, it generates a topological gap for the nodal vortex bands, with h nodal → h nodal + m̃ 2 φ 2 k z τ x ; the linear-k z dependence here is required by the particle-hole symmetry Ξ = τ x K, with K the complex conjugation. Second, the Kitaev and nodal vortex degrees of freedom get hybridized via a coupling matrix that is linearly proportional to φ (see the SM [83] for details). It is exactly these two contributions of h SB (k z , φ) that lead to the "hybrid → trivial" scenario described in Fig. 4 (a). Fig. 4 (b) is a numerical map of the VTPD as a function of μ and φ based on a lattice-regularized tight-binding version of Eq. 3, obtained by calculating the logarithmic value of the vortex band gap at k z = π. In the φ = 0 limit, the μ-φ VTPD reproduces the C 4 -symmetric phase diagram in Fig. 3 (c), up to some quantitative differences from the lattice regularization procedure. The topological vortex phase boundaries (red lines) show a φ-dependence due to the vortex band renormalization, in agreement with our perturbation theory. Interestingly, we also find that the Kitaev vortex phase terminates at φ ∼ π/6, which can be feasibly checked in experiments by mapping out the local density of states near the vortex core.
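Continuing the sketch above, one can add the φ-dependent terms described here and watch the nodal crossing gap out. The coefficients m̃ 2 , m 3 , m 4 below are illustrative placeholders, and the matrix structure chosen for the φ-linear coupling is a schematic assumption rather than the authors' derived form.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# same k.p parameters as in the previous sketch; C4-breaking coefficients are illustrative
m0, m1, m2 = 0.018, -0.0022, 0.032
mt0, mt1 = 0.025, -0.011
mt2, m3, m4 = 0.01, 0.01, 0.005

def h_vortex(kz, phi):
    H = np.zeros((4, 4), dtype=complex)
    H[:2, :2] = (m0 + m1 * kz**2) * sz + m2 * kz * sx
    # phi^2 * kz * tau_x term gaps out the nodal block
    H[2:, 2:] = (mt0 + mt1 * kz**2) * sz + mt2 * phi**2 * kz * sx
    # schematic phi-linear coupling between the Kitaev and nodal blocks
    V = (m3 * kz * phi + m4 * phi) * np.eye(2)
    H[:2, 2:] = V
    H[2:, :2] = V.conj().T
    return H

kz_node = np.sqrt(mt0 / abs(mt1))   # nodal crossing of the phi = 0 Hamiltonian
for phi in (0.0, 0.1, 0.3):
    gap = 2 * np.abs(np.linalg.eigvalsh(h_vortex(kz_node, phi))).min()
    print(f"phi = {phi:.1f}: gap at the former node = {gap:.4e}")
```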
Discussions on tFeSC Candidates — We first note that FeTe x Se 1−x , a paradigmatic tFeSC candidate, features a strong spin-orbit coupling effect with δ so ∼ 35 meV. We believe that such a δ so is large enough for FeTe x Se 1−x to approach the "TI limit", as justified by earlier first-principles-based calculations [51]. Thus, FeTe x Se 1−x should manifest as a standard Fu-Kane superconductor with vMBS signals, in agreement with the experimental observations [23,29]. LiFeAs, however, features a small δ so ∼ 10.7 meV [25], three times smaller than that of FeTe x Se 1−x . Based on the μ-δ so VTPD in Fig. 1, we expect LiFeAs to carry the hybrid vortex topology, since it hosts both a small δ so and a small μ. As we discussed earlier, a C 4 -breaking perturbation such as the B-field tilting in Fig. 4 can break the hybrid vortex down to a trivial one with no MZM signal, which is likely the reason behind the disappearance of vortex Majorana signals in LiFeAs [34]. In particular, even when the B field is carefully aligned with the crystalline rotation axis, the tilting angle φ of the vortex line can still be greatly enhanced when a near-surface impurity locally distorts the vortex-line geometry. Notably, these atomic impurities could be completely invisible to surface-sensitive probes such as scanning tunneling microscopy (STM). In the SM [83], we numerically simulate such an impurity-induced vortex-line bending effect and confirm its crucial role in trivializing the vortex topology.
Ref. [34] also reports the appearance of vMBS signal in LiFeAs due to surface-impurity-induced electron doping. The reported levitation of Fermi level is around 5 meV for the so-called strong impurities. Given that δ so ∼ 10 meV for LiFeAs, this effect would be capable of driving a transition from a trivialized hybrid vortex to a Kitaev vortex, following our µ-φ VTPD in Fig. 4 (b). We further predict that if we continuously lower the Fermi level via hole doping, the vMBS signal will reemerge at a critical µ c1,− [i.e., regime IV in Fig. 3 (c)] and eventually disappear at a negatively large µ c0,− . Such an exotic "reentrant Majorana signal" serves as an experimental "smoking gun" for our theory. We also predict a similar but more complex reentrant vortex Majorana phenomenon in CaKFe 4 As 4 [33], where a detailed discussion can be found in the SM [83].
Conclusion — To summarize, the entanglement between TI and DSM physics has a decisive impact on the topological nature of vortex lines in tFeSCs. A direct outcome of the entangled bulk topological bands is the competition among multiple topologically distinct vortex states in the VTPD, including trivial, Kitaev, nodal, and hybrid vortex phases. Notably, the unprecedented hybrid vortex topology naturally explains the puzzling absence of vMBS signals in LiFeAs. Our theory can also be feasibly tested in both LiFeAs and CaKFe 4 As 4 with state-of-the-art Fermi-level engineering and scanning tunneling microscopy. Besides, by replacing the Te/Se/As atoms in tFeSCs with other atoms of different spin-orbit coupling strength, the value of δ so can be continuously tuned to manipulate the vortex topology. An interesting future direction is to explore other symmetry-breaking effects and their impact on the vortex topology in tFeSCs. For example, breaking inversion symmetry by strain can split the bulk Dirac nodes into pairs of Weyl nodes, which is expected to further complicate the VTPD [71,88,89]. We leave a detailed study of these possibilities for engineering vortex topological physics to future works. [92] Since the s±-wave pairing contributes to the vortex topology in the same way as an isotropic s-wave pairing, we use the s-wave pairing in our model for simplicity.
In this section, we study the topological vortex Majorana bound states (vMBSs) in topological iron-based superconductors (tFeSCs), whose normal-state band structure contains both a Dirac semimetal phase and a topological insulator phase. The minimal model capturing the main physics of tFeSCs is therefore the 6-band Hamiltonian given in Eq. (1) in the main text. The basis functions of this 6-band model can be rewritten in terms of the z-component of the total angular momentum and the parity of the basis states. In the normal-state Hamiltonian, the D(k) term gives the d xz and d yz bands distinct masses, leading to the two hole pockets of iron-based superconductors. And δ so is the spin-orbital coupling (SOC), which shifts the 3D Dirac point when the SOC splitting of the d-orbital bands is varied; we will show later that it is the most important parameter for the vortex topology. To simplify the calculation, the substitutions k z → sin k z and k z 2 → 2(1 − cos k z ) are used.
For Hamiltonian (8), the important symmetries include the z-component of the total angular momentum J z , the time-reversal operation (with K the complex conjugation), and the spatial inversion symmetry P = diag[−1, −1, 1, 1, 1, 1]. In addition, the system has a mirror symmetry M z with respect to the z axis. To study the topological vortex states, we introduce a 1D vortex line with π-flux inserted along the z axis and solve the Bogoliubov-de Gennes (BdG) Hamiltonian in the Nambu basis {Ψ † k , Ψ T −k }. As a result, the particle-hole symmetry operator is Ξ = γ x K, where γ x is the Pauli matrix acting on the particle-hole subspace and K is the complex conjugation. Here the normal-state Hamiltonian H 0 (k) is given by Eq. (8) without symmetry-breaking perturbations. An s-wave pairing is considered in this work, with the real-space pairing profile ∆ 0 → ∆ 0 tanh(r/ξ 0 )e iθ . Since the vortex line is oriented along the z direction, the 3D calculation reduces to a 2D problem by treating k z as a parameter. To solve the 2D BdG Hamiltonian at fixed k z for the vortex bound states (VBSs), we take a disc geometry with natural boundary conditions. In the polar coordinate system (r, θ), the momentum operators k ± = k x ± ik y take the form k ± = −i e ±iθ (∂ r ± (i/r)∂ θ ), whose action on e inθ J n (kr) raises or lowers the angular index n by one, where n is an integer and J n is the Bessel function of the first kind. Given that the vortex line has winding number +1, the eigenfunctions of the reduced BdG equations take the general form |E j (n, k z )⟩ = (u j,kz (n, r, θ), v j,kz (n, r, θ)) T , labeling the j-th solution in the n-subspace at fixed k z , with the electron and hole wave functions u j,kz (n, r, θ) = e inθ (u 1 (n, r), u 2 (n + 1, r)e iθ , u 3 (n, r), u 4 (n + 1, r)e iθ , u 5 (n − 1, r)e −iθ , u 6 (n + 2, r)e 2iθ ) and v j,kz (n, r, θ) = e inθ (v 1 (n, r), v 2 (n − 1, r)e −iθ , v 3 (n, r), v 4 (n − 1, r)e −iθ , v 5 (n + 1, r)e iθ , v 6 (n − 2, r)e −2iθ ), where the components u i (n, r) and v i (n, r) with i = 1, . . . , 6 can be expanded in the normalized Bessel basis Φ(n, r, α k ) = [√2/(R J n+1 (α k ))] J n (α k r/R). Please note that the n used here is the l z used in the main text. Here, c and c̃ are the expansion coefficients, α k is the k-th zero of J n , and R is the radius of the disc. In our calculation, ξ 0 = 1 and R = 120 are used, and the truncation number for the Bessel zeros is N = 140. With these settings, finite-size effects are weak enough for the low-energy VBSs.
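The disc-geometry expansion can be checked with a few lines of Python. The snippet below only verifies that the normalized Bessel basis Φ(n, r, α_k) written above is orthonormal with respect to the radial weight r dr, using the R and N values quoted in the text; it is an illustrative check, not the authors' solver.

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import trapezoid

# normalized Bessel basis on a disc of radius R with Dirichlet boundary conditions
R, n, N = 120.0, 1, 140
alpha = jn_zeros(n, N)                  # first N zeros of J_n
r = np.linspace(0.0, R, 4000)

def phi(k, r):
    return np.sqrt(2.0) / (R * jv(n + 1, alpha[k])) * jv(n, alpha[k] * r / R)

# orthonormality check: integral of Phi_k Phi_k' r dr should be delta_{kk'}
overlap = trapezoid(phi(0, r) * phi(1, r) * r, r)
norm = trapezoid(phi(0, r) ** 2 * r, r)
print(f"<0|1> = {overlap:.2e}, <0|0> = {norm:.4f}")
```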
Therefore, there are two types of numerical results. (1) Searching for the nodal vortex phase: fixing the chemical potential μ and δ so , we calculate the vortex line spectrum as a function of k z for each n-subspace; normally, the 1D nodal vortex lives in the |n| ≥ 1 subspaces, and the results are shown in Fig. 5. (2) Mapping out the topological phase diagram: for the phase diagrams in Fig. 6, we adopted parameters slightly different from the ones in the main text. In this section, let us check the TI limit of the six-band model in Eq. (8) by taking δ so → ∞. This implies that the bulk Dirac cone is far away from the TI surface Dirac cone, so that we can eliminate the high-energy bands from |d + , +3/2⟩, |d + , −3/2⟩. The six-band model is thereby reduced to a four-band model describing the topological insulator. The corresponding vortex phase transition was studied by P. Hosur in Ref. [66], who derived the critical chemical potential for topological vMBSs; the analytical result is consistent with the numerical calculations in Refs. [66 and 93]. Please note that these results hold only in the δ so → ∞ limit of the 6-band model.
Numerically, we find that the critical chemical potential μ c − varies very rapidly and reaches a large negative value for small δ so . We use a perturbative method to provide a semi-quantitative understanding of this phenomenon, analyzing the case of finite but sufficiently large δ so . We again treat the high-energy bands from |d + , +3/2⟩, |d + , −3/2⟩ as perturbations and project onto a four-by-four effective TI Hamiltonian, with k ± = k x ± ik y , taking the approximation around the Fermi energy. Hereafter we only keep terms up to order k 2 , so the D k -terms are ignored. We can then apply the analytical criterion derived by P. Hosur for the topological vortex phase transition of H eff , in which the mass term M 1 (k) is replaced by a renormalized mass M̃ 1 (k) reduced by a second-order correction of order A 1 2 /(2δ so ); this yields the critical chemical potentials μ ± eff,c . Now let us decrease δ so from infinity to a finite value (assume δ so > 0 for simplicity), while still assuming M 1 − A 1 2 /(2δ so ) > 0 to keep the perturbation theory valid. The above analysis explains why μ − eff,c (< 0) becomes large and negative for small δ so . A simple comparison shows the significant effect of changing δ so on μ c − . This clearly explains the dominance of the hybrid vortex in the superconducting vortex-line phase diagram of tFeSCs, shown in Fig. (1) of the main text.
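The projection step described above is a standard second-order (Löwdin) downfolding; the generic expression below is included only for orientation and is not a verbatim reproduction of the SM equations:

H_eff(E) = P H P + P H Q (E − Q H Q)^{-1} Q H P ,

where P projects onto the low-energy TI bands and Q = 1 − P onto the high-energy |d + , ±3/2⟩ bands. Expanding the resolvent for a large band separation δ so produces corrections that scale as (coupling)^2 / δ so , which is the origin of the A 1 2 /(2δ so )-type renormalization quoted above.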
In this section, we derive the low-energy effective Hamiltonian for the superconducting vortex-line states, including the Kitaev vortex in the n = 0 subspace and the nodal vortex in the n = ±1 subspace. This serves as the starting point for addressing the symmetry-breaking effects induced by tilted external magnetic fields (see Sec. ) or by a bulk-impurity-induced curved vortex line (see Sec. ).
Low-energy effective Hamiltonian for Kitaev vortex
Before that, let us briefly discuss the low-energy states. For the n = 0 subspace, we first take the two lowest eigenstates of the 2D BdG Hamiltonian at k z = π. Next, we treat the k z -dependent terms of Eq. (11), expanded around k z = π, as perturbations, including the H n=0 (k z 2 ) ∝ k z 2 term and the H n=0 (k z ) ∝ k z term. Without symmetry-breaking terms, the BdG Hamiltonian also preserves the mirror symmetry M z BdG , where M z is defined in Eq.
Here u i = v i * is required by the particle-hole symmetry. Please note that all of the symmetry constraints have been checked numerically. The symmetry constraints of Ξ and M z BdG then fix the form of the effective Hamiltonian, which confirms the topological Kitaev vortex phase discussed in the main text. In the small-tilting limit φ ≪ π/2, we find that the tilted normal-state Hamiltonian consists of two parts: the first part is formally identical to the original one [i.e., the φ = 0 case], up to a renormalization of the model parameters, while the second part, H 0 (k x , k y , k z , φ), is caused solely by the non-zero tilting angle φ and can be treated as a perturbation. Crucially, H 0 (k x , k y , k z , φ) accounts for the breaking of the C 4 rotational symmetry about the z axis.
where the off-diagonal k 2 -order terms are ignored. While the physical out-of-plane mirror symmetry M z is explicitly broken by the vortex-line tilting, it can be restored if we require φ to flip its sign under M z as well. In fact, this "modified" mirror symmetry is the key that allows us to construct the low-energy effective tilted vortex Hamiltonian purely from symmetry principles. Hereafter, we also write k y and k z instead of k y ′ and k z ′ for simplicity. Next, we discuss the effective low-energy vortex Hamiltonian h vortex (k z , φ) up to order φ 2 , which captures the main physics once the modified mirror symmetry and the particle-hole symmetry are imposed. The basis is taken from the solutions of the unperturbed Hamiltonian at k z = π for the four low-energy vortex states, including the two n = 0 vortex states [i.e., |f ± ⟩ given by Eq. (37)] and the two n = ±1 vortex states [i.e., |f̃ ± ⟩ given by Eq. (45)]. In the vortex-state basis {|f + ⟩, |f − ⟩, |f̃ + ⟩, |f̃ − ⟩}, the one-dimensional k · p-type low-energy effective vortex Hamiltonian h vortex (k z , φ) up to order φ 2 is given by Eq. (56), whose first term is the unperturbed vortex Hamiltonian, i.e., the direct sum of the Kitaev vortex Hamiltonian in Eq. (40) and the nodal vortex Hamiltonian in Eq. (48), namely (m kz τ z + m 2 k z τ x ) ⊕ m̃ kz τ z with m kz = m 0 + m 1 k z 2 and m̃ kz = m̃ 0 + m̃ 1 k z 2 . The second term of Eq. (56) collects the symmetry-breaking contributions; all of these terms are in principle allowed by the modified mirror symmetry (M z ) and the particle-hole symmetry (Ξ). In the φ = 0 decoupling limit, the m 0 m 1 < 0 case describes the topological Kitaev vortex phase (n = 0 states) and m̃ 0 m̃ 1 < 0 describes the nodal vortex phase (n = ±1 states). The φ-dependent terms modify the topological conditions for the vortex phases; as a result, the topological phase boundaries among all the distinct vortex phases depend on the value of φ.
• The topological gap opening for the nodal vortex due to the spinless p-wave-like pairing term m̃ 2 k z φ 2 . This can turn the nodal vortex into another topological Kitaev vortex.
• The hybridization between the Kitaev vortex and the nodal vortex is caused by those off-diagonal terms: m 3 k z φ and m 4 φ.
Here we take the first θ integral (∫ 0 2π dθ) as an example to show that the integration does not vanish; a similar analysis can also be done for the other integral.
• The second-order perturbation for the off-diagonal term, where the first part of the contribution to the coefficient m̃ 2 comes from the intermediate vortex states (the "· · ·" part denotes additional contributions from the remaining intermediate states). In brief, second-order perturbation theory is necessary to derive the vortex Hamiltonian in Eq. (58).
Numerical Simulations and Application to LiFeAs
Next, we perform numerical simulations of the BdG Hamiltonian with tilted vortex lines. Since the C 4 symmetry is broken, the simulation based on the Bessel-function expansion is no longer valid. Instead, we can still use a tight-binding model for the simulation by taking k x → sin k x , k y → sin k y and k x 2 → 2(1 − cos k x ), k y 2 → 2(1 − cos k y ).
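A quick numerical check (not from the paper) of this regularization: the lattice substitutions reproduce the continuum dispersions near k = 0, which is where the k · p model is meant to apply.

```python
import numpy as np

# continuum k.p terms vs their lattice-regularized counterparts
kz = np.linspace(-np.pi, np.pi, 201)
lin_err = np.abs(np.sin(kz) - kz)                 # kz   -> sin kz
quad_err = np.abs(2 * (1 - np.cos(kz)) - kz**2)   # kz^2 -> 2(1 - cos kz)

small = np.abs(kz) < 0.5
print(f"max |sin k - k| for |k| < 0.5:         {lin_err[small].max():.4f}")
print(f"max |2(1 - cos k) - k^2| for |k| < 0.5: {quad_err[small].max():.4f}")
```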
Note that k z is still preserved. For the low-energy vortex states, we expect our approximation to be valid at relatively small φ. This enables us to calculate: • The vortex spectrum E qp (k z ) at fixed μ, δ so and φ.
• The evolution of the minimal gap of E qp (k z ) by varying φ.
• Topological vortex phase diagram as a function of µ and φ.
First, we study the vortex spectrum. The evolution of the vortex-line band structure E qp (k z ) as φ is varied from 0° to 20° is shown in Fig. 8. Clearly, the zoomed-in panels of Fig. 8 show that ∆ nodal /∆ 0 increases with increasing φ.
From these results, we find that ∆ nodal /∆ 0 ≈ φ 2 , as expected. Motivated by this observation, we can extract a more precise relationship between ∆ nodal /∆ 0 and φ. The numerical results for φ ∈ [0°, 20°] are shown in Fig. 9. The fit (red line) shows good agreement with the numerical results (blue solid circles); the tiny constant offset of −3 × 10 −5 is likely due to numerical error, and the φ 4 dependence can be ignored at small φ. This directly confirms the validity of our perturbation theory, namely that ∆ nodal ∼ φ 2 .
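As a throwaway numerical illustration of this type of fit (with synthetic data, not the actual Fig. 9 values), one can fit ∆ nodal /∆ 0 against φ 2 and φ 4 and read off the quadratic coefficient:

```python
import numpy as np

# hypothetical (phi, gap) data mimicking the quadratic-plus-quartic behaviour
phi = np.deg2rad(np.linspace(0, 20, 11))
gap_ratio = 0.03 * phi**2 + 2e-4 * phi**4      # synthetic Delta_nodal / Delta_0

# fit Delta_nodal/Delta_0 = a*phi^2 + b*phi^4 + c  (a polynomial in phi^2)
b, a, c = np.polyfit(phi**2, gap_ratio, 2)
print(f"a (phi^2 term) = {a:.4f}, b (phi^4 term) = {b:.2e}, offset c = {c:.1e}")
```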
As mentioned earlier, ∆ hybrid ∼ φ implies that its energy scale should be one order of magnitude larger than ∆ nodal . This leads to the estimate that ∆ hybrid should reach 0.1∆ 0 for a relatively small φ ∼ 15°. We note that ∆ 0 for LiFeAs is around 5 meV, which directly leads to an estimated surface Majorana splitting 2∆ hybrid on the order of 1 meV. This is clearly observable thanks to the state-of-the-art STM resolution.
Moreover, we can semi-quantitatively discuss the hybridization strength between the Kitaev vortex states and the nodal vortex states caused by the off-diagonal terms m 3 k z φ and m 4 φ in Eq. (58), obtained from first-order perturbation theory. We denote the hybridization energy scale as ∆ hybrid , which is difficult to determine directly. However, one can use ∆ nodal as an energy-scale reference for ∆ hybrid at small φ, since both are obtained from perturbation theory or symmetry construction. Furthermore, we have shown that the energy scale of ∆ nodal is about 0.01∆ 0 , which implies that ∆ hybrid is of order 0.1∆ 0 , comparable to the energy resolution of experiments (∼ 0.05∆ 0 ) such as scanning tunneling microscopy (STM). This C 4 -breaking mechanism due to a tilted external magnetic field is the driving force for the trivialization of a hybrid vortex, thus providing a more concrete explanation of the experimental observation in LiFeAs (no MZM is detected in free vortices). With this understanding of the role of the tilted magnetic field on the vortex spectrum, we now study a new topological vortex phase diagram as a function of μ and φ, two parameters that can be feasibly controlled in experiments. Note that such control has already been demonstrated for the superconducting vortex matter in LiFeAs by utilizing a combination of a vector magnetic field and scanning tunneling microscopy [e.g., see Fig. 3 and Fig. 4 in Ref. [87]]. The results are shown in the main text, indicating that • At non-zero φ, the nodal vortex becomes a topological Kitaev vortex. Moreover, the hybrid vortex becomes a trivial vortex, because of the cancellation of the two Kitaev vortices.
• At small φ, the critical chemical potentials for the topological vortex phase transitions have small modifications, while large φ has strong effects and even eliminates all the Kitaev vortex phases.
Appendix E: Impurity-Induced Curved Vortex Line. In this section, we discuss how a near-surface impurity causes a curvature of the vortex line and its impact on the vortex topology. This effect breaks C 4 and the translational symmetries simultaneously. Note that such impurities, if hidden underneath the surface, can be both invisible and inevitable. As a result, φ can locally reach a relatively large value, even if the magnetic field is carefully aligned with the ẑ direction.
Effective Model of a Curved Vortex Line
As illustrated in Fig. 10, the vortex line can be elastically distorted by a point pinning force due to a bulk impurity [94]. In this case, the vortex line at fixed z deviates from the z-axis by an in-plane distance δr, which is maximal where the line meets the bulk impurity. The curved vortex line can be parameterized in terms of R impurity , the position of the bulk impurity, together with the profile parameters A and B. Besides, the curved vortex line is also characterized by an inhomogeneous tilting angle φ, even though the external magnetic field is strictly along the z-axis, which in turn breaks the translational symmetry. Based on Eq. (82), φ(z) is obtained from the discrete derivative of the profile, where i z ∈ [1, N z ] labels the sites of the 1D vortex line and N z is the number of layers; the lattice constant along the z-axis is set to 1. For example, this leads to the spatial distribution of the curved vortex-line profile in Fig. 11 (a) for the parameters R impurity = 50, N z = 400, A = 80, B = 30.
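A minimal sketch of such a profile is given below; the Gaussian-like form of δr(i z ) is our own assumption (the actual parameterization of Eq. (82) is not reproduced here), and only R_impurity, N_z, A, and B are taken from the text.

```python
import numpy as np

# hypothetical pinning profile; only these four parameter values come from the text
R_impurity, N_z, A, B = 50, 400, 80, 30
iz = np.arange(1, N_z + 1)
dr = A * np.exp(-((iz - R_impurity) / B) ** 2)   # assumed in-plane displacement delta_r(i_z)
dr[iz > 120] = 0.0                               # vortex line vertical deeper in the bulk

# local tilting angle from the discrete derivative (lattice constant = 1)
phi = np.degrees(np.arctan(np.diff(dr, append=dr[-1])))
print(f"max |phi| = {np.abs(phi).max():.1f} deg near i_z = {np.abs(phi).argmax() + 1}")
```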
Since the impurity is near the top surface, the vortex line is curved with a non-zero φ only for i z ∈ [1, 120] and remains vertical (i.e., φ = 0) for i z ∈ [120, 400]. To model the effects of curved vortex line on the vortex Majorana physics, we can map the 3D lattice simulation into a 1D lattice problem. We note that while this simulation approach contains many approximations, it is of great computational efficiency, and we do expect it to qualitatively capture the essential topological physics of the curved vortex line. In particular, we expect it to answer the following two questions: • Starting from a nodal vortex, do translational-symmetry-breaking perturbations favor Kitaev or trivial vortex states?
• Can impurity-induced vortex-line distortion lead to a surface Majorana splitting for a hybrid vortex state?
Gapping and Majorana Hybridization from Vortex Line Curvature
To answer the first question, we consider turning off the hybridization terms between the l z = 0 and l z = ±1 sectors, so as to focus on whether a translational-symmetry-breaking vortex-line geometry can transform a nodal vortex into a Kitaev one. This is achieved by choosing the model parameters m 0 = 1, m 1 = −0.5, m 2 = 0.5, m̃ 0 = 0.5, m̃ 1 = −0.3, and m 3 = 0, m 4 = 0, m̃ 2 = 0.2 to semi-quantitatively study the vortex topology. We also take m 5 = m 6 = m̃ 5 = m̃ 6 = 0 for simplicity. In this case, the off-diagonal terms vanish and h vortex (k z , φ) simplifies accordingly, which explains why we still have a continuous gapless spectrum arising from the right half of the nodal vortex. We summarize the features below: • The continuous energy spectrum is due to the nodal vortex in the range i z ∈ [120, 400], where φ(i z ) = 0.
• Four Majorana zero modes: one pair is due to the Kitaev vortex for the n = 0 subspace, and the other pair is attributed to the n = 1 subspace for the i z ∈ [1, 120] range.
The normal-state band structure of CaKFe 4 As 4 is shown in Fig. 13 (a), following the DFT+DMFT calculation in Ref. [33]. Note that TI #2 and DP #2 are essentially duplicates of TI #1 and DP #1, thanks to the Brillouin-zone folding, where "DP" is short for bulk Dirac point.
The key to map out the vortex phase diagram lies in the identification of phase boundaries. As we have discussed in Fig. 3 in the main text, each set of TI or DSM bands will contribute to a pair of phase boundaries. As a result, the vortex phase diagram for CaKFe 4 As 4 necessarily consists of 8 critical chemical potentials as the phase boundaries, which we denote as µ ξ,± and ξ ∈ {TI#1, DP#1, TI#2, DP#2}. Therefore, the phase diagram is completely determined by the energy sequence of all eight µ ξ,± s. Notably, such a sequence sensitively depends on the competition among the following energy scales: • δ so : the energy splitting between TI #1 and DSM #1 (or equivalently TI #2 and DSM #2); • δ t : the energy splitting between TI #1 and TI #2; • δ µ : the energy difference between µ ξ,+ and µ ξ,− .
In practice, it is technically difficult to obtain accurate values of the above quantities, especially because of the strong electron correlations in CaKFe 4 As 4 and the lack of experimental data. Nonetheless, we can make rough estimates based on the existing DFT+DMFT calculation and ARPES data. We find that δ so ∼ δ t ∼ 50 meV and δ µ ≥ 20 meV.
Values of δ so for other tFeSC candidates can be found in Table I. Here the lower bound for δ µ is estimated from the observation that a vMBS signal exists in CaKFe 4 As 4 even though the Fermi level is found, experimentally, to be 20 meV below the surface Dirac point. We emphasize that a concrete prediction of δ µ will require a first-principles-based vortex spectrum calculation with correlation effects carefully included, which is beyond the scope of this work. Nonetheless, based on the large δ µ found in our six-band minimal model (see Fig. 1 in the main text) and the bandwidth of the TI bands in CaKFe 4 As 4 , we expect δ µ to be much greater than 20 meV in the actual material. Assuming δ µ > δ so ∼ δ t , we schematically show in Fig. 13 (b) a possible vortex topological phase diagram for CaKFe 4 As 4 , which contains 7 topologically distinct vortex phases. Compared with the phase diagram in the main text, we now have several new types of hybrid topological vortex states, termed "hybrid [m,n] vortices", each of which is essentially a superposition of m Kitaev vortices and n nodal vortices. In this notation, the hybrid vortex of the main text is by definition a hybrid [1,1] vortex. In Fig. 13 (c), we list the number of vMBSs for each vortex state when the C 4 -breaking effect is considered. Since the Fermi level is below the surface Dirac point of TI #1, the most probable Majorana-carrying vortex state for CaKFe 4 As 4 is the Kitaev vortex phase, as indicated in Fig. 13 (b)
The microbiome of common bedding materials before and after use on commercial dairy farms
Bovine mastitis is one of the most economically important diseases affecting dairy cows. The choice of bedding material has been identified as an important risk factor contributing to the development of mastitis. However, few reports examine both the culturable and nonculturable microbial composition of commonly used bedding materials, i.e., the microbiome. Given the prevalence of nonculturable microbes in most environments, this information could be an important step to understanding whether and how the bedding microbiome acts as a risk factor for mastitis. Therefore, our objective was to characterize the microbiome composition and diversity of bedding material microbiomes, before and after use. We collected 88 bedding samples from 44 dairy farms in the U.S. Unused (from storage pile) and used (out of stalls) bedding materials were collected from four bedding types: new sand (NSA), recycled manure solids (RMS), organic non-manure (ON) and recycled sand (RSA). Samples were analyzed using 16S rRNA sequencing of the V3–V4 region. The overall composition as well as the counts of several microbial taxa differed between bedding types, with Proteobacteria, Actinobacteria, Bacteroidetes and Firmicutes dominating across all types. Used bedding contained a significantly different microbial composition than unused bedding, but the magnitude of this difference varied by bedding type, with RMS bedding exhibiting the smallest difference. In addition, positive correlations were observed between 16S rRNA sequence counts of potential mastitis pathogens (bacterial genera) and corresponding bedding bacterial culture data. Our results strengthen the role of bedding as a potential source of mastitis pathogens. The consistent shift in the microbiome of all bedding types that occurred during use by dairy cows deserves further investigation to understand whether this shift promotes pathogen colonization and/or persistence, or whether it can differentially impact udder health outcomes. Future studies of bedding and udder health may be strengthened by including a microbiome component to the study design.
Proper bedding management plays an important role in increasing the productivity of dairy farms [3]. Choice of bedding material is one crucial aspect of bedding management, and the type of bedding has been shown to have a significant effect on udder health and production outcomes in dairy cows [4].
Bedding materials can be broadly classified into two main groups: inorganic and organic, with the latter category subclassified into non-manure organic materials and manure-based materials [5]. Recent studies reported that inorganic materials were the most common bedding type used by U.S. dairy farms, followed by organic nonmanure materials, and finally manure-based materials [6]. However, these studies comprised convenience samples, and the true distribution of bedding material use on U.S. dairy farms is not currently known, particularly by herd size. Organic bedding materials are typically composed of plant byproducts such as straw, hay, saw dust, wood shavings, crop residues, and composted manure or dried manure solids [7]. Availability and low cost make these materials a popular bedding choice, while a major drawback is that they promote rapid growth of environmental mastitis pathogens after getting mixed with fresh manure and moisture in dairy farms [8]. In contrast to organic bedding, inorganic bedding materials are not made from plants or other organic materials. Sand is the most common inorganic bedding type and is considered to be the gold standard of bedding materials because new (virgin) sand is relatively dry and should contain very low levels of organic matter. As such, bacterial growth is impeded, and mastitis causing pathogens are often significantly lower in used sand bedding compared to organic bedding material [9]. Sand also provides superior comfort [10]. However, sand can be more costly than some other bedding materials, depending on local availability. Recycling and reusing sand bedding can help to reduce this cost, but does not alleviate other complications from sand, including disadvantages during manure handling when the sand settles at the bottom of manure collection pits.
Bedding management practices can greatly affect the cleanliness and bacterial population of bedding on dairy farms. The amount and application frequency of fresh bedding are two management factors that impact the bedding microbiome, i.e., the microbial population on the bedding. Organic bedding materials usually reach maximum bacterial populations within 24 h after the new material is laid down [11,12]. Moisture and pH also influence bacterial growth in bedding materials [8], and infrequent bedding replacement allows for more accumulation of manure, mud and urine which can rapidly deteriorate bedding quality, leading to extensive contamination.
Bacterial growth also varies between different bedding types depending upon the physical, biochemical, and nutritional characteristics of the bedding [9]. Previous studies found that a higher percent of bedding dry matter was associated with reduced total bedding bacterial counts; and that frequent addition of new bedding material into used bedding improved cow hygiene [13].
To evaluate bedding quality and its relation to mastitis in dairy cows, multiple studies have evaluated the total bacterial count and presence of common pathogens in various bedding materials. While certain mastitis pathogens can be considered innate to some types of bedding, others, such as E. coli or Klebsiella spp., are assumed to be introduced through contamination of bedding materials by feces, water, or feed [14]. Different types of bedding have exhibited different levels of both total bacterial counts and counts of bacteria such as Bacillus spp., Klebsiella spp., coliforms and non-coliform gram-negative organisms, streptococci or Streptococcus-like organisms (SSLO), and Staphylococcus spp. [6,15]. While most studies have focused on mastitis-causing pathogens and total bacterial counts derived from aerobic culture, few reports describe a predominance of other pathogens belonging to the families Aerococcaceae, Ruminococcaceae, Moraxellaceae, Corynebacteriaceae, Staphylococcaceae and Lachnospiraceae [16,17].
Intramammary infection (IMI) is a prevalent problem in dairy production, causing substantial economic losses for dairy producers and negatively impacting cow health and milk quality. Bedding materials have been associated with mastitis epidemiology [18,19]. Numerous studies have demonstrated a correlation between bedding bacterial counts (BBC) and counts of bacteria on the teat apex of cows using that bedding, suggesting that bedding may be a substantial source of bacteria colonizing the teat epithelium [8,[20][21][22][23]. Molecular epidemiologic studies have identified IMI-causing strains of bacteria in bedding material, suggesting that bedding can act as a reservoir for some pathogens [24,25]. Aerobic culture of bedding to determine BBC has been used to estimate bedding-associated mastitis risk [5,6]. Furthermore, certain types of bedding materials have been suggested to increase mastitis risk because of their propensity to support pathogen growth; these pathogens can then colonize the teat, leading to infection [6]. Though many studies have demonstrated a correlation between BBC and teat-end bacterial counts, and between BBC and mastitis risk [6,26], few studies have investigated potential associations between the bedding microbiome and mastitis. Furthermore, it is unknown whether the commensal bedding microbiome plays a role in supporting or preventing colonization of the bedding by potential mastitis pathogens. There are descriptive reports of various aspects of the bedding microbiome, including seasonal variation [16] and changes associated with manure solids recycling [27]; however, none compare the culturable and unculturable microbiome of different bedding types in relation to use status.
Very little is known about the bedding microbiome, including whether or not it differs by bedding type and during use by cows. Advancing baseline knowledge of the bedding microbiome is a first step towards understanding whether and how the bedding microbiome supports or degrades udder health and pathogen control. Therefore, our objectives were to (a) describe and compare microbial community structure (including potential mastitis pathogens) across common types of bedding materials from U.S. dairy farms, utilizing culture-independent 16S rRNA sequencing; (b) determine whether use of the bedding by dairy cows alters the bedding microbiome and/or potential mastitis pathogens as measured by 16S rRNA sequencing; and (c) evaluate whether 16S rRNA counts of potential mastitis pathogens correlate with aerobic culture-based total and pathogen-specific bacterial counts.
Results of 16S rRNA sequencing of bedding samples
Complete metadata for each analyzed sample can be found in Additional file 1: Table S1. Sequencing of the V3-V4 hypervariable region of the 16S rRNA gene on the Illumina MiSeq platform generated a total of 7.4 M paired-end sequence reads across all 88 samples, including negative and positive controls (mean 82 K per sample, range 1.3-123 K). The negative and positive control samples yielded 2.5 K and 3.9 K raw reads, respectively. The average numbers of raw sequences generated for RMS, NSA, ON, and RSA samples were 88 K, 71 K, 84 K, and 84 K, respectively, and these differences were not statistically significant based on regression modeling (ANOVA P = 0.40). However, used bedding samples yielded significantly more raw reads on average than unused bedding samples (β used = 11,554 reads, 95% CI = − 435 to 22,672 reads, ANOVA P = 0.04). After quality filtering, 5.1 M sequences remained across all samples; and after merging the forward and reverse sequence reads and removing chimeras, 4.7 M paired-end sequences remained (Additional file 1: Table S1). Six bedding samples produced very low numbers of reads (Additional file 1: Table S1), which was expected given that these six samples also yielded low total DNA and low 16S rRNA gene copy numbers as determined by qPCR. All six of these samples originated from unused NSA, ON and RSA beddings, which may account for the very low microbial biomass. Two of these low-biomass samples contained fewer reads than the negative controls and were therefore removed from further analysis. After removing controls and the two outlier samples, the distribution of per-sample reads (after quality control and filtering) ranged from 20 to 90 K for most samples, with a mean of 54 K reads per sample. Furthermore, the number of raw reads per sample was no longer significantly different by bedding type or bedding status (ANOVA P = 0.74 and 0.14, respectively), confirming that the two very low-yielding unused bedding samples had been significantly influencing the distribution of reads across used and unused samples. After removal of these samples, sequencing effort was evenly distributed across bedding types and status, and therefore sequencing depth was unlikely to introduce systematic bias into the analysis.
Across all sequenced samples, a total of 31,576 ASVs were identified. Among these, 198 were identified as potential contaminants by decontam. As expected, these ASVs represented a very small number of sequence counts, i.e., 27,343 out of 4.6 M sequences. Following removal of these sequence features, 31,378 ASVs from 86 samples remained for downstream analysis. Analysis of the positive control spike-in sample against the complete SILVA database showed Truepera as the most abundant genus with 51% of all reads, and Imtechella as the third-most abundant with 8.5% of reads. The genus Allobacillus was not identified, but the SILVA database contains only one reference for Allobacillus halotolerans. Therefore, we also aligned the sequences from the mock community to a custom database provided by ZymoBIOMICS (see "Methods" section), which resulted in detection of all three expected taxa, with Truepera radiovictrix comprising 33.8% of reads, Imtechella halotolerans 60.3%, and Allobacillus halotolerans 5.9%.
Bacterial community composition across bedding types and status
Of the 31,576 ASVs identified, 31,532 were classified as bacteria; 27 as eukaryota; 1 as archaea; and 16 remained uncharacterized at the kingdom level. As expected, the percentage of classified ASVs decreased stepwise with increased taxonomic resolution, from 98.8% at the phylum level down to 3.5% at the species level (Additional file 1: Table S2). Given the low classification rate at the species level, we performed all subsequent analyses at the genus level and higher. Detailed species-level results are available in Additional file 1: Table S3. Taxonomic evaluation of the bedding microbiome across all samples revealed that Proteobacteria, Firmicutes, Actinobacteria, Bacteroidetes, Chloroflexi, Cyanobacteria and Patescibacteria were the most abundant phyla, accounting for 95.6% of the total sequence reads, with differential abundance by bedding type and status (Additional file 1: Table S4). For visualization purposes, we grouped together low-abundance phyla (i.e., those comprising < 0.5% of the sequence counts at the phylum level) and then compared the phylum-level profile between used and unused bedding materials (Fig. 1). In both unused and used NSA, Actinobacteria, Proteobacteria and Firmicutes were the dominant phyla, accounting for more than 70% of the sequence counts. Acidobacteria was also a dominant phylum in unused NSA, but it was largely absent in used NSA bedding samples (Fig. 1). Conversely, Bacteroidetes comprised a larger proportion of the phylum-level microbiome in used versus unused NSA. Used ON bedding exhibited a considerable increase in Firmicutes compared to unused ON (Fig. 1), whereas Proteobacteria exhibited a relative decrease in used versus unused ON bedding. Both unused and used RMS bedding showed a predominance of Proteobacteria, Firmicutes, Bacteroidetes and Actinobacteria, contributing more than 80% of the phylum-level sequence counts. Unlike with ON and NSA, the phylum-level profile of the RMS bedding samples did not shift dramatically between used and unused status. As with RMS samples, RSA bedding was dominated by the same four phyla and also exhibited little difference in abundance between unused and used status. At the family level, abundant taxa included Ruminococcaceae, Micrococcaceae, Sphingobacteriaceae and Aerococcaceae, while approximately 7% of the reads remained uncharacterized at the family level. Forty-five family-level taxa had a relative abundance greater than 0.5%, accounting for more than 73% of the reads. The most abundant genera were Pseudomonas (gram-negative, 4.8% of all sequence reads), Corynebacterium_1 (gram-positive coryneform, 3.7%), Acinetobacter (2.4%), Psychrobacter (gram-negative cocci, 2.3%), and Ornithinimicrobium (gram-positive rod-shaped, 1.7%). The four non-outlier yet low-yielding bedding samples were dominated by varying bacterial phyla (Additional file 1: Table S5). The unused NSA outlier sample was dominated by Proteobacteria (66% of all reads), followed by Acidobacteria, Actinobacteria and Bacteroidetes. Actinobacteria was highly predominant in two of the low-yielding samples (i.e., > 85% of all sequence reads), while the fourth low-yielding sample (an unused ON sample) contained ~ 50% Cyanobacteria, followed by Actinobacteria, Proteobacteria and Firmicutes (Additional file 1: Table S5).
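For readers wishing to reproduce this kind of phylum-level summary, a minimal sketch is given below. It assumes a decontaminated phyloseq object named ps; the object and column names are illustrative, while the < 0.5% pooling threshold matches the grouping described above.

```r
# Minimal sketch: collapse ASVs to phylum level, convert to relative abundance,
# and pool phyla averaging < 0.5% into a single "Other" group.
library(phyloseq)
library(dplyr)

ps_phy <- tax_glom(ps, taxrank = "Phylum")                      # aggregate ASVs to phylum
ps_rel <- transform_sample_counts(ps_phy, function(x) x / sum(x))

df <- psmelt(ps_rel) %>%                                        # long-format abundance table
  group_by(Phylum) %>%
  mutate(mean_abund = mean(Abundance)) %>%
  ungroup() %>%
  mutate(Phylum = ifelse(mean_abund < 0.005, "Other (< 0.5%)", as.character(Phylum)))
# `df` can then be passed to ggplot2 for stacked barplots faceted by bedding type and status.
```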
Taxonomic richness and diversity by bedding type and status
To determine whether alpha diversity differed significantly by bedding type or status, we modeled richness, Inverse Simpson's and Pielou's evenness at the phylum, class, genus, and ASV levels using linear mixed-effects models. At the phylum, genus and ASV levels, bacterial community richness was higher in RMS samples compared to both NSA and RSA samples, while ON samples contained the lowest richness values (Fig. 2A-D). Multivariable modeling results indicated that bedding type was significantly associated with bacterial richness at the phylum (P = 0.01), class (P = 0.001) and genus (P = 0.05) levels, but not at the ASV level (P = 0.21, Additional file 1: Table S6). Post-hoc pairwise comparisons at the genus level indicated that average richness was significantly lower in ON bedding compared to RMS. Similarly, bacterial richness in used bedding samples was generally higher than in unused bedding samples at all analyzed taxonomic ranks, suggesting that used bedding contained more unique types of bacteria (Fig. 2A-D). However, this difference was only statistically significant at the class level, with unused samples containing 18 fewer classes of bacteria than used samples, on average (95% CI = − 36 to − 1 classes, P = 0.04, Additional file 1: Table S6). The interaction between bedding type and status was not significantly associated with richness at the phylum (P = 0.52), class (P = 0.06) or genus levels (P = 0.67), but was at the ASV level (P = 0.03).
Inverse Simpson's and Pielou's evenness indices showed similar trends to richness across bedding types, with RMS bedding generally containing higher diversity and evenness compared to NSA and RSA, and with ON again exhibiting the lowest diversity and evenness across all taxonomic levels (Fig. 2E-L). Unlike with richness, however, the interaction between bedding type and status was significantly associated with Inverse Simpson's and Pielou's evenness across all levels of the taxonomy (P < 0.01 for all model results, Additional file 1: Table S6), suggesting that changes in microbiome diversity and evenness during use by cows varied by bedding type, as suggested in Fig. 2. Used ON and RSA consistently contained higher diversity and evenness values than unused NSA, while the diversity and evenness in used RMS samples were not significantly different from unused NSA (Additional file 1: Table S6). The high level of variability in the richness and diversity of NSA samples may have influenced these findings (Fig. 2).
To evaluate differences in overall bacterial composition, we generated NMDS ordination plots based on Bray-Curtis dissimilarity, which demonstrated clustering according to bedding type and status ( Fig. 3A-C and Additional file 2: Fig. S1). The clustering of samples according to bedding status was more apparent in ON and NSA bedding types at every level of taxonomy. We observed that the overall bacterial community composition was impacted by both bedding type (PERMANOVA P = 0.001) and bedding status (PERMANOVA P = 0.001) as well as their interaction (PERMANOVA P = 0.001) at the phylum, class, genus, and ASV levels (Table 1). However, bedding status explained only 5.1-6.6% of the microbiome variation (depending on taxonomic level), whereas the bedding type explained 9.6-14.1% (Table 1). Similarly, the amount of dispersion in the ordination (i.e., dispersion of samples from the centroid of each group) varied significantly by bedding type (ANOVA P < 0.001) as well as bedding status (ANOVA P < 0.001), suggesting that the amount of variability in the microbial composition differed significantly between bedding types and status.
Differentially abundant taxa between unused and used bedding
At the phylum level, 23 unique phyla exhibited statistically significant differences in abundance between used and unused bedding across all bedding types (Fig. 4). We restricted our visualizations to only those phyla whose average abundance was above the 50th percentile within each bedding type, given that log-fold differences for very low-count taxa can be spuriously large. In RMS bedding samples, none of the phyla were significantly more or less abundant in used versus unused bedding, suggesting a relatively stable bacterial community at the phylum level. For ON and RSA bedding types, most or all of the differentially abundant phyla were more abundant in used versus unused samples. Bedding samples from NSA had overall lower phylum richness than the other sample types, with Bacteroidetes significantly more abundant in used versus unused samples; and Gemmatimonadetes and Acidobacteria more abundant in unused versus used samples (Additional file 1: Table S7). These trends were consistent at the class level, with NSA samples again containing much lower richness than the other sample types, with lower abundance in used versus unused samples for the classes Gemmatimonadetes, Thermoleophilia and Subgroup_6, and higher abundance for the Bacteroidia class (Additional file 1: Table S8 and Additional file 2: Fig. S2). As at the phylum level, RMS bedding samples did not contain any classes with significant differences in abundance between used and unused samples, indicating fewer differentially abundant taxa between used and unused RMS bedding compared to other bedding types. In contrast, RSA and ON bedding samples exhibited many classes with differential abundance between used and unused samples, the majority of which were more abundant in used samples (Additional file 1: Table S8 and Additional file 2: Fig. S2). For instance, among the differentially abundant taxa, Bacteroidia were significantly more abundant in used compared to unused NSA (mean expression = 11.6, LogFC = 3.5, P = 0.03) and ON (mean expression = 12.3, LogFC = 2.4, P = 0.02). Within ON bedding, members of the Clostridia class were much more abundant in used as compared to unused samples (mean expression = 11.1, LogFC = 4.0, P = 0.003), while the Alphaproteobacteria class was significantly less abundant (mean expression = 10.7, LogFC = − 2.7, P = 0.02). Thermoleophilia, a class of bacteria involved in biogeochemical cycling [28], had significantly lower abundance in used versus unused samples from both ON (mean expression = 3.3, LogFC = − 2.4, P = 0.03) and NSA (mean expression = 6.8, LogFC = − 6.2, P < 0.01).
At the genus level, 486 of the detected microbial genera exhibited statistically significant differential abundance between used and unused bedding, across all bedding types (Additional file 1: Table S9). Within NSA samples, 30 genera were significantly differentially abundant between used and unused samples, with 26 of those more abundant in used bedding and 4 more abundant in unused bedding. Within ON samples, 253 genera attained statistical significance, with 174 more abundant in used samples and 79 more abundant in unused samples. Within RMS samples, 214 genera were found to differ significantly in abundance, with 99 more abundant in used samples and 115 more abundant in unused samples. Finally, in RSA samples, 165 genera had statistically significant differential counts based on bedding status, with 105 genera more abundant in used samples and 60 more abundant in unused samples. These differential abundance testing results suggest that both bedding status and bedding type influenced the presence and abundance of specific bacterial taxa.

Fig. 4 (caption) Only phyla with an average abundance > 50th percentile within each bedding type are depicted. Red indicates phyla whose abundance was significantly different between used and unused bedding samples (i.e., adjusted P < 0.05). Circle diameter is proportional to the average abundance of each phylum across all samples within each bedding type. NSA new sand, ON organic non-manure, RMS recycled manure solids, RSA recycled sand bedding type
Presence of potential mastitis pathogens within 16S rRNA sequence data
In addition to commensal bacteria, we also specifically evaluated bedding of each type and status for potential mastitis pathogens as identified by 16S rRNA sequencing (Additional file 1: Table S10). Although these potential mastitis pathogens were present at very low overall abundance (i.e., very low total sequence counts), we did detect several genera that could be considered potential mastitis-causing pathogens (Additional file 1: Table S11). In general, most of the strict mastitis pathogens (e.g., Staphylococcus, Streptococcus), as well as other, rarer mastitis pathogens (e.g., Acinetobacter, Pseudomonas, and Aerococcus), were found at higher abundance in used compared to unused RMS bedding (Fig. 5, Additional file 1: Table S11). Although low in abundance, Escherichia/Shigella increased in used ON, RMS and RSA bedding (Additional file 1: Table S12). Among the relatively rare mastitis pathogens, Pseudomonas and Acinetobacter were both prevalent and relatively abundant across all bedding materials (Fig. 5), while Corynebacterium was also predominant in used and unused RSA and present in almost all other bedding types.
Based on differential abundance testing, several mastitis pathogens significantly differed in their abundance between unused and used bedding for each bedding type (Additional file 1: Table S12, Additional file 2: Fig. S3). For instance, Staphylococcus and Streptococcus had significantly higher abundance in used versus unused samples for both RMS (mean abundance = 3.92 and 3.0, logFC = 4.36 and 2.37, P = 0.005) and RSA (mean abundance = 1.84 and 4.76, logFC = 2.7 and 2.7, P = 0.003 and 0.002, respectively). Similarly, Escherichia/Shigella abundance was significantly higher in used versus unused ON (mean abundance = 1.73, logFC = 3.26, P < 0.001) and RMS bedding materials (mean abundance = 1.0, logFC = 3.67, P = 0.003). Similar results were observed for Mycoplasma, with significantly higher abundance in used compared to unused ON bedding (mean abundance = 1.24, logFC = 2.19, P = 0.006). Some of the unused ON samples contained a preponderance of Pantoea, which was not identified in any of the other bedding types (Fig. 5); the prevalence of Pantoea was lower in used ON, however this difference was not statistically significant. Aerococcus was found to be significantly higher in used versus unused samples across all of the bedding types, whereas Lactococcus was significantly more abundant in used versus unused NSA and RSA samples and was identified only very rarely in RMS (Additional file 2: Fig. S3). Bacillus was prevalent in both used and unused RMS and unused NSA, and was significantly lower in the used NSA samples; it was not prevalent in any other bedding type (Fig. 5).
Presence and abundance of sequences from potential mastitis pathogens, by bedding type and status
The genus-level composition of potential mastitis pathogens varied by both bedding type (PERMANOVA P = 0.001) and status (PERMANOVA P = 0.001), with both factors explaining > 9% of the variation in the community structure of these potential pathogens (12.6% and 9.3%, respectively, Table 1, Additional file 2: Fig. S1). Post-hoc pairwise testing between bedding types indicated that the composition significantly differed between ON and RMS (P = 0.003), ON and NSA (P = 0.007), ON and RSA (P = 0.006), RMS and RSA (P = 0.003), NSA and RSA (P = 0.007), but not RMS and NSA (P = 0.14). However, there was also a significant interaction effect between bedding type and status (P = 0.002), suggesting that differences in the microbial composition between bedding matrices varied depending on whether the bedding was used or unused.
Associations between transformed potential mastitis pathogen counts based on 16S rRNA data (log10 scale) and bedding type, status and their interaction were evaluated using linear mixed models. Bedding type was not statistically significantly associated with 16S rRNA pathogen counts (P = 0.11), but bedding status was (P = 0.05). Specifically, these pathogen counts were higher in used versus unused bedding (average 4.2 vs. 4.0, 95% CI = 4.1-4.3 and 3.9-4.1, respectively). The interaction between bedding type and status was not significantly associated with the counts of potential mastitis pathogens at the genus level (P = 0.57).
Relationship between 16S rRNA counts of potential bedding mastitis pathogens, Staphylococcus and Streptococcus, and bedding bacterial culture results
Fig. 5 (caption) Barplot of total number of sequence reads ("total count", left-hand side) and proportion of potential mastitis pathogens out of all genus-level counts, grouped by bedding status and type. Only genera with > 0.1% of the total genus-level counts are depicted as individual colors within the bars; those representing < 0.1% are grouped together as "low count pathogens".

We performed Spearman correlation analysis to evaluate the relationship between 16S rRNA counts for all potential mastitis pathogens and total bacterial count (TBC) obtained from bedding aerobic culture (Table 2). There was a positive relationship between TBC and 16S rRNA pathogen counts for each bedding type and status except RSA (Additional file 2: Fig. S4). In unused RMS bedding, we found a strong positive correlation between TBC and 16S rRNA counts of potential mastitis pathogens (ρ = 0.74, P = 0.002, adjusted P = 0.03, Additional file 2: Fig. S4). Likewise, Staphylococcus exhibited a positive relationship between results obtained from 16S rRNA sequencing and culture, for both used NSA (ρ = 0.89, P = 0.04, adjusted P > 0.05) and used RMS (ρ = 0.68, P = 0.005, adjusted P = 0.1, Additional file 2: Fig. S4). Streptococcus-only counts obtained from 16S rRNA sequencing and SSLO CFU counts from bedding culture were also correlated for unused RMS (ρ = 0.58, P = 0.02, adjusted P > 0.05), as were SSLO 16S rRNA counts (ρ = 0.59, P = 0.02, adjusted P > 0.05, Additional file 2: Fig. S4). We did not find a significant correlation between 16S rRNA counts and culture-based results for any other sample types. For Bacillus and Klebsiella, culture-based results were largely invariable (i.e., each sample contained the same CFU/mL), and thus correlation analysis could not be performed (Additional file 1: Table S1). Prototheca was not identified in any of the samples based on culture.
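As a rough illustration of these correlation analyses, the sketch below computes Spearman's ρ within each bedding type-status stratum. The data frame bed and its column names are hypothetical stand-ins for the per-sample counts, and the BH adjustment is an assumption (the adjustment method is not stated above).

```r
# Hypothetical per-sample data frame `bed` with columns: bedding_type, status,
# tbc_culture (log10 CFU/mL from aerobic culture), pathogen_16s (log10 16S counts).
strata <- split(bed, interaction(bed$bedding_type, bed$status, drop = TRUE))

res <- lapply(strata, function(d)
  cor.test(d$tbc_culture, d$pathogen_16s, method = "spearman"))

rho   <- sapply(res, function(x) unname(x$estimate))
p_raw <- sapply(res, function(x) x$p.value)
p_adj <- p.adjust(p_raw, method = "BH")   # multiplicity adjustment (BH assumed)
```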
The microbiome of unused bedding differs significantly by bedding material, and use by cows differentially alters this microbiome
Our results showed that the four evaluated bedding materials contained bacterial communities with significantly different structure and diversity, which was not surprising given the differing physico-chemical properties of these materials [6,13]. In general, RMS was found to have greater microbiome richness, diversity and evenness at every taxonomic level compared to other bedding types, but there was no significant difference between unused and used RMS (Fig. 2). This result could indicate that the recycling process itself does not significantly decrease the number of unique types of bacteria in the bedding material, and does not significantly alter the relative distribution of the organisms in relation to each other. However, it is important to note that the 16S rRNA sequencing approach captures both live and dead bacteria, and thus these results cannot be used to make inferences about the viable portion of the microbial community; in other words, our results may have included remnant bacterial DNA that carried over from the recycling process. While previous studies have reported significant bacterial reductions during the manure recycling process using culture- and 16S rRNA-based analyses, these studies focused specifically on potential pathogens, not the entire microbial community [6,27], which may account for the discrepant findings. Unlike RMS samples, RSA and ON samples exhibited clear increases in microbial richness, diversity and evenness when comparing used versus unused bedding (Fig. 2). This suggests that use by cows introduces new microbes to the microbiome of the bedding material. Furthermore, the overall composition of the microbiome shifted significantly between used and unused bedding of all types (Fig. 3), indicating that use by cows and exposure to the dairy environment significantly alters the microbiome of all bedding materials, even when that bedding has high microbial diversity and a relatively stable microbiome, as in the case of RMS. The use of bedding by cows involves not only contact between the cow's skin and the bedding, but also contamination of the bedding with urine and feces, which introduces not only new bacteria into the bedding, but also novel substrates and physico-chemical conditions that could support differential growth or reduction of existing bacterial taxa.
The common impact of the cow microbiome may also have been reflected in genera whose abundance consistently increased during use, regardless of bedding type; such genera included Marinobacter, Aerococcus, Confluentibacter, and Ornithobacterium (Additional file 1: Table S9). Increases in other bacterial taxa were specific to certain types of bedding materials. For example, Staphylococcus was found at higher abundance in used bedding of all types except NSA; Escherichia in used ON and RMS; Streptococcus in used RSA and RMS; and Mycoplasma in used ON. These results indicate that both bedding status and bedding type play a role in the growth of various bacterial taxa during use by cows. Conversely, for some bacteria, use by cows was associated with a decrease in abundance. For example, Pantoea was found at high prevalence and abundance in unused ON samples, but then decreased significantly in used samples.
The common impact of the cow and farm environment on the bedding was also demonstrated by the lower beta-dispersion in used versus unused samples across all bedding types except RMS. Given the geographic dispersion of the farms in this study, it is likely that the unused bedding materials were sourced from different suppliers, which likely explains the relatively high within-type heterogeneity of the unused bedding samples, particularly the NSA and ON samples. Additionally, the ON samples were sourced from a variety of raw materials including wood shavings, sawdust, rice hulls and paper, which likely also contributed to the high within-type heterogeneity of the unused ON samples. However, once the bedding was used by dairy cows, the heterogeneity decreased, likely due to exposure to cow feces, urine and skin, some of which have been shown to contain a core microbiome that is common across most dairy cows [29]. In effect, the cow microbiome becomes a "regularizing" factor that equilibrates the bedding microbiome as it is used. Based on our results, we conclude that different bedding types harbor differential microbiome profiles prior to use, but that ultimately exposure to cows and the farm environment exerts a common influence on the in situ bedding microbiome, resulting in a significant shift in the bedding microbiome profile. The specific temporal and microbial-ecological dynamics of this shift likely vary by bedding type and probably depend largely on the initial microbiome composition of each bedding lot. In any case, these dynamics may play a role in the differential influence that bedding type can have on the prevalence of intramammary infection in late-lactation dairy cows [26] and udder hygiene [6]. However, the existing literature on bedding and mastitis and/or udder health outcomes is mixed, with some studies finding no such associations [30]. This ambiguity could be driven by numerous potential confounding factors, including heterogeneity among the bedding materials used within a given bedding type, as well as variability in bedding management protocols between dairies. Further investigation into this question is warranted, especially given our findings that the microbiome differed significantly between bedding types, but that nearly all of the bedding samples exhibited a consistent shift during use by dairy cows, even across the diverse farms that comprised this study. While further research is needed, the fact that diverse bedding types all experienced a similar shift may represent an interventional opportunity for improved udder health in many herds. More research is needed to understand whether the dynamics of the microbiome shift during cow use are associated with udder health, mastitis epidemiology, or other important health and production outcomes on dairy farms.

Table 2 (caption) Spearman correlation coefficients (ρ) between culture-based bacterial counts and 16S rRNA-based bacterial counts. Correlations were considered statistically significant at P < 0.05 and (±) strong when ρ ≥ 0.40
Low levels of DNA from potential mastitis pathogens were present in most bedding samples, with some differences between bedding types
We detected potential mastitis pathogens in most samples, albeit at very low relative abundance and with taxonomic resolution mostly limited to the genus level (Fig. 5). We observed significant differences in the composition and prevalence of genus-level taxa of potential mastitis pathogens across sample types (Fig. 5 and Additional file 1: Table S11), suggesting that different bedding matrices may support the presence of different potential mastitis pathogens. Though some studies have not observed significant associations between bacterial load, pH or dry matter and the abundance of pathogens on the teat epithelium [31], others have reported epidemiological associations between bedding type and mastitis outcomes [23], and many previous investigations have focused on the differing physicochemical properties of the bedding material [32,33]. Our findings support these interpretations by demonstrating that different bedding matrices support differential presence and abundance of genera that contain potential mastitis pathogens. However, not all of the differentially abundant bacteria are equally likely to cause mastitis, and each has a unique epidemiology within dairy herds. Our analysis treated each potential pathogen with equal weight, and thus must be interpreted cautiously, especially considering that many of the bacterial taxa on our list are very uncommon causes of mastitis [34] (Additional file 1: Table S10).
Some potential mastitis pathogens were more abundant in used versus unused bedding, with highest levels in used RMS
We observed that many potentially pathogenic genera were more abundant in used versus unused samples, across all bedding types (Additional file 1: Table S12). This again supports the hypothesis that exposure to both the cow and the farm environment increases the likelihood that bedding material becomes contaminated with potential mastitis pathogens from these sources. This dynamic was most evident in the used versus unused NSA samples (Additional file 1: Table S11), which was expected given that these samples had no previous exposure to dairy cows. However, even in the case of RMS, we observed a significant increase in the Streptococcus, Staphylococcus, Escherichia/Shigella and Aerococcus genera in used bedding, suggesting that even the high microbial diversity and biomass present in RMS was not enough to obscure the signal of contaminating mastitis pathogens in used samples. Indeed, used RMS samples contained the highest counts of Staphylococcus of all sample types (Additional file 1: Table S12), and used RMS samples were the only ones in which we detected Staphylococcus chromogenes, which is considered a cow-adapted bacterium (Additional file 1: Table S3). While 16S rRNA data are typically reported at the genus level or higher, the use of ASVs does allow for species-level differentiation in some sections of the 16S rRNA taxonomy, depending on nucleotide-level variability within the relevant taxa. In these cases, identification of species is highly specific, which is one of the primary benefits of using ASVs [35]. Therefore, we can be confident that these species-level identifications within the used RMS samples are valid. However, the lack of species-level identification in other samples could represent false negative findings, particularly given the low classification rate at the species level, which is common to all 16S rRNA studies, including those that use ASVs (Additional file 1: Table S2). Unfortunately, it is difficult to compare our species-level 16S rRNA results to previous bedding and udder microbiome studies because the use of ASVs for classification is a relatively recent advancement, and existing studies therefore report only at the genus level or higher based on non-ASV approaches. Previous culture-based studies have reported low prevalence of S. chromogenes in environmental samples taken from dairies [36], but the vast majority of results were obtained from udder or milk samples, and thus relatively little is known about the extra-mammary ecology of this important bacterium [37]. Therefore, our detection of DNA from Staphylococcus chromogenes within bedding samples is difficult to contextualize and warrants closer study. Previous studies have reported that RMS bedding supports the persistence and growth of some mastitis pathogens better than other bedding materials [26,38], which is supported by our microbiome-focused results. However, the details of the recycling process can vary significantly between farms [39], and further research is needed to understand how different steps of the various recycling processes could impact the microbiome and the presence/abundance of potential mastitis pathogens.
Although the counts of potential mastitis pathogens in our dataset were generally very low, we considered this to be a true reflection of the relative abundance of these taxa within each sample, as we observed a positive relationship between 16S rRNA-based counts (total, Staphylococcus and Streptococcus) and the corresponding bacterial culture data for most of the bedding types (Additional file 2: Fig. S4, Table 2). This correlation was particularly strong (and statistically significant) for RMS samples, again suggesting that this matrix occupies a unique position in the complex epidemiology of mastitis pathogens and bedding microbial ecology. Further research is needed to evaluate correlations between mastitis pathogen results obtained from culture-independent and culture-dependent approaches, as we found varying correlations depending on the pathogen and bedding type (Additional file 2: Fig. S4, Table 2). Additionally, future work should consider techniques that can more robustly differentiate species-level taxa, including more systematic use of MALDI-TOF for culture-based work and shotgun metagenomic sequencing for culture-independent workflows.
Some bedding samples contained very low microbial biomass, which complicates interpretation of microbiome data
Some of the samples in this study, particularly those collected from unused NSA, yielded very low concentrations of total DNA and 16S rRNA qPCR copy numbers, suggesting very low microbial biomass. Previous studies have demonstrated that the physicochemical properties of these types of samples, such as very low organic matter or very low moisture levels, may not support rapid bacterial growth [9], and thus the low microbial biomass was expected. However, such low-biomass samples require careful consideration in microbiome studies given the possibility of contamination from extraction kit reagents, especially PCR master mix and even molecular biology grade water [40][41][42], which can sometimes exceed the abundance and diversity of the resident microbiome [41]. To control for this, we included negative controls and used them to identify and remove likely contaminants from the sequence data [43,44]. Despite these internal controls, it is important to note that cross-contamination could still explain some of the extreme variability in the microbial composition of the data obtained from these samples, particularly within unused NSA samples (Figs. 2, 3, 5). Future bedding microbiome studies of low-biomass samples such as sand should include extensive negative controls, including samples from collection buckets and gloves, which can be used to account for contamination that occurs during the sampling process. Additionally, sample collection strategies may need to be optimized specifically for these low-biomass samples; fortunately, recommendations exist [45]. Previous research has shown that larger volumes of low-biomass samples do not necessarily lead to significantly increased DNA biomass [46], and therefore future efforts may yield more success by focusing on improved extraction methods [47].
Comparison with previous descriptions of the bedding microbiome
While the literature regarding the microbiome of dairy bedding is scarce, previous investigations also show a predominance of Micrococcus, Arthrobacter, Staphylococcus, Bacillus, Corynebacterium, Microbacterium, Streptomyces, Acinetobacter, Proteus, Pantoea, Pseudomonas, Thermoactinomyces, and Saccharopolyspora [48]. However, some previous results are discordant with our observations. For example, Aerococcaceae have been characterized as a dominant and prevalent taxon within bedding [16], but our results show that this taxon only appears in substantial abundance in used bedding material, suggesting that Aerococcus growth is an outcome of bedding use by cows, and not necessarily a resident taxon of unused bedding. Ambiguous results such as these emphasize the need to carefully document the status, type and physicochemical properties of the bedding being analyzed, and to report these details so that microbiome results can be reliably and robustly compared across studies. Such challenges are not unique to bedding microbiome research, and numerous efforts are underway to promote standardized collection and reporting of such metadata [49,50].
Study limitations and strengths
Many of the limitations of our study are common to microbiome studies, including well-documented biases and limitations in the detection of some taxa. To provide a measurement of these potential biases, we utilized the ZymoBIOMICS Spike-in Control II and aligned the resulting sequence data to a database containing only the three bacteria contained within the mock sample (i.e., Truepera radiovictrix, Imtechella halotolerans, and Allobacillus halotolerans). Classifying all of the reads from the mock community dataset against the SILVA database identified T. radiovictrix as the most abundant organism and I. halotolerans as the third-most abundant organism (Additional file 1: Table S13), as expected based on the true composition of the mock community, which contains a predominance of T. radiovictrix and tenfold lower abundance of I. halotolerans. The distribution of these two bacteria in our mock sample, however, was not precisely tenfold different, likely due to the known lysis resistance of Truepera, which reduced the efficiency of the DNA extraction. Furthermore, Truepera's high GC content challenges primer-based assays and is a well-known issue [51]. In addition, we did not identify A. halotolerans when aligning the sequence data for the mock sample to the SILVA database, likely due to the lack of species-level ASV resolution for Bacillaceae in the V3/V4 region. To circumvent this limitation, we aligned the sequence data from the mock community to only the 16S rRNA sequences of the three expected bacteria, which resulted in detection of all three taxa, with A. halotolerans comprising ~ 6% of the reads. Together, our positive control results suggest that hard-to-lyse and high-GC-content bacteria may be systematically underrepresented within the data, which is not uncommon for microbiome studies [38]. At the same time, we were able to detect Allobacillus halotolerans in the positive control sequence data, suggesting that our sequencing depth was sufficient to detect low-abundance taxa within the microbial communities.
The inability to classify sequences to the species level is a further well-documented limitation of 16S rRNA-based analysis [52,53]. While the V3-V4 hypervariable regions used in this study are very common and provide a comprehensive overview of most microbiomes [54], they may not be the optimal targets for identification of mastitis pathogens at the species level, which limits our ability to fully characterize potential pathogens [55]. Previous studies have reported that a 28-nucleotide-long region within the V1 hypervariable region has the most discriminatory power for differentiating Staphylococcus aureus from other coagulase-negative Staphylococcus spp., and future studies may want to use this region if pathogen evaluation is the primary goal [56]. Additionally, future studies of the bedding microbiome should consider including multiple complementary approaches for more robust and comprehensive species-level identification, including shotgun metagenomics, MALDI-TOF-confirmed culture, and qPCR.
Finally, the inability to distinguish live from dead bacteria is a limitation of the 16S rRNA-based approach and may obfuscate associations between the bedding microbiome and biological outcomes in dairy cows. Our findings provide some counterweight in this regard, as we identified a consistent positive correlation between genus-level counts of mastitis pathogen sequences and counts obtained from cultural bacteriology of these same pathogens from the same samples, suggesting that at least some of the DNA in the microbiome workflow originated from viable cells. Further studies are needed to confirm whether (and under what specific conditions) 16S rRNA-based counts correlate with culture-based results, as well as to differentiate DNA from viable versus non-viable bacteria. The use of multiple complementary culture-independent and -dependent workflows is especially important in this regard, as their results will support improved understanding of whether bedding microbiome dynamics support pathogen persistence or transmission, and whether the bedding microbiome plays a role in mastitis etiology.
In addition to these limitations, our study contained several notable strengths, including the evaluation of multiple bedding materials across 44 farms located in ecologically diverse climates. The heterogeneity of this source farm population provides increased external validity of our findings compared to many bedding studies conducted on fewer or more homogeneous farm populations. However, it should also be noted that the distribution of bedding types represented in this study may not reflect the distribution of bedding used across U.S. dairy farms. The inclusion of used and unused bedding samples was a strength of the study design, and highlighted the fact that the bedding microbiome experiences significant temporal shifts. This insight should be used to guide the design of future bedding microbiome studies, and emphasizes the importance of reporting detailed sample-level metadata for bedding samples. Finally, our use of negative controls allowed us to differentiate contaminating from non-contaminating DNA, which is particularly germane to the low-biomass bedding samples we encountered in this work.
Future research
Our study was limited to description and comparison of the microbiome of various bedding types and status. While we identified significant differences in the microbiome of different bedding materials, we were not able to connect these differences to important outcomes of udder health such as mastitis incidence or somatic cell count. Future studies that wish to evaluate associations between bedding and mastitis should consider integrating bedding microbiome analysis into their plans in order to account for the microbiome as either a confounder or a primary risk factor. Furthermore, our results support previous work suggesting that RMS is a complex bedding material, which may cause variable impacts on udder health and mastitis. The body of work on RMS and mastitis is somewhat ambiguous, potentially due to the wide variability in how RMS are produced [39]. Further research is needed to elucidate potential interactions between the manure solids recycling process, the microbiome and mastitis pathogens, and udder health outcomes.
Conclusions
In the present study, we aimed to describe the microbiome of used and unused bedding samples representing a variety of commonly used materials. Our results demonstrated that different bedding materials harbored different microbiomes prior to use by cows; and that use by cows significantly shifted this microbiome. These differential microbiomes may explain some of the previous epidemiological associations reported between bedding material and mastitis outcomes, but further research is needed to test this hypothesis. We found that genera containing potential mastitis pathogens generally comprised a very small proportion of the overall microbial community; however, the counts of these genera correlated positively with culture-based results, suggesting that the sequence-based counts may represent biologically meaningful information. Samples obtained from RMS bedding exhibited different microbiome and potential pathogen dynamics than the other types of bedding, supporting previous findings that RMS may play a unique role in mastitis epidemiology and suggesting that the recycling process may need closer investigation. Overall, these results emphasize that the bedding microbiome deserves closer investigation, particularly with respect to its potential mechanistic role in explaining epidemiological associations between bedding management and mastitis outcomes in commercial dairy herds.
Farm description and sampling
This study used samples collected from commercial dairy herds across 10 states in the U.S., and was part of a larger study that evaluated bedding and mastitis epidemiology [26]. The intent to use these samples for microbiome analysis was conceived before samples were collected, but after funding for the larger study had been obtained. For the larger study, 80 herds were selected based on the following inclusion criteria: herd size > 200 cows; collaborative work with the University of Minnesota or a local Zoetis Quality Milk Specialist; and use of one of four common bedding types, described previously [26]. From the 80 enrolled herds, 44 were selected for inclusion in this microbiome analysis, with farms chosen based on the availability of samples that had undergone fewer than two freeze-thaw cycles, as freeze-thaw cycles have previously been reported to introduce bias in microbiome studies [57,58]. Further details on the study population can be found in [26].
Bedding sample collection
Different dairy farms in the study utilized different bedding materials (with only one type used per farm), with the following types represented: new inorganic or new sand (NSA, N = 5, collected from WI, TX, CA and ID), recycled manure solids (RMS, N = 15, collected from NY, CA, ID, MN, WI and WA), other organic non-manure (ON, N = 13, collected from WI, MN, NY and WA), and recycled inorganic or recycled sand (RSA, N = 11, collected from NY, WI, IN, OR and MI). From each farm, 'unused' (ready-to-use) bedding was collected from the stockpile, while 'used' bedding was collected from stalls that were actively being used by dairy cows. Unused and used bedding samples (hereafter referred to as "bedding status") were collected on the same day at each participating farm. For sampling, collectors from Zoetis Quality Milk Specialists followed a standardized collection protocol in which 20 handfuls of unused bedding material from various sections of the unused bedding pile were placed into a disinfected bucket and mixed thoroughly. From that homogenized sample, a subsample of approximately 1 L was transferred into a resealable plastic bag, which was manually expressed to remove excess air and then sealed. All used bedding samples in this study were collected from freestall herds in the following manner: one handful of bedding material was collected from the top 5 cm of the back third of at least 20 stalls in the late-lactation pen, with care taken to avoid obvious manure pats during sampling. The 20 handfuls were placed into a bucket, and the procedure followed the same protocol as described for unused bedding. The bucket was disinfected with chlorhexidine between samplings, and investigators donned new gloves before handling each bedding sample. Samples were frozen at the time and location of collection (− 20 °C), and later shipped, on ice, to the Laboratory for Udder Health, University of Minnesota (St. Paul, MN). Upon arrival at the University of Minnesota (UMN), an aliquot of each bedding sample was taken for aerobic bacterial culture, and the remainder was immediately placed at − 80 °C for long-term storage.
Bacterial culture of bedding samples
For bacterial culture, 50 mL of bedding material was sub-sampled, weighed and transferred to a sterile plastic bag (Whirl-Pak, Nasco, Fort Atkinson, WI) along with 250 mL of sterile water to create a 1:5 dilution. After the bedding-water mixture was homogenized, four dilutions (1:5, 1:50, 1:500, and 1:5000) of the bedding suspension were made and inoculated onto Columbia CNA agar with 5% sheep blood (CNA) and MacConkey agar plates. Bedding cultures were incubated under aerobic conditions at 37 ± 2 °C for 42 to 48 h before colonies were read. Bacterial groups were identified by visual inspection and enumerated from the dilution plate with the optimal number of colonies (25 to 250 per plate). Representative isolates from each plate were further subjected to confirmation via matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Organisms belonging to the "Streptococcus and Streptococcus-like organisms" (SSLO) group were counted together due to the inability to differentiate these taxa by visual inspection; they comprise Streptococcus, Enterococcus, Lactococcus and Aerococcus. The counts from each bacterial group were summed to determine the total bacterial count. These results have been published previously, and details are available in [26].
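As a worked example of how counts from the dilution plates translate into bedding bacterial counts, consider the sketch below. The inoculum volume per plate is not stated above, so it is treated as an explicit assumption; all numeric values are illustrative.

```r
# Back-calculate CFU/mL from a countable dilution plate.
colonies      <- 87      # colonies on the plate in the countable range (25-250)
dilution      <- 1/500   # dilution of the 1:5 bedding suspension that was plated
vol_plated_ml <- 0.1     # ASSUMED inoculum volume per plate (mL); not stated in the text

cfu_per_ml_suspension <- colonies / (dilution * vol_plated_ml)  # CFU/mL of the 1:5 suspension
cfu_per_ml_bedding    <- cfu_per_ml_suspension * 5              # correct for the initial 1:5 dilution
```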
DNA extraction, library preparation and 16S rRNA gene sequencing
Bedding samples were removed from − 80 °C, thawed at − 20 °C and then at room temperature, and homogenized before DNA extraction. DNA was extracted using the DNeasy PowerSoil Pro Kit (Qiagen, Cat No. 47016, Hilden, Germany) following the manufacturer's instructions. Briefly, bedding materials were weighed inside a biosafety cabinet using a sterile disposable spatula. Straw-type bedding materials could not be weighed to the maximum capacity (0.25 g) due to volume constraints of the bead tubes (Additional file 1: Table S1). Lysis (CD1) buffer was added to the bead tubes after adding samples. The volume of the CD1 buffer varied depending on the sample type; most samples were processed with 800 µL of CD1 buffer, but the absorbency of sawdust and straw bedding materials necessitated 1200 µL of CD1 buffer in order to recover 600 µL for the subsequent steps of DNA extraction. After vortexing, bead tubes were processed on a Mini Bead-beater (BioSpec Products, Cat. No. 1001, Bartlesville, OK, U.S.) at 2200 rpm for 20 s, repeated 3 times with 30 s intervals between rounds. Bead tubes were then centrifuged at 15,000 g for 1 min to precipitate the debris, and 600 µL of supernatant was transferred to the rotor adapter of a QIAcube Connect (Qiagen, Cat No. 9002864, Hilden, Germany) for DNA extraction. All samples were processed with Inhibitor Removal Technology (IRT) to eliminate inhibitors. Finally, extracted DNA was eluted in 50 µL of elution buffer. DNA concentration was measured with a Qubit 4 Fluorometer (ThermoFisher Scientific, Cat No. Q33226, Hercules, CA, U.S.) and quality was checked with TapeStation genomic ScreenTape (Agilent Technologies, Palo Alto, CA). In addition to the bedding samples, we extracted DNA from 100 µL of ZymoBIOMICS Spike-in Control II (Zymo Research, Cat No. D6321, Irvine, CA, U.S.) as a positive control, processed in the same way as the samples except that 700 µL of CD1 buffer was added to the bead-beating tube. We also included molecular biology grade water (AccuGENE™ Water, Cat No. BE51200) as a negative control (NTC1), i.e., an amplification blank.
The 16S rRNA gene copy number in each sample was measured using qPCR in order to begin library preparation with an approximately equal amount of bacterial DNA across samples. For sequencing, the target copy number threshold was set at 167,000 molecules/µL. For 16S rRNA library preparation, samples were amplified using a dual-indexing 16S rRNA Illumina primer set specific to the V3-V4 region (Forward primer: 5′-TCG TCG GCA GCG TCA GAT GTG TAT AAG AGA CAG CCT ACG GGA GGC AGC AG-3′ and Reverse primer: 5′-GTC TCG TGG GCT CGG AGA TGT GTA TAA GAG ACA GGG ACT ACH VGG GTW TCT AAT-3′) [59]. PCR products were quantified using a PicoGreen dsDNA assay kit (Life Technologies, Carlsbad, CA), normalized and multiplexed in equimolar amounts. The sample pool was spiked with 15% PhiX, and sequencing was performed at the University of Minnesota Genomics Center (UMGC) using Illumina's v3 cluster chemistry (2 × 300 bp paired-end reads) on the MiSeq platform (Illumina Inc., San Diego, CA).
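The normalization to a fixed 16S copy number can be illustrated with simple arithmetic; all values other than the stated 167,000 molecules/µL target are hypothetical.

```r
# Dilute each sample so library prep starts from ~167,000 16S copies per microliter.
target_copies <- 167000   # stated target copy number (molecules/uL)
sample_copies <- 2.3e6    # hypothetical qPCR result for one sample (copies/uL)

dilution_factor <- sample_copies / target_copies   # e.g., ~13.8-fold dilution here
```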
Sequencing data processing
Amplicon primers were removed from the 5′ and 3′ ends of forward and reverse reads using cutadapt [60]. Trimmed sequence reads were processed using the DADA2 (Divisive Amplicon Denoising Algorithm) pipeline, version 1.12 [35]. The filterAndTrim function was used for quality filtering: forward and reverse reads were truncated to 250 and 220 base pairs, respectively, and phiX reads were discarded, as were reads with a maximum expected error greater than 3. Filtered sequence reads were then used as input to the learnErrors function for error-rate estimation. The error-rate matrix was used as input to the dada function for denoising (i.e., read error correction). Error-corrected forward and reverse reads were merged into contigs using the mergePairs function. An amplicon sequence variant (ASV) table was generated after removing chimeric contigs using the removeBimeraDenovo function. The assignTaxonomy function was used for taxonomic assignment of ASVs against the SILVA reference database via a native implementation of the naive Bayesian classifier method [61]. The addSpecies function was used to assign species-level labels to annotated ASVs. The positive control sample sequenced in this study was aligned to both the SILVA database and a ZymoBIOMICS sequence database (https://s3.amazonaws.com/zymo-files/BioPool/D6321.refseq.zip) containing reference sequences for each mock bacterium, using the same procedures described above. The abundance matrix and taxonomy table produced by the DADA2 pipeline were imported into phyloseq for microbiome analysis and visualization [62]. Contaminating ASVs were identified using the frequency method implemented in decontam and removed from further analysis [43].
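A condensed sketch of this pipeline is shown below. Truncation lengths and the expected-error threshold follow the text; the input file vectors, the SILVA training-set filenames, the metadata object, and the DNA-concentration column used by decontam are placeholders.

```r
library(dada2)

# Quality filtering: truncate F/R reads to 250/220 bp, discard phiX, and drop
# reads with > 3 expected errors (parameters as stated in the text).
filterAndTrim(fwd = fnFs, filt = filtFs, rev = fnRs, filt.rev = filtRs,
              truncLen = c(250, 220), maxEE = c(3, 3), rm.phix = TRUE)

errF <- learnErrors(filtFs)               # error-rate estimation
errR <- learnErrors(filtRs)
ddF  <- dada(filtFs, err = errF)          # denoising (read error correction)
ddR  <- dada(filtRs, err = errR)
mrg  <- mergePairs(ddF, filtFs, ddR, filtRs)

seqtab <- makeSequenceTable(mrg)
seqtab <- removeBimeraDenovo(seqtab, method = "consensus")   # chimera removal

taxa <- assignTaxonomy(seqtab, "silva_nr_train_set.fa.gz")   # placeholder SILVA files
taxa <- addSpecies(taxa, "silva_species_assignment.fa.gz")

# Import into phyloseq and remove likely contaminants with decontam's frequency
# method; `meta` and its "dna_conc" column are assumed sample metadata.
library(phyloseq); library(decontam)
ps <- phyloseq(otu_table(seqtab, taxa_are_rows = FALSE), tax_table(taxa),
               sample_data(meta))
contam <- isContaminant(ps, method = "frequency", conc = "dna_conc")
ps <- prune_taxa(!contam$contaminant, ps)
```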
Sequencing depth
To evaluate potential sequencing bias by bedding type and status, the number of raw reads generated for each sample was compared using generalized linear modeling as implemented in the glm function. Model results and confidence intervals for each variable (i.e., bedding type and status) were extracted using the summary and confint functions. The significance of each variable was evaluated using the anova function.
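A minimal sketch of this check follows; reads is a hypothetical one-row-per-sample data frame.

```r
# Test whether raw read yield differs by bedding type or status (Gaussian GLM).
m <- glm(raw_reads ~ bedding_type + status, data = reads)

summary(m)            # model coefficients
confint(m)            # confidence intervals for each term
anova(m, test = "F")  # significance of each explanatory variable
```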
Analysis of microbial community structure by bedding type and status
Alpha diversity was measured from the decontaminated abundance matrix by computing richness, Inverse Simpson's, and Pielou's evenness [63] indices. Richness and diversity were computed using the estimate_richness function in phyloseq [62]. Evenness was computed using the evenness function in the microbiome package (https://microbiome.github.io/). Alpha diversity was measured following aggregation of ASVs to the phylum, class, family, and genus levels using the tax_glom function in phyloseq. Associations between each alpha diversity metric (i.e., richness, Inverse Simpson's and Pielou's evenness) and bedding type, bedding status and their interaction (explanatory variables) were analyzed using linear mixed-effects models as implemented in the lme function [64]. Farm identity was included as a random effect. The significance of each explanatory variable in improving model fit was assessed by comparing the full model with the reduced model using the anova function in R, with a significance level of P < 0.05. For variables that significantly improved model fit, post hoc pairwise comparisons of bedding type and status were performed using the lsmeans function.
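The sketch below illustrates this modeling approach for one index; ps is an assumed phyloseq object, the metadata column names (farm, bedding_type, status) are illustrative, and nlme is assumed to supply lme with lsmeans providing the post hoc contrasts.

```r
library(phyloseq); library(nlme); library(lsmeans)

alpha <- estimate_richness(ps, measures = c("Observed", "InvSimpson"))
alpha$pielou <- microbiome::evenness(ps, index = "pielou")[, 1]
d <- cbind(alpha, data.frame(sample_data(ps)))   # join indices with metadata (same sample order)

# Mixed model with farm as a random effect; ML fitting so nested models are comparable.
full    <- lme(Observed ~ bedding_type * status, random = ~ 1 | farm,
               data = d, method = "ML")
reduced <- lme(Observed ~ bedding_type + status, random = ~ 1 | farm,
               data = d, method = "ML")
anova(full, reduced)                    # does the interaction improve model fit?

lsmeans(full, pairwise ~ bedding_type)  # post hoc pairwise comparisons
```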
The overall pattern of microbial community composition (beta diversity) across all bedding types and statuses was visualized using non-metric multidimensional scaling (NMDS) plots from a Bray-Curtis distance matrix, computed with the vegdist function of vegan in R. To test for significant associations of bedding type and status with the ordination, permutational multivariate analysis of variance (PERMANOVA) was used via the adonis function of vegan in R. The R² value was used to estimate the relative effect size (i.e., the percent of community structure variation explained by each explanatory variable), and the corresponding P value was used to determine the statistical significance of this value. If a significant result (P < 0.05) was observed, post hoc pairwise comparisons of bedding types and statuses were conducted using the pairwise.adonis function. The betadisper function was used to calculate the homogeneity of multivariate dispersions by bedding type or status (i.e., deviation from centroids), with analysis of variance (ANOVA) testing to determine whether dispersion differed significantly between bedding types and statuses. Differences in microbial community structure between bedding types and statuses were also tested using analysis of similarities (ANOSIM) with the anosim function in vegan.
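A sketch of these steps in R; otu (a samples × taxa count matrix) and meta are assumed objects, and pairwise.adonis comes from the separate pairwiseAdonis package:

library(vegan)

bray <- vegdist(otu, method = "bray")               # Bray-Curtis distances
ord  <- metaMDS(bray)                               # NMDS ordination
adonis(bray ~ bedding_type * status, data = meta)   # PERMANOVA; R2 = effect size

disp <- betadisper(bray, meta$bedding_type)         # multivariate dispersion
anova(disp)                                         # do dispersions differ?
anosim(bray, meta$bedding_type)                     # analysis of similarities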
Differential abundance testing to identify differences in relative abundance of bacterial taxa between bedding types and status
To identify sequence features that were differentially abundant between unused and used bedding for each bedding type, we performed multivariate zero-inflated Gaussian mixture modeling as implemented in the fitZig function in metagenomeSeq [65], following aggregation of sequence features to the phylum, class, and genus levels. Sequence features with fewer than 5 total read counts were discarded. The filtered abundance matrices from each aggregated level were normalized using the cumNorm function in metagenomeSeq, using a default normalization factor of 0.5 [65]. Farm identity was included as a random effect. Pairwise comparisons of taxon abundance between bedding status and type were calculated using the makeContrasts function in limma [66], with Benjamini-Hochberg ("BH") correction for multiple testing. Log₂-fold change (logFC) and mean expression values between comparison groups for each taxon, with BH-adjusted P values, were derived from the models using the topTable function in limma. Taxa with a mean expression value above the 50th percentile within the relevant comparison groups were selected and visualized in a stratified manner for each bedding type.
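A sketch of this step in R; the object construction and contrast naming are assumptions (metagenomeSeq versions differ in whether the limma fit is reached as fit@fit or fit$fit):

library(metagenomeSeq); library(limma)

# counts_genus: genus-by-sample matrix, taxa with < 5 total reads removed;
# meta: matching sample metadata with a combined type/status factor (assumed)
mr  <- newMRexperiment(counts_genus, phenoData = AnnotatedDataFrame(meta))
mr  <- cumNorm(mr, p = 0.5)                          # CSS normalization, factor 0.5
mod <- model.matrix(~ 0 + type_status, data = pData(mr))
fit <- fitZig(mr, mod)                               # zero-inflated Gaussian model

cm  <- makeContrasts(type_statusRMS.used - type_statusRMS.unused, levels = mod)
res <- topTable(eBayes(contrasts.fit(fit@fit, cm)),
                adjust.method = "BH", number = Inf)  # logFC, AveExpr, adj.P.Val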
Sequenced-based evaluation of potential mastitis pathogens, with comparisons between bedding types and status
To evaluate the presence of potential mastitis pathogens within the 16S rRNA data, we first identified a list of potential pathogens (Additional file 1: Table S10) [67] and then subsetted the genus-level count matrix to only include the listed pathogen candidates. Beta-diversity analysis and differential abundance testing were performed on this subsetted count matrix as described above for the complete count matrix. Briefly, ordination was performed, followed by PERMANOVA testing to assess the effect of bedding status and type. Differential abundance (logFC) was evaluated for each potential pathogen, comparing used and unused bedding by type. Associations between the normalized genus-level sequence counts of the potential pathogens and bedding type and status were determined using linear mixed models, using the same modeling approach as described above for alpha diversity comparisons.
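A sketch of the subsetting step in R; pathogen_genera stands in for the genus list of Table S10, and ps_genus for the genus-aggregated phyloseq object (both assumed names):

ps_path  <- subset_taxa(ps_genus, Genus %in% pathogen_genera)
otu_path <- as(otu_table(ps_path), "matrix")   # counts for candidate pathogens only
# ...then rerun the NMDS/PERMANOVA, fitZig, and mixed-model steps above on otu_path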
Correlation between 16S rRNA based sequence counts of potential mastitis pathogens and culture-based bacterial counts
To test whether the 16S rRNA sequence data correlated with culture-based data, we performed Spearman correlation analysis between the genus-level log₁₀-transformed counts of potential pathogens from the 16S rRNA sequence data and culture-based counts, measured as log₁₀ colony-forming units per mL (CFU/mL), obtained from aerobic culture of the same bedding samples. From the companion culture-based study [26], we obtained culture results for Staphylococcus spp., Streptococcus spp. and Streptococcus-like organisms (SSLO, which included Streptococcus, Enterococcus, Lactococcus and Aerococcus), coliforms, Klebsiella spp., non-coliform gram-negatives, Bacillus spp., Prototheca, and all bacteria (i.e., total bacterial count, or TBC). When possible, we compared these culture-based results to the 16S rRNA results using correlation analysis at the genus level. This analysis was not performed for coliforms and non-coliform gram-negatives, owing to an inability to extract the appropriate taxa from the 16S rRNA taxonomy. To investigate correlation specifically between Streptococcus 16S rRNA counts and culture results, we compared Streptococcus-only 16S rRNA counts with culture-based SSLO CFU counts, and we compared combined 16S rRNA counts for Streptococcus, Enterococcus, Lactococcus and Aerococcus with culture-based SSLO CFU counts. We also performed correlation analysis on TBC and the log₁₀-transformed sequence counts from all potential mastitis pathogens at the genus level. Results of all correlation analyses were visualized using scatter plots. Bonferroni correction was used to account for multiple comparisons, and adjusted P values were reported along with unadjusted P values (Stata/MP 17.0, StataCorp LLC, College Station, TX, USA).
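A sketch of the correlation step in R; seq_log10 and cfu_log10 are assumed data frames of matched, log₁₀-transformed sequence counts and culture counts, aligned by sample:

genera <- c("Staphylococcus", "Streptococcus", "Bacillus", "Klebsiella")
p_raw  <- sapply(genera, function(g)
  cor.test(seq_log10[[g]], cfu_log10[[g]], method = "spearman")$p.value)
p.adjust(p_raw, method = "bonferroni")   # Bonferroni-adjusted P values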
All statistical analysis was performed in R (version 3.6.1, https://www.r-project.org/), and results were visualized using ggplot2 [68]. For all statistical analyses, unless otherwise indicated, significance was determined as P < 0.05.
Additional file 1: Table S1. Metadata file for all samples included in this study. Table S2. ASVs assigned to each taxonomic level. Table S3. Read counts and mean abundance of assigned species stratified by sample ID. Table S4. Proportion of phylum-level counts, by bedding type and status. Table S5. Proportion of phylum-level counts, for the four non-outlier low-yielding bedding samples. Table S6. Modeling results for associations between bedding status and type, and alpha diversity metrics (LMM output). Table S7. Results from phylum-level differential abundance testing of used versus unused samples, by bedding type. Table S8. Results from class-level differential abundance testing of used versus unused samples, by bedding type. Table S9. Results from genus-level differential abundance testing of used versus unused samples, by bedding type. Table S10. List of potential mastitis pathogens considered in the analysis of 16S rRNA sequence data. Table S11. Proportion of genus-level counts for genera that contain potential mastitis pathogens, by bedding type and status. Table S12. Results from genus-level differential abundance testing of taxa that contain potential mastitis pathogens, comparing used versus unused samples, by bedding type. Table S13. Read counts and proportion of genus-level counts for the mock community and negative control samples.
Additional file 2: Fig. S1. Non-metric multidimensional scaling (NMDS) ordination plots based on Bray-Curtis distances for (A) used versus (B) unused status for each bedding type, at the phylum, class and ASV level; and for potential mastitis pathogens at the genus level. NSA-new sand, ON-organic non-manure, RMS-recycled manure solids and RSA-recycled sand bedding type. Fig. S2. Log₂-fold change (Log2FC) in abundance of classes between used and unused bedding samples, separated by bedding type. Only classes with an average abundance >50th percentile within each bedding type are depicted. Red indicates classes whose abundance was significantly different between used and unused bedding samples (i.e., adjusted P < 0.05). Circle diameter is proportional to the average abundance of each class across all samples within each bedding type. NSA-new sand, ON-organic non-manure, RMS-recycled manure solids and RSA-recycled sand bedding type. Fig. S3. Log₂-fold change (Log2FC) in abundance of genera that contain potential mastitis pathogens, comparing used and unused bedding samples, separated by bedding type. Only genera with an average abundance >50th percentile within each bedding type are depicted. Red indicates genera whose abundance was significantly different between used and unused bedding samples (i.e., adjusted P < 0.05). Circle diameter is proportional to the average abundance of each genus across all samples within each bedding type. NSA-new sand, ON-organic non-manure, RMS-recycled manure solids and RSA-recycled sand bedding type. Fig. S4. Scatter plots of 16S rRNA gene counts and culture results obtained from the same bedding samples, for: total bacteria (panels A-D, TBC); Staphylococcus (panels E-H); 16S rRNA gene counts for Streptococcus and culture-based Streptococcus and Streptococcus-like organism (SSLO) counts (panels I-L); 16S rRNA gene counts for Streptococcus, Aerococcus, Enterococcus and Lactococcus and culture-based SSLO counts (panels M-P); Bacillus (panels Q-T); and Klebsiella (panels U-X). NSA-new sand, ON-organic non-manure, RMS-recycled manure solids and RSA-recycled sand bedding type.
Breastfeeding as A Method of Health Promotion
The literature depicts many health benefits for both mother and newborn associated with breastfeeding. Although breastfeeding initiation rates have met the goals of Healthy People, the United States lags behind in achieving optimal exclusive breastfeeding continuation and duration rates. Billions of health care dollars could be saved, along with hundreds of lives, with improved breastfeeding rates. The purpose of this article was to examine the association between disease prevention, health promotion, and breastfeeding rates. The findings indicated that breastfeeding does offer some maternal and newborn protection against certain diseases and conditions, which may be dose dependent, although some diseases and conditions have mixed reports and will require additional research. For the newborn, reduced rates of certain respiratory and gastrointestinal infections have been reported. Maternal benefits include a reduced rate of hypertension and of ovarian and breast cancer. Although breastfeeding rates are increasing, healthcare providers should continue to utilize multiple strategies to improve breastfeeding rates, including education about the association between breastfeeding and improved maternal and newborn health outcomes, the importance of breastfeeding support, workplace breastfeeding support, and community resources.
Introduction
Health promotion is defined as an opportunity for individuals to take control of their own health [1]. Health promotion can involve numerous types of interventions but should always be directed at the prevention of ill health [1]. Research suggests an association between breastfeeding and health promotion [2] and supports the importance of breastfeeding for disease prevention and health promotion. Breastfeeding contributes to disease prevention and health promotion and saves healthcare dollars because breastfed babies are healthier overall [3,4]. Maintaining breastfeeding has been noted to be an important preventive health measure [5]. According to Healthy People [4], improving the wellbeing of mothers, infants, and children is a crucial public health goal, because their health and well-being influence not only their future health but may also foreshadow future public health issues in the community. The Healthy People Initiative is a collaborative effort between government agencies and professional organizations directed at improving the health of all Americans [4]. The goals of Healthy People focus on two broad areas: the elimination of health disparities and an increase in the quality and years of healthy life [4]. Breastfeeding has been strongly associated with improved short- and long-term health [6]. Breastfeeding is widely recognized as the optimum method of infant feeding and should be considered the "normative standard" of infant feeding [1,3,7]. Breastfeeding has been associated with a decrease in healthcare costs across the United States [7]. In fact, lower rates of breastfeeding have increased maternal and newborn health care costs to over three billion dollars a year [7]. Based on a cost analysis completed in 2005, if exclusive breastfeeding rates were at 90% for 6 months, $13 billion would be saved annually along with over 900 lives [8]. Current exclusive breastfeeding rates at 6 months, based on the 2015 Breastfeeding Report Card, are 24.9%, approaching the Healthy People goals [4,9]. Breastfeeding initiation rates are now 83.2% and have exceeded the goals set by Healthy People 2020 [4,9].
Review of the Literature
A review of the literature depicts both maternal and newborn benefits of breastfeeding. A reduced risk of numerous health conditions, both for the mother and the newborn, has been associated with breastfeeding [7]. There is increasing evidence supporting the risks that not breastfeeding poses for future chronic health conditions [5]. Health outcomes are substantially different for infants of mothers who do not breastfeed [10]. Research suggests breastfeeding is a modifiable factor that could reduce health risks in both the mother and newborn [10]. In its Position Statement on Breastfeeding, the AAP [3] indicated that studies have shown breastfeeding results in improved health outcomes for the mother and the newborn. Dieterich et al. [11] reported that breastfeeding reduces the maternal disease burden and saves infant lives. Anatolitou [12] also reported that improved maternal and newborn health outcomes stemmed from breastfeeding.
Infant/Newborn Health Outcomes
Breastfeeding provides both short- and long-term benefits to infants and newborns, even after breastfeeding has been discontinued [13]. Reported newborn benefits include a reduced risk of asthma, obesity, otitis media, and respiratory and gastrointestinal infections [2,7,14]. Breastfeeding has been found to play a key role in the development of the infant's immune system [10]. This is significant given that diarrhea, acute respiratory infections, and fever have been reported to be major causes of mortality in children less than 5 years old [15]. A study by Khan and Islam [15] reviewed data on 1,918 infants less than 6 months old, collected from 2007-2014 in the Bangladesh Demographic and Health Survey. Comparisons were made among the groups for adverse outcomes related to diarrhea, fever, and acute respiratory infection. When exclusive breastfeeding was discontinued before 6 months, the risk for diarrhea, fever, and acute respiratory infection was increased compared to exclusive breastfeeding to 6 months [15]. This study noted that 27.37% of cases of diarrhea, 8.94% of cases of acute respiratory infection, and 13.24% of cases of fever could be prevented if newborns were exclusively breastfed to 6 months [15]. Similarly, a study by Duijts et al. [14] from the Netherlands compared infants never breastfed to those breastfed exclusively to 4 months, as well as to infants exclusively breastfed between 7 to 12 months. Lower risks of gastrointestinal and respiratory infections were noted among infants exclusively and partially breastfed for 4 months, which continued until 6 months of age [14]. Infants breastfed exclusively for 6 months or longer tended to have greater protection against gastrointestinal and respiratory infections than those breastfed only to 4 months [14]. This suggests that strategies should be promoted that encourage exclusive breastfeeding for at least 4-6 months [14]. Additionally, a 5-year prospective cohort study conducted in Turkey included 418 infants born between January and December of 2011 [16]. The purpose was to determine whether there was an association between exclusive breastfeeding duration and infectious diseases, and to examine long-term protective effects of breast milk [16]. Although this study did not report a significant decrease in respiratory infections as seen in other studies, a reduction in respiratory conditions was noted in infants breastfed 12 months or longer [16]. A significant decrease in otitis media and gastrointestinal conditions was reported in infants breastfed for 12 months or more [16]. This suggests an association between breastfeeding duration and a reduction in common infectious conditions. Several studies have examined the impact of breastfeeding on childhood obesity [12]. The World Health Organization [17] reported that evidence exists that breastfeeding offers some protection against obesity. The literature depicts a 15-30% reduction in obesity with any breastfeeding, in observations through childhood and adulthood [12], although this has been noted to be controversial [11]. Maternal obesity, lifestyle, and feeding practices may influence the development of obesity in children [11]. A study in Canada included 81,226 children [18]. Height, weight, and breastfeeding status during the first 5 months of life, along with maternal diabetes status, were studied in pre-school children 4-6 years of age [18]. Data were collected from 2005 to 2013.
Findings indicated that rates of overweight and obesity were lower in children who were breastfed than in those who were not, with the exception of children who were large for gestational age (LGA) or exposed to gestational diabetes [18].
Maternal Health Outcomes
Breastfeeding provides benefits for mothers not only while breastfeeding; long-term benefits have been reported after breastfeeding has been discontinued [19]. Formula feeding or premature weaning from breastfeeding has been associated with increased health risks for the mother [10]. Maternal health benefits include a reduced risk of high blood pressure, Type 2 diabetes, and breast and ovarian cancer [2,7]. A reduction in the risk of stroke has also been noted in postmenopausal women who breastfed, although more research is needed [20]. The literature suggests that exclusive breastfeeding for longer durations offers both short- and long-term maternal health protection [11]. A study of women with live-born infants in the Danish National Birth Cohort (1996-2002) examined how any, partial, and exclusive breastfeeding were associated with hypertension and cardiovascular disease [21]. Additionally, the study looked at whether prepregnancy BMI and waist circumference 7 years postpartum affected these associations [21]. A longer duration of breastfeeding was associated with a reduced risk of hypertension. Kirkegaard et al. reported that any breastfeeding for greater than 4 months was linked to a 20-30% reduced risk of developing hypertension and cardiovascular disease [21]. Study findings also included that continuation of partial breastfeeding after exclusive breastfeeding contributed to a lower rate of hypertension and cardiovascular disease, although more studies are needed to further explain the link between breastfeeding and reduced rates of cardiovascular disease [21]. Jacobsen et al. [20] completed research based on data from the Women's Health Initiative Observational Study, which examined the relationship between breastfeeding and stroke risk. The study included 80,191 participants from 40 centers in 24 states and the District of Columbia. Women were recruited between 1993-1998 and follow-up continued through 2010. After adjustment for nonmodifiable confounders, the findings showed that any breastfeeding was associated with a lower risk of stroke [20]. This was found to be strongest for non-Hispanic Blacks. According to Jacobsen et al. [20], a longer duration of breastfeeding was associated with a lower risk of stroke in all women studied, and among non-Hispanic whites and non-Hispanic blacks. The study concluded that there was an association, including a dose response, between breastfeeding and a lower risk of stroke among postmenopausal women. This was noted to be significant even after adjustments were made for stroke risk factors and lifestyle variables [21]. This could be an important finding given that stroke is the fourth leading cause of death in women [22]. Some earlier studies reported limited evidence to support an association between breastfeeding and reduced risk of ovarian cancer. A study in China from 2006-2008 examined the association between breastfeeding and ovarian cancer [23]. There were 493 women with ovarian cancer compared to 472 women in the control group. Data collected included pregnancy history, live births, number of children breastfed, and the duration of breastfeeding. Face-to-face interviews were completed with family members in attendance to verify patient recall. In this study, a greater number of children breastfed and a longer duration of breastfeeding were associated with a reduced risk of ovarian cancer [23].
Schwarz and Nothnagle [24], who reviewed 30 case-controlled studies and 5 cohort studies, reported that women who never breastfed were 32% more likely to be diagnosed with ovarian cancer. A dose-response meta-analysis showed a significantly decreased risk of pre- and postmenopausal breast cancer with increased breastfeeding duration [25]: the longer a woman breastfeeds, the lower the risk. Other studies showed a decreased risk that was not as significant [25]. The committee concluded that breastfeeding probably protects against breast cancer [25]. Islami et al. [26] reviewed 27 studies and concluded that any breastfeeding was associated with a 10% lower risk of breast cancer in tumors that were negative for estrogen and progesterone receptors, compared to women who did not breastfeed. However, tumors positive for estrogen and progesterone receptors did not show any significant association. More research is needed on breastfeeding and receptor-positive breast cancers to determine whether any association exists [26]. A meta-analysis of 47 studies by Schwarz and Nothnagle [24] concluded that breast cancer rates decrease by more than 4% for each year that a woman breastfeeds in her lifetime. Schwarz and Nothnagle noted that women with the BRCA1 gene saw a 37% decrease in breast cancer risk [24].
Conclusion
The literature suggests an association between not breastfeeding and an increase in adverse maternal and newborn health outcomes [10]. Breastfeeding affords the best opportunity for optimal newborn/infant health outcomes [3]. The WHO reports breastfeeding to be one of the most "effective ways of improving child health and survival" [27], and the CDC supports breastfeeding as the best nutritional source for infants [7]. A cost analysis by Bartick and Reinhold [8] indicated that $13 billion and over 900 lives, primarily infant and newborn, could be saved annually if a 90% exclusive breastfeeding rate for 6 months were attained in the United States. The continuation of suboptimal breastfeeding rates will cost billions of dollars as well as hundreds of preventable deaths [8]. Great strides have been made in improving breastfeeding rates, particularly exclusive breastfeeding rates, but there is room for improvement. Although breastfeeding initiation rates are high at 83.2%, exclusive breastfeeding rates are 46.9% at 3 months and 24.9% at 6 months [9]. While these rates have increased, they did not increase significantly between 2014-2015 [9]. The high initiation rates suggest that mothers do want to breastfeed [9]. Numerous factors could impact continuation rates, including lack of education and support, early return to work, and lack of workplace practices that support breastfeeding women, among others [9]. Establishing and maintaining practices that offer education and support is critical to improving breastfeeding rates [28]. Support and guidance during early breastfeeding experiences are crucial to prolonged breastfeeding [28]. Breastfeeding women need education and support from healthcare providers, as well as from society as a whole, to encourage and promote exclusive breastfeeding and reduce the risks of adverse health outcomes reported in formula-fed infants [9,11]. Healthcare practices should be reviewed to support and promote breastfeeding in all phases to improve long-term breastfeeding continuation. Interventions aimed at improved health outcomes should address the benefits of breastfeeding as well as the reported association between not breastfeeding and adverse health outcomes [11].
PREDICTION OF HEART DISEASE USING K-MEANS and ARTIFICIAL NEURAL NETWORK as HYBRID APPROACH to IMPROVE ACCURACY
— The heart is an important organ of the human body, and life depends entirely on its efficient working. When the heart suffers a disorder, the consequences are severe: cardiovascular diseases are among the most challenging diseases to tackle in reducing the patient count. According to a survey conducted by the WHO, about 17 million people die around the globe due to cardiovascular diseases, i.e., 29.20% of all deaths, mostly in developing countries. There is thus a need to address this complicated problem of CVD using advanced data mining techniques, in order to discover knowledge for heart disease prediction. In this paper, we propose an efficient hybrid algorithmic approach for heart disease prediction. The paper presents an efficient prediction technique to determine and extract unknown knowledge of heart disease using a hybrid combination of the K-means clustering algorithm and an artificial neural network. In our proposed model we considered 14 attributes out of the 74 attributes of the UCI Heart Disease Data Set [19]. The technique uses medical attributes such as age, weight, gender, blood pressure and cholesterol rate for prediction. It uses the K-means algorithm to group the various attributes, and the backpropagation technique in neural networks for prediction. The main objective of this paper is to develop a prototype for predicting heart disease with a higher accuracy rate.
II. RELATED WORK
In this section, data mining techniques used for decision making in heart disease are analysed. Ankita Dhewan and Meghana Sharma proposed a methodology hybridizing two data mining techniques, an Artificial Neural Network and a Genetic Algorithm, implemented to achieve high accuracy with minimal error [1].
Limitations:
The biggest disadvantage of GA is its unguided mutation. The mutation operator in GA works by adding a randomly generated number to a parameter of an individual in the population [10]. This is the main reason for the very slow convergence of genetic algorithms; the time consumed for optimization is very high.
M. Akhil Jabbar and B. L. Deekshatulu proposed an algorithm in two parts: the first part evaluates attributes using genetic search, and the second part builds a classifier and measures its accuracy. Their paper compares the accuracy on datasets with and without GA; results show that accuracy increases by 5% when the two are combined.
Limitations: Accuracy is very low with K-nearest neighbour, and the genetic algorithm takes much more time for optimization [7].
Rovina Dbritto and Aniruddha compared four data mining techniques, viz. Naïve Bayes, Support Vector Machine, K-nearest neighbour and Logistic Regression. Results show that Naïve Bayes gives higher accuracy than the other classifiers.
Limitations:
The disadvantage is that the Naïve Bayes classifier makes a very strong assumption about the shape of the data distribution, namely that any two features are independent given the output class. When this assumption is violated, the result can be very poor; dependencies among attributes cannot be modelled using a Bayesian classifier [2].
Humar Kahramanli and Novruz Allahverdi used a hybrid system that combines an artificial neural network and a fuzzy neural network. A dataset of 303 samples was taken from patients with heart disease, giving 87.4% accuracy on the attributes of the UCI repository [5][12].
Limitations: When a fuzzy system is combined with a neural network, the fuzzy system needs to be tuned, which is very time-consuming and error-prone.
Sudha and Sarath Kumar proposed two algorithms, KNN and K-means [15]. Their measured accuracies show that KNN achieves 100% accuracy for different clusters with the nearest value, while K-means achieves 100% accuracy only when the number of clusters K is very high.
Limitations:
Computation cost is very high, as the distance from each query instance to all training samples must be calculated [8]. Mai Shouman and Tim Turner applied single data mining techniques to different datasets, whose results cannot be compared because different datasets were used. When single and hybrid data mining techniques were applied to the Cleveland dataset for heart disease diagnosis, the results showed that hybrid techniques outperform single techniques. The hybrid technique used was a Neural Network ensemble [3].
Limitations: Ensemble training is several times slower than training a traditional neural network, and when solving some rare problems the ensemble error is greater than the error of a traditional neural network.
Limitations: PLS-DA is a complex algorithm which is very difficult to use [4].
III. PROPOSED SYSTEM
In this section we describe the system architecture; Fig. 1 presents an overview. The core modules of the proposed system consist of:
a) Understanding the input data and selecting the attributes related to heart disease.
b) Data preparation: transformation and pre-processing of missing data is carried out.
c) Processing module: specifies the algorithmic approach applied over the system to obtain high-accuracy results. Pre-processing modules are discussed separately in an upcoming section.
d) Evaluation and deployment: the final analysis modules provide information related to the generated output, comparing and drawing conclusions about measurable result artefacts such as sensitivity and accuracy.
For diagnostic purposes we have considered these 14 attributes [14]: age in years, sex (male, female), chest pain type, resting blood pressure, serum cholesterol in mg/dl, fasting blood sugar, resting electrocardiographic results, maximum heart rate achieved, exercise-induced angina, ST depression induced by exercise relative to rest, the slope of the peak exercise ST segment, thalassemia, and number of major vessels and angiographic disease status.
IV. ALGORITHMIC DESCRIPTION
A. K-means Algorithm
The main goal of using the K-means clustering technique is to organize the data into classes such that there is high intra-class similarity and low inter-class similarity. K-means [15][16] is a famous clustering algorithm widely used in data mining projects. The aim of this clustering is to find positions µi, i = 1...k, of the cluster centroids that minimize the within-cluster sum of squared distances from the centroid. The outcome of the K-means algorithm depends on the initial k clusters, and it may get stuck at different solutions. To remove such dependency, a modified or improved K-means was proposed: K-means is accompanied by Lloyd's algorithm to get rid of the dependencies. Using this method, the results show that the quality of the clusters is not compromised.
The steps of the K-means algorithm are [15]:
1. Initialize the centers of the clusters from the n data points xi, i = 1...n, that have to be partitioned into k clusters.
2. Attribute each data point to the closest cluster using the Euclidean distance.
3. Set the position of each cluster to the mean of all data points belonging to that cluster.
4. Repeat steps 2-3 until convergence.
In our system the K-means algorithm plays a crucial role in obtaining the appropriate number of data groups. Using this algorithm along with the Euclidean distance, centroids are calculated for the different patient attributes. The mean value is taken into account for the sample data, and this value is used to judge the patient's status: if the mean value for a patient is nearest to the sample mean value, the patient is more likely to be affected by heart disease.
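A sketch of this grouping step in R (an illustration, not the authors' implementation; heart is assumed to hold the 14 numeric attributes and new_patient a single record with the same columns):

heart_s <- scale(heart)                    # common scale for Euclidean distances
km <- kmeans(heart_s, centers = 3,         # k clusters, Lloyd's iterations
             algorithm = "Lloyd", nstart = 25, iter.max = 100)
km$centers                                 # per-cluster attribute means

# Assign a new patient to the cluster with the nearest centroid:
new_s   <- (as.numeric(new_patient) - attr(heart_s, "scaled:center")) /
           attr(heart_s, "scaled:scale")
nearest <- which.min(rowSums(sweep(km$centers, 2, new_s, "-")^2))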
B. Artificial Neural Network
An artificial neural network (ANN), usually simply called a neural network (NN), is a mathematical or computational model inspired by the structure and/or functional aspects of biological neural networks [17][18]. Three layers are present in an ANN: an input layer, a hidden layer (also called the intermediate layer) and an output layer. Hidden layers are present between the input and output layers.
Input layer:
The input units present in this layer represent the raw information that is fed into the network. Hidden layer: the activity of each hidden unit is based on the activity of each input unit and the weights on the connections between them.
Output layer: the activity of each output unit is based on the activity of each hidden unit and the weights on the connections between them.
The ANN algorithm proceeds as follows:
1. The data from the input layer is given to the hidden layer.
2. The input values from the input layer are modified using weight values and passed on.
3. The values are again modified by the weights on the connections between the hidden and output layers.
4. This information is processed and the output layer gives the final output.
Finally, this output is processed by the activation function. The ANN follows a trial-and-error method in order to reach an optimal solution. The structure of the neural network is shown in Fig. 2: each output neuron computes yj = σ(Σi wij · xi), where yj represents the output neuron, xi is an input neuron, wij is the weight connecting xi and yj, and σ is the sigmoidal function.
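A sketch of such a one-hidden-layer network in R (an illustration, not the authors' implementation; heart_train/heart_test and the response column num are assumed names):

library(nnet)
set.seed(1)
net  <- nnet(factor(num) ~ ., data = heart_train,
             size = 8,                   # 8 hidden units
             decay = 5e-4, maxit = 500)  # weight decay and iteration cap
pred <- predict(net, heart_test, type = "class")
mean(pred == heart_test$num)             # test-set accuracy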
As mentioned above for Fig. 2, the ANN consists of three layers: the input layer, the hidden (intermediate) layer and the output layer. In this system the previously clustered, normalized data groups are fed as input to the neurons. The patterns vital to heart attack prediction are selected on the basis of the computed significant weightage; weightage is assigned based on the range decided for each selected attribute from the dataset, for example sex (1 = male; 0 = female) and fbs, fasting blood sugar > 120 mg/dl (1 = true; 0 = false).
V. CONCLUSION
As the number of heart disease patients increases every year, a huge amount of medical data becomes available, and researchers are applying data mining techniques to this data to diagnose heart disease. Our analysis indicates that the artificial neural network algorithm is best for classifying knowledge from large amounts of medical data. The population is growing exponentially, and the death rate due to cardiovascular diseases is also increasing; the only way to control this is to predict heart disease and treat it before it worsens. Our hybrid approach gives a higher accuracy rate of disease detection, 97%, than earlier proposed methods.
Multi-dimensional scalar conservation laws with unbounded integrable initial data
We discuss the minimal integrability needed for the initial data in order that the Cauchy problem for a multi-dimensional conservation law admit an entropy solution. In particular we allow unbounded initial data. We investigate also the decay of the solution as time increases, in relation with the nonlinearity. The main ingredient is our recent theory of divergence-free positive symmetric tensors. We apply in particular the so-called compensated integrability to a tensor which generalizes the one that L. Tartar used in one space dimension. It allows us to establish a Strichartz-like inequality, in a quasilinear context. This program is carried out in detail for a multi-dimensional version of the Burgers equation.
Introduction
Let us consider a scalar conservation law in $1+n$ dimensions,
$$\partial_t u + \operatorname{div}_y f(u) = 0, \qquad t > 0,\ y \in \mathbb{R}^n. \tag{1}$$
We complement this equation with an initial data $u(0, y) = u_0(y)$, $y \in \mathbb{R}^n$.
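Here $\eta_a$ and $q_a$ below denote the classical Kruzhkov entropy–entropy flux pairs; for completeness (this is the standard definition, supplied as a reference point):

\[
  \eta_a(u) = |u - a| , \qquad
  q_a(u) = \operatorname{sgn}(u - a)\,\bigl( f(u) - f(a) \bigr) , \qquad a \in \mathbb{R} .
\]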
Together with the affine functions, they span the cone of convex functions.
We recall that an entropy solution is a measurable function $u \in L^1_{\mathrm{loc}}([0,+\infty) \times \mathbb{R}^n)$ such that $f(u) \in L^1_{\mathrm{loc}}([0,+\infty) \times \mathbb{R}^n)$, which satisfies the Cauchy problem in the distributional sense, together with the entropy inequalities
$$\int_0^\infty \! dt \int_{\mathbb{R}^n} \bigl( \eta_a(u)\,\partial_t \phi + q_a(u) \cdot \nabla_y \phi \bigr)\, dy \;+\; \int_{\mathbb{R}^n} \eta_a(u_0(y))\,\phi(0, y)\, dy \;\ge\; 0 ,$$
for every $a$ and every non-negative test function $\phi$. When it enjoys higher integrability, an entropy solution is expected to satisfy additional entropy inequalities of the form (4) for more general convex entropies $\eta$. In particular, one is interested in inequality (4) for a distinguished entropy–entropy flux pair $(\eta, q)$. The theory of this Cauchy problem dates back to 1970, when S. Kruzhkov [7] proved that if $u_0 \in L^\infty(\mathbb{R}^n)$, then there exists one and only one entropy solution in the class $L^\infty(\mathbb{R}_+ \times \mathbb{R}^n) \cap C(\mathbb{R}_+; L^1_{\mathrm{loc}}(\mathbb{R}^n))$.
The operator $S_t : u_0 \mapsto u(t, \cdot)$, which maps $L^\infty(\mathbb{R}^n)$ into itself, enjoys several additional properties. On the one hand, a comparison principle says that if $u_0 \le v_0$, then $S_t u_0 \le S_t v_0$. For instance, the solution $u$ associated with the data $u_0$ is majorized by the solution $\bar u$ associated with the data $(u_0)_+$, the positive part of $u_0$. On another hand, if $v_0 - u_0$ is integrable over $\mathbb{R}^n$, then $S_t v_0 - S_t u_0$ is integrable too, and
$$\| S_t v_0 - S_t u_0 \|_{L^1(\mathbb{R}^n)} \;\le\; \| v_0 - u_0 \|_{L^1(\mathbb{R}^n)} . \tag{5}$$
Finally, if $u_0$ belongs to some $L^p(\mathbb{R}^n)$ space, then $S_t u_0$ has the same integrability, and the map $t \mapsto \| S_t u_0 \|_p$ is non-increasing. We warn the reader that the contraction property (5) occurs only for the $L^1$-norm, but not for other $L^p$-norms.
Because of (5) and the density of $L^1 \cap L^\infty(\mathbb{R}^n)$ in $L^1(\mathbb{R}^n)$, the family $(S_t)_{t \ge 0}$ extends in a unique way as a continuous semi-group of contractions over $L^1(\mathbb{R}^n)$, still denoted $(S_t)_{t \ge 0}$. When $u_0 \in L^1(\mathbb{R}^n)$ is unbounded, we are thus tempted to declare that $u(t, y) := (S_t u_0)(y)$ is the abstract solution of the Cauchy problem for (1) with initial data $u_0$. An alternate construction of $(S_t)_{t \ge 0}$, based upon the Generation Theorem for nonlinear semigroups, is due to M. Crandall [2], who pointed out that it is unclear whether $u$ is an entropy solution, because the local integrability of the flux $f(u)$ is not guaranteed. It is therefore an important question to identify the widest class of integrable data for which $u$ is actually an entropy solution of (1).
To achieve this goal, we develop a new strategy, based on the Compensated Integrability that we introduced in our previous papers [10,11]. It uses a map $a \mapsto M(a) \in \mathbf{Sym}_d$, whose lines are entropy–entropy flux pairs, where the entropies are precisely the functions $\mathrm{id}_{\mathbb{R}}, f_1, \dots, f_n$ which appear in the conservation law. The map $M$ is a non-decreasing function of $a$. This tensor was already used when $n = 1$ by L. Tartar [13] to prove the compactness of the semi-group, and by F. Golse [5] (see also [6]) to prove some kind of regularity. An essential ingredient is the amount of nonlinearity displayed by the flux $f$. We illustrate our strategy by carrying out the details on the most typical nonlinear conservation law, a multi-dimensional generalization of the Burgers equation.
Outline of the article. We begin with a detailed, definitive analysis of the multi-d Burgers equation. The equation is described in the next section. Our main result is a well-posedness statement when the initial data is integrable. It is based on a dispersion estimate, which has the flavour of a Strichartz inequality, from which we derive a decay estimate of $L^p$-norms for $p \le \frac{d^2}{d-1}$. The proof is given in Sections 3 and 4. We explain how the strategy extends to general fluxes $f$ in Section 5.
Acknowledgements. I am indebted to C. Dafermos, whose precious comments helped me to improve this article, in particular in giving full credit to previous contributors. I also thank L. Silvestre for correcting a miscalculation.
The multi-d Burgers equation
For a conservation law of the general form (1), it is harmless to assume $f(0) = 0$. By choosing an appropriate inertial frame, which does not affect the norms $\|u(t)\|_p$, we may also assume $f'(0) = 0$. Thus $f(s) = O(s^2)$ at the origin. Say that $f(s) \sim s^k v_1$ as $s \to 0$, where $v_1$ is a non-zero constant vector. We may perform a linear change of the spatial coordinates such that $f_1(s) \sim \frac{s^k}{k}$ and $f_j(s) = o(|s|^k)$ otherwise. Unless we meet a flat component, the process can be continued until we find a new coordinate system $(y_1, \dots, y_n)$ in which $f_j(s) \sim \frac{s^{k_j}}{k_j}$. Generically, we have $k_j = j + 1$ for every $j \in [[1, n]]$. This is the reason why we consider from now on the following scalar conservation law, which we call the multi-dimensional Burgers equation:
$$\partial_t u + \partial_1 \frac{u^2}{2} + \partial_2 \frac{u^3}{3} + \cdots + \partial_n \frac{u^{n+1}}{n+1} = 0 . \tag{6}$$
This particular flux was already considered by G. Crippa et al. [3]. If $n = 1$, we recognize the original Burgers equation. The equation (6) is a prototype for genuinely nonlinear conservation laws, those which satisfy a suitable non-degeneracy assumption on the flux; the latter condition is a variant of the non-degeneracy condition at work in the kinetic formulation of the equation (1); see [8] or [9].
Let us review two preliminary answers to our natural question, in the context of (6).
• On the one hand, we might assume that $u_0 \in L^1 \cap L^p(\mathbb{R}^n)$ for some $p > 1$. Let us define $u_{0m} := \mathrm{Proj}_{[-m,m]} u_0$, which tends towards $u_0$ in the $L^1$-norm. We have $u = \lim_{m \to +\infty} u_m$, where $u_m$ is the solution associated with the data $u_{0m}$, and the limit holds in $C(\mathbb{R}_+; L^1(\mathbb{R}^n))$. We infer that $u \in L^\infty(\mathbb{R}_+; L^p(\mathbb{R}^n))$, and $u_m$ converges towards $u$ in $L^\infty(\mathbb{R}_+; L^q(\mathbb{R}^n))$ for every $q \in [1, p)$. In addition, $u_m$ converges weakly in $L^\infty(\mathbb{R}_+; L^p(\mathbb{R}^n))$. If $p > n + 1$, we may pass to the limit as $m \to +\infty$ in the relevant sequences of nonlinear terms; passing to the limit in the integral formulations (2) and (3), we conclude that $u$ is a genuine entropy solution of the Cauchy problem. Notice that the argument does not work out when $p = n + 1$, because of the last component of the flux: we are not certain that $u_m^{n+1}$ converges in $L^1_{\mathrm{loc}}$ towards $u^{n+1}$. If $p > n + 2$, we find as well that $u$ satisfies the entropy inequality for the pair $(\eta, q)$.
The drawback of this argument is that it does not exploit the nonlinearity of the equation, a property which is expected to imply some kind of regularization or dispersion (see Theorem 4 and Proposition 1 of [8]). We should be able to lower somehow the threshold p > n + 1.
• The other answer concerns the one-dimensional case ($n = 1$). The Kruzhkov solution of the classical Burgers equation satisfies an inequality due to Bénilan & Crandall [1], who exploited the homogeneity of the flux. It is extended by Dafermos [4] to situations where the flux $f$ has an inflexion point and the data $u_0$ has bounded variation, by a careful use of generalized backward characteristics. It implies in particular an estimate of $\|u(t)\|_\infty$ in terms of $\|u_0\|_{L^1}$ and $t$. This shows that the assumption $u_0 \in L^1(\mathbb{R})$ is sufficient in order that $u$ be a true entropy solution. This is definitely better than the threshold $L^1 \cap L^2(\mathbb{R}^n)$ considered in the previous paragraph.
Dafermos' argument, which is the most general one, uses the ordered structure of the real line. Backward characteristics are not unique in general: given a base point $(x_*, t_*)$ in the upper half-plane, one has to define and analyse the minimal and the maximal ones. These notions have not yet been extended to the multi-dimensional situation (see however [12] for a weaker notion).
Our main result here is the following statement. It tells us that $L^1(\mathbb{R}^n)$ is the right space for initial data.
Theorem 2.1 Let $u_0 \in L^1(\mathbb{R}^n)$. Define $u(t) = S_t u_0$ and set $u(t, y) = u(t)(y)$ for $t > 0$ and $y \in \mathbb{R}^n$. Then:
1. There holds the algebraic decay (10) of the $L^p$-norms of $u(t)$.
2. The function $u$ satisfies the dispersion estimate (11).
3. The function $u$ is an entropy solution of the Cauchy problem. It satisfies the additional entropy inequality (4) for the pair $(\eta, q)$.
Comments.
• The assumption that $u_0 \in L^1(\mathbb{R}^n)$ extends that available in the one-dimensional situation. However, when $n = 1$, Theorem 2.1 provides an estimate of $u(t)$ in $L^4(\mathbb{R})$ only, instead of the known $L^\infty(\mathbb{R})$ or $BV(\mathbb{R})$. Our results are new only when $n \ge 2$.
• The decay result is optimal when $n = 1$, where the rate it states is exactly the decay rate displayed by an N-wave. It raises therefore the question whether the decay rate given by (10) is accurate also when $n \ge 2$.
• Estimate (11) resembles a Strichartz inequality. It seems to be new in this situation, where the principal part is not a linear operator, but a quasilinear one.
• A useful contribution in this direction was obtained recently by L. Silvestre [12], whose Theorem 1.5 tells us in particular that if $u_0 \in L^1 \cap L^\infty(\mathbb{R}^n)$, then $\|u(t)\|_\infty$ decays algebraically in time. This decay is almost the same as that suggested by extrapolation of ours to $q = \infty$. It would be exactly that one if the limit exponent $\mu_0$ were allowed, and the dependency of the constant upon $\|u_0\|_\infty$ removed.
Other "monomial" scalar conservation laws
As suggested above, we may be interested in more general conservation laws, whose fluxes are monomial. Denoting $P_k(s) = \frac{s^k}{k}$, consider the PDE
$$\partial_t u + \partial_1 P_{k_1}(u) + \cdots + \partial_n P_{k_n}(u) = 0 , \tag{13}$$
where $1 < k_1 < \cdots < k_n$ are integers. We leave, as a tedious exercise, the interested reader to adapt the calculations of the two next sections to (13), to prove the following result. We denote by $N$ the corresponding flux-dependent exponent (when $n = 1$, one has $N = 2k_1$).
Theorem 2.2 Suppose that $nk_n < N$. Then for every initial data $u_0 \in L^1(\mathbb{R}^n)$, the abstract solution given by the continuous extension of the semi-group $(S_t)_{t \ge 0}$ to $L^1(\mathbb{R}^n)$ is actually an entropy solution of the Cauchy problem for (13). It satisfies a dispersion estimate.
It decays as well, at an algebraic rate. The rôle of the assumption $nk_n < N$ is to allow us to estimate $\|u(t)\|_{k_n}$ in terms of $\|u(t)\|_1$ and $\|u(t)\|_{N/n}$, in order to apply a Gronwall argument to the dispersion estimate. Notice that it is always satisfied in one space dimension, because then $1 \cdot k_1 < N = 2k_1$. Remark. If $k_n$ is larger than $\frac{N}{n}$, there should be a weaker result: there will be some exponent $p = p(k_n, N) \in (1, k_n)$ such that if $u_0 \in L^1 \cap L^p(\mathbb{R}^n)$, then the abstract solution is actually an entropy solution. We leave the calculation of $p(k_n, N)$ to the motivated reader.
Proof of Estimate (11)
Because $u$ is obtained as the limit in $C(\mathbb{R}_+; L^1(\mathbb{R}^n))$ of $u_m$, the solution associated with the data $u_{0m} = \mathrm{Proj}_{[-m,m]} u_0$, the estimates (10) and (11) need only to be proved when the initial data belongs to $L^1 \cap L^\infty(\mathbb{R}^n)$, that is within Kruzhkov's theory. Then they extend to $L^1$-data by a density argument.
When $u_0 \in L^1 \cap L^\infty$, (11) will provide a uniform bound on the corresponding space-time norms of the $u_m$. Then, because of $u_m \to u$ in $C(\mathbb{R}_+; L^1(\mathbb{R}^n))$ as $m \to +\infty$, we infer by interpolation that the convergence holds true in every space $L^q(\mathbb{R}_+; L^p(\mathbb{R}^n))$ in the admissible range of exponents. Because of $(S_t u_0)_\pm \le S_t (u_0)_\pm$, it is enough to consider data that are either non-negative or non-positive. But since $v(t, y) = -u(t, -y_1, y_2, \dots, (-1)^n y_n)$ is the entropy solution associated with $v_0(y) = -u_0(-y_1, y_2, \dots, (-1)^n y_n)$, it suffices to prove (11) for non-negative data and solutions. We therefore assume from now on that $u_0 \ge 0$, and thus $u \ge 0$ over $\mathbb{R}_+ \times \mathbb{R}^n$.
We form the symmetric tensor $T$ with positive semi-definite values, whose construction involves the determinant of the Hilbert matrix (this is the only case where we do not write $c_d$ for a dimensional constant). Its first line is formed of $(u, f(u))$ and therefore is divergence-free by (6). The second line is formed of $(\eta(u), q(u))$, an entropy-flux pair. It is not divergence-free in general, although it is so away from shock waves and other singularities of the solution $u$. But the entropy inequality tells us that the opposite of its divergence is a non-negative, hence bounded, measure $\mu_1$. The total mass of $\mu_1$ over a slab $(0, \tau) \times \mathbb{R}^n$ is controlled by the initial data; notice that the latter bound does not depend on $\tau$. The same situation occurs for the other lines of $T$. They are of the form $(\eta(u), q(u))$ where $(\eta, q)$ is an entropy-flux pair with $\eta$ convex over $\mathbb{R}_+$ (recall that $u$ takes only non-negative values). The corresponding distribution is therefore again a bounded measure, whose total mass over $\mathbb{R}_+ \times \mathbb{R}^n$ is bounded by $\int_{\mathbb{R}^n} \eta(u_0(y))\, dy$.
We conclude that the row-wise divergence of $T$ is a (vector-valued) bounded measure, whose total mass is bounded above by $\sum_{j=2}^{d} \int_{\mathbb{R}^n} \frac{u_0(y)^j}{j}\, dy$.
We may therefore apply Compensated Integrability (Theorems 2.2 and 2.3 of [11]) to the tensor $T$; this yields the estimate (14). The only bad feature in the estimate (14) is the lack of homogeneity of its right-hand side. To recover a well-balanced inequality, we exploit an idea already used in [10]. We begin by remarking that if $\lambda > 0$ is a constant parameter, then the function $v(t, y) = \frac{1}{\lambda} u(\lambda t, \lambda^2 y_1, \dots, \lambda^d y_n)$ is the entropy solution associated with the initial data $v_0(y) = \frac{1}{\lambda} u_0(\lambda^2 y_1, \dots, \lambda^d y_n)$.
Applying (14) to the pair $(v, v_0)$ instead, and then using the scaling relations above, we get a parametrized inequality. In order to minimize its right-hand side, we choose an appropriate value of $\lambda$, for which the extreme terms, for $j = 1$ or $d$, contribute on an equal footing.
Proof of Theorem 2.1
We now complete the proof of our main theorem.
The decay result
We keep working with the assumptions $u_0 \in L^1 \cap L^\infty(\mathbb{R}^n)$ and $u_0 \ge 0$.
Let us define $X(t) := \int_{\mathbb{R}^n} \Delta(u(t, y))\, dy$.
From the Hölder inequality and the inequality (11), we deduce a differential inequality involving $X$. Considering the solution $v(t, y) = u(t + \tau, y)$, whose initial data is $u(\tau, \cdot)$, we also have the same bound with origin of time at $\tau$. We recast (15) accordingly; multiplying by $Y^{-1/\beta}$ and integrating (mind that $1 - \frac{1}{\beta}$ is negative), we infer a first decay estimate.
Considering the solution v(t, y) = u(t + τ, y), whose initial data is u(τ, ·), we also have We recast (15) as Multiplying by Y −1/β and integrating, we infer (mind that 1 − 1 β is negative) This provides a first decay estimate Remarking that t → X (t) is a non-increasing function, so that we deduce the ultimate decay result Restated in terms of a Lebesgue norm of u(t), it says
The function u is an entropy solution
We already know that the functions $u_m$ are entropy solutions, with initial data $u_{0m} \in L^1 \cap L^\infty(\mathbb{R}^n)$.
Because of (11), we have seen that $u_m$ converges towards $u$ in the norm of $L^q(\mathbb{R}_+; L^p(\mathbb{R}^n))$ for the admissible exponents; we may thus pass to the limit as $m \to +\infty$ in the weak formulation of the equation, as well as in the Kruzhkov inequalities and in the inequality associated with the pair $(\eta, q)$. Therefore $u$ is an entropy solution with initial data $u_0$, which satisfies in addition the entropy inequality for the pair $(\eta, q)$.
Remark. When $n \ge 2$, the Compensated Integrability cannot be applied directly to the solution $u$ when the data is only integrable, because we do not know whether the $j$th line of $T$ is locally integrable for $j = 3, \dots, n + 1$: its last component is $\frac{u^{n+j}}{n+j}$, where the exponent $n + j$ is larger than $\frac{d^2}{d-1}$.
The strategy for general fluxes f
We come back to the study of a multi-dimensional conservation law of the general form (1). Following the ideas developed in the Burgers case, we begin by considering a signed, bounded initial data, and we define $T(t, y) := M(u(t, y))$. Because $u \in L^\infty(0, \tau; L^1 \cap L^\infty(\mathbb{R}^n))$, the tensor $T$ is integrable over $(0, \tau) \times \mathbb{R}^n$. The first line of $T$ is divergence-free. The other lines are made of entropy–entropy flux pairs $(f_i, Q_i)$. Since $f_i$ might not be convex, we cannot estimate the measure $\mu_i = -\partial_t f_i(u) - \operatorname{div}_y Q_i(u)$ directly by the integral of $f_i(u_0)$. To overcome this difficulty, we define a convex function $\phi$ over $\mathbb{R}_+$ dominating the flux, so that $|f'| \le \phi'$ and $|f| \le \phi$. Let $\Phi$ be the entropy flux associated with the entropy $\phi$. Then the measure $\nu := -\partial_t \phi(u) - \operatorname{div}_y \Phi(u)$ is non-negative, and a bound of its total mass is obtained as usual. We now use the kinetic formulation of (1), a notion for which we refer to [9], Theorem 3.2.1. Recall the definition of the kinetic function $\chi(\xi; a)$, whose value is $\operatorname{sgn} a$ if $\xi$ lies between $0$ and $a$, and is $0$ otherwise. There exists a non-negative bounded measure $m(t, y, \xi)$ such that the function $g(t, y, \xi) = \chi(\xi; u(t, y))$ satisfies
$$\partial_t g + f'(\xi) \cdot \nabla_y g = \frac{\partial m}{\partial \xi} , \qquad g(0, y; \xi) = \chi(\xi; u_0(y)) .$$
We may therefore apply the compensated integrability, which gives here a bound of $\int_0^\tau dt \int_{\mathbb{R}^n} \Delta(u(t, y))\, dy$ in terms of $\|F(u_0)\|_1$, $\|F(u(\tau))\|_1$ and $\int_{\mathbb{R}^n} \phi(u_0(y))\, dy$, with a dimensional constant $c_d$. Because of $|f| \le \phi$ and $\|\phi(u(\tau))\|_1 \le \|\phi(u_0)\|_1$, we end up with an analog of (11). To improve the inequality above, we use again a scaling argument. However, because the components $f_j$ of the flux are not homogeneous anymore, we modify simultaneously the solution and the flux, using the fact that the constant $c_d$ in (17) does not depend upon $f$. Our new dependent variables are $v(t, y) = \frac{1}{\lambda} u(\lambda t, P y)$ and $v_0(y) = v(0, y) = \frac{1}{\lambda} u_0(P y)$, where $P \in \mathbf{GL}_n(\mathbb{R})$ is a matrix to be chosen later. The function $v$ is an entropy solution of the Cauchy problem associated with the conservation law $\partial_t v + \operatorname{div}_y g(v) = 0$ for a suitably rescaled flux $g$. We have $\det N(s) = \lambda^{n-1} (\det P)^{-2} \det M(\lambda s)$, from which we derive the transformation rule for the quantity $\Delta$. When applying (18) to $v$ and $g$, one integral in the right-hand side transforms easily: $\int_{\mathbb{R}^n} v_0(y)\, dy = \frac{1}{\lambda \det P} \int_{\mathbb{R}^n} u_0(y)\, dy$.
All the identities above, together with (18) applied to $(v, g)$, yield our parametrized estimate. We optimize this inequality with respect to $\lambda$, by choosing $\lambda = \frac{\int_{\mathbb{R}^n} u_0\, dy}{\int_{\mathbb{R}^n} \psi_P(u_0)\, dy}$. There remains to minimize the right-hand side with respect to $P$; the calculation of $I[w]$ has to be made on a case-by-case basis.
We infer the parametrized estimate (19). Let us define again $X(t) = \int_{\mathbb{R}^n} \Delta(u(t, y))\, dy$. Applying (19) on an interval $(\tau, \infty)$ instead, and using the decay of the $L^1$-norm, we arrive at a differential bound on $X$. A decay result will be obtained through a Gronwall argument, whenever we can estimate $I[u(t)]$ in terms of $\|u(t)\|_1$ and $X(t)$.
PROPOSAL OF MODIFICATION OF EUROCODE 2 IN TERMS OF CALCULATION OF THE PUNCHING SHEAR CAPACITY OF RC COLUMN FOOTINGS
The paper first presents the calculation of the punching shear capacity of concentrically loaded reinforced concrete column footings according to the current Eurocode 2, which can be carried out in two ways: by conducting an iterative procedure, or by a simplified procedure applying diagrams. Using these procedures, the punching shear capacity calculation was performed for the footings examined within the experimental research of the authors of this study, as well as for footings considered in experiments conducted by other authors. Based on the analysis of the calculation results and the experimentally recorded results, a modification of the expressions of the current Eurocode 2 for the calculation of the punching shear capacity of concentrically loaded RC column footings is proposed. The proposed modification takes into account more realistically the influence of the compressive strength of concrete and the reinforcement ratio in the footing, so that its application provides punching failure forces that are closer to the results recorded in experimental tests.
INTRODUCTION
The construction of monolithic reinforced concrete skeleton construction systems of buildings with floor structures in the form of flat slabs and foundations in the form of foundation slabs and column footings is very widespread. At the same time, there is a constant tendency to improve the calculation methods and the way of designing the mentioned structural elements in order to achieve savings in work and materials, i.e. to increase the economy of both these elements and the entire structure. On the other hand, we are witnessing a growing number of buildings in the world where damage has occurred, and even the collapse of the structure, which results not only in material damage, but, unfortunately, also in human casualties. Such events often occur due to exceeding the load-bearing capacity of individual columns or footings under them, which leads to their damage or, in certain situations, to failure. As a consequence, the forces are further redistributed to the adjacent columns and associated footings, thus causing significantly increased loads in them, which can cause their fracture, i.e. lead to a chain reaction and progressive failure of the entire structure. With all this in mind, in recent times, the attention of researchers is increasingly focused on increasing the resistance to progressive failure of buildings and structures in general, and thus their sustainability, reliability, and durability. Related to this is the growing number of studies with regard to the bearing capacity of foundations, in particular the punching shear capacity of column footings, as a type of unannounced failure. Control of foundations to punching shear is an obligatory part of the foundations design, primarily of column footings and foundation slabs, which are exposed to the action of concentrated forces in the columns. The behavior of these types of foundations under load will depend on the characteristics of the foundation and soil, as well as on the intensity of the load.
In most national and international regulations, an empirical method of calculating the punching shear capacity of concentrically loaded reinforced concrete foundations has been adopted, based on experiments conducted on flat floor slabs and on foundations resting on a simulated subsoil. When it is necessary to check whether the foundation is safe against punching shear, for a known load and known foundation characteristics, the calculation first evaluates the shear stress ν_Ed in the critical section, at a certain distance from the column face, for the known force in the column. This shear stress is then compared to the punching shear resistance of concrete ν_Rd. If ν_Ed < ν_Rd, there is no risk of a punching shear event; otherwise, the height of the foundation or the class of concrete needs to be increased, or reinforcement against punching shear needs to be designed. The critical section is the section along the effective depth of the foundation slab or footing and along the perimeter of the critical section, which lies at a certain distance from the column face (the so-called critical perimeter, as presented in Fig. 1). Shear stresses in the critical section are calculated according to the expression:

ν_Ed = V_Ed,red / (u · d)    (1)

where V_Ed,red is the reduced force in the column, u is the critical section perimeter, i.e. the length of the critical perimeter, and d is the effective depth of the footing (a mean value for two perpendicular directions).
In most of the codes, the reduced force in the column is calculated by subtracting from the column force V_Ed the part of the net reactive soil pressure σ_n (without the effect of the footing dead weight) acting inside the considered critical perimeter, over the area A_0:

V_Ed,red = V_Ed − σ_n · A_0 = V_Ed · (1 − A_0/A)    (2)

where A is the area of the footing base. Finally, the punching shear capacity of footings is expressed through the ultimate force in the column in terms of punching shear:

V_u = ν_Rd · u · d / (1 − A_0/A)    (3)

On the other hand, the punching shear resistance of concrete ν_Rd depends on multiple parameters reflecting the characteristics of the footing, such as the column and footing dimensions, the compressive strength of concrete, and the implemented reinforcement ratio and quality of the reinforcement. The existing codes take these footing properties into account to a smaller or larger extent when calculating ν_Rd. Thus:

▪ Eurocode 2 (EC2) [1] takes into account the compressive strength of concrete, the reinforcement ratio of the footing, and a size-effect coefficient that depends on the effective depth of the footing;
▪ The current ACI 318-19 [2] takes into account only the compressive strength of concrete and a size-effect coefficient that depends on the effective depth of the footing;
▪ fib Model Code 2010 [3] takes into account the compressive strength of concrete, the reinforcement ratio of the footing, and a size-effect coefficient that depends on the effective depth of the footing;
▪ BS 8110-1:1997 [4], like EC2, takes into account the compressive strength of concrete, the reinforcement ratio of the footing, and a size-effect coefficient that depends on the effective depth of the footing;
▪ СНиП-84 [5] takes into account the design strength of concrete in axial tension, which is calculated depending on the concrete class, with the corresponding working-conditions coefficients (type of load, environment in which the element is situated, and the method of concreting).
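To make the critical-section check concrete, the following minimal Python sketch evaluates Eqs. (1)-(3); the function names, units, and numeric values are illustrative assumptions rather than quantities taken from the paper.

```python
# Minimal sketch of the critical-section check in Eqs. (1)-(3).
# All names, units and numbers are illustrative assumptions.

def reduced_column_force(v_ed_kn: float, a0_m2: float, a_m2: float) -> float:
    """Eq. (2): V_Ed,red = V_Ed * (1 - A0 / A)."""
    return v_ed_kn * (1.0 - a0_m2 / a_m2)

def shear_stress(v_ed_red_kn: float, u_m: float, d_m: float) -> float:
    """Eq. (1): nu_Ed = V_Ed,red / (u * d), in kN/m^2 for kN and m inputs."""
    return v_ed_red_kn / (u_m * d_m)

v_ed = shear_stress(reduced_column_force(1500.0, 1.2, 4.0), u_m=4.8, d_m=0.4)
v_rd = 900.0  # punching shear resistance from the applicable code, kN/m^2
print("no punching risk" if v_ed < v_rd else "increase depth/concrete class or add punching reinforcement")
```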
The critical-section method for checking the punching shear capacity does not reflect the true nature of punching, but when the properties of the footing affecting its punching shear capacity are taken into account in an appropriate correlation, acceptable predictions of the footing punching shear capacity are obtained. Bearing in mind that Eurocode 2 has been adopted in Serbia as the code for the design of reinforced concrete structures, the following sections focus on determining the punching shear capacity of footings according to this standard and on evaluating the standard expressions against experimental research on column footings resting on real soil, both by the authors of this paper and by other researchers.
CALCULATION OF THE PUNCHING SHEAR CAPACITY OF CONCENTRICALLY LOADED RC COLUMN FOOTINGS ACCORDING TO EUROCODE 2
Calculation of punching shear of column footings and foundation slabs according to Eurocode 2 (EC2) is largely based on the calculation concept of fib Model Code 1990. According to this code, it is necessary to check the shear stresses in two sections. The first section is the cross-section of the footing along the column perimeter, while the position of the second section is not directly defined, but is determined by an iterative procedure. Namely, unlike other codes where the position of the critical section is defined in advance, in Eurocode 2 the calculation of punching shear is performed in several control sections, and the finally adopted control section is the critical section. Thus, in order to determine the ultimate force in the column, it is necessary to consider several control perimeters within a distance of 2d from the edge of the column (the so-called basic control perimeter according to Fig. 2) and to determine, by an iterative procedure, the position of the critical perimeter that results in the ultimate force in the column in terms of punching shear. When an unknown concentric punching shear force is to be determined for a footing of known characteristics, it is necessary to first calculate the punching shear resistance of concrete (marked ν_Rd in EC2) for each considered control section, as follows:

ν_Rd = C_Rd,c · k · (100 · ρ_l · f_ck)^(1/3) · (2d/a_EC2) ≥ ν_min · (2d/a_EC2)    (4)

where:
C_Rd,c = 0.18/γ_c is the empirical factor which takes into account the partial safety coefficient for concrete γ_c (= 1.5),
d is the effective depth of the footing (in mm),
k = 1 + √(200/d) ≤ 2.0 is the coefficient depending on the effective depth of the footing,
ρ_l = √(ρ_x · ρ_y) ≤ 0.02 is the average value of the reinforcement ratio in two orthogonal directions, taken over a width equal to the column width increased by 3d on each side of the column,
f_ck is the characteristic compressive strength of concrete for a standard cylinder,
ν_min = 0.035 · k^(3/2) · f_ck^(1/2) is the minimum punching shear resistance of concrete,
a_EC2 is the distance from the edge of the column to the observed control section.
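A small Python helper, sketched below under the definitions just listed, evaluates Eq. (4) for a given control section; it is an illustration, not a substitute for the full EC2 design provisions.

```python
import math

def v_rd_ec2(d_mm: float, rho_x: float, rho_y: float, fck_mpa: float,
             a_mm: float, gamma_c: float = 1.5) -> float:
    """Punching shear resistance of concrete (MPa) at a control section located
    a_mm from the column face, following Eq. (4). Illustrative sketch only."""
    c_rd_c = 0.18 / gamma_c
    k = min(1.0 + math.sqrt(200.0 / d_mm), 2.0)      # size-effect coefficient
    rho = min(math.sqrt(rho_x * rho_y), 0.02)        # mean reinforcement ratio
    v_min = 0.035 * k ** 1.5 * math.sqrt(fck_mpa)    # minimum resistance
    v = c_rd_c * k * (100.0 * rho * fck_mpa) ** (1.0 / 3.0)
    return max(v, v_min) * 2.0 * d_mm / a_mm         # 2d/a enhancement

print(f"v_Rd = {v_rd_ec2(175.0, 0.004, 0.004, 30.0, 200.0):.3f} MPa")
```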
In the following step, on the basis of the calculated punching shear resistance of concrete ν_Rd and the expression in Eq. (1), a reduced force in the column (in EC2 marked as V_Rd,red) is calculated for each considered control section of perimeter u:

V_Rd,red = ν_Rd · u · d    (5)

Also, for each considered control section it is necessary to calculate A_0, i.e. to determine the area inside the considered control perimeter, and then to calculate the ultimate punching shear force (in EC2 marked as V_Rd) using Eq. (3). Since the described procedure is performed in several chosen control sections, several values of the ultimate punching force are obtained, the relevant one being the minimum. The distance of the critical control section determined in this way from the edge of the column is marked as a_cr (Fig. 2).
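The iterative search can be sketched as below for a square column of side b_c on a square footing of side B, reusing the v_rd_ec2 helper from the previous sketch; the perimeter and area formulas for the control section are common geometric assumptions, not expressions quoted from the paper.

```python
import math

def ultimate_punching_force(b_c: float, B: float, d: float, rho_x: float,
                            rho_y: float, fck: float, n_steps: int = 100) -> float:
    """Scan control sections up to 2d from the column face (dimensions in m,
    fck in MPa) and return the minimum ultimate column force V_u in kN."""
    A = B * B
    v_u_min = float("inf")
    for i in range(1, n_steps + 1):
        a = 2.0 * d * i / n_steps                         # distance to control section
        u = 4.0 * b_c + 2.0 * math.pi * a                 # control perimeter (rounded corners)
        A0 = b_c ** 2 + 4.0 * b_c * a + math.pi * a ** 2  # area inside the perimeter
        if A0 >= A:
            break                                          # perimeter left the footing
        v_rd = v_rd_ec2(d * 1000.0, rho_x, rho_y, fck, a * 1000.0)  # MPa
        V_rd_red = v_rd * 1000.0 * u * d                  # kN, via Eq. (5)
        v_u_min = min(v_u_min, V_rd_red / (1.0 - A0 / A)) # back to full force, Eq. (3)
    return v_u_min

print(f"V_u = {ultimate_punching_force(0.2, 1.0, 0.15, 0.004, 0.004, 30.0):.0f} kN")
```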
Apart from the described iterative procedure, the position of the critical section can be determined somewhat more simply, based on a diagram derived from parametric studies and presented in the European Concrete Platform − ECP [6]. The diagram for determining the position of the critical section a_cr based on the dimensions of the cross-section of the column (b_c) and the geometry of the footing (B and d) is provided in Fig. 2.
As already mentioned, Eurocode 2, in addition to the critical section defined by the iterative procedure or by the diagram in Fig. 2, also requires checking the shear stresses in the footing cross-section along the perimeter of the column. In this check, for the column force reduced by the part of the soil reaction beneath the footing, the shear stress ν_Ed along the column perimeter u_0 is calculated, and it must not exceed the value of the maximum punching shear stress ν_Rd,max, i.e.:

ν_Ed = V_Ed,red / (u_0 · d) ≤ ν_Rd,max

For the final conclusion on the punching shear capacity of column footings, of the two considered sections (the section within a distance of 2d from the edge of the column and the section along the column perimeter), the relevant one is that which results in the lower value of punching shear force.
Yet, more recent research conducted by Hegger et al. [7−9], Siburg et al. [10], and Ricker and Siburg [11] indicates that the expression for ν_Rd,max is not the most adequate for determining the maximum punching shear stress, given that it is only a function of the compressive strength of concrete. For this reason, in the German national annex to Eurocode 2 the check of the punching shear force for the section along the column perimeter is considered obsolete and is not taken into account. Accordingly, this check is also omitted in the analyses conducted in this paper.
ANALYSIS OF EXPERIMENTAL RESEARCH CONDUCTED PREVIOUSLY
Although the number of studies on the punching shear capacity of column footings has been growing recently, a larger number of experimental tests unfortunately still relate to footings resting on some kind of simulated subgrade (springs, presses, line supports). An overview of previous experimental tests of concentrically loaded reinforced concrete column footings in terms of punching shear capacity, according to the available technical literature, is given in Table 1. Experimental tests have shown that the punching shear capacity of column footings is significantly higher for footings rested on a real subgrade soil than for footings in which the subgrade is simulated. Therefore, when analyzing the influence of concrete compressive strength and reinforcement ratio on the punching shear capacity of column footings, only footings supported on the ground are taken from Table 1. In addition, the analysis included the footings examined on a specially designed and constructed experimental setup in Niš, Serbia, where many tests were performed (more data can be found in Bonić et al. [27]). Experimentally recorded values of the punching failure forces of the analyzed footings, V_test, are provided in Table 2, column (7). Applying the iterative procedure for calculating the punching shear capacity of column footings according to Eurocode 2 to these footings resulted in the values V_EC2(i) provided in column (8), with the ratio of experimental to calculated punching shear forces V_test/V_EC2(i) in column (9). In addition, for comparison, column (10) presents the design values of punching shear force V_EC2(ECP), determined using the diagram shown in Fig. 2 according to the European Concrete Platform (ECP, 2008), and column (11) gives the ratio of these values to the experimentally recorded punching shear forces, V_test/V_EC2(ECP). Comparing the values in columns (8) and (10), i.e. in columns (9) and (11), shows that the extensive iterative procedure and the simplified procedure using the diagram give approximately identical results.
EFFECTS OF CONCRETE COMPRESSIVE STRENGTH AND REINFORCEMENT RATIO TO THE PUNCHING SHEAR CAPACITY OF CONCENTRICALLY LOADED RC COLUMN FOOTINGS
The effects of compressive strength and reinforcement ratio on the punching shear capacity of concentrically loaded RC column footings were considered on two series of footings tested by the authors of this study according to Table 2 (Bonić et al.). In each series, all characteristics of the footings, except the one whose effect was considered, were approximately identical. For the analysis of the effects of the considered characteristics on the punching shear capacity of column footings, the footing deflection was observed as a function of the increase of the load in the footing column. Here, the footing deflection is taken as the difference between the registered soil settlements under the column and at the corner of the footing.
The first series consisted of three footings made of concrete whose compressive strengths (average values of multiple tested specimens, on a cylinder of standard dimensions) varied: fcm = 7.92 MPa (footing F6), fcm = 15.83 MPa (footing F8), and fcm = 30.37 MPa (footing F2), whereas the remaining characteristics were approximately the same. In the second series of tested footings, the reinforcement ratios used were 0.27% (footing F7), 0.48% (footing F8), and 0.91% (footing F9), whereas the other characteristics were again approximately identical. The qualitative effects of the considered characteristics on the punching shear capacity of column footings are illustrated by the diagrams in Fig. 3.
In Fig. 3(a) it can be observed that the effect of the compressive strength of concrete on the punching shear force of the footings is considerable, because the recorded punching shear forces of the footings marked F2, F6, and F8 were 1050 kN, 440 kN, and 645 kN, respectively. Such a result was expected and is in agreement with previous research (Hegger et al. [7−9]; Siburg and Hegger [15]; Simões et al. [13]). Moreover, the diagram shows that the footings with a lower concrete compressive strength (F6 and F8) exhibit much more ductile behavior under load.
In Fig. 3(b) it can be observed that the effect of the reinforcement ratio is not as prominent as the previously observed effect, with the recorded punching shear forces of footings F7, F8, and F9 being 527 kN, 645 kN, and 720 kN, respectively. This result was expected and is in accordance with previous research (Hallgren et al. [19]; Menetrey [28]). In terms of ductility, these footings showed relatively similar behavior. With the goal of determining the quantitative impact of the compressive strength and the reinforcement ratio, the stress in concrete at punching shear corresponding to the punching force registered in the experiment, ν_test, was calculated in the critical cross-section of the foundations:

ν_test = V_test · (1 − A_0/A) / (u · d)    (6)

where the designations from the previous expressions are retained. The values used in the iterative calculation procedure according to EC2 (calculation provided in columns (8) and (9) of Table 2) are used for A_0 and u. Fig. 4 shows the punching shear stress in concrete at the moment of punching, ν_test, for the footings rested on a real subsoil (according to Table 2), depending on the compressive strength of concrete (f_ck) and the reinforcement ratio (ρ_t) of the tested footings.
The conducted regression analysis, Fig. 4(a), shows that the stress in concrete at punching shear ν_test is proportional to the compressive strength of concrete with an exponent of 0.50. This corresponds with the conclusions of Hallgren et al. [19], who state that the punching shear capacity of slabs with low shear slenderness, such as column footings, is proportional to the compressive strength of concrete with an exponent of 0.76, whereas tests on thin slabs by Braestrup and Gardner (according to [19]) showed that this influence is smaller and amounts to between 1/3 and 1/2. According to Fig. 4(b), the punching shear stress in concrete at the moment of punching, ν_test, increases with the reinforcement ratio with an exponent of 0.23, which also agrees with the research of Hallgren et al. [19]. On this basis, it can be concluded that the reinforcement ratio has a smaller influence on the concrete punching shear resistance than the compressive strength of concrete. The obtained results indicate that Eurocode 2, which in the expression of Eq. (4) includes the impact of these two parameters with the same exponent (1/3), on the one hand underestimates the impact of the compressive strength of concrete, and on the other hand overestimates the impact of the reinforcement ratio on the punching shear capacity of RC footings.
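The exponents quoted above come from a log-log regression; a sketch of that fit is shown below, with placeholder arrays standing in for the Table 2 data.

```python
import numpy as np

# Placeholder values, NOT the measurements of Table 2.
fck = np.array([7.9, 15.8, 30.4])        # MPa
v_test = np.array([0.55, 0.80, 1.10])    # MPa

# Fit log(v_test) = p * log(fck) + log(c): the slope p is the exponent.
p, log_c = np.polyfit(np.log(fck), np.log(v_test), 1)
print(f"estimated exponent on fck: {p:.2f}")  # the paper reports about 0.50
```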
Based on the above, the following modification of the expression in Eq. (4) is proposed:

ν_Rd = C_Rd,c · k · (100 · ρ_t)^(1/4) · f_ck^(1/2) · (2d/a_EC2)    (7)

where the coefficient k is also modified and is calculated according to the expression k = √(200/d), whereas the other designations and the method of calculation are the same as in the expression of Eq. (4). Finally, for the footings given in Table 2, the calculation of the ultimate punching shear force according to Eurocode 2 was repeated, but with the proposed modification of Eq. (7). As the relevant critical section (a_EC2 in Eq. (7)), the section determined using the diagram provided in the European Concrete Platform − ECP [6], i.e. according to Fig. 2, was taken.
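A sketch of the proposed expression, written out as code for clarity; the geometry and material values in the example call are illustrative assumptions.

```python
import math

def v_rd_modified(d_mm: float, rho_t: float, fck_mpa: float, a_mm: float,
                  gamma_c: float = 1.5) -> float:
    """Proposed Eq. (7): exponent 1/2 on fck, 1/4 on rho_t, k = sqrt(200/d)."""
    c_rd_c = 0.18 / gamma_c
    k = math.sqrt(200.0 / d_mm)
    return c_rd_c * k * (100.0 * rho_t) ** 0.25 * math.sqrt(fck_mpa) * 2.0 * d_mm / a_mm

print(f"modified v_Rd = {v_rd_modified(175.0, 0.004, 30.0, 200.0):.3f} MPa")
```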
The obtained results are provided in Fig. 5. As previously observed, the iterative procedure and the procedure using the ECP diagram result in almost identical values. Comparing the results according to the current Eurocode 2 and to the proposed solution shows that the proposed solution provides results that are considerably closer to the experimentally registered values. For the footings F1 to F9, the proposed modified solution gives values of V_test/V_calc that are significantly less conservative (closer to 1.0) compared to the current Eurocode 2. On the other hand, for the remaining footings in Fig. 5, for which the original Eurocode 2 gives a ratio V_test/V_calc lower than 1.0 (which is an undesirable situation), the proposed modified solution achieves values equal to or greater than 1.0, which is on the safe side.

[Fig. 5: ratio V_test/V_calc for the tested footings of Table 2]

CONCLUSION

▪ Recommendations for determining the position of the critical perimeter based on the diagram proposed by the European Concrete Platform − ECP yield almost the same results as the calculation which identifies the minimum punching force inside the area bounded by the basic control section (iterative procedure). Therefore, the use of this diagram can be recommended instead of the complicated iterative procedure;
▪ The conducted regression analysis of the footings rested on real soil indicates that the punching shear capacity is affected more by the compressive strength of concrete than by the reinforcement ratio, even though Eurocode 2 takes them into account in the calculation in the same measure. It is proposed to include the compressive strength of concrete and the reinforcement ratio with exponents of 1/2 and 1/4 respectively when calculating the punching shear capacity of footings, instead of with the same exponent of 1/3 for both characteristics;
▪ The proposed modification of the Eurocode 2 calculation, which takes into consideration in a different way the impact of the size-effect coefficient (k), the reinforcement ratio (ρ_t), and the compressive strength of concrete (f_ck), provides results which are considerably closer to the experimental results in comparison to the current Eurocode 2.
Spectrum sensing with spatial signatures in the presence of noise uncertainty and shadowing
In this paper, we consider a system of cognitive radios that collaborate with each other with the aim of detecting the random waveforms emitted by licensed users. We study the problem of fusing the statistics from collaborating sensors, assuming that they send their statistics to a base station, where the final decision is made. The main contribution of this work is the derivation of a cognitive detector based on the generalized likelihood ratio test and the use of spatial signatures, a novel concept that allows the detector to capture the spatial correlation inherently embedded in measurements coming from neighboring sensors. The problem is formulated in terms of a model order detection problem, where a set of active and inactive sensors can be distinguished, thus allowing the detector to operate with a rank-reduced version of the observed covariance matrix. Since the estimation of this matrix may be a challenge in large-scale networks, we study the application of shrinkage techniques to cope with the problem of having more sensors than available observations. Finally, we analyze the performance of the proposed detection scheme in the presence of log-normal shadowing effects and noise power uncertainties, the latter due to the presence of interferences. For the proposed detector, numerical results are presented, showing a significant gain in performance compared to traditional approaches.
Introduction
Due to the rapid growth in the field of radio communication, most of the available spectrum has already become congested, and the assignment of frequencies to new services is currently a critical problem. Nevertheless, studies show that assigned frequencies are not occupied all the time, implying that the traditional way of spectrum allocation has resulted in underutilization of such a precious resource. In that sense, cognitive radio (CR) has the potential to become the solution to the spectrum underutilization problem. The CR paradigm is based upon the coexistence, within the same frequency band, of both licensed and unlicensed users, in such a way that the latter are allowed to utilize the free spectrum holes left by the former in a dynamic and opportunistic manner [1,2]. This technology is currently at the forefront of next-generation wireless systems, and regulatory as well as standardization bodies are starting to support the idea of spectrum reuse [3,4]. Among the various functions of a CR system, reliable sensing of the licensed or primary users' (PU) spectrum is certainly of paramount importance. Such spectrum sensing is performed by unlicensed or secondary users (SU), either following a single-sensor or a multisensor approach. The process of spectrum sensing with a single sensor is fundamentally limited by local impairments, such as the noise level, the signal-to-noise ratio (SNR) wall [5], and radio propagation effects such as path loss and fading experienced by this sensor, which significantly deteriorate its sensing performance [6]. In contrast, collaborative spectrum sensing relies on the combination of measurements coming from multiple neighboring sensors [7]. Therefore, collaborative approaches are able to circumvent most of the propagation impairments of single-sensor spectrum sensing due to the presence of diversity in the set of measurements being processed at the fusion center [8]. It has to be taken into account that, for the case of large-scale sensor networks, the signal of the PU will only reach a subset of sensors (i.e., those sensors located close to the PU), which will typically be closely spaced, thus forming a cluster with highly correlated observations [9]. This observation motivates our interest in exploiting the spatial information of the received signal at closely spaced sensors, with the aim of providing an additional degree of robustness to the overall network decision metric.
There have been some attempts to consider correlated measurements in the formulation of collaborative signal detection. However, many of these studies consider the presence of correlation as a deleterious effect [10,11] rather than as a form of side information that can be used to enhance the detection performance. Similarly, most of the work done on correlated detection problems just focuses on the discrimination between correlated and independent observations by exploiting the structure of the covariance matrix. For this particular problem, the study in [12] extensively discusses the use of multivariate detectors for testing the independence of random observations with the help of the generalized likelihood ratio test (GLRT) based on covariance matrices. These GLRT-based detectors typically end up with a simple quotient between the determinant of the sample covariance matrix and the determinant of its diagonal version. Recently, the covariance-based detection techniques from [12] have been widely adopted for the detection of signals with distributed sensors, especially in the context of cognitive radios [13,14]. However, these detectors typically focus on detecting the presence of correlated data as a possible indication of the presence of a signal from a PU. They do not focus, instead, on exploiting the actual correlation structure that impinges onto the sensor field when an emitting PU is present. This observation suggests that the performance can be further improved by exploiting the sensor proximity information, leading to new schemes based on the concept of location awareness [15][16][17]. For the case of collaborative spectrum sensing, prior work has demonstrated that information on the sensor position can lead to more reliable spectrum sensing, thus confirming the convenience of this information, when available [10,18].
Motivated by these facts, we propose a modified GLRT-based detector that achieves the regularization of the unknown covariance matrix with the help of spatial signatures. The concept of signatures is somewhat analogous to steering vectors in the field of array signal processing [19], and it is adopted herein as a way to capture the structure of spatially correlated measurements between neighboring sensors. Furthermore, selecting just some of the sensors of the network allows the proposed detector to operate on a rank-reduced subspace of the received signal, thus achieving a significant SNR gain. This approach, which was preliminarily introduced in our earlier work [20], is extended herein to the problem of detecting Gaussian random waveforms emitted from a PU with unknown covariance matrix. This is in contrast to the deterministic approach considered in [20], where an unknown but constant waveform was assumed to be transmitted by the PU, and where ideal propagation conditions as well as perfect knowledge of the signal parameters were also assumed. In that sense, the present contribution offers a much more realistic approach by assuming the emission of random waveforms, by including the presence of shadowing and noise power uncertainty, and by taking into account the practical problems that may arise in large-scale networks when estimating the unknown covariance matrix of the PU. The latter is indeed related to the number of observations required to avoid ill-conditioning in the estimation of this matrix, which is typically on the order of the number of sensors [21]. Therefore, detection algorithms requiring the inverse or the determinant of this matrix can no longer be applied for short observation periods. To cope with this practical problem, the present work incorporates the concept of shrinkage estimation, a method that is found to improve the stability of estimated covariance matrices with short data records [21]. Simulation results have been obtained to compare the proposed detection schemes with and without spatial signatures, as well as with and without shrinkage estimation, showing that the introduction of spatial structure and shrinkage estimation significantly improves the overall detection performance.
The remainder of the paper is organized as follows. In Section 2, the problem statement and details about the signal model are presented. Section 3 presents the structured signal model based on the concept of spatial signatures, and Section 4 introduces the proposed detection algorithm. In Section 5 we briefly discuss the shrinkage method for estimating the covariance matrix. Finally, simulation results are presented in Section 6, and conclusions are drawn in Section 7.
Problem statement
We consider herein a large cognitive radio network where both primary and secondary users coexist in the same geographical area. We assume an infrastructure-based secondary network [22], where each cell consists of a single base station (BS) working as a fusion center and K SUs working as sensors. We also assume that the sensors are deployed in the region following a uniform distribution and that the sensors and the PU remain stationary during the observation interval. The signal power emitted by the PU decays isotropically as a function of distance and is affected by a significant path loss attenuation due to the large area covered by the network, as well as by fading/shadowing effects. As a consequence, only a subset of sensors will receive power levels high enough to easily detect the presence of the PU with a given detection performance [23,24]. The rest of the sensors will typically receive extremely weak power levels, and this observation allows us to distinguish between so-called active and inactive sensors, respectively. In the process of collaborative spectrum sensing, the BS coordinates the opportunistic spectrum access of all SUs within its cell. This is done by directing the sensors to perform spectrum sensing periodically. At the end of each sensing period, all the sensors report their measurements to the BS, which makes the final decision about the presence or absence of the PU [8]. Once the final decision is made at the BS, it is broadcast back to the SUs within the cell in order to inform them about the presence or absence of the PU. Similarly to [25], we further assume that the BS knows the location of the SUs, either through the use of positioning techniques or through some calibration process.
Signal model and test statistics at the SU
In the collaborative sensing system considered herein, we assume that sensors simply measure the PU signal power on a target frequency band using an energy detector and report their sensing results to the BS [26]. This is a simplistic interpretation of collaborative sensing, which indeed covers a much wider area [27], but it allows us to concentrate on the specific problem of energy detection. Indeed, the energy detector is the simplest detector that can be constructed in practice. It uses very limited a priori information regarding the signal, since the detection is based only on the received signal power. In the sequel, we will consider an observation interval of n = 1, 2, …, N sensing periods. During the nth sensing period, every SU captures a snapshot of m = 1, 2, …, M received signal samples in order to estimate the received signal power. At the ith sensor (i.e., SU), the corresponding received samples are denoted by y_i(m; n), and two possible hypotheses arise for the spectrum sensing problem under study. On the one hand, we have the null hypothesis, denoted by H_0, which represents the case in which no PU signal is present in the received samples y_i(m; n). On the other hand, we have the signal-present hypothesis, denoted by H_1, which represents the case in which some PU signal is actually present in these samples. The signal model for these two hypotheses can be formulated as follows:

H_0: y_i(m; n) = z_i(m; n)
H_1: y_i(m; n) = g_i(m; n) + z_i(m; n)    (1)

where z_i(m; n) are the i.i.d. zero-mean samples encompassing the aggregate of random disturbances affecting each sensor, whereas g_i(m; n) are the received signal samples corresponding to the random waveform emitted from the PU. Based on these samples, the energy detector at sensor i for the nth sensing period is given by:

T_i(n) = (1/M) · y_i^T(n) y_i(n) = (1/M) Σ_{m=1}^{M} y_i^2(m; n)    (2)

where y_i(n) ≜ [y_i(1; n), y_i(2; n), …, y_i(M; n)]^T.
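As a quick illustration of Eq. (2), the sketch below computes the per-sensor energy statistic for one sensing period; the sample size and the unit-variance noise model are arbitrary assumptions.

```python
import numpy as np

def energy_statistic(y: np.ndarray) -> float:
    """Eq. (2): T_i(n) = (1/M) * ||y_i(n)||^2 for one snapshot y of length M."""
    return float(np.dot(y, y) / y.size)

rng = np.random.default_rng(0)
y_h0 = rng.normal(0.0, 1.0, size=256)   # noise-only snapshot (H0)
print(energy_statistic(y_h0))           # close to the noise power
```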
It is interesting to note that the energy detector in (2) can fairly be approximated by a Gaussian distribution by virtue of the central limit theorem (CLT), provided that M is sufficiently large. Moreover, it should also be taken into account that the overall noise in a wireless receiver is often considered to be an ensemble of various effects, including not only the thermal noise contribution but also other degradations such as the presence of interference signals from distant PUs or from other opportunistic SUs. All these random disturbances are included within the z_i(m; n) samples in (1), whose overall unknown power will be denoted herein by σ²_ε,i. For the sake of clarity, we will loosely refer to these samples as noise samples. In practice, and because of the unknown and random nature of the underlying disturbances, it is very difficult to determine the exact noise powers σ²_ε,i even if we calibrate the system [28]. In some situations, this noise power uncertainty may lead to an increase of the SNR wall, which can be understood as the minimum SNR below which a signal cannot be detected, thus hindering the overall detection process [5]. Consequently, and from a practical point of view, it is of interest to assume that σ²_ε,i is unknown. A similar statement can be made for the power being received from the PU at sensor i, which is referred to herein as P_ε,i ≜ E[|g_i(m; n)|²], and is also considered to be unknown due to the unknown location of the PU and the presence of shadowing/fading that may alter the actual received power from its nominal value.
With the above considerations, and by virtue of the Gaussian assumption in (2) provided by the CLT, the test statistic of the energy detector at sensor i can be modeled by the following Gaussian distribution [26]:

H_0: T_i(n) ∼ N(σ²_ε,i, 2σ⁴_ε,i/M)
H_1: T_i(n) ∼ N(σ²_ε,i + P_ε,i, 2(σ²_ε,i + P_ε,i)²/M)    (3)

where both σ²_ε,i and P_ε,i are assumed to remain constant during the whole observation interval of N sensing periods, and thus they can be treated as unknown deterministic parameters herein. For the sake of clarity, note that we have used the notation N(μ, σ²), for some μ and σ², to represent a Gaussian (i.e., normal) distribution with mean μ and variance σ².
Signal model and test statistics at the BS
Every sensor calculates an estimate of its received power level according to (2) and transmits this power estimate to the BS through a reporting channel. At the BS, the power estimates received from the K sensors at the nth sensing period are stacked into the (K × 1) vector x(n) ≜ [x_1(n), x_2(n), …, x_K(n)]^T, where x_i(n) stands for the noisy and attenuated version of T_i(n) after propagation through the reporting channel from sensor i to the BS [29]. Although the actual propagation effects of the K reporting channels are assumed to be unknown herein, they are considered to remain constant within the observation interval of N sensing periods. Therefore, the received signal model at the BS can still be expressed as a noise contribution under hypothesis H_0 and a signal plus noise contribution under hypothesis H_1:

H_0: x(n) = w(n)
H_1: x(n) = s(n) + w(n)    (4)

for n = 1, …, N, with w(n) ∼ N(μ_w, Σ_w) a (K × 1) vector containing the reported noise power levels at each sensor when no PU is present, whereas s(n) ∼ N(μ_s, Σ_s) is a (K × 1) vector with the PU power levels. It is important to recall here that both w(n) and s(n) are power measurements, and thus the mean vectors μ_w ≜ E[w(n)] and μ_s ≜ E[s(n)] contain the mean noise powers and the mean signal powers at each sensor, respectively, whereas the covariance matrices Σ_w and Σ_s capture the variability of the corresponding power estimates being reported by the sensors. At the BS, and because of the disturbances that may appear due to propagation through the reporting channel, we will assume that the power measurements under hypothesis H_0 are i.i.d. with some common variability σ²_w, in such a way that Σ_w = σ²_w I_K for some unknown σ²_w, with I_K the (K × K) identity matrix. Note also that both μ_s and Σ_s depend on the characteristics of the random waveform being emitted by the PU and its position with respect to the K sensors. Finally, and for the sake of clarity, we can express the signal model at the BS as Gaussian distributed:

H_0: x(n) ∼ N(μ_0, Σ_0)
H_1: x(n) ∼ N(μ_1, Σ_1)    (5)

where μ_0 ≜ μ_w and Σ_0 ≜ Σ_w under the H_0 hypothesis, whereas μ_1 ≜ μ_s + μ_w and Σ_1 ≜ Σ_s + Σ_w under the H_1 hypothesis. In practice, Σ_s will depart from a diagonal matrix, and the correlation represented by the non-diagonal elements of Σ_s will typically indicate the presence of correlated shadowing effects in the received PU signal strengths [10].
Preliminaries
For an improved sensing performance, intuition suggests that the detection rule should rely on the observations of active sensors, thus discarding observations from the rest of inactive sensors. This approach can be understood as a kind of rank reduction method, whereby removing the most noisy dimensions of the received signal subspace leads to a significant improvement of the overall signal-to-noise ratio. Moreover, since active sensors are typically located close to the PU, and also close to each other, forming a spatial cluster, this side information should also be considered in the design of the detector. It is for this reason that one of the key points of this paper is the identification of the set of active sensors, a purpose that will be achieved through the help of model order selection techniques and the spatial structure of the neighboring sensors. To this end, we propose a structured signal model based on the concept of spatial signatures. For the case of the ith sensor, its signature is a vector that contains the attenuation terms to all the K sensors of the network, as if a signal source were located at the ith sensor position. Thus, the ith signature is a (K × 1) vector h_i defined as follows:

h_i ≜ [ℓ_{i,1}, ℓ_{i,2}, …, ℓ_{i,K}]^T,  with  ℓ_{i,j} = d_{i,j}^(−β)    (6)

where ℓ_{i,j} takes into account the deterministic attenuation loss due to the distance d_{i,j} between the ith and the jth sensor locations, with β the known path loss exponent. We assume herein, similarly to [25], that the BS has complete knowledge of the sensors' positions in the network.
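The construction of the signatures can be sketched as follows; the handling of the zero self-distance on the diagonal (a unit reference distance) is an assumption made here purely so that the power law is well defined.

```python
import numpy as np

def signature_matrix(pos: np.ndarray, beta: float = 3.0) -> np.ndarray:
    """pos: (K, 2) sensor coordinates. Returns the (K, K) matrix H whose ith
    column is h_i, with entries d(i, j)^(-beta) as in Eq. (6)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, 1.0)   # assumed unit reference distance for j = i
    return d ** (-beta)        # symmetric, so rows and columns coincide

pos = np.random.default_rng(1).uniform(0.0, 100.0, size=(30, 2))
H = signature_matrix(pos)
print(H.shape)                 # (K, K), full rank in general
```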
Full-structured signal model
The signatures of all K sensors can be stacked into the so-called signature matrix H ≜ [h_1, h_2, …, h_K], which is typically a full-rank (K × K) matrix having the K signatures as columns. Based on this formulation, the PU power levels received at any of the sensors can be expressed as a linear combination of the network signatures. That is to say, the values of s(n) in (4) can be expressed as s(n) = Ha(n), thus making explicit the role played by the spatial structure of the network through the dependence on H. Then, the signal model in (4) can be rewritten as:

H_0: x(n) = w(n)
H_1: x(n) = Ha(n) + w(n)    (7)

where a(n) ∼ N(μ_a, Σ_a) is a (K × 1) vector containing the random weights of each signature onto the received signal. That is to say, the elements within a(n) quantify the importance of each of the sensor signatures in the reconstruction of the signal field emitted by the PU. Therefore, by selecting the largest weights, we are actually choosing the most relevant sensors on the basis of their physical proximity to the PU. The closer a sensor is located to the PU, the more aligned its signature vector will be to s(n), and thus the larger the weight assigned to this signature. Using this linear combination of signatures in (7), we take into account both the distances between neighboring sensors and the location of the sensors with respect to the PU, thus fully exploiting the spatial information contained within the received signals. From a statistical point of view, the only difference with respect to the conventional unstructured signal model in (4) to (5) is that now a specific structure is imposed onto both μ_1 and Σ_1, with μ_1 = Hμ_a + μ_w and Σ_1 = HΣ_aH^T + σ²_w I_K. Finally, once we have the signal model with the embedded spatial structure, the next step is to select the relevant signatures contributing to the received signal, a topic discussed in Section 3.3.
Rank-reduced structured signal model
The PU will typically appear at an unknown and random position, and it will be surrounded by a given number of L ≤ K active sensors. In these circumstances, and in order to improve the spectrum sensing detection performance, we need to select the relevant signatures of active sensors so that the rest of the K − L signatures can reasonably be ignored. In some sense, we face a detection problem where it is convenient to use a rank-reduced version of the signal model in (7). To do so, we will select the L most relevant signatures using model order selection techniques [30]. Once we select the set of L active sensors, their signatures are stacked into a truncated (K × L) matrix H_L. Similarly, the selected weights are stacked into an (L × 1) vector a_L(n), which is the reduced version of the vector a(n) in (7). The resulting rank-reduced signal model can be written as:

H_0: x(n) = w(n)
H_1: x(n) = H_L a_L(n) + w(n)    (8)

where the random weights a_L(n) continue to be Gaussian distributed, with a_L(n) ∼ N(μ_{a_L}, Σ_{a_L}). Therefore, the difference with respect to the full structured model in (7) is that now μ_1 = H_L μ_{a_L} + μ_w and Σ_1 = H_L Σ_{a_L} H_L^T + σ²_w I_K, both depending on the unknown model order L. It is important to remark that, in addition to the spatial information provided by the use of spatial signatures, the rank-reduced version of the matrix H will indeed allow us to benefit from an equivalent SNR gain by removing those subspace dimensions where the noise contribution is the dominant effect [31].
Detection algorithms
In our spectrum sensing detection problem, there are unknown parameters under both hypotheses that prevent us from adopting the well-known Neyman-Pearson detector. This obstacle is typically circumvented by adopting the GLRT approach, whereby the unknown parameters are substituted by their maximum likelihood estimates, resulting in a simple and asymptotically optimal detector [32]. The main drawback, however, occurs when the dimension of the unknown signal vector (i.e., the model order) is unknown [33]. This situation occurs in the signal model (8), where the model order (i.e., the number L of active sensors) is actually unknown, and thus we cannot use the GLRT in a straightforward manner. Instead, we need to modify the GLRT in order to determine the appropriate value of L to be used, a task that can be done using model order selection techniques [30]. In Section 4.1, we will derive the traditional detector based on the conventional GLRT, which will be used as a benchmark. Since no spatial information is considered in this detector, it will be referred to as the unstructured GLRT. Later on, in Section 4.2, we will derive an improved detector that incorporates both spatial information and the minimum description length (MDL) criterion for carrying out the model order selection. For the sake of clarity, this latter detector will be referred to as the structured GLRT.
Unstructured GLRT
In the original detection problem to be solved in (5), we need to estimate the unknowns {μ_0, Σ_0} under hypothesis H_0, as well as the unknowns {μ_1, Σ_1} under hypothesis H_1. To do so, we will assume that the BS has available the measurements of the K sensors for N consecutive sensing periods, stacked into the (K × N) matrix X ≜ [x(1), x(2), …, x(N)]. In these circumstances, the expression for the traditional or unstructured GLRT (UG) can be written as:

Λ_UG(X) = max_{μ_1,Σ_1} f(X; μ_1, Σ_1) / max_{μ_0,Σ_0} f(X; μ_0, Σ_0) ≷ γ    (9)

where γ is a threshold that determines a given probability of false alarm. Following the GLRT approach, the values of the unknown parameters required in (9) are substituted by their maximum likelihood estimates (MLE). For the unknown mean vector μ_1, its MLE can easily be found as:

μ̂_1 = x̄ ≜ (1/N) Σ_{n=1}^{N} x(n)    (10)

Regarding the unknown covariance matrix Σ_1, its MLE can be written as [12, Lemma 3.2.1]:

Σ̂_1 = Σ̂_x ≜ R̂_x − x̄ x̄^T,  with  R̂_x ≜ (1/N) Σ_{n=1}^{N} x(n) x^T(n)    (11)

It is interesting to note that the correlation matrix R̂_x and the mean vector x̄ are the sufficient statistics under hypothesis H_1. Similarly, under hypothesis H_0, the MLE of μ_0 can be obtained as μ̂_0 = x̄, and the ML estimate of σ²_w can be found as:

σ̂²_w = (1/K) · Tr(Σ̂_x)    (12)

where Tr(·) is the trace operator. Replacing all the unknowns with their estimates, and after some mathematical manipulations, the final expression for the unstructured GLRT in (9) turns out to be given by:

Λ_UG(X) = [ ((1/K) Tr(Σ̂_x))^K / det(Σ̂_x) ]^(N/2) ≷ γ    (13)

The test statistic in (13), which does not take into account any spatial information, is nothing but the traditional Mauchly's sphericity test [12, Chap. 10]. The detector operates with the full sample covariance matrix Σ̂_x, and it considers neither the relevance of active sensors nor the spatial structure as side information. A modified sphericity test has been proposed in [34,35] by exploiting the fact that the signal covariance matrix may be of low-rank dimensionality. However, the resulting detector does not really exploit the spatial structure of neighboring sensors. In that sense, the performance of this unstructured sphericity-test-based detector can be further improved by incorporating the proposed concept of spatial signatures, which acts as additional side information and allows us to select only those observations reported by active sensors. This novel feature is introduced next in Section 4.2.
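A compact sketch of the statistic in (13), in log form (a monotone transformation of the sphericity test), is given below; it assumes N > K so that the sample covariance is non-singular.

```python
import numpy as np

def unstructured_glrt_log(X: np.ndarray) -> float:
    """X: (K, N) reported power levels; returns log[(Tr(S)/K)^K / det(S)],
    which grows when the observations depart from sphericity (H1)."""
    K, N = X.shape
    xbar = X.mean(axis=1, keepdims=True)
    S = (X - xbar) @ (X - xbar).T / N        # sample covariance, Eq. (11)
    sigma2_0 = np.trace(S) / K               # H0 noise power, Eq. (12)
    _, logdet = np.linalg.slogdet(S)
    return K * np.log(sigma2_0) - logdet
```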
Structured GLRT with spatial information
As we have already mentioned in Section 3.3, the key point in the proposed rank-reduced signal model in (8) is the determination of the spatial model order L. Since L ≤ K, the detector with spatial information can operate on a reduced signal subspace by rejecting those dimensions (i.e., those spatial signatures) where the PU signal contribution is almost negligible. Therefore, some performance gain is expected compared to traditional unstructured signal detectors. The process of determining the optimal L is coupled with that of signal detection, and this leads to the following structured GLRT (SG) detector:

Λ_SG(X) = max_{l=1,…,K} Λ_SG,l(X),  with  Λ_SG,l(X) = max_{μ_{1,l},Σ_{1,l}} f(X; μ_{1,l}, Σ_{1,l}) / max_{μ_0,Σ_0} f(X; μ_0, Σ_0)    (14)

The likelihood function under hypothesis H_1 in (14) includes the unknown model order L as an additional parameter, to be determined by searching for the maximum of the GLRT over l = 1, …, K. As a result, the mean vector and the covariance matrix under hypothesis H_1 now depend on the tentative model order l, according to the rank-reduced spatial structure considered in (8). That is, for the mean vector we have μ_{1,l} ≜ H_l μ_{a_l} + μ_w, and for the covariance matrix Σ_{1,l} ≜ H_l Σ_{a_l} H_l^T + σ²_w I_K. The problem with (14) is that the inner GLRT Λ_SG,l(X) monotonically increases as a function of l, and thus the result of the overall search process is always given by the test statistic having the maximum model order l = K. This occurs because the tentative probability density functions f(X; μ_{1,l}, Σ_{1,l}) form a set of nested families. The net effect is that the test statistic will always overestimate the actual model order, thus including dimensions of the signal subspace where there is almost no signal, but only noise. This results in a reduction of the power of the detector [33]. This problem occurs whenever the number of signal components is unknown, and that is why the conventional GLRT poses some limitations in this type of nested or model-order-based detection problems. To cope with this problem, several model order selection criteria have been proposed in the literature. They are based on the incorporation of an additional penalty function, which prevents the likelihood function from increasing without bound as the model order increases [30]. Herein, we will consider the well-known MDL criterion. With the help of this selection technique, both the estimation of the true model order L and the evaluation of the GLRT can be done jointly. The detector combining the structured GLRT and the MDL can be expressed as follows:

min_{l=1,…,K} { l · log K − 2 log Λ_SG,l(X) } ≶ γ    (15)

deciding H_1 when the minimum falls below the threshold γ, where l · log K is a penalty function that prevents the GLRT statistic from monotonically increasing with increasing model orders. In (15), Λ_SG,l(X) stands for the structured GLRT statistic considering L̂ = l as the tentative model order, whose likelihood functions under both the H_1 and H_0 hypotheses will be derived in Section 4.2.1. Later on, in Section 4.2.2, we will propose an algorithm to evaluate the structured GLRT in (15) and take advantage of the available spatial information embedded in the signature matrix.
Derivation of the structured GLRT for the tentative model order L̂ = l
In this section, we derive the expression for the structured GLRT Λ_SG,l(X) required in (15), which assumes a tentative model order L̂ = l for the parameters {μ_{1,l}, Σ_{1,l}} of the likelihood function under hypothesis H_1. Bearing in mind the Gaussian nature of the received measurements at the BS, as already introduced in Section 3.3, the maximized likelihood function under hypothesis H_1 in (14) is given by:

f(X; μ̂_{1,l}, Σ̂_{1,l}) = (2π)^(−KN/2) · det(Σ̂_{1,l})^(−N/2) · exp(−KN/2)    (16)

where the reduced-rank spatial covariance matrix Σ_{a_l} is embedded in Σ_{1,l}, since Σ_{1,l} = H_l Σ_{a_l} H_l^T + σ²_{w_1} I_K. The MLE of the unknown mean vector μ_{a_l} is found as:

μ̂_{a_l} = H_l^† x̄    (17)

so that x̄_{p,l} ≜ H_l μ̂_{a_l} = P_{H_l} x̄, where P_{H_l} ≜ H_l (H_l^T H_l)^(−1) H_l^T is the projection matrix onto the l-dimensional subspace spanned by the rank-reduced spatial signature matrix H_l. The vector x̄_{p,l} is therefore the projected version of the mean vector x̄ onto the subspace spanned by the columns (i.e., signatures) of H_l. Regarding the sample covariance matrix Σ̂_{x,l} in (16), it has the following expression:

Σ̂_{x,l} ≜ (1/N) Σ_{n=1}^{N} (x(n) − x̄_{p,l})(x(n) − x̄_{p,l})^T    (18)

Next, in order to find the MLE of Σ_{a_l}, we can apply the logarithm on both sides of (16), take the derivative w.r.t. Σ_{a_l}, and equate it to zero. By doing so, we get [36, Sec. 8.5]:

Σ̂_{a_l} = H_l^† (Σ̂_{x,l} − σ̂²_{w_1} I_K)(H_l^†)^T    (19)

where H_l^† ≜ (H_l^T H_l)^(−1) H_l^T is the Moore-Penrose pseudoinverse, and σ̂²_{w_1} is given by:

σ̂²_{w_1} = Tr(P⊥_{H_l} Σ̂_{x,l}) / (K − l)    (20)

with P⊥_{H_l} ≜ I_K − P_{H_l} the orthogonal projection matrix of P_{H_l}. In (20), the variance σ²_{w_1} of the power estimates at the BS is estimated using the projected version of the observation vector x(n) onto the noise subspace, since P⊥_{H_l} x(n) essentially contains noise only. Consequently, for the overall covariance matrix Σ_{1,l} in (16) we have:

Σ̂_{1,l} = P_{H_l} (Σ̂_{x,l} − σ̂²_{w_1} I_K) P_{H_l} + σ̂²_{w_1} I_K    (21)

which can also be written as:

Σ̂_{1,l} = P_{H_l} Σ̂_{x,l} P_{H_l} + σ̂²_{w_1} P⊥_{H_l}    (22)

On the other hand, under hypothesis H_0 we need to determine the unknown parameters {μ_0, Σ_0} required by the likelihood function in the denominator of Λ_SG,l(X) in (14). Regarding the MLE of Σ_0 = σ²_{w_0} I_K, it can be obtained from σ̂²_{w_0} = (1/K) Tr(Σ̂_{x,l}), where we already used the fact that μ̂_0 = x̄. With these results in mind, we can obtain the expression for the structured GLRT with tentative model order L̂ = l as:

Λ_SG,l(X) = [ (σ̂²_{w_0})^K / det(Σ̂_{1,l}) ]^(N/2)    (23)

Substituting Σ̂_{1,l} = P_{H_l} Σ̂_{x,l} P_{H_l} + σ̂²_{w_1} P⊥_{H_l} and σ̂²_{w_0} = (1/K) Tr(Σ̂_{x,l}), and after some mathematical manipulations, (23) can equivalently be expressed as:

Λ_SG,l(X) = [ ((1/K) Tr(Σ̂_{x,l}))^K / ( det_l(Σ_l) · (σ̂²_{w_1})^(K−l) ) ]^(N/2)    (24)

where det_l(·) denotes the product of the l non-zero eigenvalues of its argument. The expression in (24) provides a closed-form expression for the structured GLRT with tentative model order L̂ = l. The main feature of this expression is that it selects the most relevant spatial signatures and then, on the basis of these signatures, reduces the rank of the measurement covariance matrix Σ̂_x. This statement can be explained by defining Σ_l ≜ P_{H_l} Σ̂_{x,l} P_{H_l} and noticing that Σ_l = P_{H_l} R̂_x P_{H_l} − P_{H_l} x̄ x̄^T P_{H_l}, which follows from the properties of projection matrices. The expression of Σ_l clearly shows that it is indeed the sample covariance matrix of a vector obtained by projecting the received observations x(n) onto the specific subspace spanned by the signatures of the active sensors. This will indeed result in an SNR gain due to the projection of the observation vector onto a subspace of reduced dimensionality.
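The projection structure of (17)-(24) can be sketched as below; the noise-subspace estimator used for σ̂²_{w_1} is one plausible reading of Eq. (20), constants and the N/2 exponent are dropped, and l < K is required so that the noise subspace is non-empty.

```python
import numpy as np

def structured_glrt_log(X: np.ndarray, H_l: np.ndarray) -> float:
    """X: (K, N) observations; H_l: (K, l) selected signatures, l < K.
    Returns (2/N) * log of the GLRT in (23), up to additive constants."""
    K, N = X.shape
    l = H_l.shape[1]
    P = H_l @ np.linalg.solve(H_l.T @ H_l, H_l.T)   # projection onto span(H_l)
    P_perp = np.eye(K) - P
    xbar_p = (P @ X.mean(axis=1))[:, None]          # projected mean, Eqs. (17)-(18)
    S = (X - xbar_p) @ (X - xbar_p).T / N           # structured sample covariance
    sigma2_1 = np.trace(P_perp @ S) / (K - l)       # noise power, Eq. (20)
    Sigma1 = P @ S @ P + sigma2_1 * P_perp          # Eq. (22)
    sigma2_0 = np.trace(S) / K                      # H0 estimate
    _, logdet1 = np.linalg.slogdet(Sigma1)
    return K * np.log(sigma2_0) - logdet1
```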
Implementation of the structured GLRT with MDL model order selection
The next step is to substitute the expression in (24) into (15) and perform the joint PU signal detection and model order selection from l = 1, …, K. To do so, we summarize the implementation of the resulting detector in the pseudocode description indicated in Algorithm 1.

Algorithm 1 The structured GLRT detector with MDL model order selection.
1. Stack the N observations into X and compute the sample mean x̄.
2. Estimate the signature weights as μ̂_a = H^† x̄.
3. Sort the entries of μ̂_a in descending order of magnitude.
4. Reorder the signature vectors in H according to the sorted μ̂_a, to get H̃.
5. Implement the detector as:
   • Initialize t = ∅ and l = 1.
   • For each l = 1, …, K:
     − Build H_l from the first l columns of H̃ and evaluate Λ_SG,l(X) according to (24).
     − Push the result of l · log K − 2 log Λ_SG,l(X) onto the vector t.
   • Decide H_1 if the minimum entry of t is below the threshold γ; otherwise decide H_0.
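Putting the pieces together, a sketch of the Algorithm 1 loop follows, reusing structured_glrt_log from the previous sketch; the N-scaling of the log-GLRT and the decision direction (H_1 when the penalized score drops below γ) are assumptions made to be consistent with (15).

```python
import numpy as np

def detect_with_mdl(X: np.ndarray, H: np.ndarray, gamma: float):
    """Joint detection and model order selection, Eq. (15) / Algorithm 1."""
    K, N = X.shape
    mu_a = np.linalg.pinv(H) @ X.mean(axis=1)     # estimated signature weights
    order = np.argsort(-np.abs(mu_a))             # strongest signatures first
    H_sorted = H[:, order]
    # MDL score l*log(K) - 2*log(GLRT_l) for each tentative order l < K
    scores = [l * np.log(K) - N * structured_glrt_log(X, H_sorted[:, :l])
              for l in range(1, K)]
    l_hat = 1 + int(np.argmin(scores))            # estimated number of active sensors
    return min(scores) < gamma, l_hat             # (decide H1?, L_hat)
```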
Improved estimation of the covariance matrix
Both the unstructured and the structured GLRT detectors presented in this paper are based on determinants of covariance matrices, which are typically estimated through the sample covariance, as in (11) and (18), respectively. Therefore, and although it is often taken for granted, a critical requirement for the GLRT detectors under study is that the sample covariance matrices must be non-singular and positive definite. To this end, we have to make sure that the number of available observations at the BS, given by N, is much larger than the number of sensors K (i.e., N ≫ K). However, in many sensor network deployments we typically have a very large K, and thus using a number of samples greater than K is a requirement that is difficult to fulfill in practice. In these circumstances, it is therefore necessary to estimate the covariance matrix with fewer samples while keeping a reasonable detection performance. Stein [37] introduced the concept of shrinkage applied to high-dimensional estimators, and he derived the striking result that the performance of the MLE can always be improved upon by shrinking with a given factor α (the shrinkage intensity). The resulting covariance estimator is well-conditioned and always positive definite, even for small sample sizes [21]. The basic principle of shrinkage estimators is to shrink the variation of the eigenvalues of the sample covariance matrix, proceeding as follows:

Σ̂ = (1 − α) · Σ̂_x + α · F_0    (25)

where F_0 is the target matrix, which is chosen to be positive definite (and therefore nonsingular) and well-conditioned, and which we assume herein to be given by F_0 = (Tr(Σ̂_x)/K) · I_K. The interested reader can find further details about the target matrix in [21]. Now, for the expression in (25), we need to choose an appropriate shrinkage intensity parameter α. In [38], the authors discuss shrinkage methods that calculate the intensity parameter on the basis of the received observations. They present what they call an oracle approximating shrinkage (OAS) estimator, which is an iterative method presented in Algorithm 2.

Algorithm 2 The oracle approximating shrinkage estimator (OAS).
1. Initialize α and δ_target.
2. Implement the shrinkage estimation as:
   • While the covariance matrix estimation error δ > δ_target, update the shrinkage estimate in (25) and recompute α from the current estimate.

In Algorithm 2, δ_target represents a specified threshold for the covariance matrix estimation error. The algorithm stops once the estimation error turns out to be less than this threshold. Once the algorithm has converged, it reaches the following stable value of the shrinkage parameter:

α_approx = [ (1 − 2/K) · Tr(Σ̂_x²) + Tr²(Σ̂_x) ] / [ (N + 1 − 2/K) · ( Tr(Σ̂_x²) − Tr²(Σ̂_x)/K ) ]    (26)

In our detection schemes, we will use α_approx in (26), the approximate value of the shrinkage parameter, in the covariance matrix estimation process indicated in (25).
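A sketch of the shrinkage estimator of (25) is given below, using the closed-form OAS intensity that the iterative Algorithm 2 converges to; the closed form is taken from the published OAS estimator and should be treated as an assumption if it differs from the exact recursion in [38].

```python
import numpy as np

def oas_shrinkage(X: np.ndarray) -> np.ndarray:
    """X: (K, N) observations. Returns a well-conditioned covariance estimate
    (1 - alpha) * S + alpha * F0 with F0 = (Tr(S)/K) * I, as in Eq. (25)."""
    K, N = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    S = Xc @ Xc.T / N
    tr_s, tr_s2 = np.trace(S), np.trace(S @ S)
    num = (1.0 - 2.0 / K) * tr_s2 + tr_s ** 2
    den = (N + 1.0 - 2.0 / K) * (tr_s2 - tr_s ** 2 / K)
    alpha = 1.0 if den <= 0 else min(num / den, 1.0)   # OAS intensity, Eq. (26)
    F0 = (tr_s / K) * np.eye(K)                        # shrinkage target
    return (1.0 - alpha) * S + alpha * F0
```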
Simulation results
The motivation of this section is to assess the performance of the proposed structured GLRT detector in (15) and (24), which takes advantage of the novel concept of spatial signatures introduced in Section 3 and whose implementation is described in the pseudocode of Algorithm 1. For the analysis conducted herein, we consider a wireless sensor network with a total of K = 30 sensors deployed in a square field. The sensors are randomly placed within the field following a uniform distribution, and we assume that the PU appears at an unknown position. We have tested the detectors considered in this paper for many different uniformly distributed topologies of K sensors, and we have found that the results exhibit similar characteristics for different random topologies.
For the signal generation, we assume a quasi-static block-fading channel in which both the PU received power and the noise power at each sensor remain constant within the observation interval of N measurements. For a given observation interval, the PU received power at sensor i is given by P_ε,i = P_0 · d_i^(−β) · 10^(X_σ/10), where P_0 is the power at a reference distance from the PU, β is the signal decay exponent with typical values from 2 to 5, d_i is the Euclidean distance between the PU and sensor i, and X_σ is the value of the log-normal shadowing. From one observation interval to the following, we allow the shadowing to vary according to X_σ ∼ N(0, σ²_X), with σ_X the standard deviation [26]. Regarding the noise power at sensor i, we assume σ²_ε,i = 10^(ε_σ/10) for all sensors, with ε_σ modeling the log-normal noise uncertainty as ε_σ ∼ N(0, σ²_σ) from one observation interval to the following [39]. At the BS, the variability of the received power measurements is given by σ²_w = σ²_T + σ²_f, where the first term represents the variability due to the sensors themselves, as indicated in (3), and the second term incorporates an additional disturbance σ²_f due to the noisy reporting links that connect the sensors to the BS.
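The signal generation just described can be sketched as follows; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
K, P0, beta = 30, 0.2, 3.0          # sensors, reference power, decay exponent
sigma_X, sigma_s = 4.0, 2.0         # shadowing and noise-uncertainty std (dB)

d = rng.uniform(5.0, 100.0, size=K)                    # PU-to-sensor distances
X_shadow = rng.normal(0.0, sigma_X, size=K)            # log-normal shadowing (dB)
P_rx = P0 * d ** (-beta) * 10.0 ** (X_shadow / 10.0)   # received PU power

eps = rng.normal(0.0, sigma_s, size=K)                 # noise uncertainty (dB)
noise_power = 10.0 ** (eps / 10.0)                     # per-sensor noise power
```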
Regarding the assessment of the detectors considered in this paper, we will analyze their performance with and without shrinkage estimation through the use of receiver operating characteristic (ROC) curves. Although the ROC curves fully characterize the performance, it is also desirable to have a single, quantitative figure of merit in order to compare different detectors. This metric is typically the area under the ROC curve (AUC), which varies between 0.5 (poor performance) and 1 (good performance). The AUC is mathematically expressed as:

AUC(T) = ∫_0^1 P_D(P_FA) dP_FA

where T represents some specific detector, P_D indicates the probability of detection, and P_FA is the probability of false alarm. For the traditional unstructured GLRT in (9), the theoretical characterization of both P_D and P_FA (as well as the associated detection threshold γ) can be determined for an asymptotically large observation interval. In that asymptotic case, closed-form expressions can be found because the statistics of the GLRT can be well approximated by a chi-squared (χ²) distribution [32, Sec. 6.5]. Unfortunately, this is not the case for the structured GLRT in (15), whose performance turns out to be coupled with that of the MDL model-order selection criterion, thus posing insurmountable obstacles to the derivation of a closed-form statistical characterization. In order to circumvent this limitation, we resort to the numerical evaluation of P_D and P_FA through the numerical computation of the ROC curve. To do so, we use the algorithm proposed in [40], which provides a computationally efficient method for determining the performance of a given detector in terms of P_D as a function of P_FA. Once the ROC curve is available, we calculate the AUC by integrating the areas of small trapezoidal bins of the ROC curve. That is to say:

AUC ≈ Σ_k (1/2) · [P_D(k) + P_D(k+1)] · [P_FA(k+1) − P_FA(k)]

where k indexes the points of the numerically evaluated ROC curve.
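The trapezoidal AUC computation reduces to a few lines, sketched below with a placeholder ROC curve in place of the simulated detector outputs.

```python
import numpy as np

pfa = np.linspace(0.0, 1.0, 51)    # sorted P_FA grid from the numerical ROC
pd = np.sqrt(pfa)                  # placeholder ROC curve, not a real result
auc = np.trapz(pd, pfa)            # sum of small trapezoidal bins
print(f"AUC = {auc:.3f}")
```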
Experiment 1: ROC curves for the detection schemes
In Figure 1, we evaluate the ROC curves of the proposed detection schemes by setting the PU transmit power to P_0 = −7 dB, so that the mean received power in the sensor field turns out to be −37 dB. In Figure 1a, we present the results in the absence of noise power uncertainty (i.e., σ_σ = 0 dB), where two conclusions can be drawn. First, the structured GLRT SG(X) clearly outperforms the unstructured GLRT UG(X) when no shrinkage is implemented. Second, when shrinkage is implemented, the performance of the unstructured GLRT is boosted and becomes close to that provided by the structured GLRT. This observation suggests that in the absence of noise power uncertainty, shrinkage has a similar effect to the rank reduction implemented via spatial signatures. Interestingly, the situation changes when noise power uncertainty appears. This can be observed in Figure 1b, where we plot the ROC curves for the case of σ_σ = 2 dB. In that case, the unstructured GLRT UG(X) is severely degraded irrespective of whether shrinkage is implemented, whereas the proposed structured GLRT SG(X) exhibits more robust and superior performance, particularly for small P_FA.
Experiment 2: Sensitivity to noise power uncertainty
In Figure 2, we compare the AUC plots of the detectors under study in order to further analyze the effects of noise power uncertainty first highlighted in Experiment 1. To do so, and for the same parameters as in the previous experiment, we now let the noise power uncertainty range from σ_σ = 0 dB to σ_σ = 12 dB. The AUC plots clearly show that the unstructured GLRT UG(X) with shrinkage estimation starts to degrade for noise power uncertainties greater than σ_σ = 3 dB. In contrast, the structured GLRT SG(X) (both with and without shrinkage) is able to cope with higher noise power uncertainties and provides on the order of a 15% to 20% improvement in terms of AUC in the range σ_σ ∈ [5.5, 8.0] dB. Indeed, the impact of noise power uncertainty is even more severe in the low SNR regime, as suggested in [39], and SG(X) is able to counteract this situation by selecting the relevant samples of active sensors, thereby increasing the effective SNR of the system.
It is also found that shrinkage estimation improves the performance of the different detection schemes, though the improvement is very small in the case of the structured GLRT SG(X), for which the use of signatures already provides the robustness required to cope with harsh working conditions. In this experiment, we have also analyzed the impact of noise power uncertainty on an energy detector at a single node. We remark that the selected sensor is located close to the PU and therefore receives the signal with high SNR. In spite of that, the performance of the single-node energy detector is severely affected by noise uncertainty, thus confirming the advantages of the proposed approach of collaborative sensing with spatial information.
Experiment 3: Sensitivity to shadowing in the channel between the PU and SUs
In this experiment, we analyze the sensitivity to the shadowing present in the channel between the PU and SUs, quantified by the parameter σ_X, for two different working conditions (i.e., low and high SNR). We first consider the low SNR scenario in Figure 3, where two different levels of noise power uncertainty are analyzed as a function of shadowing. These two cases correspond to σ_σ = 0 dB in Figure 3a and σ_σ = 5 dB in Figure 3b. As already highlighted in previous experiments, the results in the presence of shadowing also show superior performance for the structured GLRT SG(X), particularly for high noise power uncertainties, as in Figure 3b. Interestingly, the detection schemes under analysis are found to perform better as the shadow fading becomes more variable (i.e., higher σ_X). This is because of the heavy-tailed distribution of the PU received power in the presence of log-normally distributed shadow fading, which helps to improve the overall performance for large σ_X [41]. Similarly, in Figure 4, we plot the AUC curves for the high SNR regime: Figure 4a is for σ_σ = 0 dB, and Figure 4b for σ_σ = 5 dB. Here again, the use of spatial signatures improves detection performance compared to the unstructured GLRT UG(X), with or without shrinkage. Therefore, particularly for large noise power uncertainty, the superiority of the structured GLRT over any of the unstructured GLRT implementations again becomes clear.
Experiment 4: Sensitivity to the available sample support for estimating the covariance matrix
In this final experiment, we analyze the sensitivity to the observation interval length N, which has a direct impact on how accurately the covariance matrix is estimated and thus how reliable the overall GLRT detection metric is. To do so, we again consider two different noise power uncertainties, σ_σ = {0, 5} dB, as shown in Figure 5a,b, respectively. From these figures, two important conclusions can be drawn. First, the adoption of shrinkage becomes essential for very short observation intervals (i.e., when the number of measurements N is smaller than the number of sensors K). In this case, the incorporation of shrinkage prevents the ill-conditioning of the estimated covariance matrix, and this leads to a much higher AUC for both the unstructured and the structured GLRT. This can be observed in both Figure 5a,b for N < 30, since K = 30 is the number of sensors being simulated in these experiments. The second conclusion is that for high noise power uncertainty (i.e., as in Figure 5b), the structured GLRT SG(X) clearly outperforms the unstructured GLRT UG(X). This is found to be true even when the structured GLRT does not use shrinkage but the unstructured GLRT does, thus confirming the remarkable advantage of exploiting spatial information in uncertain scenarios. Moreover, the performance of the structured detector is not only better than that of the unstructured one but also increases at a higher rate as a function of N. For that reason, we can also state that including spatial information results in a much more efficient exploitation of the information contained in the available measurements.
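To illustrate why shrinkage keeps the covariance estimate usable when N < K, the sketch below implements a common linear shrinkage toward a scaled identity. This is a generic form chosen for illustration; the paper's exact shrinkage estimator and its coefficient selection may differ.

```python
import numpy as np

def shrinkage_covariance(X, rho):
    """Linear shrinkage of the sample covariance toward a scaled identity.

    X   : (K, N) array of K sensor measurements over N snapshots
    rho : shrinkage coefficient in [0, 1]; rho > 0 keeps the estimate
          well conditioned (and invertible) even when N < K
    """
    K, N = X.shape
    S = (X @ X.conj().T) / N                     # sample covariance
    target = (np.trace(S).real / K) * np.eye(K)  # scaled-identity target
    return (1.0 - rho) * S + rho * target

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))          # N = 10 < K = 30: S is rank deficient
S = (X @ X.T) / 10
print(np.linalg.cond(S))                             # huge condition number
print(np.linalg.cond(shrinkage_covariance(X, 0.2)))  # well conditioned
```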
Conclusion
In this paper, a new GLRT-based collaborative spectrum sensing scheme has been proposed. The aim has been to improve the sensing performance by exploiting the implicit spatial correlation that is present among neighboring sensor nodes. Prior information on the sensor positions has been incorporated through a novel signal model based on the concept of spatial signatures, leading to the so-called structured GLRT detector, which is able to capture the correlation among different sensors. The performance of the proposed structured GLRT detector has been compared to that provided by the conventional unstructured GLRT by means of computer simulations. In order to further improve the detection performance, shrinkage estimation has been considered in both detectors as a way to circumvent the ill-conditioning problems that arise with short observation intervals. Interestingly, for the case of benign working conditions (i.e., in the absence of noise power uncertainty and shadowing), the use of shrinkage has been found to significantly improve the performance of the unstructured GLRT, leading to results similar to those provided by the use of spatial information in the structured GLRT. Nevertheless, this similar performance between both methods no longer holds when severe noise power uncertainty and shadowing appear. In that case, the performance of the unstructured GLRT severely degrades, whereas the proposed structured GLRT is able to provide more robust and superior performance. The results obtained for harsh working conditions confirm the suitability of this novel approach compared to traditional detectors that ignore spatial information.
Endnote
a. For instance, in the case of IEEE 802.22 WRANs, sensors measure the entire 6 MHz DTV channel at the Nyquist rate during observation intervals of 1 ms, and thus a total of M = 6 × 10³ samples are typically processed per snapshot [26].
Occurrence and seasonality of internal parasite infection in elephants, Loxodonta africana, in the Okavango Delta, Botswana
Highlights
• The prevalence and density of internal parasite ova were recorded from wild elephants in the Okavango Delta.
• Coccidian oocysts, and eggs of nematode and fluke parasites, were found to be common.
• Associations were found between infection and age, sex, group size and composition, and month and year of sampling.
• Coccidia appeared to be transmitted predominantly in the rainy and flood seasons.
• Formalin appeared to adversely affect recovery of all parasite taxa after prolonged storage.
Introduction
Parasites can reduce body condition, reproductive success, and survival in their hosts (Irvine, 2006). Although parasite infections have been associated with mortality in the African elephant, Loxodonta africana (Vitovec et al., 1984;Obanda et al., 2011), research on the parasite fauna of this species is limited. More is known about parasites of Asian elephants (Elephas maximus), whose large captive population and significance to livelihoods underpin more detailed study (Lei et al., 2012). Apart from well recognised generalist taxa, most elephant-associated parasites so far described appear to be specific to either Asian or African elephants (Fowler and Mikota, 2006), suggesting that they have evolved to become host specific in the 7.6 million years since the African and Asian elephants diverged (Rohland et al., 2007). Due to the relatively limited amount of work that has been carried out on these parasites in African elephants, very little is known about their identity, occurrence, importance, life cycles and transmission dynamics.
Among African elephants, nematodes are frequently found (Kinsella et al., 2004;Fowler and Mikota, 2006;Thurber et al., 2011), with hookworms in particular reported to cause pathological lesions and haemorrhages in the bile ducts and liver, as well as the intestines (Obanda et al., 2011). The elephant-specific intestinal fluke Protofasciola robusta, likely to be an ancestral species within the Fasciolidae (Lotfy et al., 2008), has been associated with intestinal tissue damage, haemorrhage and death in free-ranging African elephants (Vitovec et al., 1984;Obanda et al., 2011). Coccidian infections, while apparently common, have not been widely associated with adverse clinical consequences (Fowler and Mikota, 2006).
This study sought to determine, by means of a coprological survey, the occurrence of and levels of infection with gastrointestinal parasites among African elephants in the Okavango Delta ecosystem, and to test for associations with potential drivers of transmission, including age, sex, group size and composition, season and year. Additionally, serial sampling of a small group of domesticated elephants at the study site was utilised to investigate the seasonality of transmission in this unusual and important part of the elephant's range.
Study site
The study was conducted in the Ngamiland Wildlife Management Area 26 concession in the Okavango Delta, Botswana (19°25'S, 22°35'E). This is a private game concession of around 180,000 hectares, used for tourism, comprising riverine forest and grasslands, which flood seasonally along with the rest of the Delta. Rainfall is concentrated between November and February, and flood levels (from prior, upstream rains) rise from March to June and then recede to September (Gumbricht et al., 2004). The study area has an estimated wild elephant population of 1350 (http://www.elephantdatabase.org/survey_aerial_total_count_strata/175, accessed 29th December 2014), as well as a small herd of domesticated African elephants used for transporting tourists on elephant-back safaris. Both populations have been subject to detailed behavioural studies in recent years (Evans and Harris, 2012; Evans et al., 2013), facilitating access to known groups and individuals for faecal sampling.
Faecal sampling
Fresh faecal samples were collected from individual free-living elephants during daylight hours (6 am-7 pm), between November 2008 and April 2012. Elephants were observed until they had defecated and had moved off to a safe distance. A sample was then taken, comprising separate aliquots from the surface and the interior of the dung bolus, to control for eggs having a heterogeneous distribution in the faeces. Only samples able to be collected within one hour of being dropped were taken. This was to avoid rapid parasite egg hatching or bolus drying, as well as disturbance and dispersion by insects such as dung beetles. Seven domesticated African elephants were kept at the study site. This group, known as the Abu herd, were used for elephant-back safaris, and enclosed at night but allowed to forage in the bush during daylight hours, as well as being walked regularly to water and on safari routes. This group therefore provided an opportunity to track temporal patterns in parasite load through longitudinal sampling, and hence reflect seasonal fluctuations in infection pressure to which wild elephants might also be exposed. Samples were collected from all seven members of the Abu herd over a shorter period (January to April, 2012), also during daylight hours. Samples were placed in plastic bags and stored in a cool box for transfer to the laboratory and processing on the day of collection.
The following information was collected for each sample: the date and time of collection, the age and sex of the elephant sampled, and the size and composition of its social group. Group composition was categorised as follows: Group 1 comprised either all females or females with males below the age of 15 years, while Group 2 comprised all males aged 15 years or more. These two categories represent the differential group living dynamics of wild African elephants. Female elephants remain in their matriarchal groups for life, while males are pushed out of these herds at the onset of puberty and may remain solitary or form groups of their own (Evans and Harris, 2008, 2012). Elephants were assigned an age based on a number of visually observed variables, including body size and tusk size and damage (Evans and Harris, 2008, 2012). The elephant population sampled has been the subject of long-term, on-going behavioural studies (Evans and Harris, 2012), and many of the observed elephants could be matched to a previously compiled identification database by observing ear markings, tusks and tail hair, and their precise age consequently confirmed. This also minimised the risk of repeat-sampling of individuals.
For each sample collected between 12th November 2008 and 20th January 2012, three grams of faeces were weighed out, stored in a 15 ml storage pot and filled to the top with 10% formalin. These samples, hereafter referred to as formalin-preserved samples (FP-samples) were stored at ambient temperature and analysed between one and 15 months after collection. For samples collected between 21st January and 11th April 2012, hereafter referred to as unpreserved samples (UP-samples), three grams were also measured out, but were stored in a domestic refrigerator at around 4°C, and analysed within 24 hours of collection.
Parasite enumeration
Nematode egg and coccidian oocyst density in faecal samples was estimated using a modified McMaster method (MAFF, 1986), with salt-sugar flotation solution (specific gravity 1.28) and a detection limit of 30 eggs per gram (epg). Briefly, 42 ml of water were added to each three gram sample, mixed thoroughly and then sieved. Two centrifuge tubes were filled with an aliquot of the sieved solution, and centrifuged for two minutes at 1500 rpm (400 g). The supernatant was then discarded and flotation solution added to the remaining sediment. The tubes were then inverted several times and a pipette was used to extract some of the mixed solution and place it in the chambers of a Fecpak slide (Fecpak Inc., New Zealand). This slide was used in preference to the standard McMaster slide because of the increased sensitivity, with one egg counted equating to 30 epg, compared with 50 epg using the standard modified McMaster method (Presland et al., 2005). The slides were left for two minutes to allow the eggs time to float to the surface before being examined under 10x objective (100 × total magnification) of a light transmission microscope. The prevalence of coccidian oocysts was recorded, and the number of nematode ova in each chamber was counted to estimate egg density.
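As a small worked example of the counting arithmetic, the helper below converts a chamber count from the slide into eggs per gram; the multiplier of 30 comes from the Fecpak sensitivity stated above, whereas the example count of 7 eggs is purely illustrative.

```python
def eggs_per_gram(eggs_counted, multiplier=30):
    """Convert a slide chamber count to eggs per gram of faeces.

    With the Fecpak slide used here, each egg counted corresponds to
    30 epg (vs. 50 epg for a standard modified McMaster slide).
    """
    return eggs_counted * multiplier

print(eggs_per_gram(7))  # 210 epg
```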
Since some parasite ova, notably fluke eggs, could be too dense to float in salt-sugar solution, a sedimentation method (MAFF, 1986) was used to assess fluke prevalence. Faecal suspension was prepared as described in the flotation procedure above and topped up with water to 200 ml, mixed and poured into an inverse conical beaker. The beaker was left for three minutes to give fluke eggs time to sink. A pipette was then used to remove approximately 2 ml suspension from the very bottom of the beaker, and transfer it to the lid of a petri dish. After adding a drop of methylene blue stain, a graduated petri dish was then placed bottom-down on top of the lid to create an even layer of sediment, and the whole examined under 40x total magnification under a dissecting microscope. The number of fluke eggs seen was recorded. Early analysis of samples revealed that nematode eggs were frequently present in sediment fractions of FP-samples, while flotation tests on the same samples were negative for nematode eggs. Thereafter, nematode eggs were examined using both flotation and sedimentation methods. Sediment was examined in a petri-dish, as above, and the presence of fluke and nematode eggs recorded separately.
Statistical methods
Individual faecal samples were categorised by age, sex, month, season (wet, dry or flood), group size and group composition. Associations between these factors and the prevalence of coccidia and fluke, and of nematodes in FP-samples only, were investigated by binary logistic regression analysis, separately for each parasite type, in order to take account of potentially confounding interactions. All factors were included initially, and the least significant removed in turn until only significant predictors remained. The level of significance was set at p = 0.05. Logistic regression was not appropriate for nematode eggs in UP-samples, since observed prevalence was 100%. Instead, the effects of the same factors on nematode egg density were investigated by multiple linear regression analysis, following log10(x + 1) transformation to stabilise the variance. Because of apparently inconsistent flotation of nematode eggs preserved in formalin, nematode egg density was analysed only for UP-samples, and prevalence of the three parasite categories analysed for UP-and FP-samples separately. Nematode egg counts from samples analysed only using flotation, before the limitations of this method were known (see section 2.3 above), were discarded from the analysis of nematode egg prevalence. Trends in egg density in the captive Abu herd over the study period were assessed using two-tailed Pearson's correlation against time. One (3 month old) individual elephant was not found to be infected with any parasite at any time and was discarded from this analysis. Analyses were conducted using SPSS software (v16, SPSS Inc, USA).
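A minimal sketch of this analysis pipeline is given below, using synthetic data and hypothetical column names (the study's actual variable coding and factor levels differed, and the original analyses were run in SPSS). It shows the binary logistic regression for prevalence and the log10(x + 1)-transformed linear regression for nematode egg density.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "coccidia_present": rng.integers(0, 2, n),   # synthetic outcome
    "nematode_epg": rng.poisson(120, n),         # synthetic egg counts
    "age": rng.integers(2, 37, n),
    "sex": rng.choice(["M", "F"], n),
    "month": rng.integers(1, 13, n),
    "group_size": rng.integers(1, 86, n),
})

# Binary logistic regression of prevalence on candidate factors; in the
# study the least significant term would be removed in turn
logit = smf.logit("coccidia_present ~ age + C(sex) + C(month) + group_size",
                  data=df).fit(disp=False)
print(logit.summary())

# Egg density: log10(x + 1) transform to stabilise the variance, then
# multiple linear regression
df["log_epg"] = np.log10(df["nematode_epg"] + 1)
ols = smf.ols("log_epg ~ age + C(sex) + C(month) + group_size", data=df).fit()
print(ols.summary())
```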
Sample size and distribution
A breakdown of samples collected by factor is given in Table 1. The median age of sampled elephants was 17 years (range 2-36), and the median group size 5 (range 1-85). A total of 61 UP-samples and 397 FP-samples were analysed, the latter stored in formalin for between 1 and 15 months before analysis. A total of 197 samples were analysed before problems with nematode egg flotation were appreciated and were excluded from the analysis of nematode egg prevalence, leaving an effective sample size of 261 for this analysis. Sampling was skewed towards males early in the study to align with behavioural studies, with more females sampled later to achieve greater balance. In addition to samples from wild elephants, 79 faecal samples were collected from the seven individuals of the Abu herd.
Parasite prevalence and density in wild elephants
Coccidian oocysts were recorded in 69% of UP-samples and 48% of FP-samples. For both sample types, prevalence varied seasonally (Fig. 1), and was significantly higher in January and/or February than in the reference month, March, which recorded intermediate prevalence (Tables 2 and 3). In FP-samples, coccidian oocysts were additionally more likely to be found in faecal samples taken in 2010 (prevalence 2008-12 = 47, 46, 65, 20, 56% respectively), and those that had been stored in formalin for less time (Table 3). Each additional month spent in formalin reduced the chance of finding coccidian oocysts using flotation by 13%. Although oocyst prevalence in males and females was very similar (48 and 49% respectively), when interaction with other factors was taken into account, oocysts were more likely to be found in samples from males than in those from females.
Nematode eggs were present in 73% of FP-samples, and were more likely to be found in elephants from larger groups and in samples that had been stored in formalin for less time (Table 4). For every additional elephant in a group, nematode eggs were 6.5% more likely to be found, and for every additional month spent preserved in formalin, they were 19% less likely to be found. Nematode eggs were present in all of the UP-samples analysed, rendering analysis of prevalence superfluous, but egg density varied. Samples from Group 1 (all-female herds or those with males aged below 15 years) had significantly higher nematode egg density than those from Group 2 (males over the age of 15) (Table 5, Fig. 2). Nematode eggs were of typical strongyle-type morphology (Fig. 3), with mean length 73 μm (range 55-90 μm) and mean width 46 μm (range 35-55 μm). Fluke eggs were present in 26% of UP-samples. Prevalence was significantly higher in females (39%) than in males (10%) (Table 6). Fluke eggs were present in 23% of FP-samples, and were more likely to be found in samples collected in 2010 and 2011 and in those from older individuals (Fig. 4), and less likely to be found after longer storage in formalin (Table 7). Overall prevalence in years 2008-12 was, respectively, 12, 11, 27, 31 and 26%. Each additional month of storage in formalin decreased the chance of finding fluke eggs in an individual sample by 17%. Fluke eggs were operculate, measured 80-110 μm in length and 50-60 μm in width (Fig. 3), and were quite different in appearance from the classic Fasciola-type eggs seen in other large mammals in the study area.
Parasite occurrence in domesticated elephants
Samples were collected from the seven members of the Abu herd between January and April 2012. The group comprised six females aged 3 months to 37 years and one male of 5 years. Over this period, a significant increase in nematode egg density was observed in two individuals (r = 0.727 and 0.759, n = 16 and 11, p = 0.001 and 0.008), with counts starting at 30 epg in both individuals and increasing steadily to 210 and 570 epg over the three-month period. At the start of the study (21st January 2012), the six members of the Abu herd (excluding a new-born calf) were all infected with coccidia. However, from the 17th of March onwards, coccidian oocysts were no longer found in any of the samples collected. No fluke eggs were detected in the captive elephants at any time.
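The trend test above is a two-tailed Pearson correlation of egg counts against time; the snippet below reproduces the computation on made-up counts (not the study data) for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative egg counts (epg) over sampling days for one elephant
days = np.array([0, 7, 14, 21, 28, 35, 42, 49, 56, 63, 70])
epg = np.array([30, 30, 60, 90, 90, 120, 150, 180, 210, 360, 570])

r, p = pearsonr(days, epg)  # two-tailed by default
print(f"r = {r:.3f}, p = {p:.4f}")
```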
Discussion
This is, to our knowledge, the most extensive coprological parasite survey of wild elephants in Botswana to date. Specific identification of the parasite ova found was not possible, and would require corroborative post mortem recovery of adult parasites from elephants, which is rarely possible, given the high level of protection accorded to these animals and the rapid disintegration of carcasses. Advances in molecular methods provide opportunities for more specific studies in the future (McLean et al., 2012). Nevertheless, coprological surveys are useful to characterise broad patterns of infection at higher taxonomic levels (e.g. Thurber et al., 2011). In the present study, wild elephants in the Okavango Delta were found to be commonly infected with nematodes, coccidia and trematodes (=flukes). The morphology of the fluke eggs found was very similar to those of Protofasciola robusta, an intestinal fluke associated with emaciation and mortality in elephants in Kenya (Obanda et al., 2011). The seasonally wet conditions in the Okavango Delta probably provide suitable conditions for the life cycles of water-dependent fluke species. Lack of fluke infection in the domesticated elephants sampled, which have a more restricted range, suggests that infective stages might be distributed patchily in the environment. Coccidia and nematode ova were found in domesticated elephants as well as at high prevalence in wild elephants, demonstrating that conditions in the Delta are conducive to high levels of parasite transmission. Considering fluke, nematode and coccidia ova together, year, month, sex, age, and group composition and size were all significantly associated with level of parasite infection.
Wild elephant samples collected in 2010 more commonly contained coccidian oocysts and fluke eggs than in other years, and fluke eggs were also more common than average in 2010 and 2011. Flood levels in the Delta were unusually high in 2010 (Tsheboeng et al., 2014), and this could have favoured parasite transmission through, variously and non-exclusively, more humid soil supporting parasite development and survival, better conditions for snail intermediate hosts of fluke, and higher host population density as a result of lower available land area. A more persistent effect might be expected for fluke than for intestinal coccidia, since flukes are generally longer-lived parasites.
Given the well-established links between climate and the transmission of many parasite taxa, it was surprising that no strong associations were found between season (rainy, flood and dry) and the prevalence or density of parasite stages in elephant faeces. However, transmission in one season can result in elevated parasite burdens in the next, given the time needed for parasite maturation, and prolonged parasite survival and propagule production. This would blur season-prevalence relationships. The prevalence of coccidia, but not of nematodes or fluke, was significantly associated with month. Oocysts were most likely to be observed in faecal samples in January and February, towards the end of the rainy season, after which prevalence declined. In the small number of serial sampled domesticated elephants, coccidian oocysts similarly disappeared from the faeces between February and March. These results suggest that the prevalence of coccidiosis is seasonal, and drops between the rainy season and the flood season. Other studies have shown high prevalence of coccidiosis in farmed livestock in the tropical rainy season (Rehman et al., 2011). Oocysts continued to be recorded in the present study at lower prevalence through the rest of the year. Parasite transmission could be enhanced by increased host density during the flood season, as elephants become concentrated on elevated land; however, there was no clearly increased prevalence at this time. In other systems in the region, e.g. antelopes in Zambia (Nalubamba et al., 2012), and in elephants in Nigeria (Mbaya et al., 2013), helminth prevalence typically peaks in the rainy season. The lack of a strong seasonal signal in helminth prevalence in the current study could be due to, among other factors, limited effects of climatic variation on transmission in this system, or parasite longevity damping fluctuations in transmission. In two of the longitudinally sampled domesticated elephants, nematode egg count increased substantially after the end of the rainy season, which would be consistent with infection from larvae that developed during the rains. However, further work is needed to characterise and explain the seasonal epidemiology of nematode infections in this system.
Male elephants were more likely to shed coccidian oocysts and less likely to shed fluke eggs than females, while nematode egg prevalence was unaffected by sex. Many mammal studies have found a male bias in parasitism, usually due to sexual dimorphism in behaviour or morphology, or by the effect of sex-specific hormones on the immune system (Zuk and McKean, 1996). If the latter effect is present in elephants, then bulls in musth, when plasma testosterone levels rise significantly (Ganswindt et al., 2010), might be expected to have increased parasite levels. However, too few musth bulls were encountered during the study to assess the effect of this heightened male hormonal state on parasite burden. A previous study in Namibia (Thurber et al., 2011) found that musth had no significant effect on parasite burden in bull elephants, suggesting that testosterone may not have a significant immunosuppressive effect in this species. Measurement of hormones in faeces may enable more detailed investigation of hormone-infection relationships in the future. Non-hormone related sexual dimorphism such as group structure, range and diet in male and female elephants may also contribute to the observed pattern of male biased coccidia infections.
Age was not associated with the prevalence of coccidia or nematode ova, nor with nematode egg density. However, fluke prevalence increased with increasing elephant age. Flukes are typically long-lived within the final host, and this pattern is consistent with gradual accumulation of flukes through life and limited host immunity. Unlike many livestock species (Armor, 1989), the elephants in this study do not appear to be acquiring immunity to parasites with age; or at least, if such immunity occurs, it is not sufficiently strong to cause a detectable decrease in infection levels in previously exposed individuals. Similarly, a study on wild elephants in Namibia found that within family groups, nematode burden increased with age (Thurber et al., 2011), and this was attributed to older elephants eating more and therefore being exposed to a greater number of parasites.
Elephant group size varied greatly in the stored sample study, ranging from two to 85 individuals, and the chance of nematode infection increased with increasing group size. A positive correlation between group size and parasite load in mammals was detected across species by meta-analysis (Cote and Poulin, 1994). The rate at which the environment is contaminated by parasite eggs is positively correlated with the number of parasitised individuals in the population (Thurber et al., 2011). As larger herds have an increased probability of including infected individuals, it would be expected that larger group sizes lead to a high environmental contamination rate, which in turn leads to higher parasite levels. The host-density effect on parasite transmission may be exacerbated by the high water levels in the Delta, which force elephant group members to cluster together on dry 'islands,' thus increasing host density even further; although, there was no observed increase in infection levels during the flood season in the present study. Members of family groups (Group 1: females and males under the age of 15) had higher average nematode egg density than those in groups of mature males (Group 2). Thurber et al. (2011) also found that members of the matriarchal group had a higher nematode burden than solitary bull elephants. This might similarly be explained by higher contamination rates of frequented range areas by larger social groups. However, such processes might be expected to act across parasite taxa, and no relationship was found between group size and composition and the prevalence of coccidian or fluke ova. Faecal egg count methodology was limited in this study due to the sinking in flotation solution of nematode eggs from elephant faecal samples after storage in formalin. This was unexpected but was overcome by changing the nematode detection method to include sedimentation as well as flotation, although this meant that only parasite prevalence, rather than density, could be estimated with confidence in stored samples. It was also found that increased time in formalin led to decreased detection of coccidia and fluke ova. It is possible that high ambient temperature adversely affects the integrity of parasite ova in faecal samples stored in formalin (Foreyt, 1986). This consideration should be borne in mind in other coprological studies of parasites in wildlife, in which prolonged storage of material is commonly used to overcome logistical barriers to immediate analysis.

Table 7: Significant predictors of the prevalence of fluke eggs in FP-samples (formalin-preserved faecal samples) from wild elephants collected between 2008 and 2012 (n = 397), using binary logistic regression. The least significant variable, group composition, was removed from the analysis. The other non-significant factors in the analysis were sex, group size, month and season.
Conclusions
Elephants in this study were found to commonly shed parasite ova in their faeces, including those of coccidia, nematodes and fluke. A wide range of factors was associated with parasite presence and density, including sex, age, group composition, group size, month and year. A significant effect of month on parasite prevalence was also found in sympatric domesticated elephants. In the case of coccidia, it appears that transmission is favoured in rainy and flood seasons. The high prevalence of fluke eggs is notable and could be due to the warm and wet conditions in the Okavango Delta. Further research is needed to establish whether internal parasites have any effect on individual fitness or population dynamics in this population, the extent to which transmission occurs between different sympatric host species, and to more fully understand the effects of climate and host biology in the epidemiology of parasite infections.
AV-CPL: Continuous Pseudo-Labeling for Audio-Visual Speech Recognition
Audio-visual speech contains synchronized audio and visual information that provides cross-modal supervision to learn representations for both automatic speech recognition (ASR) and visual speech recognition (VSR). We introduce continuous pseudo-labeling for audio-visual speech recognition (AV-CPL), a semi-supervised method to train an audio-visual speech recognition (AVSR) model on a combination of labeled and unlabeled videos with continuously regenerated pseudo-labels. Our models are trained for speech recognition from audio-visual inputs and can perform speech recognition using both audio and visual modalities, or only one modality. Our method uses the same audio-visual model for both supervised training and pseudo-label generation, mitigating the need for external speech recognition models to generate pseudo-labels. AV-CPL obtains significant improvements in VSR performance on the LRS3 dataset while maintaining practical ASR and AVSR performance. Finally, using visual-only speech data, our method is able to leverage unlabeled visual speech to improve VSR.
INTRODUCTION
Machine learning has enabled rapid advancement in fields such as speech processing. However, speech processing requires large amounts of labeled data to work well (Radford et al., 2023; Zheng et al., 2022), which is hard to acquire for the thousands of languages spoken worldwide. Semi-supervised learning aims to mitigate this challenge by using unlabeled data to learn better representations and improve performance on labeled data. Real-world unlabeled data is often multi-modal, for example, videos containing synchronized audio and visual information. In this work, we investigate whether we can use such multi-modal data in a semi-supervised pipeline to improve performance on labeled data. Multi-modal data has an additional benefit: the modalities can be complementary and provide cross-modal supervision, which influences our algorithm design.
In this work, we study audio-visual speech as multi-modal data with synchronized audio and visual input sequences. Using only the audio or the video data, we can perform two kinds of speech recognition: automatic speech recognition (ASR) from the audio channel, or visual speech recognition (VSR) from the video channel (lip-reading). However, these modalities require substantially different amounts of labeled data for training practical models. For example, with 30 hours of labeled data, we can train an ASR model which reaches around 11% word error rate (WER), while training modern end-to-end VSR models on the same amount of data is challenging: the lowest WER we achieve in our experiments is 96%. Therefore, in this work we investigate how to use the cross-modal information present in audio-visual speech to obtain better VSR performance.
Although VSR is a more challenging task than ASR, VSR still has several useful applications. VSR can be used to transcribe silent videos and to help more people communicate, for example, people with aphonia, a medical condition that causes them to lose the ability to produce voiced sounds (Shillingford et al., 2019). VSR is also useful for audio-visual speech recognition (AVSR), where both the audio and visual modalities are used to predict spoken words. The video channel helps improve performance in noisy audio conditions since it is affected less by background sounds, reverberation, and other distortion (MacLeod & Summerfield, 1987; Afouras et al., 2018a).
In this work, we build upon semi-supervised learning for ASR. So far, there have been two predominant methods: self-supervised learning (SSL) and continuous pseudo-labeling (CPL), also known as self-training (ST). SSL has two disjoint stages. In the first stage, a proxy task, such as masked reconstruction or a contrastive task, is optimized on unlabeled data. In the second stage, the model is fine-tuned on a smaller amount of labeled data (Hsu et al., 2021a; Baevski et al., 2020; Chiu et al., 2022). CPL instead learns a seed model on labeled data first and then trains the model on labeled and unlabeled data while continuously generating new pseudo-labels on the unlabeled data (Likhomanenko et al., 2021a; Manohar et al., 2021; Higuchi et al., 2021). One of the main benefits of CPL is that it has been shown to match SSL performance with fewer resources, while avoiding the two-stage pipeline by directly optimizing for the downstream task instead of using a proxy task (Likhomanenko et al., 2021a; Berrebbi et al., 2023).
SSL has been applied to audio-visual speech and has been found to decrease the amount of labeled data required to perform VSR and ASR (Shi et al., 2022a; Haliassos et al., 2023; Zhu et al., 2023; Lian et al., 2023). Further, self-training has been applied to audio-visual speech (Ma et al., 2023) and has been found to improve performance when combined with SSL (Shi et al., 2022a; Haliassos et al., 2023). However, current works are restricted in several ways: (i) the SSL pre-training and fine-tuning pipeline has two disjoint stages with different objectives; (ii) most SSL models are fine-tuned separately for each task (VSR, ASR, and AVSR), which requires 3× the number of model parameters; (iii) self-training is performed after SSL pre-training, is often done with an external ASR model that itself requires a large amount of labeled data to train, and is done as a single round instead of continuously.
In this work, we propose continuous pseudo-labeling for audio-visual speech recognition (AV-CPL), a semi-supervised method that trains an audio-visual speech recognition model on a combination of labeled and unlabeled data with continuously regenerated pseudo-labels. Our method uses the same objective throughout training and can perform ASR, VSR, and AVSR with a single model. We use the same audio-visual model for both supervised training and pseudo-label generation, mitigating the need for external ASR models. Our method can handle out-of-domain unlabeled data for self-training with a simple fine-tuning strategy on labeled data. Our approach leads to significant improvements in VSR performance on the LRS3 dataset (Afouras et al., 2018b) while maintaining practical ASR and AVSR performance compared to our baselines trained purely on labeled data. We also conduct a thorough investigation of the training configuration for audio-visual learning, including the architecture design, input stride, and output token set. Finally, we also show that our pseudo-labeling method is effective for unlabeled audio-only and visual-only data.
RELATED WORK
Continuous pseudo-labeling for semi-supervised ASR. Self-training or pseudo-labeling has been successfully applied as a semi-supervised learning method in domains such as vision (Berthelot et al., 2019), machine translation (He et al., 2020), speech recognition (Kahn et al., 2020), and speech translation (Pino et al., 2020). In these methods, a student model is trained on labeled data and is then used to generate pseudo-labels (PLs) for the unlabeled data. For speech recognition, initial methods trained new models from scratch on both the labeled and pseudo-labeled data (Kahn et al., 2020; Xu et al., 2020b), sometimes in multiple rounds. They also incorporated a language model (LM) into the PL generation process. However, LM decoding is slower than greedy decoding, and the acoustic models were shown to overfit to the text training set of the LM used for generating PLs. Recent methods such as SlimIPL (Likhomanenko et al., 2021a) and MomentumPL (Higuchi et al., 2021; Manohar et al., 2021) instead continuously train on labeled and unlabeled data while re-generating PLs, and use greedy decoding to generate PLs. To prevent the model collapse that could happen if PLs were re-generated after each training step, SlimIPL maintains a dynamic cache of unlabeled samples and PLs, while MomentumPL generates PLs with a teacher model whose weights are the exponential moving average of the student model. Inspired by these methods, AV-CPL applies continuous pseudo-labeling to multi-modal speech.
Semi-supervised learning for AVSR. The temporal synchrony between acoustic and visual speech provides opportunities for audio-visual semi-supervised learning. Initial methods focused on using external ASR models trained on speech-only datasets for pseudo-labeling unlabeled audio-visual data (Afouras et al., 2020; Ma et al., 2022) or performing knowledge distillation from the ASR to the VSR model (Li et al., 2019; Afouras et al., 2020; Ren et al., 2021). However, training an ASR model that generalizes well requires a lot of data from different domains (Likhomanenko et al., 2021b; Hsu et al., 2021b), which limits the applications to other languages. AV-CPL does not assume access to any external models and generates PLs continuously by itself while training.

Figure 1: AV-CPL trains jointly on labeled and unlabeled videos while continuously generating pseudo-labels (PLs) on unlabeled videos. The parameters of the model generating PLs, θ_{T−Δ}, are controlled through a cache or EMA (explained in Section 3.2). Audio-visual inputs are used during PL generation. Modality dropout is used during training so that the model is trained on audio-visual, video-only, or audio-only inputs to increase robustness to missing modalities.
Self-supervised learning for AVSR. Recently, SSL has been applied to audio-visual speech to improve VSR using unlabeled audio-visual data. Learning objectives for the proxy task include masked token prediction (Shi et al., 2022a;Hsu & Shi, 2022;Zhu et al., 2023) and predicting the latent representations of teacher models (Ma et al., 2021a;Haliassos et al., 2023;Lian et al., 2023). Although most of the models use a shared audio-visual transformer to process both audio and visual inputs, they usually fine-tune separately for each task (VSR, ASR, AVSR), which requires 3× the number of parameters. u-HuBERT (Shi et al., 2022a) is one exception which fine-tunes on audio-visual data and performs all three tasks. Some methods combine SSL with self-training and gain a boost in performance by using an ASR model to label the unlabeled data (Shi et al., 2022a;Haliassos et al., 2023;Zhu et al., 2023). However, the ASR model is external and trained separately from the audio-visual model, and pseudo-labeling is done only once. AV-CPL forgoes the proxy task and instead is trained for speech recognition from audio-visual inputs throughout training. It performs pseudo-labeling on unlabeled data continuously during training and only requires a single model to perform all three tasks.
SUPERVISED AUDIO-VISUAL TRAINING
At the core of our method is an audio-visual transformer (Vaswani et al., 2017; Afouras et al., 2018a). Given an input of synchronized audio and visual sequences A_{1:T} and V_{1:T}, both modalities are first processed by separate encoders, resulting in audio features f^a_{1:T} and visual features f^v_{1:T}. These features are then added to form audio-visual features: f^{av}_{1:T} = f^a_{1:T} + f^v_{1:T}. The audio-visual features are combined with a positional embedding and fed as input into the transformer, which predicts the tokens corresponding to the spoken phrase in the input. We train our models with the Connectionist Temporal Classification (CTC) objective (Graves et al., 2006). Recent audio-visual models adopt the sequence-to-sequence (S2S) framework with a transformer decoder (Ma et al., 2021b; Shi et al., 2022a). The main reason we focus on CTC models instead of S2S models is to avoid issues during pseudo-label (PL) generation due to looping and over/under-generation: it is well known that S2S models tend to generate sequences that are too short or too long and are likely to generate repeated n-grams at the end of sequences. Consequently, S2S models necessitate strategies for filtering poor PLs (Kahn et al., 2020; Gheini et al., 2023), while PLs generated by CTC models do not require filtering (Likhomanenko et al., 2021a; Higuchi et al., 2021).
By default, the model is trained for audio-visual speech recognition, which uses both audio and visual inputs. However, to increase the model's robustness to a missing modality and to facilitate VSR-only and ASR-only capabilities, we randomly apply modality dropout during training, where one modality is entirely dropped out (Neverova et al., 2015; Shi et al., 2022a). Both modalities are used as input with probability p_m. If only one modality is used, the audio features are used with probability p_a. Formally,

f_{1:T} = f^a_{1:T} + f^v_{1:T} with probability p_m; f^a_{1:T} with probability (1 − p_m) p_a; f^v_{1:T} with probability (1 − p_m)(1 − p_a). (1)

By default, p_m = p_a = 0.5. We always set p_m = p_a for simplicity, so that a lower probability means more video-only training and a higher probability means more audio-only training.
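A minimal PyTorch sketch of this fusion-with-modality-dropout step is given below; the function name and the per-utterance dropout granularity are our own illustrative choices, not necessarily the paper's exact implementation.

```python
import torch

def fuse_with_modality_dropout(f_a, f_v, p_m=0.5, p_a=0.5, training=True):
    """Fuse frame-aligned audio/video features (both [T, D]) by addition,
    randomly dropping one modality during training as in Eq. (1)."""
    if not training:
        return f_a + f_v
    if torch.rand(()) < p_m:          # use both modalities
        return f_a + f_v
    if torch.rand(()) < p_a:          # audio only
        return f_a
    return f_v                        # video only

f_a = torch.randn(150, 768)           # 3 s of audio features at a 20 ms stride
f_v = torch.randn(150, 768)           # video features duplicated to 20 ms
f_av = fuse_with_modality_dropout(f_a, f_v)
```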
CONTINUOUS PSEUDO-LABELING
Once an initial seed model is trained on the labeled data, we use it for semi-supervised continuous pseudo-labeling (CPL). With labeled videos L = {((a_i, v_i), y_i)} and unlabeled videos U = {(a_j, v_j)}, the audio-visual model M_θ is trained on both labeled and unlabeled data with continuously re-generated PLs. The following loss function is minimized during training: L(θ) = L_L(θ) + λ L_U(θ), where θ are the model's parameters, L_L(θ) is the CTC loss on the labeled data, L_U(θ) is the CTC loss on the unlabeled data with PLs as targets, and λ is a hyperparameter controlling the weight of the unlabeled data. To decouple the seed model training from the pseudo-labeling stage, the optimizer is restarted and new modality dropout probabilities p'_m = p'_a are used. To generate PLs for the unlabeled audio-visual data, samples are passed through the model without augmentation, and the model's transcripts predicted by greedy decoding are used as the PLs. The model can generate PLs using both the audio and visual data as input (AVSR), or just the audio (ASR) or visual (VSR) modality. In practice, we use both modalities (AVSR) to generate PLs since the performance is slightly better than ASR, and we want to prevent the model from over-relying on the audio input.
We propose two methods to control the generation of PLs: AV-SlimIPL (see Appendix, Algorithm 1) and AV-EMA-PL (see Appendix, Algorithm 2). AV-SlimIPL maintains a dynamic cache of unlabeled samples and PLs, as sketched below. Before CPL begins, the model is "warmed up" on the labeled data with the new modality dropout probabilities p'_m = p'_a. Then CPL begins and the cache of size C is filled with unlabeled audio-visual samples and their PLs, generated by the audio-visual model with states from different training iterations. During CPL, the model is continuously trained on samples from the labeled dataset and from the cache of unlabeled samples and PLs. The cache is updated with probability p (which controls the number of PL re-generations) by replacing some samples in the cache with other unlabeled data and their PLs generated by the current model state. This ensures that PLs are updated using newer versions of the model, which improve upon older states.
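The following toy sketch shows the control flow of such a dynamic cache. Here `generate_pl` and `train_step` are hypothetical stand-ins for real greedy-decoding PL generation and a gradient step; they are stubbed out so the loop runs as-is.

```python
import random

# Hypothetical stubs for PL generation (greedy decoding) and a train step
generate_pl = lambda model, sample: "pseudo transcript"
train_step = lambda model, *batches: None

C, p = 1000, 0.1     # cache size and refresh probability
cache = []

def cpl_step(model, labeled_batch, unlabeled_batch):
    if len(cache) < C:   # fill phase: cache PLs from evolving model states
        cache.append((unlabeled_batch, generate_pl(model, unlabeled_batch)))
        train_step(model, labeled_batch)
        return
    idx = random.randrange(C)                 # train on a cached PL sample
    train_step(model, labeled_batch, cache[idx])
    if random.random() < p:                   # refresh with the current model
        cache[idx] = (unlabeled_batch, generate_pl(model, unlabeled_batch))

for step in range(5):
    cpl_step(model=None, labeled_batch="labeled", unlabeled_batch="unlabeled")
```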
AV-EMA-PL instead generates PLs with a separate teacher model M ϕ whose parameters are updated as the exponential moving average (EMA) of the student model M θ 's parameters. Before CPL, both θ and ϕ are initialized with the parameters of the seed model. The student model is trained with regular gradient-based optimization while the parameters of the teacher model are updated as ϕ ← αϕ + (1 − α)θ, where α is a hyperparameter controlling the weight of the most recent student parameters. During an initial "warm up" phase, the student model is trained on the labeled data with the new modality dropout probabilities p ′ m = p ′ a and the teacher model's parameters are updated as the EMA of the student's parameters. Once CPL begins, the teacher model generates PLs on the unlabeled data at each iteration and continues to track the EMA of the student's parameters.
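The EMA teacher update is straightforward to implement; the sketch below uses a toy linear layer in place of the audio-visual transformer.

```python
import copy
import torch

alpha = 0.9999                              # EMA weight used in our experiments

student = torch.nn.Linear(8, 4)             # toy stand-in for the AV model
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)                 # teacher is never trained directly

@torch.no_grad()
def ema_update(teacher, student, alpha):
    # phi <- alpha * phi + (1 - alpha) * theta
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

# Called after each optimizer step on the student:
ema_update(teacher, student, alpha)
```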
AV-SlimIPL and AV-EMA-PL have different computational requirements, since the former requires a cache and the latter requires two models. Given that video frames take longer to load and require more memory than audio samples, it is easier to maintain two models instead of a cache (see Appendix A for more details). Therefore, in practice we use AV-EMA-PL for our AV-CPL experiments. However, AV-SlimIPL and AV-EMA-PL are closely related: they both perform model averaging by using previous model states to generate PLs. The α hyperparameter in AV-EMA-PL can be related to the cache size C and probability p in AV-SlimIPL as C = p/(1 − α) to provide a similar model history horizon for model averaging (either via EMA or via the cache). We use α = 0.9999 in our experiments, which corresponds to C = 10,000 and p = 1, or C = 1,000 and p = 0.1; the latter is faster to train since PLs are regenerated only every 10th training step.
Differences with audio-only CPL. AV-SlimIPL and AV-EMA-PL are inspired by audio-only SlimIPL (Likhomanenko et al., 2021a) and EMA-PL (Higuchi et al., 2021; Manohar et al., 2021) and use similar mechanisms for controlling the generation of PLs. We stress that these previous methods trained and evaluated using audio only, while our models are trained with both audio and visual inputs and perform ASR, VSR, and AVSR. Our method uses cross-modal information to generate PLs from both audio and visual inputs. Moreover, VSR is a more challenging task than ASR: our seed models' VSR performance (>60% WER) is much worse than the audio-only seed models' ASR performance (≈20% WER). Nonetheless, we are able to improve the VSR performance significantly.
EXPERIMENTAL SETUP
Following prior works on AVSR, LRS3 (Afouras et al., 2018b) is used as the labeled dataset while VoxCeleb2 (Chung et al., 2018) is used as the unlabeled dataset. LRS3 is the largest public dataset for audio-visual speech recognition in English collected from TED talks. We followed Shi et al. (2022a) to generate the following splits: 433h training set, 30h training set, 1h validation set, and 1h test set. When training the models on the 30h set, the 433h data can also be used as unlabeled data. VoxCeleb2 is a multilingual audio-visual speaker verification dataset without transcripts. While the original dataset contains more than 2,400 hours of video, we use the 1,326 hours of videos in English selected by Shi et al. (2022a). We note that the VoxCeleb2 videos are from a different distribution than those in LRS3 since they were collected from YouTube, are longer, and have more noise. The dataset statistics are reported in the Appendix, Table B1.
Following prior audio-visual semi-supervised learning setups, we use two transformer model sizes: Base and Large. The number of transformer blocks / embedding dimensions / feed-forward dimensions / attention heads for Base / Large are 12/768/3072/12 and 24/1024/4096/16 respectively. We use the CAPE positional embedding (Likhomanenko et al., 2021c). The number of parameters is 96M for Base and 315M for Large. Full training details are presented in Appendix C.
We use the videos' original frame rate of 25 fps (corresponds to 40ms stride per frame). Following Shi et al. (2022a), Dlib (King, 2009) is used to detect facial key points to extract a 96x96 region centered on the mouth. During training, we take a random 88x88 crop and flip the entire video horizontally with probability 0.5. During testing, we use the center 88x88 crop and flipping is not applied. The videos are converted to a single channel (grayscale). The videos are processed with a 3D convolutional layer (Stafylakis & Tzimiropoulos, 2017), followed by a ResNet-18 (He et al., 2016). The audio sampled at 16kHz is converted to an 80-dimensional Mel spectrogram with a stride of 10 ms and a window size of 25 ms. The model processes the spectrograms with a 1D convolution with a stride of 2 (stride is 20ms per output frame). We duplicate the video features temporally so that both modalities have a stride of 20ms. We provide an analysis of the stride in Table 2b.
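A sketch of the video-side preprocessing described above is given below, assuming Dlib-based mouth cropping to 96x96 has already happened upstream. The flip and crop offsets are sampled once per video so all frames stay consistent, and the feature duplication at the end matches video features to the 20 ms audio stride; names and the exact pipeline structure are illustrative.

```python
import torch
import torchvision.transforms.functional as TF

def preprocess_video(frames, training):
    """frames: uint8 tensor [T, 3, 96, 96] of mouth-centered crops."""
    frames = TF.rgb_to_grayscale(frames)            # single channel
    if training:
        i = torch.randint(0, 96 - 88 + 1, (1,)).item()
        j = torch.randint(0, 96 - 88 + 1, (1,)).item()
        frames = frames[..., i:i + 88, j:j + 88]    # same crop for all frames
        if torch.rand(()) < 0.5:
            frames = torch.flip(frames, dims=[-1])  # flip the whole video
    else:
        frames = TF.center_crop(frames, [88, 88])
    return frames

frames = torch.randint(0, 256, (75, 3, 96, 96), dtype=torch.uint8)
out = preprocess_video(frames, training=True)       # [75, 1, 88, 88]

# Duplicate 40 ms video features temporally to match the 20 ms audio stride
f_v = torch.randn(75, 512)                          # 3 s of video at 25 fps
f_v_20ms = f_v.repeat_interleave(2, dim=0)          # 150 frames at 20 ms
```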
We train the models with the CTC loss using character output units. We use English characters and numbers, augmented with the word boundary, apostrophe, and CTC blank tokens. We train 4-gram word-level language models (LMs) on the LRS3 30h/433h training text using KenLM (Heafield, 2011) and use them with the Flashlight beam-search decoder (Kahn et al., 2022) implemented in Torchaudio (Yang et al., 2022). Full decoding details are presented in Appendix D. We select the best model checkpoints on the validation set to evaluate on the test set and report results on both the validation and test sets. We include a discussion of the performance on the validation set in Appendix E.
AUDIO-ONLY CONTINUOUS PSEUDO-LABELING
We first conducted experiments on audio-only and video-only continuous pseudo-labeling (CPL) to confirm the effectiveness of the method on each modality before combining them. We show the audio-only CPL results in Appendix F. We re-implemented SlimIPL (Likhomanenko et al., 2021a) as the audio-only CPL method and compared it to HuBERT (Hsu et al., 2021a) as the audio-only self-supervised method, using LRS3 and VoxCeleb2 for labeled and unlabeled data. We found that SlimIPL can outperform HuBERT with a simpler pipeline (a CTC encoder model and a 4-gram LM, compared to a S2S encoder-decoder model). The results show that audio-only CPL methods can transfer well to new datasets, motivating us to perform video-only and audio-visual CPL.

Table 1: Comparison of video-only models. AV-HuBERT is a S2S encoder-decoder transformer trained from scratch on video only, while V-CPL is a CTC encoder transformer. We report either greedy decoding ("None") or beam-search decoding with an LM trained on LRS3 30h or 433h transcriptions.
VIDEO-ONLY CONTINUOUS PSEUDO-LABELING
In Table 1, we show the results of applying CPL to the video modality only (V-CPL). We use the AV-EMA-PL method without any audio input. We show the full results, including Transformer-Base and results on the validation set, in the Appendix, Table F2. Training the video-only transformer model on labeled LRS3 30h from scratch is challenging: the best WER we are able to get is around 96%. When training on labeled LRS3 433h video-only data, we obtain a more reasonable WER of 60.6%. Our results are similar to video-only AV-HuBERT trained from scratch without self-supervised pre-training, although our model is simpler and does not use a transformer decoder. When we apply CPL using 433h labeled video-only data, the 1,326h unlabeled video-only data from VoxCeleb2 improves the WER to 55.9%. These results show that it is possible to perform CPL with unlabeled silent videos even when the seed model has a relatively large WER (> 60%). We provide an ablation study for the ratio of unsupervised to supervised updates in the Appendix, Table G1. Our method achieves similar performance to video-only AV-HuBERT trained from scratch using PLs on VoxCeleb2 generated by an external ASR model (51.7%), while our V-CPL method does not use any audio input at all. These results confirm that it is possible to perform video-based pseudo-labeling without an external ASR model, motivating our audio-visual pseudo-labeling approach.
AUDIO-VISUAL MODEL DESIGN
In this section, we investigate the best architecture design and training pipeline for supervised AVSR to obtain the best seed model for audio-visual continuous pseudo-labeling. Note that the modality dropout while training the seed model is p_m = p_a = 0.5.
AV Architecture. In Table 2a, we compare audio-visual architectures. For the audio encoder, Shi et al. (2022a) propose to stack 4 spectrogram frames with an effective stride of 40ms to match the video frame rate. This is equivalent to a convolutional layer with a stride of 4 and a kernel width of 4. We tried this method (Linear) as well as a convolutional layer with a stride of 4 and a kernel width of 7 (Conv.). For the modality fusion method, the audio and visual features can be fused either by temporally concatenating the features and passing them through a linear layer (Shi et al., 2022a), or by adding the features together. We also control whether modality dropout is enabled or not. We find that the convolutional layer works better than the linear layer according to the ASR and AVSR performance. For modality fusion, addition works better with the convolutional layer, while the results for the linear layer are mixed. Modality dropout tends to make AVSR and VSR marginally worse, and ASR significantly better. Given these results, we use the convolutional layer with modality addition for all subsequent experiments.
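A minimal sketch of the two fusion variants with modality dropout is given below (illustrative; zeroing a stream is one common way to realise modality dropout, and the paper's exact mechanism may differ):

```python
import torch
import torch.nn as nn

class AVFusion(nn.Module):
    """Sketch of the two fusion variants compared above (illustrative).

    With mode="add" the features are summed; with mode="concat" they are
    concatenated along the feature dimension and projected back. p_m is
    the probability of keeping both modalities; when one is dropped, the
    audio stream is kept with probability p_a.
    """

    def __init__(self, dim: int, mode: str = "add", p_m: float = 0.5, p_a: float = 0.5):
        super().__init__()
        self.mode, self.p_m, self.p_a = mode, p_m, p_a
        self.proj = nn.Linear(2 * dim, dim) if mode == "concat" else None

    def forward(self, audio: torch.Tensor, video: torch.Tensor) -> torch.Tensor:
        # audio, video: (batch, time, dim) at matching 20 ms strides.
        if self.training and torch.rand(1).item() >= self.p_m:
            if torch.rand(1).item() < self.p_a:
                video = torch.zeros_like(video)   # keep audio only
            else:
                audio = torch.zeros_like(audio)   # keep video only
        if self.mode == "concat":
            return self.proj(torch.cat([audio, video], dim=-1))
        return audio + video
```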
Token Set. Two common token sets in AVSR are characters or subwords. Subwords are longer than characters and typically work better with a larger input stride. We first compared different tokens and strides for the audio-only and video-only supervised models in Table G2 and Table G3 of the Appendix. We follow Shi et al. (2022a) to construct unigram-based subwords with a vocabulary size of 1k (Kudo, 2018). We found that the audio model works best with characters and a stride of 20ms, while the video model works better with characters when performing CPL. In Table 2b, we compare different tokens and strides for the AVSR supervised model, where we observe trends consistent with the audio-only and video-only results. When using the AVSR model for ASR, the best results are obtained using characters and a stride of 20ms. For VSR, subwords outperform characters, and using a stride of 20ms works better than a stride of 40ms with characters. Even though the 20ms stride contains the same visual features as the stride of 40ms (duplicated), the model has more time slots to predict the correct tokens. Finally, AVSR performance is better with a stride of 20ms, and the final WER using characters and subwords is similar (9.6% vs 9.7%). Given these results, we use characters and a stride of 20ms to retain the best ASR and AVSR performance, which is useful for generating PLs on unlabeled videos.
Modality Pre-Training. We observed that it was difficult to train the audio-visual model jointly on both modalities from scratch. The VSR performance plateaued more rapidly than the AVSR and ASR performance. Moreover, the AVSR performance was usually worse than the ASR performance, and the ASR and VSR performance were usually worse than the single-modality baselines. To remedy this, we propose to pre-train the model on a single modality, and then start the training on the joint modalities with modality dropout. This can be viewed as a simple schedule on the modality dropout with p_m = 0, p_a = 1 at the beginning of training and arbitrary p_m = p_a later. In Table 2c, we show the results for training jointly from scratch, followed by the results of training on only one modality. Next we show the results of training jointly when initialized from the model trained on one modality only. We find the best ASR (2.6%) and AVSR (2.6%) performance when the model is pre-trained on audio only, while the VSR (67.0%) performance nearly matches the best result. The AVSR performance of the model initialized from video-only pre-training is much worse (23.5%). Therefore, we first pre-train the model on audio-only data, and then train the model jointly on both modalities with modality dropout. With this pipeline, the AVSR performance matches the ASR performance, while the ASR and VSR performance are similar to the models trained separately on each modality (ASR: 2.6% vs 2.3% and VSR: 67.0% vs 65.0% for Transformer-Base). We show the results of these experiments for the Base model on 30h, as well as the Large model on 433h/30h, in Appendix G. We note that such pre-training is less necessary for the Large model on 433h, which shows that it is easier to learn from both modalities given enough data and parameters.
AUDIO-VISUAL CONTINUOUS PSEUDO-LABELING
Once we train supervised audio-visual models on 433h or 30h of labeled LRS3 videos, we apply CPL and use the models to continuously generate PLs during training on unlabeled videos. While the modality dropout when training the seed model is p_m = p_a = 0.5, different modality dropout probabilities p'_m = p'_a during CPL on unlabeled videos create a trade-off between ASR, AVSR, and VSR performance, as shown in Table 2d. As the probability of using both modalities p'_m decreases, the model is trained on more video-only data; VSR performance consistently improves down to the lowest p'_m of 0.05, compared to the baseline trained without unlabeled videos. However, ASR and AVSR performance gets worse as p'_m decreases, with the best performance at the initial modality dropout rate of 0.5. Given these observations, we present the main results using 433h of labeled LRS3 videos in Table 3 with both 0.5 and 0.1 modality dropout, where 0.5 dropout obtains the best ASR and AVSR results, while 0.1 dropout obtains nearly the best VSR performance without a significant decrease in the ASR and AVSR performance. We present the results on 30h of labeled LRS3 data in Table 4 with 0.1 modality dropout to focus on improving the VSR performance.
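To make the training loop concrete, one AV-EMA-PL iteration can be sketched as follows (PyTorch-style pseudo-Python, not the paper's Jax implementation; `model`, `ctc_loss`, `optimizer`, `unlabeled_loader`, and `greedy_decode` are assumed to exist):

```python
import copy
import torch

teacher = copy.deepcopy(model)   # phi <- theta
alpha = 0.9999                   # EMA decay, as in Appendix C

@torch.no_grad()
def ema_update(student, teacher, alpha):
    # phi <- alpha * phi + (1 - alpha) * theta
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1 - alpha)

for audio, video in unlabeled_loader:
    with torch.no_grad():
        # greedy CTC decode: argmax, collapse repeats, drop blanks
        pl = greedy_decode(teacher(audio, video))
    loss = ctc_loss(model(audio, video), pl)   # train student on PLs
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(model, teacher, alpha)
```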
In Table 3, we compare AV-CPL to other semi-supervised methods using 433h of labeled data. We also show supervised audio-visual methods trained with non-public data. With modality dropout of 0.1 during CPL, our method is able to significantly improve the VSR performance compared to the baseline trained only on the labeled data (58.6% → 45.3%) while maintaining near-optimal ASR (3.0%) and AVSR (3.4%) performance. Compared to the video-only CPL results in Table 1 (60.6% → 55.9%), the best VSR performance from AV-CPL (45.4%) is better by 10.5% absolute WER, which shows the advantage of using both audio and video inputs for pseudo-labeling, as opposed to using video-only inputs. With modality dropout of 0.5 during CPL, the improvement on VSR is not as large (58.6% → 47.4%), but the ASR and AVSR performance is improved over the baseline trained only on the labeled data (ASR: 2.6% → 2.3%, AVSR: 2.6% → 2.2%). We show the full results of our models on the validation set and with greedy decoding in the Appendix, Table H1.
Our best result for ASR (2.2%) and AVSR (2.0%) is close to the SSL state-of-the-art (1.4% and 1.2%), while our best result for VSR (45.3%) is worse than the state-of-the-art (27.2%). However, our method has several major advantages compared to the SSL methods. Our models are trained jointly on audio-visual data and can perform AVSR, VSR, and ASR with a single trained model, while the SSL models are fine-tuned separately for each task and require 3x more parameters to accomplish the same number of tasks (except for u-HuBERT (Hsu & Shi, 2022), which is also fine-tuned on audio-visual data and can perform all three tasks). Moreover, our models use only an encoder with beam-search decoding and a 4-gram LM while others use both an encoder and a decoder, which makes the total number of parameters of those models up to 1.5x the encoder size 2 .
In Table 4, we compare AV-CPL to other methods using 30h of labeled data. Although directly performing AV-CPL on the combination of LRS3 and VoxCeleb2 works well and we obtain our best ASR (6.6%) and AVSR (6.4%) performance, we find that performing AV-CPL with VoxCeleb2 first followed by AV-CPL with LRS3 obtains better VSR performance, thus alleviating the domain mismatch between labeled and unlabeled data. AV-CPL significantly improves VSR performance compared to the baseline trained only on the labeled data (87.0% → 56.7%) and maintains practical ASR and AVSR performance. While training video-only models on just 30h of labeled videos resulted in >95% WER and thus video-only CPL was not possible, AV-CPL uses the seed model's strong AVSR performance (9.7%) to generate good PLs and confirms the advantage of multi-modal data. Moreover, our method performs all three tasks with one model, while all previous SSL methods presenting results on 30h labeled videos require a separate model for each task. We show the full results of our models on the validation set and with greedy decoding in the Appendix, Table H2.
CONCLUSION
We introduced audio-visual continuous pseudo-labeling for multi-modal semi-supervised learning. Our audio-visual models continuously generate pseudo-labels during training on unlabeled videos, which leads to significant improvements in VSR performance while maintaining practical ASR and AVSR performance. Our method uses a single objective for speech recognition throughout training and can perform ASR, VSR, and AVSR with a single model. For future work, we would like to apply our method to more languages, especially since our method does not require external ASR models to generate pseudo-labels.
2 For example, RAVEn's Transformer-Base decoder has half the parameters of the encoder.
ETHICS STATEMENT
The data used in this paper are publicly available for research purposes and were used under the following licenses: Creative Commons BY-NC-ND 4.0 license, Creative Commons Attribution 4.0 International License, and the TED terms of use. The datasets may have biases regarding racial, age, and gender attributes, which should be considered before deploying any models trained on them.
REPRODUCIBILITY STATEMENT
We provide implementation details and the full pseudo-code of our proposed method in the main paper and in the Appendix. We used datasets that are publicly available and include details such as dataset statistics and a discussion about performance on the validation set in the Appendix. We report all of our results on the validation set for transparency and to make reproduction easier.

A AV-SLIMIPL VS AV-EMA-PL

Algorithm 2 (AV-EMA-PL), excerpt:
  ▷ Copy teacher weights from student.
  for M steps do   ▷ Begin CPL; "warm up" phase with new modality dropout.
    - Train M_θ on labeled audio-visual data L = {(a_i, v_i), y_i} with modality dropout p'_m = p'_a for 1 step;
    - Update teacher weights: ϕ ← αϕ + (1 − α)θ
  end
  repeat
    4. Train M_θ on L with augmentation and modality dropout p'_m = p'_a for N_L updates;
    5. for N_U updates do
       - Draw a random batch (a', v') ∈ U and generate its PL ŷ' by M_ϕ(a', v') with greedy decoding to form a batch B = {(a', v'), ŷ'};
       - Train M_θ on batch B with augmentation and modality dropout p'_m = p'_a for 1 update;
       - Update teacher weights: ϕ ← αϕ + (1 − α)θ
    end
  until convergence;

AV-SlimIPL vs AV-EMA-PL. The pseudo-code for AV-SlimIPL and AV-EMA-PL is shown in Algorithm 1 and Algorithm 2 respectively. AV-SlimIPL requires a data structure for maintaining the cache. The fastest method for doing this is to store the unlabeled samples and pseudo-labels in CPU memory. While this is practical for the original SlimIPL (Likhomanenko et al., 2021a), which only trains with audio, it is infeasible for video with cache sizes C > 100 due to the large size of the video frames. For example, a 1s spectrogram contains 80 × 100 = 8,000 values, while 1s of single-channel video contains 96 × 96 × 25 = 230,400 values. Instead of keeping the samples in memory, a workaround is to maintain a mapping of unlabeled sample IDs in the dataset to their PLs. However, this requires loading the unlabeled data twice for each training step on unlabeled data: once for pseudo-labeling and once for training on the unlabeled sample. Loading video frames is significantly more time-consuming than loading audio due to the larger file sizes. In comparison, AV-EMA-PL only requires loading the unlabeled data once, since the teacher model is used to generate PLs each time the student model is trained on a batch of unlabeled data. This requires keeping two copies of the model parameters; however, we find this to be easier on the memory and CPU thread consumption at the expense of being slightly slower than AV-SlimIPL due to the re-generation of PLs at every iteration. Therefore, we focused on AV-EMA-PL for our AV-CPL experiments.

B DATASET STATISTICS

Table B1 shows the number of samples and the length statistics for the sequences in the LRS3 and VoxCeleb2 dataset splits. The VoxCeleb2-English video split is provided by Shi et al. (2022a) according to an off-the-shelf English ASR model. We remove samples longer than 20s to ease the computational complexity.
C OPTIMIZATION DETAILS
We train all of the models for up to 300k-400k steps on 8 A100 GPUs with 80GB of memory. Video samples with similar lengths are batched together such that the maximum number of frames is 5,680 frames (227s) per GPU. We apply SpecAugment (Park et al., 2019) during training to the input spectrograms with the following parameters: two frequency masks with frequency parameter F = 30, ten time masks with time mask parameter T = 50, and maximum time-mask ratio p = 0.1. When fine-tuning on the LRS3 30h training set, we use the same parameters except that we reduce the number of time masks to two, since the videos are shorter on average. We use the AdaGrad optimizer (Duchi et al., 2011) with a learning rate of 0.03. The learning rate is warmed up for 64k steps and then held constant at 0.03 until 200k steps are reached. Then the learning rate is reduced by a factor of 2 every 50k updates if the WER does not improve on the validation set. Dropout and layer drop (Fan et al., 2020) are set to 0.1 during supervised training and CPL. Gradients are clipped to a maximum norm of 1.0. For AV-CPL and V-CPL experiments, we use M = 5k warmup steps and α = 0.9999. For the audio-only SlimIPL experiments, we use a cache size of 500 and M = 20k warmup steps. Our implementation is in Jax (Bradbury et al., 2018).
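For concreteness, the learning rate schedule just described can be sketched as below (illustrative; in practice the number of halvings is driven by the validation WER):

```python
def learning_rate(step: int, base_lr: float = 0.03, warmup: int = 64_000,
                  hold_until: int = 200_000, halvings: int = 0) -> float:
    """Sketch of the schedule above: linear warmup to 0.03 over 64k steps,
    constant until 200k steps, then halved every 50k updates when the
    validation WER stalls (`halvings` counts those reductions)."""
    if step < warmup:
        return base_lr * step / warmup
    if step < hold_until:
        return base_lr
    return base_lr / (2 ** halvings)
```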
D LANGUAGE MODEL INFERENCE
We train a 4-gram word-level language model on the LRS3 text using KenLM (Heafield, 2011). We use the Flashlight beam-search decoder (Kahn et al., 2022) implemented in Torchaudio (Yang et al., 2022) to integrate the language model. The perplexity on the LRS3 test set using the language model trained on the 433h training set was 92.5 excluding out-of-vocabulary (OOV) words and 94.0 including them. The perplexity on the LRS3 test set using the language model trained on the 30h training set was 112.2 excluding OOV words and 122.4 including them. We use the LRS3 text to construct a lexicon file which contains 51,292 words. We tuned the LM weight among {0, 1, 2, 4, 8} and the word insertion penalty among {±4, ±2, ±1, 0} using grid search on the validation set and selected an LM weight of 2 and a word insertion penalty of 0. We use a beam size of 1,500. We use the same LM decoding hyperparameters for all models.
E VALIDATION SET DISCUSSION
LRS3 (Afouras et al., 2018b) does not provide a validation set, therefore Shi et al. (2022a) randomly selected 1,200 samples (about 1h) from the 30h training set as the validation set. Several works since then have followed this setup (Haliassos et al., 2023; Zhu et al., 2023; Lian et al., 2023; Hsu et al., 2021a); however, so far no work has reported the performance of their final models on the validation set, except for an AV-HuBERT VSR ablation study (Shi et al., 2022a). We find it important to report the results on the validation set since the hyperparameters are tuned on the validation set with the test set held out until the final decoding. In most scenarios, the performance on the validation set is better than the performance on the test set. However, for ASR, performance is better on the test set than on the validation set when using characters as the output units (Table G2). One interesting observation is that for VSR, the performance on the validation set is much better than the performance on the test set (Table F2), regardless of whether characters or subwords are used as the output units (Table G3). In some cases, the performance on the validation set is more than 20% absolute better than on the test set. Shi et al. (2022a) also report better VSR performance on the validation set compared to the test set by 9% absolute WER (Table D.1). Moreover, better performance on the validation set does not reliably indicate better performance on the test set. For example, the video-only V-CPL Base and Large models achieve 37.2% and 27.5% WER respectively on the validation set (Table F2), which is a significant difference, but they achieve 55.6% and 55.9% WER respectively on the test set, which is practically the same result. Upon further investigation, we found that the transcriptions for 1,044 of the 1,200 samples in the validation set are exact substrings of samples in the training set, while only 165 of the 1,321 samples in the test set are exact substrings of samples in the training set, which could potentially explain the discrepancy in performance between the sets and causes concern about over-fitting to particular sequences. Another reason could be that the test set may have more challenging visual conditions; for example, the test set may have faces shot at large angles, which would make VSR harder (Shillingford et al., 2019).

F AUDIO-ONLY AND VIDEO-ONLY CONTINUOUS PSEUDO-LABELING

In Table F1, we compare audio-based semi-supervised learning methods: HuBERT (Hsu et al., 2021a) as the SSL method and SlimIPL (Likhomanenko et al., 2021a) as the CPL method. We trained the SlimIPL method ourselves on LRS3 following the original model and hyperparameters. We report both the greedy and LM decoding results on both the LRS3 validation and test sets. We use the LRS3 30h training set as the labeled data, and either use the LRS3 433h training set as the unlabeled data or the combination of the LRS3 433h training data and VoxCeleb2 1,326h training data as unlabeled data. Comparing the supervised baselines, our model is able to match or outperform the reported state-of-the-art performance using a simple pipeline (an encoder-only transformer with CTC loss, compared to joint CTC and cross-entropy loss with a S2S encoder-decoder transformer). Comparing the semi-supervised methods, we find that SlimIPL can exceed HuBERT's performance.

With 30 hours of labeled data and 433h of LRS3 unlabeled data, SlimIPL achieves 3.1% WER compared to HuBERT's 4.5% WER. Although directly performing CPL on the combination of LRS3 and VoxCeleb2 unlabeled data performs well, we find that performing CPL first on VoxCeleb2 and then on LRS3, followed by fine-tuning on the 30h labeled data in LRS3, works better and alleviates the domain mismatch between the labeled and unlabeled data. After these rounds of training on a total amount of 1,759h of unlabeled data from LRS3 and VoxCeleb2, SlimIPL achieves 3.0% WER compared to HuBERT's 3.2% WER. These results show that audio-only CPL methods transfer well to new datasets and are competitive with SSL methods, even with a simpler pipeline.
In Table F2, we show the full results of video-only continuous pseudo-labeling (V-CPL), including results with the Base model and results on the validation set. Our Base models achieve similar performance to the Base video-only AV-HuBERT trained from scratch without self-supervised learning, although our models use only an encoder with beam-search decoding and a 4-gram LM instead of a S2S encoder and transformer decoder. Applying V-CPL to the Base model, the WER with LM decoding is improved to 55.6%, which is even better than the Large model (55.9%). However, the Large model's greedy decoding performance (63.7%) is better than the Base model's (61.0%).

G ABLATION STUDIES

In Table G1, we study λ = N_U / N_L, the ratio of the number of unsupervised to supervised updates during video-only CPL (V-CPL). We find a ratio of 1/1 to work best in most cases. We therefore adopt this ratio for the video-only and audio-visual CPL experiments.
In Table G2, we compare different combinations of output tokens and strides for the supervised ASR models (Likhomanenko et al., 2021a). We follow Shi et al. (2022a) to construct unigram-based subwords with a vocabulary size of 1k (Kudo, 2018). We use 433h of labeled audio from LRS3 and the Transformer-Base model. The audio encoder is a convolutional layer with a kernel width of 7. Prior work keeps the video's native stride of 40ms and stacks 4 audio spectrogram frames to match the video frame stride (Shi et al., 2022a). However, in Table G2, we show that performance is always better with a 20ms stride using either characters or subwords as the output token. The best performance is obtained with character tokens and a 20ms stride. In Table G3 we compare characters to subwords as the output unit for the video-only model. We use the video's native stride of 40ms. Although subwords achieve better performance when training purely on labeled data, characters achieve significantly better performance when performing pseudo-labeling with unlabeled data (55.6% vs 60.6%).
We proposed to pre-train the audio encoder for supervised AVSR according to the results in Table 2c. We show the full results of such pre-training for the Transformer-Base model trained on 433h of labeled data, including results on the validation set and results with greedy decoding in Table G4. We show the results of these experiments for the Large model on 433h in Table G5, as well as the Base model on 30h in Table G6 and the Large model on 30h in Table G7. We note that such pre-training becomes less necessary for the Large model on 433h since the ASR, AVSR, and VSR performance is nearly the same both with and without pre-training, which shows that it is easier to learn from both modalities given enough data and representational power.
H AV-CPL FULL RESULTS
We show the full results of AV-CPL using 433h and 30h labeled LRS3 data, including results on the validation set and with greedy decoding, in Table H1 and Table H2.

Table G4: AVSR modality pre-training ablation with labeled LRS3 433h and Transformer-Base. We report greedy ("no LM") and beam-search decoding ("w/ LM") with a language model (LM) trained on 433h of LRS3 transcriptions.

Table G5: AVSR modality pre-training ablation with labeled LRS3 433h and Transformer-Large. We report greedy ("no LM") and beam-search decoding ("w/ LM") with a language model (LM) trained on 433h of LRS3 transcriptions.

Table G6: AVSR modality pre-training ablation with labeled LRS3 30h and Transformer-Base. We report greedy ("no LM") and beam-search decoding ("w/ LM") with a language model (LM) trained on 30h and 433h of LRS3 transcriptions.

Table G7: AVSR modality pre-training ablation with labeled LRS3 30h and Transformer-Large. We report greedy ("no LM") and beam-search decoding ("w/ LM") with a language model (LM) trained on 30h and 433h of LRS3 transcriptions.

Table H1: AV-CPL main results on LRS3 433h labeled videos reported on LRS3 val and test sets. The seed models use modality dropout p_m = p_a = 0.5. We report greedy ("no LM") and beam-search decoding ("w/ LM") with a language model (LM) trained on 433h of LRS3 transcriptions.

Table H2: AV-CPL main results on LRS3 30h labeled videos reported on LRS3 val and test sets. The seed models use modality dropout p_m = p_a = 0.5 while the AV-CPL models use modality dropout p'_m = p'_a = 0.1. We report greedy ("no LM") and beam-search decoding ("w/ LM") with a language model (LM) trained on 30h and 433h of LRS3 transcriptions.
Mabuia wirzi Roux, 1925 (Squamata: Scincidae), an overlooked synonym of Dasia olivacea Gray, 1839, with notes on the synonymy of Dasia olivacea
Almost a century ago, the Swiss herpetologist Jean Roux described a new skink species, Mabuia wirzi, from a single specimen from Pulau Nias, an island on the west coast of Sumatra in Indonesia (Roux 1925). The specimen was part of a small collection of reptiles and amphibians made on Nias by the ethnologist Paul Wirz (29.v.1892-1955.i.30), who worked on Nias in 1925 and 1926 (Wirz 1928, 1929). Roux's paper was published in October 1925, and hence the specimen would have come from Wirz's first expedition, and would have been only recently preserved at the time of description.
Apart from the holotype being identified by registration number in a list of the type specimens of lizards in the Naturhistorisches Museum Basel (NHMB; Kramer 1979), the species does not appear to have been mentioned again in the literature. The assignment of the species to the genus Mabuia (a commonly used mis-spelling of Mabuya Fitzinger, 1826) by Roux suggests that it would now be placed in the genus Eutropis Fitzinger, 1843, to which most other south-east Asian skinks formerly in Mabuya are now assigned (Mausfeld et al. 2002; Mausfeld & Schmitz 2003; Karin et al. 2016). Roux stated that the species was very similar to Mabuia multifasciata (Kuhl, 1820) (now Eutropis multifasciata), reportedly differing from that species only in the lack of a postnasal scale. The presence of a postnasal scale has been considered invariant in Eutropis multifasciata (Boulenger 1887; de Rooij 1915; Smith 1935; Auffenberg 1980; Grismer 2011), suggesting that Roux was correct in considering the two species distinct. We have recently had the opportunity to examine photographs of the holotype of Mabuia wirzi (NHMB 8957) provided by Urs Wuest and Edi Stöckli, along with some notes on the holotype made by Allen Greer during a visit to the Basel collection in the early 1980s. These indicate that the species is not a Eutropis, but is instead conspecific with Dasia olivacea Gray, 1839, of which it is a junior synonym.
Of the character states described by Roux (1925) for Mabuia wirzi, the following are taxonomically important: supranasals present, separated by contact of rostral and frontonasal; prefrontals in median contact; supraoculars four, the first two or three in contact with the frontal; supraciliaries 6-7; interparietal completely separates parietals; one pair of nuchals; lower eyelid scaly; ear smaller than eye, oval, lacking lobules on anterior border; postnasal absent; fifth supralabial widest, and located below the eye; 30 midbody scales; anterior dorsal scales smooth to slightly striate; dorsal scales posteriorly on body tricarinate, occasionally quadricarinate, more weakly tricarinate on tail; ventral scales smooth; scales on limb dorsum tricarinate to quadricarinate, those on limb venter smooth; adpressed limbs overlap, hind limb not reaching elbow of front limb; hind limb about 75% of axilla-groin interval; subdigital lamellae smooth, 18 under fourth toe. Coloration uniform brown-grey dorsally with some dark macules on supraoculars, frontoparietals, parietals and nuchals; venter uniform light green; limbs yellow below.
To these, the following additional taxonomically important characters were listed in Greer's notes or are visible on photographs: supranasals widely separated; primary temporal single; secondary temporals two, lower overlapped by upper; upper secondary temporal and nuchal scales separated by 1L/2R intercalated scales; ear very small, about 2-2.5x diameter of nostril; postmental contacting first two infralabials on each side; only first pair of chin shields in median contact; three enlarged glandular scales on heel of pes.
For measurements, Roux (1925) only provided snout-vent length (96 mm) and tail length (100 mm, apparently incomplete as the tail is bifid, requiring regeneration to have occurred). Greer's notes record a snout-vent length of 99 mm, fore limb length 27 mm, and hind limb length 34 mm for the type.
Together, these characters almost entirely match Dasia olivacea, as defined by de Rooij (1915), Smith (1935), Taylor (1963), Inger and Brown (1980), Grismer (2011) and Harikrishnan et al. (2012), and the holotype of Mabuia wirzi (Fig. 1) closely resembles that species. Particularly important in identifying Mabuia wirzi as a Dasia species are the enlarged glandular heel scales, a diagnostic feature of that genus (Greer 1970) that is lacking in Eutropis, although previous definitions of Dasia record only two enlarged heel scales for species in the genus (Greer 1970; Karin et al. 2016). The simple temporal configuration of one primary temporal, not reaching the parietal, and two secondary temporals with the upper overlapping the lower and contacting the parietal (Fig. 1) is also a feature of Dasia olivacea (Fig. 2; de Rooij 1915: Fig. 77), while Eutropis multifasciata and other Eutropis species have a more complex temporal configuration (Greer & Broadley 2000; Greer & Nussbaum 2000). Grismer (2011) reports two primary temporals for D. olivacea, but this is likely to be based on a different definition of temporal scalation (possibly that of Grismer et al. 2011 for Larutia, where the scales labelled primary temporals are the equivalent of the pretemporals of Greer 1983, and the posterior supraciliary and upper postsubocular of Taylor 1936). We use Taylor's nomenclature for these scales.

Inger and Brown (1980) reviewed geographic variation among the species of Dasia, concluding that two species, D. olivacea and D. grisea, coexist on Sumatra and its surrounding islands. They differentiated D. grisea from D. olivacea in the Sumatra region by the broad contact of the supranasals (vs usual separation in D. olivacea: 9 of 11 specimens), prefrontals in contact (vs usually separated in D. olivacea: 7 of 11 specimens), generally stronger keeling of the dorsal scales, taller anterior loreal (height/length 0.88-0.95 vs 0.53-0.71), fewer midbody scales (26-28 vs 30-32) and more numerous ventral scales (57-59 vs 50-56). In having separated supranasal scales, 30 midbody scales, and a relatively low anterior loreal (ratio of height to length on the left side 0.53), the holotype of M. wirzi better fits with D. olivacea. Inger and Brown (1980) did not define reference points for their ventral scale counts, but we presume they used a similar definition to that used previously by Inger, of scales between mental and vent (Inger 1958). On this basis, we count 53 ventrals on the holotype, again fitting with D. olivacea. While the contacting prefrontals of the holotype are more typical of D. grisea, they are within the range of variation of Indonesian D. olivacea (in contrast, populations in mainland south-east Asia more consistently have separated prefrontals; data on variation from Inger and Brown 1980). Further, the only Dasia specimen from Nias examined by Inger and Brown (1980), United States National Museum (USNM) 31677 (Fig. 3), was identified by them as D. olivacea. The species was also recorded from Nias by Fischer (1886) and de Rooij (1915). Hence, D. olivacea is the only species in the genus recorded from the type locality of Mabuia wirzi to date.

Figure 2 caption (fragment): ... Taylor (1936). These two scales constitute the pretemporals of Greer (1983). Scale bar = 5 mm.

The only character not typical of D. olivacea is the relatively uniform body dorsum. While D. olivacea typically has one-scale-wide bands of dark and light streaks and flecks, separated by two or three scales, these may be poorly defined in insular and coastal habitats (Grismer 2011), and are absent in some individuals (Das 2004; Gray 1845).
De Rooij (1915) also included Mabuia saravacensis Bartlett, 1895, syntypes from Santubong and Kuching, Sarawak, in the synonymy of Dasia olivacea. However, Inger and Brown (1980), who examined a syntype in the Natural History Museum London (cited by them as BMNH 99.1.20.6, though the syntype has the original registration number 99.1.20.4, now reregistered as 1946.8.20.57, Kuching, Sarawak, presented by the Sarawak Museum; 99.1.20.6 is a syntype of Lygosoma bampfyldei Bartlett, 1895), considered Mabuia saravacensis to be a synonym of Dasia grisea instead. Smith (1943) and Taylor and Smith (1950), following examination of the holotype of Euprepis microcephalus Hallowell, 1856, suggested that this species, purportedly from Mexico, was a Dasia species with an incorrect locality, although they were unable to determine its affinities within Dasia as the head of the holotype was in poor condition. Uetz et al. (2019) go further in tentatively listing this name in the synonymy of Dasia olivacea. The small size of the holotype (given as 4 inches 9 lines [4.75 inches, = 121 mm] in total length by Hallowell 1856, with snout-vent length 2 inches 1 line [2.08 inches, = 53 mm] as given in a more extended description by Hallowell 1860) would be commensurate with a juvenile of the currently known species in the genus. Juveniles of Dasia species are strongly banded (Greer 1970; Inger & Brown 1980). Given this, the description of coloration by Hallowell (1860), uniform ash with traces of four longitudinal narrow dark-coloured lines extending the whole length of the trunk (earlier in the same description, Hallowell gives the number of dark lines as five), is not in agreement with any known Dasia species, suggesting that the assignment of this name to Dasia is incorrect. Hallowell (1856, 1860) also reports the ear having three lobules along the margin, and the body scales bearing 8-9 keels, the central pair more widely spaced than the others, features which do not fit with Dasia. We have examined low-resolution photographs of the holotype (Academy of Natural Sciences at Drexel University (ANSP) 9531), and our suspicions have been confirmed: the specimen is clearly not a Dasia, but more likely a Trachylepis or Eutropis species. In addition to the features in the description, the ear is very much larger than that of Dasia, and there are no enlarged glandular scales on the heel. A more detailed assessment of the type will be needed for generic and species assignment, but for the moment the name should not be considered synonymous with D. olivacea or any Dasia species.
P-adic numbers and replica symmetry breaking
The p-adic formulation of replica symmetry breaking is presented. In this approach ultrametricity is a natural consequence of the basic properties of the p-adic numbers. Many properties can be derived simply in this approach, and the p-adic Fourier transform seems to be a promising tool.
1 Introduction
In the replica approach to disordered systems one usually introduces a matrix $Q_{a,b}$ which is the stationary point of a free energy $F[Q]$; the matrix is formally a zero-by-zero matrix, with zero elements on the diagonal [1]. Such a matrix is constructed as the $n \to 0$ limit of an ordinary matrix with $n$ components.
In the mean field approach one looks for stable (or marginally stable) saddle points of the free energy. When the replica symmetry is spontaneously broken, as it happens in spin glasses, one assumes that the saddle point is given by a matrix Q constructed in a hierarchical way, which corresponds to breaking the replica symmetry group (the permutation group of n elements) in a peculiar way [2,3]. The aim of this note is to expose some hidden algebraic properties of this matrix and to show that the whole construction may be simply done using p-adic numbers.
In this approach the ultrametric properties of the matrix Q [4,5] arise naturally from the ultrametric properties of the p-adic numbers. Although we do not obtain new results in this way, we hope that this reformulation may be a useful starting point for simplify some of the long computations involved in the evaluation of the corrections to the saddle point approximation.
In section 2 we present the basic properties of the p-adic construction and show its equivalence to the usual hierarchical construction. In the next section the limit n → 0 is performed in a simple way. In section 4 we present an alternative and more interesting procedure for taking the limit n → 0, where we connect this approach to standard p-adic analysis. In the next section we show the advantages of using the p-adic Fourier transform. Finally there are two appendices, the first dedicated to the foundations of p-adic analysis, the second to the basic properties of the Fourier transform [6,7]. Both appendices can be skipped by readers expert in p-adic analysis.
2 The p-adic construction of the matrix Q

We start the construction of the matrix Q by considering a number p (which for simplicity we suppose to be a prime number) and by assuming that $n = p^L$ for some value of L. We are going to construct the matrix Q for integer L and p in a specific way which we will discuss later. The limit n → 0 will be done at the end. The matrix Q enters the evaluation of the free energy in spin glasses and related models in the saddle point approximation. Here we do not address the evaluation of the free energy itself and only consider the construction of the matrix Q.
Eventually $n = p^L$ must go to zero and we can follow two options in order to realise this goal:
a) We take a value of p greater than one and we send L to minus infinity. Eventually we may do an analytic continuation in p to non-integer values of p.
b) We first do an analytic continuation in p to non-integer values of p. We take a value of p less than one and we send L to plus infinity.
In both cases one obtains the limit n → 0. The two constructions are roughly equivalent. It seems that the second one is simpler to work with; however, for pedagogical reasons we will start by presenting the first one in section 3, while the second one will be presented in section 4.
The first steps are common to both strategies. The construction of the matrix Q for integer p and L can be done as follows. We assume that the matrix $Q_{a,b}$ is of the form

  $Q_{a,b} = Q(a - b)$,   (1)

where Q(k) = Q(−k) (symmetric matrix) and Q(k + n) = Q(k). This choice restricts very much the form of the matrix and shows an explicit symmetry of this parametrization (i.e. translational invariance in internal space). The condition Q(0) = 0 implies that the elements on the diagonal are equal to zero. The second step consists in assuming that the function Q(k) is a function of the p-adic norm $|k|_p$. The appendices provide a brief introduction to p-adic analysis.
In other words we suppose that

  $Q(k) = q\big(|k|_p\big)$.   (2)

This corresponds to setting

  $Q(k) = q_i \quad \text{for} \quad |k|_p = p^{-i}$.   (3)

Before performing the limit n → 0 it is convenient to compare our approach with the standard hierarchical construction.
In the usual case [2] one introduces a sequence of K + 2 numbers $m_i$, with $m_0 = 1$ and $m_{K+1} = n$, such that $m_{i-1}$ divides $m_i$ for i = 1, …, K + 1. One sets, for a ≠ b,

  $Q_{a,b} = q_i \quad \text{if} \quad I\!\left(\frac{a-1}{m_{i-1}}\right) \neq I\!\left(\frac{b-1}{m_{i-1}}\right) \ \text{and} \ I\!\left(\frac{a-1}{m_{i}}\right) = I\!\left(\frac{b-1}{m_{i}}\right)$,   (4)

where the function I(z) is the integer part of z, i.e. the largest integer less than or equal to z. Let us consider the special case where the $m_i$ are given by

  $m_i = p^{i}$.   (5)

We want to show that the matrix Q obtained in this way coincides, after a permutation, with the matrix Q constructed before with K + 1 = L. The proof is rather simple. We associate to the index a the L digits of a − 1 in base p:

  $a - 1 = \sum_{j=0}^{L-1} a_j\, p^{j}$.   (6)

These digits form an L-dimensional vector with components in the range 0 − (p − 1). The hierarchical construction corresponds to setting $Q_{a,b} = q_i$ if $a_j = b_j$ for all j ≥ i and $a_{i-1} \neq b_{i-1}$.
We now associate to an index a its transpose $a^T$, which is obtained by writing its digits in the inverse order:

  $a^T - 1 = \sum_{j=0}^{L-1} a_{L-1-j}\, p^{j}$.   (7)

The previous condition becomes that the K − i less significant digits of $a^T$ and $b^T$ coincide, while the (K − i + 1)-th digit differs. This last condition may be restated by saying that $a^T - b^T$ is a multiple of $p^{K-i}$ but not of $p^{K-i+1}$, i.e. $|a^T - b^T|_p = p^{-(K-i)}$. Apart from a reshuffling of the indices, i.e. a permutation, the usual construction is equivalent to the p-adic construction presented before. Generally speaking it is possible to prove that, independently from the condition in eq. (5), after a similar reshuffling of the indices the hierarchical matrix Q (defined in eq. (4)) can always be written in the form of eq. (1). It is likely that many of the unexpected properties of the hierarchical construction arise from the possibility of choosing an ordering of the indices in such a way that the hierarchical matrix is invariant under the transformation a → a + 1. This invariance implies that the elements of one line are a permutation of the elements of another line; the converse, however, is not true.
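The equivalence is easy to check numerically; the following plain-Python sketch (illustrative, not from the original paper) builds Q from the p-adic norm and verifies the translation invariance that makes every row a permutation of every other row:

```python
import numpy as np

def padic_norm(k: int, p: int) -> float:
    """|k|_p = p**(-i), where p**i is the largest power of p dividing k."""
    if k == 0:
        return 0.0
    i = 0
    while k % p == 0:
        k //= p
        i += 1
    return float(p) ** (-i)

def parisi_matrix(p: int, L: int, q):
    """Q_{a,b} = q(|a - b|_p) for a, b = 1..p**L, with zero diagonal."""
    n = p ** L
    Q = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                Q[a, b] = q(padic_norm(a - b, p))
    return Q

# With p = 2 and L = 3, every row is a permutation of every other row,
# reflecting the invariance under a -> a + 1 discussed above.
Q = parisi_matrix(2, 3, lambda x: x)
for a in range(len(Q)):
    assert sorted(Q[a]) == sorted(Q[0])
```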
3 The n → 0 limit
We can now perform the n → 0 limit. We will first follow strategy a).
This limit can be reached by sending L to −∞. The continuation of the usual formulae from positive to negative L can be done if we introduce the quantities q i for non positive i.
For example let us consider the sum of the elements of a line of the matrix. In order to perform the n → 0 limit we slightly change the notation of the previous section and we set

  $Q(k) = q_{L-i} \quad \text{for} \quad |k|_p = p^{-i}$.   (8)

We easily get that the sum is given by

  $\sum_{k=1}^{n} Q(k) = (1 - p^{-1}) \sum_{i=1}^{L} p^{i}\, q_i$.   (9)

Indeed the number of integers k such that $|k|_p \leq p^{-j}$ (i.e. the volume of the p-adic sphere) is given by $p^{L-j}$, and therefore the number of integers k such that $|k|_p = p^{-j}$ (i.e. the volume of the p-adic shell) is given by $(p - 1)\,p^{L-j-1}$.
We are free to rewrite the last equation as

  $\sum_{k=1}^{n} Q(k) = (1 - p^{-1}) \left[ \sum_{i=-\infty}^{L} p^{i}\, q_i - \sum_{i=-\infty}^{0} p^{i}\, q_i \right]$,   (10)

by introducing the extra parameters $q_i$ for i < 1, which are irrelevant for positive L.
The analytic continuation to negative L can now be trivially done. In the limit L → −∞, the first term disappears and we get

  $\lim_{n \to 0} \sum_{k} Q(k) = -(1 - p^{-1}) \sum_{i=-\infty}^{0} p^{i}\, q_i$.   (11)

A similar procedure can be followed in order to compute other functions of the matrix Q. By comparing eq. (11) with the usual ones, we see that we obtain the hierarchical formulation where a function q(x) is introduced, with the extra constraint that q(x) is piecewise constant with discontinuities at $x = p^{-i}$. The usual formulation, where q(x) is a continuous function, can be obtained by analytic continuation in p up to the point $p = 1^+$.
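For finite p and L the closed form in eq. (9) can be verified directly (an illustrative check, not part of the original paper):

```python
import math

# Check of eq. (9): sum_k Q(k) = (1 - 1/p) * sum_{i=1}^{L} p**i * q_i,
# with Q(k) = q_{L-i} for |k|_p = p**(-i), as in eq. (8).
p, L = 3, 4
q = [0.0] + [math.sin(i) for i in range(1, L + 1)]  # arbitrary q_1..q_L

n = p ** L
row_sum = 0.0
for k in range(1, n):          # k = n has Q(n) = Q(0) = 0
    i, kk = 0, k
    while kk % p == 0:
        kk //= p
        i += 1
    row_sum += q[L - i]        # Q(k) = q_{L-i}

closed_form = (1 - 1 / p) * sum(p ** i * q[i] for i in range(1, L + 1))
assert abs(row_sum - closed_form) < 1e-9
```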
By performing the explicit computations similar results are obtained for the other quantities and it is possible to show that the usual approach is recovered.
4 The upsidedown world
In the other possible approach to the n → 0 limit (b), we first do an analytic continuation in p to values less than one and only later send L → +∞ in such a way that $p^L \to 0$. At a later stage we are free to send $p \to 1^-$ in order to reach the continuum limit. In this way we get formulae quite similar to the previous ones, with the advantage that only the $q_i$ with positive i are needed.
In this approach one obtains that the function q(x) is given by

  $q(p^{i}) = q_i$,   (12)

where the index i ranges from 0 to +∞ in such a way that, when $p \to 1^-$, $x \equiv p^{i}$ spans the interval 0-1. The formulae one obtains in this approach for p < 1 coincide with the formulae obtained with the formalism of the previous section (with the substitution of p with $p^{-1}$). The advantage of this procedure is that we obtain formulae that are very similar to those used in the p-adic integral and that are well known to mathematicians. The strategy to prove these formulae is quite similar and therefore one can use some of the well-known results in this field.
In the region where p < 1 it may be convenient to introduce the notation

  $|k| \equiv |k|_p^{-1}$.   (13)

In this way |k| belongs to the interval 0 − 1, with the exception |0| = ∞. In the limit where $p \to 1^-$, |k| spans the interval 0-1. Equation (12) can thus be written as

  $Q(k) = q(|k|)$.   (14)

Let us apply this strategy to the computation of the sum of the elements of a line of the matrix. We find that

  $\lim_{n \to 0} \sum_{k} Q(k) = (1 - p^{-1}) \sum_{i=1}^{\infty} p^{i}\, q_i$.

For p < 1 the previous equation can be written as an integral, while for p > 1 the r.h.s. becomes proportional to the p-adic integral, which is denoted as $\int_{Z_p} dk\, Q(k)$. With some abuse of notation we denote, for p < 1,

  $\int'_p dk\, Q(k) \equiv \lim_{n \to 0} \sum_{k} Q(k) = (1 - p^{-1}) \sum_{i=1}^{\infty} p^{i}\, q_i$,

where the sign ′ over the integral $\int'_p$ denotes that the value zero is excluded from the integration range. We must note that the measure of the integral is normalised to −1. In a similar way we can use the notation $\int'_p da\, db\, F(a, b)$ for multiple sums. For p > 1 we obtain the usual p-adic integral (apart from a normalisation factor). The results for p < 1 can be obtained using the same steps as in Appendix II. We can do the computation in the interesting case where the sum is restricted to all different indices. We have to compute

  $\lim_{n \to 0} {\sum_{a_1, \dots, a_M}}' F(a_1, \dots, a_M)$,

where we denote by $\sum'$ the sum restricted to the case of all different indices. In the same way we could define

  $\int'_p da_1 \cdots da_M\, F(a_1, \dots, a_M) \equiv (-1)^M \lim_{n \to 0} {\sum_{a_1, \dots, a_M}}' F(a_1, \dots, a_M)$,

where p is less than 1 and the sum is done on all different indices. The factor $(-1)^M$ has the effect of giving a positive result for the integral. Generally speaking, in this way the evaluation of sums can be reduced to the computation of quantities that are very similar to the corresponding p-adic integral. For example, let us use this strategy to compute $\int'_p da\, db\, dc\, F(|a-b|, |b-c|, |c-a|)$. The application of the previous formulae gives the integral in closed form; the proof can be obtained using the same strategy as in Appendix I for computing the measure of three intersecting p-adic shells.
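As a quick worked check of this normalisation, summing the geometric series for 0 < p < 1 gives:

```latex
\[
  \int'_p dk \; 1
  \;=\; (1 - p^{-1}) \sum_{i=1}^{\infty} p^{i}
  \;=\; \frac{p-1}{p} \cdot \frac{p}{1-p}
  \;=\; -1
  \qquad (0 < p < 1).
\]
```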
Finally, in the continuum limit where p goes to $1^-$, one gets the corresponding formula for restricted sums of the type $\sum_{b,c;\ b \neq a,\ c \neq a,\ b \neq c} F(|a-b|, |b-c|, |c-a|)$, which can be written compactly in the integral notation introduced above. It is important to note that the ultrametricity inequality works in reverse in the region p < 1, and consequently also in the limit $p \to 1^-$. This is in agreement with the fact that 1 − x, not x, has the physical meaning of distance.
After some work one can find simple rules for generic sums of the type $\sum' F(a, b, c, d)$, where F depends only on the p-adic distances and all indices are different [9]. In the case where the function is symmetric one finds that in the limit $p \to 1^-$

  $\int'_p F(a,b,c,d) = \int_{x<y<z} dx\, dy\, dz\; F\big|_{|a-b|=|a-c|=|a-d|=x,\ |b-c|=|b-d|=y,\ |c-d|=z}$ + 11 permutations
  $+ \int_{x<y;\ x<z} dx\, dy\, dz\; F\big|_{|a-b|=z,\ |b-c|=|b-d|=|a-c|=|a-d|=x,\ |c-d|=y}$ + 2 permutations
  $+ \int_{x<y} dx\, dy\; x\; F\big|_{|a-b|=y,\ |b-c|=|b-d|=|a-c|=|a-d|=|c-d|=x}$ + 5 permutations
  $+ \int_{x<y} dx\, dy\; y\; F\big|_{|a-b|=|b-c|=|a-c|=y,\ |b-d|=|a-d|=|c-d|=x}$ + 3 permutations,

where the formula for the intersection of four p-adic spheres of the same radius has been crucial to obtain the last term. If we apply the same strategy to more complicated sums we can find the formula of ref. [8], where the result is written as a sum over all possible trees, with a specific integral associated to a given tree.
5 Using the p-adic Fourier Transform
An interesting application of this approach can be made to the formula for the product of two matrices A and B:

  $(AB)_{a,c} = \sum_b A_{a,b}\, B_{b,c}$.

If the matrices have the form discussed here, one finds that the previous formula can be written as a convolution:

  $(AB)(k) = \sum_{k'} A(k')\, B(k - k')$.

Using the previous formulae, the convolution can in turn be expressed through the restricted sums introduced above which, using the rules of the p-adic integral after performing the limit $p \to 1^-$, can be written as integrals over the distances. Convolutions can be strongly simplified in Fourier space. In principle we could just use the ordinary Fourier transform, where the momentum q is in the interval (−π, π); however, it is convenient to take into account the p-adic nature of the functions we consider. We can start from the analysis leading to formula (64) of Appendix II. Generalising the computations to the case where p < 1 and performing the continuum limit, one finds that the p-adic Fourier transform of a function A(k) = a(|k|) is a function A[M] = a[y], defined for y in the interval 0-1, where $y = |M|^{-1}$. One must be careful in removing the factor $1/p^L$ in the normalisation of the Fourier transform, which would be harmful here.
Eq. (64) may be transformed accordingly: one defines (for p < 1) the function a[y], which is defined only for y of the form $p^{-k}$ for integer k, and with this definition one finally obtains the transform of a(|k|). The final formulae in the continuum limit relate a[y] to a(x), where we use the apparently strange notation a(∞) = A(0). These relations become simpler in differential form. For example one gets

  $\frac{d\, a[y]}{dy} = -\, y\, \frac{d\, a(y)}{dy}$.

This differential relation is equivalent to the integral relations in eqs. (36, 37) for obtaining the Fourier transform if it is complemented by the value of the Fourier transform at a given point. A possible choice is

  $a[1] = a(\infty) - a(1)$.

With some work one can verify that the multiplication of two matrices becomes the simple multiplication of their Fourier transforms [10]:

  $(ab)[y] = a[y]\, b[y]$,

as one finds by differentiating eq. (32). The function a[y] was already introduced in ref. [10] in order to solve the inversion problem, although its p-adic nature was not recognised. Also the usual procedure of simplifying the saddle point equations by differentiating them corresponds to considering the p-adic Fourier transform.
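The statement that products of such matrices reduce to convolutions of norm-dependent functions, which the transform then diagonalises, can be checked numerically for finite p and L (illustrative sketch):

```python
import numpy as np

# Check that the product of two matrices of the form A_{ab} = a(|a-b|_p)
# is again of that form, so that matrix products reduce to convolutions.
p, L = 2, 4
n = p ** L

def norm(k: int) -> float:
    k %= n
    if k == 0:
        return 0.0
    i = 0
    while k % p == 0:
        k //= p
        i += 1
    return float(p) ** (-i)

rng = np.random.default_rng(0)
qa = {norm(k): rng.normal() for k in range(n)}   # one value per norm class
qb = {norm(k): rng.normal() for k in range(n)}
A = np.array([[qa[norm(a - b)] for b in range(n)] for a in range(n)])
B = np.array([[qb[norm(a - b)] for b in range(n)] for a in range(n)])
C = A @ B

classes = {}
for a in range(n):
    for b in range(n):
        classes.setdefault(norm(a - b), []).append(C[a, b])
assert all(np.ptp(v) < 1e-9 for v in classes.values())
```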
We could also consider the problem of computing the inverse of a matrix Q, i.e. of finding a matrix R such that

  $\sum_b Q_{a,b}\, R_{b,c} = \delta_{a,c}$.

It is not a surprise that we find that the Fourier transform of the matrix R is simply the inverse of the Fourier transform of the matrix Q:

  $r[y] = \frac{1}{q[y]}$.

An extremely important problem consists in the computation of the inverse of the Hessian coming from the fluctuations around the saddle point. Here one has to solve the equation

  $\sum_{c,d} M_{(a,b),(c,d)}\, G_{(c,d),(e,f)} = \delta_{(a,b),(e,f)}$.

This inversion is not a simple job and rather complex computations have been done [11,12]. However the final formulae are remarkably simple. Although we are not able at the present moment to derive these formulae in the framework of the p-adic formalism, it may be useful to show that they have a very simple interpretation in terms of the p-adic Fourier transform [7]. We will consider here only the so-called replicon sector, for which the results are simpler than in the other sectors. We restrict our analysis to the region where |a − b| = z, |a − c| = x_1 > z and |b − d| = x_2 > z. In this region ultrametricity implies that z = |a − d| = |b − c| = |c − d|. Both M and G are functions of x_1, x_2 and z only, and we write them as $M_z(x_1, x_2)$ and $G_z(x_1, x_2)$. In the same way we denote by $G^R_z(x_1, x_2)$ the replicon contribution to G, where the precise definition of the replicon can be found in the original papers [11,12].
Following [11] we can thus introduce the Fourier transform of $M_z(x_1, x_2)$ with respect to $x_1$ and $x_2$, which gives a function $M_z[y_1, y_2]$. One finally finds that the final formula for the replicon propagator [12] may be obtained with a slightly modified inverse Fourier transform: the differential relations in the inverse Fourier transform are preserved; only the second condition, which fixes the value of the inverse Fourier transform at one point, is modified. With these modifications the replicon sector of the inverse is just the numerical inverse of the matrix M in Fourier space:

  $G^R_z[y_1, y_2] = \frac{1}{M_z[y_1, y_2]}$.

The precise reason for the appearance of these simple formulae with a strong p-adic flavour is not completely clear at the present moment. They show the usefulness of the p-adic formalism. It would also be extremely interesting to study whether the same formalism could be applied to the off-equilibrium dynamics of the kind studied in ref. [13].
After completion of this work we received a paper by V. A. Avetisov, A. H. Bikulov, and S. V. Kozyrev [14], where some similar results are derived.
Appendix I: p-adic numbers
Let us consider a prime number p. Any integer k can be written in a unique way as

  $k = p^{i} \sum_{l \geq 0} a_l\, p^{l} = p^{i}\,(a_0 + a_1 p + a_2 p^2 + \cdots)$,

with i ≥ 0, $0 \leq a_l \leq p - 1$ and $a_0 \neq 0$. The p-adic norm of such an integer k (i.e. $|k|_p$) is defined as

  $|k|_p = p^{-i}$.

The p-adic norm of 0 is defined to be equal to zero. The value of the p-adic norm tells us the number of consecutive zeros at the end of a number when it is written in base p. For a rational number r = a/b, the p-adic norm is defined as $|r|_p = |a|_p / |b|_p$.
The properties of the p-adic norm are well studied by mathematicians, one of the most famous properties being ultrametricity, which states that

  $|a - b|_p \leq \max\big(|a - c|_p,\ |c - b|_p\big)$

for any choice of c. This property, which generalises the statement that the sum of two even numbers is even, can be proved as follows.
Using the translational invariance of the metric, we first write the ultrametric inequality in the equivalent form $|a + b|_p \leq \max(|a|_p, |b|_p)$.
If a is a multiple of $p^{i}$ (and not of $p^{i+1}$), and b is a multiple of $p^{k}$, with k ≥ i, it is evident that a + b is a multiple of $p^{i}$. Therefore

  $|a + b|_p \leq p^{-i} = \max(|a|_p, |b|_p)$.

We stress that we have used in a crucial way the fact that p > 1 for a true prime. (In this paper we make an analytic continuation to p < 1. In that case, the inequality sign would be reversed.) A direct consequence of this inequality is that any triangle is either equilateral or isosceles with the two largest sides equal. It follows that any point a inside the p-adic disk centered at o and of radius r, i.e. such that $|a - o|_p \leq r$, is also a center of the disk; i.e. if $|b - o|_p \leq r$, then also $|a - b|_p \leq r$. The whole p-adic field may be constructed starting from the rationals by considering the closure of the rationals with respect to the p-adic norm, in the same way that the real numbers (of the interval 0 − 1) are constructed as the closure of the rationals (of the interval 0 − 1) with respect to the usual Euclidean norm.
Closing the rational field with respect to the previously defined norm one obtains the p-adic field. Continuity of a p-adic function can be defined as usual. For example a function f is continuous at the point k if

  $\lim_{n \to \infty} f(k_n) = f(k)$

for any sequence $k_n$ which converges to k in the p-adic sense (i.e. $|k_n - k|_p \to 0$). The extension of a function from integers to p-adic numbers is called p-adic interpolation. Here we do not need to discuss this point any more. From our point of view a more interesting construction is the integral over the p-adic integers, which can be defined in an elementary way as

  $\int_{Z_p} dk\, f(k) = \lim_{L \to \infty} p^{-L} \sum_{k=1}^{p^L} f(k)$.

There are many well known properties of the p-adic integral. Here we report some of them, leaving the proofs to the reader (the lazy reader can find them in any book on p-adic calculus).
a) The measure of the p-adic sphere of radius $p^{-i}$ centred around an arbitrary point a (i.e. the measure of all points b such that $|a - b|_p \leq p^{-i}$) is given by $p^{-i}$. Since the p-adic distance among integers cannot be larger than 1, the unit sphere coincides with the whole space and has measure 1.
b) The measure of the p-adic shell of radius $p^{-i}$ centred around an arbitrary point a (i.e. the measure of all points b such that $|a - b|_p = p^{-i}$) is given by

  $(1 - p^{-1})\, p^{-i}$.

c) The measure of the intersection of two p-adic shells has rather interesting properties. Let us consider the intersection of a shell of radius $p^{-i}$ centred around the point a with a shell of radius $p^{-k}$ centred around the point b. The measure depends on the distance between the points a and b, which we assume to be equal to $p^{-j}$. Ultrametricity tells us that the measure is zero unless two among the distances coincide and the two equal distances are the largest. After some reflection one finds that only three cases have to be considered.
• We first consider the case i = k < j. Here the ultrametricity inequality implies that the two shells coincide and therefore the measure of the intersection is simply given by (1 − p^{-1}) p^{-i}. • We now consider the case i = j < k. Here the ultrametricity inequality implies that the second shell is fully contained in the first one and therefore the measure of the intersection is simply given by (1 − p^{-1}) p^{-k}.
• We finally consider the less trivial case i = j = k. If one notices that the two spheres of radius p^{-i} centred at a and b coincide, and that the two spheres of radius p^{-i-1} centred at a and b have zero intersection, one finds that the measure of the intersection of the two shells is given by (1 − 2p^{-1}) p^{-i}. The generalisation of the previous arguments allows us to compute the measure of the intersection of many p-adic shells by using ultrametricity in a systematic way. The most significant result is that the intersection of M shells of radius p^{-i}, whose centres are all at mutual distance p^{-i}, is given by (1 − Mp^{-1}) p^{-i}. The measure becomes zero for p = M, which implies that you cannot find M + 1 numbers exactly at the same distance. This last result is a generalisation of the well known statement that you cannot find three integers (a, b, c) such that the three differences among them (a − b, b − c, c − a) are all odd.
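These measures can be checked numerically. The sketch below is our own, with ad hoc parameter choices: it approximates the Haar measure by the fraction of residues modulo p^L satisfying each condition, and reproduces p^{-i}, (1 − p^{-1})p^{-i}, and (1 − 2p^{-1})p^{-i} for the i = j = k case.

```python
def valuation(n, p):
    """Largest v such that p^v divides n (infinite for n = 0)."""
    if n == 0:
        return float("inf")
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def measure(pred, p, L):
    """Fraction of residues k in [0, p^L) satisfying pred(k); approximates the Haar measure."""
    N = p ** L
    return sum(pred(k) for k in range(N)) / N

p, L, i = 3, 6, 2
a, b = 0, 9                                      # |a - b|_p = p^-2, so this is the i = j = k case
sphere_a = lambda k: valuation(k - a, p) >= i    # |k - a|_p <= p^-i
shell_a  = lambda k: valuation(k - a, p) == i    # |k - a|_p == p^-i
shell_b  = lambda k: valuation(k - b, p) == i

print(measure(sphere_a, p, L))                              # ~ p^-i          = 1/9
print(measure(shell_a, p, L))                               # ~ (1 - 1/p)p^-i = 2/27
print(measure(lambda k: shell_a(k) and shell_b(k), p, L))   # ~ (1 - 2/p)p^-i = 1/27
```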
Using the previous formulae, a few p-adic integrals can be computed in a simple way.
For example, let us try to compute the following integral, where for simplicity we denote by |a| the p-adic norm of a. The integral is independent of c, and the application of the previous formulae tells us that the integral is given by the corresponding expression, with k = min(i, j).
Appendix II: p-adic Fourier transform
Fourier transform on the p-adic integers coincides with the usual Fourier transform. It can also be defined by analysing the characters of the additive group. It is simpler to consider first the case where L is finite and only a finite number of points is present. We start by considering the case in which the function A(x) is defined only for x = 1, · · · , p^L (with A(0) = A(p^L)). The Fourier transform is defined as a sum over x weighted by the characters exp(2πiMx), where M is a rational number of the form j p^{-L} with 0 ≤ j < p^L. As usual the Fourier space contains the same number of points as the original space. In this paper we will use square brackets to denote the Fourier transform. Let us consider the problem of computing the Fourier transform of a function which depends only on the p-adic norm, i.e. A(k) = a(|k|_p). We are thus interested in computing a[M]. In order to compute the Fourier transform of the shell, it may be simpler to first compute the Fourier transform of the p-adic sphere of radius p^{-k}. A simple computation, using the definition M = l p^{-L}, shows that V_k[M] = 0 unless l = n p^{L−k}, n = 0, 1, · · · , p − 1, in which case the last passage is no longer valid because both numerator and denominator are equal to zero in the final result. We notice that the possible values of |M| are p^j for non-negative j. Consequently we find that V_k[M] = 0 unless k − j ≤ 0. We also remark that, as a consequence of translational invariance, the Fourier transform of a function of the p-adic norm is still a function of the p-adic norm.
The reader should notice that the functions a[ ] and s_k[ ] are defined in such a way that their argument is in the range 0 − 1. It follows that v_k[p^{-j}] = p^{-k} for k − j ≤ 0 and v_k[p^{-j}] = 0 for k − j > 0.
As a consequence we find that the Fourier transform of a spherical shell is given by the difference of two sphere transforms, s_k[M] = v_k[M] − v_{k+1}[M]. It is interesting to note that in the last formula the dependence on L is very simple, so that the limit L → ∞ can be taken trivially. Moreover most of the properties of the ordinary Fourier transform, like the theorems concerning convolutions, are still valid.
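As an informal numerical illustration (our own sketch, using one plausible normalisation, p^{-L} Σ_x exp(2πiMx) A(x), which may differ from the paper's convention by constant factors), the finite transform of the p-adic sphere indicator is non-zero only at a small set of frequencies, each taking the value p^{-k}.

```python
import numpy as np

p, L, k = 3, 4, 1
N = p ** L
x = np.arange(1, N + 1)
# indicator of the p-adic sphere of radius p^{-k} among the residues 1..p^L
sphere = (x % p**k == 0).astype(float)

for l in range(N):
    M = l / N                                         # frequencies M = l * p^{-L}
    V = (sphere * np.exp(2j * np.pi * M * x)).sum() / N
    if abs(V) > 1e-9:
        print(f"l = {l:3d}, M = {M:.4f}, V_k[M] = {V.real:+.4f}")
# only l that are multiples of p^(L-k) survive, each giving V_k[M] = p^-k = 1/3
```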
|
2014-10-01T00:00:00.000Z
|
1999-06-07T00:00:00.000
|
{
"year": 1999,
"sha1": "4b40b76e451630517f74f8eddcacb57b80757857",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/9906095",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ba9686945ad2244377c117f4184bad563c56d450",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
214756149
|
pes2o/s2orc
|
v3-fos-license
|
Bioterrorism: introduction
Define the role of the healthcare worker in the event of bioterrorism. Explain the roles of the various government agencies in managing a disaster. Define the basic signs and symptoms of the five major weapons of mass destruction. Provide clinicians and public health officials with background information related to ricin.
Man's inhumanity to man
Makes countless thousands mourn — Robert Burns (1786). Like a tornado, without any heralding event, asymmetric warfare or terrorism can strike with devastating consequences. Anesthesiologists, similar to many physicians, have little formal training in combating such vicious attacks. But we are deemed to be first responders and as such, the ability to provide rapid assessment and emergency care while maintaining personal protection must be anticipated.
In this issue, several noted physician scientists have outlined the principal dangers of bioterrorism today and defined the involvement of anesthesiologists in preparing for and dealing with an attack. Many bioagents are designed to incapacitate the respiratory system and therefore it is clear that we will likely be the first called to establish and maintain the airway. But our practice may be changed in several ways. In an overwhelming situation, as patient load reaches or exceeds hospital capacity, supplies may be quickly exhausted and our mindset may have to return to that of reusing "disposable" equipment or using people power as ventilators. Other chemicals used as weapons (some 70 different agents are reported to have been stockpiled) are defined as nerve paralyzing, vesicant, cyanogenic, choking, and psychomimetic toxins. In all these situations, our knowledge of and familiarity with cardiorespiratory physiology, pharmacology and reversal agents make us leaders (and teachers) of the emergency care team.
The concept of biologic warfare is not new. From Biblical times and perhaps even earlier, man has sought to destroy his enemy with toxins, infectious material, and diseased animals. During the 20th century, several governments developed extensive biologic weapons programs, some to be maintained as scientific curiosities and others for diabolic use. The ability to cultivate viruses such as smallpox, or anthrax spores, and then release them into society became reality. It has been suggested by more than one authority that the recent outbreak of Severe Acute Respiratory Syndrome (SARS) may have had terrorist origins and was aimed at causing economic havoc in defined regions. Indeed, the last case, which occurred in a 27-year-old postdoctoral student in Singapore, was reported to have been caused by "inappropriate laboratory standards and a cross contamination of West Nile Virus samples with SARS coronavirus" indicating that quantities of the virus are available and can be released (New York Times, Sept 24th 2003, p. A6).
The use of radiation and nuclear devices as a weapon or threat by terrorists is a hazard of modern society. The release of radioactive materials in cities or at large gatherings may result in great numbers of victims requiring emergency care. Anesthesiologists in the accident unit must become familiar with decontamination techniques and the organization of casualties and patient flow. It is important to remember that secondary radiation injury from a contaminated patient to medical personnel is minimal and care should focus on surgical emergencies (although donning of protective clothing is advisable). Knowledge and identification of the degree of hazard is essential to control hysteria that stems from ignorance and fear.
In a speech to the American Bar Association in London in 1985, Prime Minister Margaret Thatcher remarked, "We must find ways to starve the terrorist and the hijacker of the oxygen of publicity on which they depend." But until that time comes we can be in a position to deal with an attack by preparing our hospitals and ourselves with an awareness of the consequences of bioterrorism. Equipment and personnel requirements can be identified. Frequent and realistic drills must be organized. The latest word to become part of the anesthesiologist's lexicon should be biopreparedness.
|
2019-08-20T05:35:46.412Z
|
2003-12-01T00:00:00.000
|
{
"year": 2003,
"sha1": "6ad6e0fc12e670d8ea03ee0bfeea9b12ec9bbb89",
"oa_license": null,
"oa_url": "https://doi.org/10.1053/j.sane.2003.10.002",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "1b2354a2535fa6be1c6cb45dac8d4ee747eaf0c4",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": []
}
|
15628361
|
pes2o/s2orc
|
v3-fos-license
|
Atypical features of nanophthalmic macula- a spectral domain OCT study
Background To report atypical features on Spectral domain optical coherence tomography (SD-OCT) in a case of non-familial pure adult nanophthalmos. Case presentation A 39-year-old male hyperope was found to have biometric and fundus findings typical of nanophthalmos. The additional atypical features included serous pigment epithelial detachment (PED) in right eye and a cuff of subretinal fluid with underlying yellow deposits along superotemporal arcade in the left eye. Fundus fluorescein angiogram showed hyperfluorescence due to window defect, dye pooling due to serous PED in right eye and leak superior to disc in right eye and superotemporally in left eye. Cirrus-SD OCT horizontal line scan passing through the fovea showed extensive inner limiting membrane corrugations causing distorted foveal contour in both eyes. A large juxtafoveal serous PED and a small extrafoveal PED were seen with folds in the retinal pigment epithelium (RPE)-choriocapillary layer in the right eye. Conclusion Structural disruptions in the RPE-choriocapillary complex in the form of folds or juxtafoveal serous PED and RPE folds can be atypical features of nanophthalmic macula better discerned on high resolution OCT.
Background
Nanophthalmos typically presents with characteristic clinical findings in a hyperopic small eye [1][2][3][4][5]. Several posterior segment findings have been described earlier, including macular folds, retinal cysts or uveal effusion [6]. We report atypical features of a nanophthalmic macula on high resolution imaging which have not been described earlier.
Case presentation
A 39 year old male presented to us with complaints of poor vision in both eyes since childhood. He did not use any spectacles till presentation.
On examination, he was orthophoric and his unaided and best corrected visual acuity was FC 1/2 m, 20/70, N18 (+14DS/−2DC×10°) and FC 2 m PR accurate, 20/200, N36 (+14DS/−1.5DC×110°) in the right and left eye, respectively. Slit lamp examination showed shallow anterior chambers, intraocular pressure (IOP) by Goldmann applanation tonometry of 20 mmHg and 18 mmHg, and closed angles on 4-mirror gonioscopy in both eyes. Lens was clear in both eyes. His axial length (15.3 mm and 15.1 mm), corneal diameter and anterior chamber depth were suggestive of nanophthalmos. Central corneal thickness measured 555 microns and 554 microns in the right and left eye, respectively. Review of history did not reveal any family history in siblings.
On a provisional diagnosis of pure non-familial nanophthalmos, he received prophylactic peripheral laser iridectomy (LPI) in both eyes. Dilated fundus examination showed crowded discs with obliterated cup and dilated engorged non-tortuous vessels in both eyes (Figures 1 & 2). There were prominent internal limiting striae radiating from the optic nerve to ½ disc diameter beyond the fovea associated with subretinal deposits in both eyes (Figure 1). There was a cuff of subretinal fluid along the superotemporal arcade with underlying yellow subretinal deposits. There was a pigmented scar inferotemporal to the macula in the left eye.
Fundus fluorescein angiography (FFA) revealed hyperfluorescence due to a transmission defect at the macula in both eyes, dye pooling in the juxtafoveal PED in the right eye (Figure 2), and a leak along the superotemporal arcade in the left eye.
Full field flash ERG showed normal scotopic and photopic responses. Humphrey visual fields 24-2 showed peripheral artefacts in both eyes. Cirrus SD-OCT horizontal line scan passing across the fovea showed extensive corrugations involving the inner limiting membrane (ILM) suggestive of ILM striae causing distorted foveal contour in both eyes (Figure 2). A large serous PED was seen encroaching on the fovea in the right eye with folds in the RPE-choriocapillary layer. The left eye showed a normal vitreoretinal interface and normal intraretinal layers with no folds in the RPE-choriocapillary complex.
In view of the above OCT findings, we advised spectacles, rehabilitative support with low vision devices, and periodic follow up.
Discussion
Nanophthalmic eyes are typically hypermetropic with axial lengths less than 20 mm and a shallow anterior chamber [1,3,5]. Abnormal deposits of glycosaminoglycans and elevated levels of fibronectin may thicken the sclera, causing obstruction of the suprachoroidal drainage pathway and uveal effusion in these cases [6,7]. Papillomacular bands, abnormal thickening of the sclera with glycosaminoglycans, choroidal congestion, foveal schisis, macular hypoplasia, choroidal thickening, pigmentary retinopathy and uveal effusions have been reported as typical features in posterior microphthalmos or familial nanophthalmos [8][9][10]. In our case of nonfamilial nanophthalmos, macular striae were associated with RPE involvement in the form of folds and PED in the juxtafoveal area.
Disparity in growth between the sclera and retina probably gives rise to the retinal folds as seen in our patient and reported by others [8]. While amblyopia accounting for reduced vision cannot be ruled out in our case, the macular striae could be responsible for subnormal vision. Such striae could cause photoreceptor dysfunction, though this was not the mechanism in our case with normal rod and cone responses. It is unclear from a single case if PED suggests the possibility of progressive changes in the retinal structure with age in an adult nanophthalmic macula.
Pigment cysts, choroidal and non-rhegmatogenous retinal detachments resulting from RPE dysfunction have been reported [6,8]; yet serous PED as seen in this case has not been reported earlier. While the exact pathogenesis is not known, it is unclear if the PED and subretinal fluid cuff seen in this case represent a focal "effusion" similar to the uveal effusion or RPE dysfunction seen in such cases [8]. Nevertheless this case suggests the possibility of progressive changes and structural alterations in the nanophthalmic macula with age.
Conclusion
Structural disruptions in the RPE-choriocapillary layer including PED and RPE folds can be atypical features of nanophthalmic macula better discerned on high resolution SD-OCT.
Consent
"Written informed consent was obtained from the patient for publication of this Case report and any accompanying images. A copy of the written consent is available for review by the Series Editor of this journal."
|
2017-06-21T19:48:39.285Z
|
2012-06-06T00:00:00.000
|
{
"year": 2012,
"sha1": "9d0f3e10234c17011abb06890cfb4a1f2542db51",
"oa_license": "CCBY",
"oa_url": "https://bmcophthalmol.biomedcentral.com/track/pdf/10.1186/1471-2415-12-12",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db64ec2e70f00f6a783c769b824e18f5730cc6b7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14295372
|
pes2o/s2orc
|
v3-fos-license
|
On the Relation Between Expected Returns and Implied Cost of Capital
In this study, we examine the relation between implied cost of capital and expected returns under an assumption that expected returns are stochastic, a property supported by theory and empirical evidence. We demonstrate that implied cost of capital differs from expected return, on average, by a function encompassing volatilities of, as well as correlation between, expected returns and cash flows, growth in cash flows, and leverage. These results provide alternative explanations for findings from empirical studies employing implied cost of capital on the magnitude of the market risk premium; relations between cost of capital, growth, leverage, and idiosyncratic risks; predictability of future returns, and characteristics of the firm's information environment.
INTRODUCTION
The purpose of this study is to theoretically analyze the properties of "implied cost of capital," defined as the internal rate of return that equates stock price with the present value of the expected future dividends, focusing on its relation with expected returns. Our analysis attempts to fill a gap in the empirical literature on the efficacy of implied cost of capital as a measure of expected return on equity. In particular, we examine the relation between the implied cost of capital and the expected returns when the latter are stochastic. Our results raise the prospect that some of the empirical results in the implied cost of capital literature may be an artifact of the difference between the two.
The assumption of constant expected returns can be challenged on both theoretical and empirical grounds. In his seminal study of inter-temporal capital asset pricing, Merton (1973) shows that variations in investors' investment opportunity set as a consequence of dependency on random states of nature induce stochastic expected returns. On the empirical side, Shiller (1980) contends that the US stock market is too volatile to be explained by cash flow innovations from a stationary distribution, implying that the expected returns must also be time varying. More recent empirical studies by Fama and French (1997) and Jagannathan and Wang (1996) also conclude that expected returns are time varying.
In asset pricing theory, the expected return of an asset is completely determined by its non-diversifiable risk, a property that may not be shared by implied cost of capital. Given stochastic expected returns, we show that implied cost of capital differs from expected return and this difference is a function of leverage, growth in cash flows, beta volatility, cash flow volatility, and the correlation between expected returns and cash flows. Analytically, the difference arises for two reasons. First, equity prices depend non-linearly on the future expected returns; thus, there is an effect due to Jensen's inequality. Second, there is a correction due to the covariance between the future expected returns and the future cash flows. Our characterization of the difference generates a number of empirical implications, casting the existing findings in the literature under new light.
First, Claus and Thomas (2001), and, subsequently, Gebhardt, Lee and Swaminathan (2001) and Easton, Taylor, Shroff and Sougiannis (2002) used the implied cost of capital to infer the magnitude of the market risk premium. Notably, they found that the "ex ante equity risk premium" inferred from the implied cost of capital measures is only about 3%, far lower than the historical averages observed in the US. While they attribute the low estimate to a longitudinal decline in market risk premiums, our result that, on average, the implied cost of capital can be expected to be lower than the expected returns due to Jensen's inequality offers another explanation.
Second, studies by Gebhardt, Lee, and Swaminathan (2001) and Gode and Mohanram (2003) examined whether implied cost of capital measures capture previously unidentified priced risks in the cross section. In particular, they found that such measures are significantly correlated with firm characteristics such as growth, leverage and idiosyncratic risk, after controlling for beta. While it is tempting to conclude that these analyses discovered priced risk factors not previously identified in the asset pricing literature, our results demonstrate that even if risk is entirely captured by factor betas in determining expected return, given stochastic expected returns, implied cost of capital is correlated with growth, leverage, and idiosyncratic risk after controlling for betas.
Third, along similar lines to the second group of studies, Guay, Kothari and Shu (2003) and Easton and Monahan (2005) examine the efficacy of implied cost of capital measures as proxies for priced risks by investigating whether those measures have predictive power with respect to future stock returns. While their general result is insignificant for all implied cost of capital measures, they found improvement in significance when they controlled for analyst forecast inefficiency or firm growth. These findings can be potentially explained by our results. Because implied cost of capital differs from the expected returns by a function of growth, leverage, beta volatility, and cash flow volatility, omission of these correlated factors may cause the coefficient estimate on implied cost of capital to be biased. Explicit control of these variables, such as the control for growth in Easton and Monahan (2005), helps to alleviate such a problem.
Fourth, implied cost of capital measures have been used as proxies for expected returns in addressing a variety of research questions pertaining to relations between cost of capital and characteristics of the firm's information environment. For example, Botosan (1997) and Botosan and Plumlee (2003) found that corporate disclosure levels are negatively correlated with implied cost of capital, Leuz and Hail (2005) found that features of countries' legal institutions are significantly correlated with implied cost of capital, and Hribar and Jenkins (2004) found earnings restatements lead to a higher implied cost of capital. The results of our analysis suggest that correlations such as these could be artifacts of the difference between implied cost of capital and expected returns if growth in cash flows is correlated with the variables under investigation.
The purpose of our study is not to disparage prior literature that studies the implied cost of capital or employs the implied cost of capital as an instrument to study other economic phenomena. This literature has generated many useful insights not available from studies that use average returns as proxies for expected returns. Rather, the motivation is to establish a theoretical foundation that permits a better understanding of the properties of implied cost of capital in a context of stochastic expected returns; a context well supported by recent evidence in finance and economics.
While the primary contribution of the paper lies in offering alternative theory-based interpretations of a growing body of empirical results, our analysis also extends earlier work on the valuation of debt and equity securities (e.g., Vasicek 1977, Cox, Ingersoll and Ross 1985, Ang and Liu 2004, Miles and Ezzell 1980). The insight that bond yield may differ from the bond's expected return on average has long been recognized in the fixed income literature (e.g., Vasicek 1977, Cox, Ingersoll and Ross 1985). Our study generalizes this insight to equities. The fixed income literature does not need to model cash flows since they are constant, and essentially works from the time series properties of the stochastic discount factor (i.e., pricing kernel). In contrast, because we examine equities, we adopt an analytical structure similar to Ang and Liu (2004) with assumptions of stochastic expected returns, stochastic cash flows, and allowing a correlation between the two. Given the stochastic aspect of cash flows and our later introduction of leverage in altering equity risk, the generalization to equities is not direct.
We depart from Ang and Liu (2004) by adding structure that allows us to achieve a closed form characterization of the difference between average expected return and implied cost of capital, an issue outside the scope of their analysis. As indicated above, we also extend Ang and Liu (2004) to provide for leverage. Our analysis in this latter regard generalizes Miles and Ezzell's (1980) result on leverage and the efficacy of using the weighted average cost of capital to discount cash flows to a setting where the expected returns are stochastic.
The rest of the paper is organized as follows: In the next section, we analytically examine the relation between average expected returns and the implied cost of capital. In section III, we discuss the empirical implications. We conclude in section IV.
II. MODEL
Discounted Cash Flow Model under Stochastic Expected Returns
In this subsection, we develop the discount cash flow formula for equity valuation under stochastic expected returns. Our analysis is an extension of Ang and Liu (2004), which systematically examines how cash flows should be discounted under stochastic expected returns. As noted earlier, we depart from Ang and Liu (2004) by adopting more specific assumptions that allow for a closed form solution. Such a solution is essential in later analysis when we examine the relation between implied cost of capital and expected returns.
The value of an asset at t = 0, A_0, satisfies the inter-temporal relation A_0 = exp(−μ_0) E_0[Ã_1 + c̃_1], where exp(μ_0) is the expected (gross) return for the period between 0 and 1, known at the beginning of the period, and c̃_1 is the "free cash flow" to both debt and equity investors for that period. We use the exponential form of expected returns for mathematical simplicity.
Iterating equation (1) one further period, we get The second equality holds because expected return is known at the beginning of each period; and the third equality holds because of the law of iterated expectations.
Successively iterating the above expression to infinity, and assuming that the transversality condition holds, we obtain the following discounted cash flow model under stochastic expected returns. As depicted above, discounting of future cash flows is achieved by taking the product of future (stochastic) expected returns. To parameterize equation (2), we assume that the logarithms of expected returns, μ_t, are determined by a factor structure. Without further loss of generality, we assume a one-factor model in which r_f (the risk free rate), λ (the factor risk premium), β, and σ_β are constants, and β_t is observed at the beginning of each period; i.e., β_1 is known at time 1. Since the logarithms of expected returns are normally distributed, expected returns are bounded below at 0; hence, our assumption satisfies limited liability.
We point out that while stochastic expected returns can be achieved through the risk free rate, factor risk premiums, factor loadings, or a combination of the three, there is no loss of generality in considering the case where factor loadings (betas) are stochastic.
The analysis is essentially the same if we instead make the other components of the expected return stochastic. 1 This specification satisfies the empirical findings of time dependent betas in Fama and French (1997), and is consistent with the conditional CAPM specification of Jagannathan and Wang (1996).
We further assume that future cash flows, c_{t+1}, are generated by equation (5), where g, ρ, and σ_c are constants and ε_{c,t+1}, t = 0, ..., ∞, are independent standard normal random variables. The cash flow specification in (5) allows contemporaneous correlation between log cash flows and log expected returns, with the correlation captured by ρ.
However, since betas and, hence, expected returns are observed at the beginning of each period, conditioning on information at the beginning of period t, μ_t is known and, thus, does not co-vary with c̃_{t+1}. When the correlation coefficient ρ is zero, the cash flow dynamics reduce to those assumed in the Gordon growth model. Since we assume that β_0 is known at t = 0, substituting for cash flows from (5), taking expectations, and then taking the infinite sum results in the following proposition about the firm's asset valuation. Proposition 1: Given assumptions (1)-(5), the firm's asset value can be expressed in closed form. Under constant expected returns, μ̃_t = μ for all t, Proposition 1 reduces to the familiar Gordon growth model with uncertainty.
Relation between Implied Cost of Capital and Expected Returns
Prominent in recent accounting research is the "implied cost of capital" literature (e.g., Botosan 1997, Claus and Thomas 2001, Gebhardt, Lee and Swaminathan 2001) that regards cost of capital as an internal rate of return derived from the discounted dividends formula, or equivalently, its accounting transformations. However, this treatment is grounded in asset pricing models for which expected return is assumed to be constant. Taking implied cost of capital as an ex ante measure of expected percentage returns, these studies analyze how this measure speaks to firms' risk exposures or aggregate market risk premiums. In this section, we theoretically explore the average relation between implied cost of capital and expected returns and depict significant aspects in which they may differ when expected returns are assumed to be stochastic.
To begin, we formally define the implied cost of capital as the constant return that equates the present value of cash flows with asset value, A_0 = Σ_{t=1}^{∞} exp(−π_0 t) E_0[c̃_t], where π_0 is the logarithm of the implied cost of capital at time 0. Because we are considering the valuation of assets by discounting future cash flows, the implied cost of capital can be considered as the weighted average cost of capital or WACC as defined in corporate finance textbooks.
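To illustrate definition (11) numerically, here is a small sketch (ours, with made-up inputs and a finite horizon standing in for the infinite sum) that backs out the constant log-rate π_0 from a price and a stream of expected cash flows.

```python
import numpy as np
from scipy.optimize import brentq

def implied_cost_of_capital(price, expected_cash_flows):
    """Constant log-return pi_0 such that price = sum_t exp(-pi_0 * t) * E_0[c_t]."""
    t = np.arange(1, len(expected_cash_flows) + 1)
    cf = np.asarray(expected_cash_flows, dtype=float)
    pv_gap = lambda pi: (cf * np.exp(-pi * t)).sum() - price
    return brentq(pv_gap, 1e-6, 1.0)  # bracket the root between ~0% and 100%

# hypothetical example: expected cash flows growing at 3% a year over 50 years, price 100
cfs = 5.0 * 1.03 ** np.arange(50)
print(implied_cost_of_capital(100.0, cfs))  # the implied (log) cost of capital
```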
Combining assumption (1) and definition (11) and applying similar calculations to those in the previous subsection, we obtain an expression for the firm's asset value as a function of the implied cost of capital (equation (12)). It immediately follows that, under constant expected returns, μ̃_t = μ for all t, equations (10) and (12) coincide, so the implied cost of capital equals the expected return. In expression (13), μ_0 depends on the realization of the random variable β̃_t at time 0. To obtain an average relation between expected return and implied cost of capital, we step back and take the unconditional expectation of both sides of (11), which leads to the following proposition. Proposition 2: Given assumptions (1)-(5) and definition (11), the ex ante relation between implied cost of capital and expected return is given by equation (14). Therefore, under stochastic expected returns, on average, the implied cost of capital differs from the expected return by a function of beta volatility, σ_β, cash flow volatility, σ_c, the correlation between expected returns and cash flows, ρ, and growth in cash flows, g. Since empirical studies tend to focus on the cost of capital applicable for equity valuation rather than the asset valuation we have considered up to this point, in the next section we extend the analysis to consider the effects of leverage.
Effect of Leverage
For a levered firm, the expected return on equity, exp(μ_{Et}), is implied by its relation to the expected return on assets, the risk free rate, and the leverage ratio, where μ_{Et} is the logarithm of the expected equity return and k is the debt to asset ratio. Similar to the definition of the implied cost of capital on assets in section 2.2, we define the implied cost of equity capital in equation (16), where D_0 is the value of debt at time 0, D̃_{t+1}, t = 0, 1, 2, ..., are the future dividends, and π_{E0} is the logarithm of the implied cost of equity capital at time 0. With these definitions in place, we now consider the relation between π_{E0} and μ_{E0}. Miles and Ezzell (1980) demonstrate that, if the expected returns on assets, the cost of debt, and the leverage ratio are constants, then the implied cost of equity capital used in (16) is equivalent to the expected return on equity. In our case, while the cost of debt is assumed to be a constant, the expected returns on assets are not, so Miles and Ezzell's (1980) results will not directly apply. However, we note that equation (11) can be viewed as a case where a pseudo-sequence of asset prices satisfies a constant expected return, and we can similarly define a sequence of pseudo-prices for equity. If we further assume that the leverage ratio in the pseudo-price space is a constant, then the implied cost of asset capital and the implied cost of equity capital are linked by the relation in equation (19), which holds in the pseudo-price space. Combining equation (19) with (14), we obtain the relation between the implied cost of equity and the expected returns on equity when the latter are stochastic. Proposition 3 gives, under assumptions (1)-(5) and (19) and definitions (11) and (16), the ex ante relation between the implied cost of equity and the expected returns on equity. It follows immediately that Proposition 3 reduces to Proposition 2 if the firm is unlevered, i.e., k = 0. Under the general condition of positive leverage, on average, the difference between the implied cost of equity capital and expected returns on equity is larger than the difference between the implied cost of capital for an unlevered firm and the expected returns on assets.
III EMPIRICAL IMPLICATIONS
The above analysis generates a number of empirical implications. First, in the implied cost of capital literature, estimates of the ex ante market risk premium run about 3% (e.g., Claus and Thomas 2001, Gebhardt, Lee and Swaminathan 2001, and Easton, Taylor, Shroff and Sougiannis 2002), much lower than estimates based on historical average returns, which are between 6% and 8% depending on the time period and the method of calculation (see, for example, Ibbotson Associates Yearbook 2005). An explanation offered for the discrepancy is that the market risk premium is declining, or that the long run average of US equity returns is biased too high because the US has been fortunate in the last century.
Campbell (1991) and Campbell and Ammer (1993) found that, at the market level, cash flow news and expected return news are weakly negatively correlated. Accordingly, our results suggest that estimates of ex ante risk premiums inferred from implied cost of capital should be lower than those inferred from historical average returns.
To calibrate the magnitude of this difference numerically, we sought plausible estimates of the parameters contained in equation (21). As mentioned in the model setup, although our analysis assumes that time variation in expected returns comes solely from the time variation in beta, we made this choice for purely expositional reasons. In reality, the time variation in expected returns could also come from the time variation in risk free rates and market risk premiums. Of course, at the market level, beta is one, so the volatility in expected returns is driven by the volatility of risk free rates and market risk premiums. Since the correlation between cash flow news and expected return news is small, the value of equation (21) is most sensitive to the volatility in expected returns. If we set σ_c = 15%, g = 5%, and ρ = −10%, then the difference will be 1.2%, 2%, and 3% for standard deviations of expected returns of 10%, 13% and 16%, respectively. While, historically, the volatility of realized market returns can reach as high as 20%, only a part of it is due to the volatility in expected returns. If half the variance in market returns is due to the variation in expected returns, then the standard deviation of expected returns should be close to 14%, which translates into a 2.3% average difference between expected returns and implied cost of capital. Campbell (1991) estimates that more than half of the stock volatility is due to the volatility in expected returns; however, he also cautions that the estimates are not precise. We therefore conclude that the difference between implied cost of capital and expected returns is likely to be a significant factor in explaining the empirical results in the literature, but it is unlikely to be the only explanation.
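The Jensen's-inequality wedge can also be illustrated by simulation. The stylised Monte Carlo sketch below is ours, not the paper's specification: it takes i.i.d. normal log expected returns and non-stochastic expected cash flows (the ρ = 0 case), prices the asset by simulation, and then backs out the implied cost of capital, which comes out below the average expected log return by roughly one half of its variance.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
T, n_paths = 150, 20_000
m, s, g = 0.09, 0.13, 0.05                  # mean/vol of log expected returns, cash flow growth
c = np.exp(g * np.arange(1, T + 1))         # deterministic expected cash flows (rho = 0)

mu = rng.normal(m, s, size=(n_paths, T))    # i.i.d. stochastic log expected returns
discount = np.exp(-np.cumsum(mu, axis=1))   # exp(-(mu_0 + ... + mu_{t-1}))
price = (discount * c).mean(axis=0).sum()   # Monte Carlo value of sum_t E[discount_t * c_t]

# implied cost of capital: constant pi with price = sum_t exp(-pi t) E[c_t]
t = np.arange(1, T + 1)
pi = brentq(lambda x: (np.exp(-x * t) * c).sum() - price, 1e-4, 0.5)

print(f"average expected log return: {m:.4f}")
print(f"implied cost of capital:     {pi:.4f}   (roughly m - s**2/2 = {m - s**2/2:.4f})")
```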
Our results are likely to be more pronounced in the cross-section, because in addition to the variability in the market risk premium, the variability in beta estimates also contributes to the difference between the expected returns and the implied cost of capital. Prior research such as Gebhardt, Lee and Swaminathan (2001) and Gode and Mohanram (2003) relates implied cost of capital measures to firm characteristics; our analysis implies a regression relation linking π to μ and to terms involving λ, σ_β, ρ, and σ_c, where ε is the regression residual (the time and firm subscripts are omitted).
Our analysis also has implications for studies that investigate the efficacy of the implied cost of capital measures by checking whether these measures can help to predict future returns. Guay, Kothari and Shu (2003) regress realized returns on implied cost of capital measures; in that specification, r_1 is the next period realized stock return and η is the regression residual. Equation (23) predicts a coefficient of one on the implied cost of equity capital, provided that one controls for leverage, growth, idiosyncratic risk, and the volatility of the expected returns.
However, the estimated coefficient will be biased if controls for these variables are omitted since, as we have demonstrated, these variables are correlated with the difference between implied cost of capital and expected returns.
The Easton and Monahan (2005) finding that the implied cost of capital becomes more significantly correlated with future returns when the expected growth rate is low is consistent with our result that the difference between expected returns and the discount rate is negatively correlated with the expected growth rate. Along similar lines, our analysis further suggests that the strength of the correlation will also be higher if the firm's beta displays less time series variation, or if the firm has lower leverage, since in both cases the difference between the expected returns and the implied cost of capital is small.
Finally, our analysis suggests that empirical studies that employ implied cost of capital, as a proxy for expected return in examining pricing implications of characteristics of the information environment, must guard against spurious correlation. While a significant correlation between the implied cost of capital and a test variable can be due to a significant correlation between expected return and the test variable, it can also be due to a correlation between the test variable and omitted controls for leverage, growth, and beta and cash flow volatility. For example, Botosan (1997) and Botosan and Plumlee (2002) examine the correlation between implied cost of capital and firm's disclosure score as reported by financial analysts. Though these studies control for conventional risk factors such as book to market ratio and size, they did not control for the variables we identify. To the extent that the firm's disclosure policy is correlated with growth as documented by Lundholm and Lang (1996), the correlation between implied cost of capital and the disclosure score could be seriously confounded by the correlation between growth and disclosure scores.
IV CONCLUSION
Assuming stochastic expected returns, we have shown that the implied cost of equity capital is a function of expected return on equity, leverage, growth, beta volatility, and cash flow volatility. Controlling for the expected return, the dependency of implied cost of equity capital on leverage, growth, beta volatility, and cash flow volatility arises because price is a nonlinear function of the expected returns. Accordingly, Jensen's inequality comes into play when the expected returns are stochastic. When expected returns are a constant, these variables drop out of the relation between implied cost of capital and expected returns, and we are back to Samuelson's (1965) classical equivalence result.
At a modeling level, while there are similarities with models of term structure (Cox, Ingersoll, and Ross, 1985), as in Ang and Liu (2004), we start from an assumption of stochastic expected returns rather than stochastic discount factors (pricing kernels).
We extend Ang and Liu (2004) by adding structure that enables us to derive a closed form characterization of the differences between expected return and implied cost of capital. Additionally, we generalize Miles and Ezzell's (1980) model to bring effects of leverage into play.
Our analysis suggests that, even if expected returns are purely determined by a factor model and beta risk is the only risk that is priced, one might observe results such as those documented in empirical studies due to the result that under stochastic expected returns the implied cost of capital and expected returns are, on average, not equivalent.
Examples of such results include estimates of equity risk premiums substantially below historical averages (e.g., Claus and Thomas, 2001; Gebhardt et al., 2001), relations between implied cost of capital and measures of leverage, growth, and variables associated with firm-specific risks (e.g., Gebhardt et al., 2001; Gode and Mohanram, 2003), weak associations between implied cost of capital and future returns conditional on growth (e.g., Guay et al., 2003; Easton and Sommers, 2006), and associations between implied cost of capital and disclosure policies (e.g., Botosan, 1997; Botosan and Plumlee, 2002).
Given this theoretical perspective, it may make sense to revisit past studies such as those referred to above in order to assess whether the results are robust after controlling for the difference between the implied cost of capital and expected returns. Finally, our analysis recommends caution in conducting future empirical work; one must consider the prospect of differences between implied cost of capital and expected returns when one seeks to use the former as a proxy for the latter.
|
2014-10-01T00:00:00.000Z
|
2008-04-15T00:00:00.000
|
{
"year": 2008,
"sha1": "06995ee1a89820810dbe215684fea1ee549733ea",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11142-009-9093-8.pdf",
"oa_status": "HYBRID",
"pdf_src": "CiteSeerX",
"pdf_hash": "90d47bb0976dd0b579424de9d0b8f6d6e64be8d0",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
}
|
617729
|
pes2o/s2orc
|
v3-fos-license
|
Systematic Review of Cadaveric Studies on Anatomic Posterior Cruciate Ligament Reconstruction: The Landmarks in Anatomic Posterior Cruciate Ligament Reconstruction
Recently, several new techniques for anatomic posterior cruciate ligament reconstruction (PCLR) have emerged and are believed to restore the normal anatomy of the posterior cruciate ligament more accurately. Despite the latest trend, the optimal methods for anatomic PCLR remain controversial. The purpose of this research is to review surgical techniques for PCLR in cadaver studies and suggest consistent and reproducible technical criteria. For the review of the literature, MEDLINE and EMBASE were screened for articles on anatomic PCLR. Only basic science studies on PCLR performed on human cadavers and written in English were included. Seventeen studies were included in this systematic review. Only the tunnel positions, graft types, and surgical techniques were reported in the majority of the studies. There were many variations of the reported tunnel positions, graft types, and surgical techniques among the studies. In most studies, surgical techniques for consistent and reproducible anatomic PCLR were not explained clearly. Therefore, high level medical research should be encouraged in order to establish standard surgical techniques for anatomic PCLR.
introduction
In recent years, more and more attention has been directed towards the biomechanics of anatomic posterior cruciate ligament reconstruction (PCLR). Past studies showed that PCLR would neither prevent the knee from developing osteoarthritis nor fully restore the normal knee kinematics 1) . However, during the past decade, there has been rapid development in surgical techniques, including estimation of ligament insertion sites, tunnel positioning techniques, graft types, and graft fixation methods. On the other hand, research using anatomic landmarks, such as the medial intercondylar ridge, medial bifurcate ridge, and posterior edge of shelf, or regarding preoperative planning or imaging techniques for postoperative evaluation is rare. The purpose of this research was to review surgical techniques for anatomic PCLR in cadaver studies and to suggest consistent and reproducible technical criteria. Therefore, a descriptive analysis was performed on surgical data reports. We hypothesized that the description of surgical techniques in those reports would be insufficient and thus it would not be feasible to set up clinical settings for anatomic PCLR.
methods
A systematic and descriptive review on surgical techniques for PCLR was undertaken. Clinical trials were excluded from this systematic review; cadaver studies on anatomic PCLR were included in this study. Only studies providing a description of surgical techniques and involving human cadavers were eligible for inclusion. A systematic electronic search was performed using the MEDLINE via PubMed and EMBASE databases. Studies that were published between 1999 and 2013 were included. The search was carried out by 2 observers in 2013. The following key search terms were used in all fields: 'posterior cruciate ligament' OR 'PCL' AND 'anatomic' OR 'anatomical' AND 'reconstruction' OR 'surgery' AND '1999:2013'. The search was restricted to English. Review articles, studies that were covered by 2 databases, clinical studies, and animal studies were excluded. Selection of studies was done by reading the abstracts, and if necessary, the full texts. For inclusion into the review, two authors independently analyzed the full texts using the aforementioned criteria. Any disagreements between the 2 observers were discussed to reach an agreement. Finally, the reference lists of the selected studies were investigated to identify additional studies that had not been found through our electronic search.
There are no established criteria yet to determine whether a PCLR is performed anatomically or not. So we initially decided to include all papers in which the authors stated that the reconstructive surgical procedure was 'anatomic' . However, considering the recent emphasis on the concept of 'anatomic' PCLR, we deemed it would be unfair to include all PCLR papers. Therefore, we analyzed the anatomic degree of reconstruction in those studies based on the assessment of the insertion site of the footprint of posterior cruciate ligament, tunnel position, and anatomical landmarks (medial intercondylar ridge, medial bifurcate ridge, and posterior edge of shelf), since most authors did not state their technique was 'anatomic' . The anatomical landmarks are displayed in Table 1.
A descriptive review of the reports providing a variety of surgical data was performed with the utilization of a predefined standardized data sheet. The authors filled in a template regarding suggestions for anatomic PCLR, which was used for analysis of the studies ( Table 1). The data sheet included a column for all data as well as an additional column for pooling more specific data. The analysis was not performed in a blinded fashion. The data were recorded as either 'reported' or 'not reported' . Also, the ratios of studies presenting certain data to the total included studies were calculated as percentages. Assessments on detailed procedures or methods were not performed. In addition, if an item was recorded as 'reported' , more specific data were collected when possible for the purpose of pooling. Consensus was reached through discussion for any disagreements.
results
There were 185 search results on MEDLINE via PubMed and 123 on EMBASE according to the aforementioned search criteria ( Fig. 1). Of these 308 studies, 246 were excluded because the abstracts showed they did not meet the inclusion criteria. Most of the excluded studies were either clinical trials or not written in English. Of the remaining 62 papers, 17 papers were selected by both observers and the rest were excluded after discussion due to disagreement. Therefore, 17 papers were selected for final inclusion of the systematic review 2,7,[9][10][11][12][13][14][15][16][17][18][19][20][21][22][23] . The results of 17 included studies are summarized in Table 2. The 45 studies were excluded mostly because the authors did not claim that their reconstructive technique was anatomic.
Whether certain surgical data were reported or not reported in the included papers is displayed in Table 3. Visualization indicates that a study presented diagrams or pictures showing the femoral or tibial attachment site. The femoral and tibial insertion sites were visualized in approximately two-thirds of the included studies, whereas only 12% of anatomic studies investigated the actual insertion sites of the PCL. As with the use of the medial intercondylar ridge and the medial bifurcate ridge for femoral tunnel positioning, the posterior edge of the shelf was rarely used for tibial tunnel positioning (Figs. 2 and 3). The anatomic positions of tunnels or footprints proposed in the studies are described in Table 2. Seventy-eight percent and 61.7% of the studies included data on tunnel placement in the femoral and tibial insertion sites, respectively. Seventy-eight percent of them also provided visual proof in their papers (Table 4). Imaging techniques were rarely used in these cadaveric trials: standard radiographs, computed tomography (CT), and three-dimensional CT were used in only 9.1% each. Magnetic resonance imaging was used in 18.2%, and the use of other methods such as fluoroscopic images, computer graphics, or gross cadaveric dissection photographs was reported in 27.3%.
The positions of femoral and tibial tunnels were reported to be at a fixed distance from another anatomic structure in 66.5% and 55.9%, respectively. On the femoral side, the authors used the intercondylar roof, PCL insertion site, and the edge of the articular cartilage for guidance of femoral tunnel placement. On the tibial side, authors used the anterior margin of the tibia, the medial border of the tibial plateau, and the vertical distance from the tibial plane as reference points. No superior graft has been identified, and the graft fixation method was reported in approximately half of the included studies (Table 5).
In these studies, either single-bundle PCLR or double-bundle PCLR was used as a tunnel reconstruction method, and superiority between the two methods could not be determined.
discussion
The growing attention to anatomic PCLR has led to a recent increase in the number of basic science studies evaluating potential benefits and limitations of this technique. However, despite the outcomes of many studies, the true definition of anatomic PCLR has not yet reached a consensus. In this review, it was hypothesized that the description of surgical techniques would be insufficient to set up clinical settings for anatomic PCLR.
This review revealed that data for anatomic PCLR, such as the insertion site and tunnel position, are not sufficient despite the current increase in the amount of PCL research. In many studies, femoral tunnel positions were not determined by referring to anatomic sites and the o'clock reference was used instead. However, the size and shape of the PCL insertion site, tibial plateau, and femoral intercondylar notch anatomy are different from patient to patient 21,22,24) . Therefore, the o'clock reference would not be beneficial for anatomic reconstruction because it provides a non-reproducible generic two-dimensional formula for tunnel placement. The o'clock reference was originally developed to be used with radiographs taken with the knee in extension, which can be quite reliable under this circumstance 25) . Later, it was also utilized for arthroscopic measurements without taking into consideration that the knee is flexed in this situation. Differences in the knee flexion angle and viewing portal have caused much confusion when using the o'clock description 26) . The mean tibial tunnel position in the studies we selected for review was 10−15 mm below the articular joint. In cadaveric studies and clinical trials, authors utilize various anatomical landmarks to describe the tibial insertion site 23,[27][28][29] . However, these studies mostly used only one reference value, although at least two coordinates are necessary to define a geographical point, and more are needed for an accurate 3D mapping. Radiological studies also attempted to identify landmarks for definition of the PCL tibial insertion site 10,11) . However, they did not rely on identical reference points and did not distinguish between the anterolateral and posteromedial bundles 28) . As evidenced in this review, accurate methods for tibial tunnel positioning have rarely been reported in many studies, demonstrating the need for a detailed description of the PCL fovea to establish consistent, reproducible anatomical landmarks for surgery. The increased interest in anatomic PCLR has led to a great number of basic science studies evaluating potential benefits and limitations of this technique 2,7,[9][10][11][12][13][14][15][16][17][18][19][20][21][22][23] . However, the true definition of anatomic PCLR has not reached a consensus, and therefore, the interpretation of 'anatomic' varies from study to study. The aim of many cadaver studies on PCLR is to study the effects of differences in reconstruction techniques and tunnel positions on the knee biomechanics 15,16,19,20) . Recent research furthermore puts its emphasis on comparisons of surgical methods and approaches for 'anatomic' PCLR 6,7,14,15,30) . As aforementioned in the present study, superiority between single-bundle PCLR and double-bundle PCLR with regard to tunnel reconstruction could not be determined. So we believe this should be elucidated in further research. Basic science is the milestone for clinical research and ultimately treatment strategies. Providing detailed description of a surgical method helps readers make an appropriate interpretation of the study results and be assured that the reconstruction was indeed performed in an anatomic fashion. The ideal way to implement this would be to establish standards for describing anatomic techniques, encompassing all essential aspects needed to define anatomic PCLR. Authors, for their part, should strive to provide clear description of their methods using figures, pictures, and diagrams.
Overall, we found that a variety of surgical data were not presented in current cadaver studies on anatomic PCLR. The absence of certain data on surgical techniques does not necessarily imply certain procedures were not performed. However, the recent high standard of medical research requires accuracy when reporting methods and findings. Description of surgical techniques in clinical studies may be considered unimportant; however, it should be addressed in detail in cadaver studies considering that they are used as a template for clinical trials. Anatomic PCLR can be performed in many different ways, and such diversity of methods affects the study outcomes. As a result, in the absence of sufficient description of techniques, it should be difficult to interpret the outcomes and make comparisons with other studies.
There were several limitations of this systematic review. First, it was specifically focused on studies that report on anatomic PCLR techniques in cadaver models. Second, the search was limited to English papers available on MEDLINE via PubMed or EMBASE. Third, the data extraction was not performed in a blinded fashion. However, despite these limitations, we believe this systematic review provides a rare insight into the overall factors of anatomic PCLR and the current status of studies on the technique.
Most basic science studies regarding anatomic PCLR in cadavers do not provide detailed description of surgical techniques for consistent and reproducible anatomic PCLR. Therefore, we believe high level medical research should be encouraged in order to establish standard surgical techniques and delineate the definition of anatomic PCLR.
|
2018-04-03T01:33:06.396Z
|
2014-12-01T00:00:00.000
|
{
"year": 2014,
"sha1": "ba5610f278a5ca84783da01c757dc6e4c43e15b6",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5792/ksrr.2014.26.4.191",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba5610f278a5ca84783da01c757dc6e4c43e15b6",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
262180315
|
pes2o/s2orc
|
v3-fos-license
|
Ethanolic Extract of Polygonum minus Protects Differentiated Human Neuroblastoma Cells (SH-SY5Y) against H2O2-Induced Oxidative Stress
Neuronal models are an important tool in neuroscientific research. Hydrogen peroxide (H2O2), a major risk factor of neuronal oxidative stress, initiates a cascade of neuronal cell death. Polygonum minus Huds, known as ‘kesum’, is widely used in traditional medicine. P. minus has been reported to exhibit a few medicinal and pharmacological properties. The current study aimed to investigate the neuroprotective effects of P. minus ethanolic extract (PMEE) on H2O2-induced neurotoxicity in SH-SY5Y cells. LC–MS/MS revealed the presence of 28 metabolites in PMEE. Our study showed that the PMEE provided neuroprotection against H2O2-induced oxidative stress by activating the Nrf2/ARE, NF-κB/IκB and MAPK signaling pathways in PMEE pre-treated differentiated SH-SY5Y cells. Meanwhile, the acetylcholine (ACH) level was increased in the oxidative stress-induced treatment group after 4 h of exposure with H2O2. Molecular docking results with acetylcholinesterase (AChE) depicted that quercitrin showed the highest docking score at −9.5 kcal/mol followed by aloe-emodin, afzelin, and citreorosein at −9.4, −9.3 and −9.0 kcal/mol, respectively, compared to the other PMEE’s identified compounds, which show lower docking scores. The results indicate that PMEE has neuroprotective effects on SH-SY5Y neuroblastoma cells in vitro. In conclusion, PMEE may aid in reducing oxidative stress as a preventative therapy for neurodegenerative diseases.
Introduction
Oxidative stress is a condition that happens when there is an imbalance between oxidants and antioxidants in a living system.It is linked to many neurological disorders including Alzheimer's disease, Huntington's disease, Parkinson's disease, and amyotrophic lateral sclerosis.The imbalance happens when there are too many reactive oxygen species (ROS) or when the antioxidant system does not work appropriately [1].The body has specific defense mechanisms in the form of endogenous antioxidants, such as glutathione, and antioxidant enzymes, such as superoxide dismutase (SOD), glutathione peroxidase (GPX), and catalase, which combat the harmful effects of excessive ROS [2].A growing body of research has revealed that, under normal conditions, oxidative damage causes reactions that have a negative impact on beneficial markers in the antioxidant mechanistic pathway that is responsible for neutralizing harmful stimuli.Exogenous antioxidants, on the other hand, tend to counteract such effects, but when present in large quantities, they inhibit the response generated by their indigenous counterpart, increasing the cells' sensitivity to stimuli and eventually leading to death [3][4][5].The antioxidant properties of plant materials for maintaining health and preventing various diseases prompted scientists to investigate various herbs.
Polygonum minus Huds is a well-known traditional herbal plant in Malaysia, and it is commonly referred to as 'kesum' in the Malay language [6].Various Polygonum species have demonstrated antioxidant properties [7,8].P. minus is a plant with a high concentration of flavonoids and other antioxidants that reduce oxidative stress in neuronal membranes [9].P. minus has been shown to have in vitro antioxidant properties, low-density lipoprotein (LDL) oxidation inhibition, antiulcer activity, analgesic activity, anti-inflammatory activity, antiplatelet aggregation activity, antimicrobial activity, digestive enhancing property, and cytotoxic activity [10][11][12][13][14][15].Researchers have linked this plant's pharmacological effects to its high antioxidant capacity.This plant's aqueous, methanolic, and ethanolic extracts demonstrated high antioxidant activity, which was primarily attributed to its phenolic compounds [10,[16][17][18].The isolated compounds from P. minus such as polygonumins A [19], polygonumins B, C, and D [6] were reported to have diverse potential medicinal activities.In a study conducted by Yaacob [20], it was found that decanal and dodecanal are the primary aldehydes responsible for contributing flavor to P. minus.In addition to decanal and dodecanal, Yaacob's analysis revealed the presence of several other compounds in P. minus, including 1-decanol, 1-dodecanol, undecanal, tetradecanal, 1-undecanol, nonanal, 1-nonanol, and β-caryophyllene.In a study carried out by Baharum et al. [21], a total of 42 compounds were successfully identified using gas chromatography-mass spectrometry (GC-MS).This number greatly exceeded the count given by Yaacob [20].Some of the compounds found included α-pinene, drimenol, humulene, caryophyllene, farnesol, neoisolongifolene, 8-bromo-, and isobornyl acetate.To date, the literature on the neuroprotective effect of P. minus is still lacking.Thus, this present study aims to determine whether P. minus ethanolic extract (PMEE) can protect differentiated human neuroblastoma SH-SY5Y cells from H 2 O 2 -mediated oxidative stress.
LC-MS/MS Analysis
The positive and negative LC-MS/MS chromatograms of PMEE were shown in Figure 1.The presence of peaks in the chromatogram indicated the presence of various PMEE-derived compounds.The LC-MS/MS characterization of the phenolic compounds in PMEE revealed the presence of 28 metabolites, listed in Table 1.These metabolites have a variety of therapeutic properties, including anti-inflammatory, antioxidant, and anticancer properties.
In Table 1, PMEE was shown to contain various classes of natural compounds such as caffeic acid, (−)-epicatechin, kaempferol, gallic acid, eupatilin (5,7-dihydroxy-3′,4′,6-trimethoxyflavone) and rhamnetin (7-methylquercetin), which have been proven to play a role in scavenging H2O2 and preventing cell damage by oxidative stress [22][23][24][25][26][27]. On the other hand, quercetin and quercitrin reduced the accumulation of ROS and nitric oxide while protecting against cytokine-induced cell death [28]. Meanwhile, Kwon et al. [29] demonstrated the neuroprotective activity of quinic acid isolated from the roots of Arctium lappa Linne. It protected PC12 cells from oxidative stress, which could be attributed to the antioxidant capacity of quinic acid [29]. The detection of various phenolic compounds, based on compound identification, strengthens the suggestion that other bioactive compounds in addition to polyphenols or flavonoids were also present and contributed to the diverse bioactive characteristics of P. minus.
Phase Contrast Microscopy and Immunocytochemistry Confirmed Neuronal Marker β-Tubulin III Expression
To demonstrate the neuronal phenotype of the differentiated SH-SY5Y cells used in this study, the cells were examined under a fluorescence microscope using the phase contrast mode. Figure 2a shows that undifferentiated cells had no or significantly fewer neurites. In contrast, after seven days of differentiation, the neurite characteristics persisted and grew in the differentiated cells (Figure 2b), indicating that the SH-SY5Y cells had differentiated into typical neuronal cells. Immunochemistry was performed on differentiated cells in addition to morphological evaluation of SH-SY5Y-derived neuronal cells to evaluate the differentiation process. Undifferentiated and differentiated SH-SY5Y cells were stained and incubated with antibodies against the neuron-specific protein β-tubulin III, which is a marker of neurite development. The undifferentiated but stained cells showed low green fluorescence in the cytoplasm or neurite (Figure 2c), indicating that the marker was not present within the cells. Figure 2b,d show the morphological changes observed in the SH-SY5Y cell population throughout the differentiation period. Greenish fluorescence was observed in the differentiated cells' cytoplasm and neurites (Figure 2d), indicating a high level of β-tubulin III expression in both the cytoplasm and neurites of differentiated SH-SY5Y cells. Similar findings have been reported in previous studies [30,31], in which RA treatment resulted in neurite extension in SH-SY5Y cells. Phase contrast microscopy confirmed the success of the differentiation process. Consequently, differentiated cells were used throughout the experiment.
Cytotoxicity Effect of PMEE and H 2 O 2 on SH-SY5Y Viability
The MTT assay was used to determine cell viability in differentiated SH-SY5Y cells.The cells were exposed to PMEE at various concentrations (0.5 to 1000 µg/mL) for 24, 48, and 72 h.As shown in Figure 3, the PMEE-treated cells were viable across the concentrations used in a time-dependent manner.A similar pattern was observed in cells treated with curcumin at various concentrations (0.8 to 100 µg/mL) for 24, 48, and 72 h (Figure 4).As described in Section 3.6, the cytotoxicity of H 2 O 2 was also determined by exposing differentiated cells to H 2 O 2 at various concentrations (7.8 to 1000 µM/mL) for 4 h.In 4 h, 220 µM H 2 O 2 caused approximately 50% cell death, according to the results obtained.Therefore, it was selected as the concentration of H 2 O 2 to challenge PMEE pre-treated cells in the subsequent experiments.According to studies, exposing differentiated SH-SY5Y cells to cytotoxic agents such as hydrogen peroxide for a predetermined period results in oxidative stress and cell death.Several prior studies have demonstrated that H 2 O 2 has a cytotoxic effect on differentiated SH-SY5Y cells used as a model for neuroprotection research [31,32].Furthermore, it was observed that the use of P. minus extract at doses of up to 500 µg/mL did not exhibit any harmful effects on normal human lung fibroblast cell line (Hs888Lu) [33].Ghazali et al. [34] studied the antiproliferative effect of various solvent extracts of P. minus using in vitro MTT assay against HepG2, WRL68, HeLA, HCT 116, MCF-7 and Chang cell lines.The ethanol extract showed lowest IC 50 of 32.25 ± 3.72 µg/mL towards HepG2 cell lines with minimum toxicity in WRL68 normal embryonic liver cells whereas methanol extract showed moderate antiproliferative activity against HCT 116 cell lines (IC50 = 56.23 ± 3.2 µg/mL) [34].P. minus had cytotoxic effects on cancer cells while demonstrating minimal toxicity towards normal cells.This shows that P. minus exhibits a selective effect in safeguarding normal cells.
Neuroprotective Effect of PMEE against H 2 O 2 -Induced Cytotoxicity
Damage to neurons resulting from oxidative stress (primarily reactive oxygen species) is one of the leading causes of neurodegenerative diseases [35].To evaluate the neuroprotective effect of PMEE and curcumin against H 2 O 2 -induced cell death, the differentiated SH-SY5Y cells were pre-treated with a range of PMEE (0.5 to 1000 µg/mL) and curcumin (0.8 to 100 µg/mL) for 24, 48, and 72 h.Then, the pre-treated cells were exposed to IC 50 of H 2 O 2 (220 µM) for 4 h.MTT assay showed that H 2 O 2 inhibited the cells' viability, while pre-treatment of the cells with PMEE provided protection to the cells against the cytotoxic effect of H 2 O 2 across the experimental period (Figure 5a-c) when compared to untreated control cells.However, pre-treatment with 62.5 µg/mL of PMEE demonstrated the highest viability against H 2 O 2 especially after 48 and 72 h (Figure 5d).Moreover, 3.13 µg/mL of curcumin demonstrated the highest viability effect on the differentiated cells after 48 h of pre-treatment (Figure 5e).Hence, they were selected as working concentrations for the rest and standard control in the subsequent experiments.
The potential utilization of herbal medicines as a novel preventative neuroprotective strategy in the context of neurodegenerative illnesses is a subject of interest.These natural therapies could be explored for their applicability in individuals who are at risk of developing such conditions [36].Exogenous antioxidants have been shown in studies to help prevent oxidative damage by reducing ROS production in cells, increasing their chances of survival [3,5].The current findings demonstrated how different concentrations of PMEE increased the viability of differentiated neurons in a time and dose-dependent manner.However, 62.5 µg/mL of PMEE demonstrated the greatest potential in that regard, indicating a high capability for reducing susceptibility caused by hermetic response in the cells.The effect was clearly greater after 48 and 72 h of treatment than after 24 h.This is due to the long-term effect on endogenous defensive mechanisms, which reduces the cells' vulnerability to attack from endogenous cytotoxic agents.Previous research found that an ethyl acetate extract of P. minus has a selective antiproliferative effect on HepG2 cells while having little cytotoxicity on normal liver cells [34].
PMEE Pre-Treatment Influenced Gene Expressions in Nrf2/ARE Pathway
The expression level of Nrf2, NQO1, SOD1, SOD2, and catalase under the Nrf2/ARE signaling pathway increased significantly (p < 0.05) in cells exposed to PMEE plus 4 h of H2O2 compared to cells treated with PMEE alone (Figure 6).The pre-treatment of differentiated neuron cells with PMEE increased the expression of these genes above the normal level.When Nrf2 is released from the cytoplasm, it translocates to the nucleus as a transcription factor.The factor binds to the antioxidant response element (ARE) to form a complex that binds to the promoter region of phase II antioxidant genes to initiate transcription of the phase II antioxidant proteins, which play a role in counteracting the toxic effects of free radicals and protecting neurons from damage and death [37].Though under oxidative stress, the presence of PMEE enhanced the expression of Nrf2 genes and their translocation to the nucleus.In addition, the increased expression of NQO1 in cells pre-treated with PMEE suggests a potential gene-level neuroprotective effect of PMEE.
The mRNA level of the GST gene increased significantly (p < 0.05) when the cells were pre-treated with PMEE with or without an H2O2 challenge compared to the H2O2 control group (Figure 6b). GST proteins are significant antioxidant enzymes that regulate stress-induced signaling pathways and are essential for scavenging the free radicals produced by cells [38]. Increased expression or activity of this enzyme indicated improved antioxidant activity. PMEE's ability to prevent neuronal death caused by oxidative stress was indicated by its ability to increase GST expression. PMEE pre-treatment prior to H2O2 exposure resulted in a significant (p < 0.05) decrease in the mRNA expression level of the GCLC gene in differentiated SH-SY5Y cells compared to PMEE alone (Figure 6d). GCLC encodes glutamate-cysteine ligase, a phase II antioxidant marker with extraordinary antioxidant activity in humans and other closely related species [31]. In reaction to oxidative stress as well as other inflammatory factors, the level of GCLC increases to neutralize the harmful environmental effect and ensure the survival of affected cells. However, inducing H2O2 for 4 h in PMEE pre-treated differentiated SH-SY5Y cells did not enhance the expression level of GCLC.
SOD1 and SOD2 gene expression increased significantly (p < 0.05) in differentiated cells treated with PMEE prior to H2O2 exposure, as compared to control cells and cells treated with PMEE alone (Figure 6e,f). The two genes code for the enzymes superoxide dismutase 1 and 2, which are abundant in neural tissue. Part of the physiological and antioxidant significance of these proteins is their ability to regulate superoxide concentration by converting superoxide to hydrogen peroxide, a substrate for another phase II enzyme that is less damaging than superoxide itself [39]. This phenomenon facilitates neuron cell survival in a toxic environment. In contrast, the expression of the HO-1 gene decreased significantly (p < 0.05) in all treatment groups (Figure 6g) compared to untreated cells, indicating that the treatments did not enhance the expression of HO-1 in differentiated SH-SY5Y cells. Moreover, after 48 h, the level of mRNA for catalase gene expression increased significantly (p < 0.05) in PMEE pre-treated cells plus 4 h of H2O2 exposure compared to its expression in the H2O2 control and PMEE-treated cells (Figure 6h). Catalase plays a crucial antioxidant role in the body by converting harmful substances, such as cellular hydrogen peroxide, into less toxic forms (oxygen or water) [40].
PMEE Pre-Treatment Influenced Gene Expressions in NF-κB/IκB Pathway
In oxidative stress conditions with or without PMEE and curcumin, the mRNA expression level of genes involved in the NF-κB/IκB-mediated neuropathological pathway is drastically altered. This pathway is influenced by the NF-κB, IκB, BACE1, APP, and MAPT genes. All gene expression levels were significantly (p < 0.05) higher in PMEE-treated cells than in H2O2 control cells (Figure 7). Pre-treatment of differentiated SH-SY5Y cells with PMEE for 72 h inhibited the overexpression of the NF-κB gene significantly (p < 0.05) when the cells were exposed to an H2O2-induced toxic environment for 4 h, as compared to its high expression in the PMEE-only treatment group (Figure 7a). Reportedly, inhibition of NF-κB mediates neuroprotection in neurodegenerative diseases such as Alzheimer's and multiple sclerosis [41]. The present findings suggested that PMEE could selectively inhibit the expression of NF-κB subunits, which are accountable for the transcription of inflammatory markers that induce neuronal damage and death. PMEE pre-treatment prior to H2O2 exposure significantly (p < 0.05) increased the level of mRNA for the IκB gene in differentiated neuron cells compared to H2O2 control cells (Figure 7b). The IκB gene encodes the IκBα protein, which prevents the production of inflammatory cytokines. IκBα and NF-κB interact in two ways: by retaining the transcription factor (NF-κB) in the cytoplasm and by inhibiting its DNA binding in the nucleus. PMEE perhaps strengthened the interaction between the two molecules by preventing phosphorylation of IκBα, which leads to the release of NF-κB and its translocation to the nucleus. The expression level of the BACE1 gene decreased significantly (p < 0.05) in PMEE-treated cells prior to 4 h of H2O2 exposure when compared to the expression in H2O2 control cells (Figure 7c). In Alzheimer's and Down syndrome disease models, inhibiting the activity of or silencing this gene was associated with neuroprotection [42]. Thus, the genotoxic effect of PMEE on the expression of the BACE1 gene demonstrated the compound's potential to inhibit the functional BACE1 protein's activity. In contrast, 48 h of PMEE pre-treatment increased APP and MAPT gene expression significantly (p < 0.05) compared with H2O2-treated control cells without PMEE pre-treatment (Figure 7d,e).
PMEE Pre-Treatment Influenced Gene Expressions in the MAPK Pathway
PMEE pre-treatment prior to H2O2 exposure led to a significant (p < 0.05) increase in the expression of the JNK, p38, and PP2A genes in differentiated cells compared to the expression in H2O2 control cells and the PMEE-only group (Figure 8a,b,d). Curcumin (positive control) could not prevent the genotoxic effect of H2O2 on the expression level of JNK, unlike PMEE-treated cells. JNK is a gene that codes for c-Jun N-terminal kinase-3 (JNK3), a neuron-specific isoform protein that plays an essential role in the pathophysiology of a variety of neurological disorders. The upregulation of JNK is activated by H2O2-induced cellular stress. In addition, the p38 gene is a member of the mitogen-activated protein kinase (MAPK) family that plays a crucial role in MAPK pathway-mediated apoptosis [43]. Previous research has demonstrated the neuroprotective effects of PP2A against neuronal apoptosis in cases of traumatic brain injury, acute ischemia, and neurodegenerative disease [44][45][46][47]. This indicated that PMEE protects neuroblastoma SH-SY5Y cells from oxidative stress after 4 h of exposure to H2O2 by increasing JNK, p38, and PP2A gene expression. Meanwhile, the expression level of MKP1 and AKT decreased significantly (p < 0.05) in the presence of PMEE pre-treatment prior to H2O2 exposure compared to the PMEE-only treatment group (Figure 8c,f). In contrast, the presence of PMEE did not increase the expression of PP5 under oxidative stress conditions compared with cells treated with PMEE alone (Figure 8e).
PMEE Pre-Treatment Increased the Expression of Acetylcholine (ACH) in SH-SY5Y Differentiated Cells
It has been demonstrated that ACH inhibits the production of ROS during oxidative stress.Therefore, we examined the effects of ACH concentration before or after H2O2 treatment on the differentiated SH-SY5Y cells using ELISA.The ACH level in the differentiated SH-SY5Y cells was significantly (p < 0.05) increased in PMEE or curcumin pre-treatment prior to 4 h of H2O2 exposure compared to that in H2O2 control cells as shown in Figure 9.In the present study, PMEE and curcumin have been shown to possess positive effect by increasing the expression of ACH under oxidative conditions.
The potential utilization of herbal medicines as a novel preventative neuroprotective strategy in the context of neurodegenerative illnesses is a subject of interest.These natural therapies could be explored for their applicability in individuals who are at risk of developing such conditions [36].The AChE is an enzyme that is responsible for the metabolism of the neurotransmitter ACH, and inhibiting AChE can have therapeutic (e.g., Alzheimer's disease drugs) or neurotoxic effects (e.g., pesticides).Patients with coronary artery disease and Alzheimer's disease pathogenesis had elevated ACH gene expression.The finding reported by Işık and Beydemir [48] suggested that an increase in cellular AChE release results in the formation of neurotoxic β-amyloid plaques and may cause neurodegenerative diseases.According to the cholinergic hypothesis, the inhibition of AChE, an enzyme that catalyzes acetylcholine hydrolysis, increases the levels of ACH in the brain, thus improving cholinergic functions in Alzheimer's disease patients.The use of AChE inhibitors has proven to be an effective approach in the management of neurological conditions such as Alzheimer's disease [49].Therefore, one of the important strategies for treating neurological disease is to maintain the levels of ACH through the inhibition of AChE [50].
The inhibitory actions of the aqueous and methanolic extracts of P. minus leaves on the AChE enzyme have been reported previously [15,51]. A prior study conducted by George et al. [15] demonstrated that the aqueous extract of P. minus had inhibitory effects on cholinesterase activity, with an IC50 value of 0.04 mg/mL and a maximal inhibition rate of 68%. The findings of that study indicate that P. minus exhibits antioxidant and anticholinesterase properties and improves cognitive function in vivo, suggesting that the extract possesses neuroprotective effects. The AChE inhibitory activity of P. istanbulicum was also observed in a dose-dependent manner; the ethanolic extract of P. istanbulicum demonstrated the highest level of inhibition against AChE, with an inhibition rate of 88.2 ± 3.44%, as compared to other Polygonum species such as P. patulum subsp. pulchellum, P. aviculare and P. lapathifolium [52]. In this present study, P. minus showed promising potential as a therapeutic intervention for Alzheimer's disease due to the observed favorable impact of PMEE on the upregulation of ACH concentrations under oxidative conditions.
Molecular Docking
The results of molecular docking between PMEE's identified compounds and the AChE protein (Protein Data Bank ID: 4EY6) are shown in Table 2. The complexes exhibited binding interaction energy values in the range of −9.5 to −5.8 kcal/mol. The complexes of AChE with quercitrin, aloe-emodin, afzelin and citreorosein showed more negative binding values compared to the other compounds; quercitrin showed the highest docking score at −9.5 kcal/mol, followed by aloe-emodin, afzelin, and citreorosein at −9.4, −9.3 and −9.0 kcal/mol, respectively. A lower (more negative) binding energy indicates a better ligand-receptor interaction and thus a higher docking score against AChE.
Figures 10 and 11 demonstrate the results of the ligand-protein interactions between AChE and the four most active compounds as indicated by their binding affinity scores (kcal/mol). The interacting amino acids of AChE with quercitrin were found to be Tyr72, Asp74, Ser293, and Phe295. Meanwhile, the ligand-protein interactions between AChE and the remaining compounds showed that the interacting amino acids at the active site were Ser293 and Phe295 (aloe-emodin); Tyr72, Asp74, Ser293, and Gln291 (afzelin); and Tyr72 and Tyr337 (citreorosein). LigPlot analysis showed the 2D structures, where hydrogen bonding is shown as green dashed lines and hydrophobic interactions as red arcs between AChE and each of the ligands. Furthermore, PyMOL analysis showed the 3D structures of the ligand-protein interactions, where the green color indicates the ligand and the red color indicates the interacting amino acids of the protein.
Plant Collection and Preparation of Ethanolic Extract
P. minus leaves (5 kg) were collected from an experimental plot of INBIOSIS.Original samples (10 kg) were collected from Cameron Highland, Malaysia and a voucher specimen was deposited in the UKMB Herbarium, Universiti Kebangsaan Malaysia.Specimens were identified by a taxonomist and further confirmed by ITS sequencing.P. minus leaves (1 kg) were air dried at room temperature (+27 °C) and powdered using a blender (230-250 mesh).Approximately 360 g of leaf powder were soaked in 7.2 L ethanol.The extraction was performed with ratio 1:20 (w/w) for 72 h at room temperature.The mixture was filtered, and the filtrate was concentrated using an EYELA OSB-2100 rotary vacuum evaporator model N-11005-WD until complete dryness at 40 °C.Subsequently, the semi-dried ethanol extract was freeze dried using Labconco freeze dryer model 74200-30.The extract was referred to as P. minus ethanolic extract (PMEE) and was utilized in subsequent analysis.
Liquid Chromatography-Mass Spectrometry (LC-MS/MS)
LC-MS/MS was performed with slight modifications to the method described by Bingol and Bursal [53]. The separation was performed using a Thermo Scientific C18 column (Acclaim PepMap RSLC, 75 µm × 15 µm, 2 µm, 100 Å) on a Dionex UltiMate 3000 UHPLC system (Thermo Scientific, Waltham, MA, USA). The dry ethanolic extract was dissolved in HPLC-grade methanol. Umbelliferone was used as an internal standard. The sample injection volume was 20 µL, and the column temperature and flow rate were 60 °C and 0.3 mL/min, respectively. The mobile phases were 0.1% formic acid dissolved in water (mobile phase A) and acetonitrile (mobile phase B). The elution was carried out with a 35 min gradient beginning with an increase from 0 to 5% B in the first two minutes, then to 40% B in the next two minutes, and finally to 95% B over the following 16 min. The mixture was held at 95% B for 2 min prior to a 0.1 min increase to 100% B. At 100% B, the mixture was held for four minutes before dropping to 5% B in two minutes. The column was then reconditioned with the initial gradient for seven minutes. MS/MS analysis was performed using a MicroTOF-QIII (Bruker, Bremen, Germany) system equipped with an electrospray ionization (ESI) source operating in positive ionization mode. The nitrogen drying gas was set to 45 psi with a flow rate of 8 L min−1 and a temperature of 200 °C. The ESI spray voltage was fixed at 4.5 kV, and the fragmentor voltage was set at 200 V. Mass spectrum data were recorded over the mass range of 50-1500 m/z. MS-DIAL version 3.70 was utilized for all compound identifications.
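For readers who want to adapt this separation, the gradient program described above can be written out as a simple table of breakpoints. The short Python sketch below is illustrative only: the step durations and %B values are taken from the text, while the cumulative-time printout is merely an assumption about how the stated ~35 min run time is reached.

```python
# Hypothetical tabulation of the LC gradient program described in the text.
# Each entry is (step duration in minutes, %B at the end of the step).
gradient_steps = [
    (2.0, 5),     # 0 -> 5% B
    (2.0, 40),    # 5 -> 40% B
    (16.0, 95),   # 40 -> 95% B
    (2.0, 95),    # hold at 95% B
    (0.1, 100),   # 0.1 min ramp to 100% B
    (4.0, 100),   # hold at 100% B
    (2.0, 5),     # drop back to 5% B
    (7.0, 5),     # recondition with the initial gradient
]

elapsed = 0.0
for duration, percent_b in gradient_steps:
    elapsed += duration
    print(f"t = {elapsed:5.1f} min  ->  {percent_b:3d}% B")

# The durations sum to roughly 35 min, matching the stated run time.
print(f"total run time: {elapsed:.1f} min")
```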
Neuronal Differentiation of SH-SY5Y Cells
SH-SY5Y cells were differentiated into neuron-like cells in accordance with the protocol outlined in Jaafaru et al. [31]. In brief, the cells were seeded in a 6-well plate at a density of 1 × 10 5 cells/well. Following a 24 h incubation period, 2 mL of DMEM/F12 medium containing 3% heat-inactivated FBS and 10 µM retinoic acid (RA) was added to each well. This was performed in the dark with the incubator set to 37 °C with 5% CO2. The differentiation medium was changed every two days for a period of seven days. RA-induced differentiation was examined under phase contrast using an inverted light fluorescence microscope (Zeiss Axio Vert A1, Göttingen, Germany) fitted with an image acquisition system (AxioCam MRm, Göttingen, Germany), and multiple images were taken independently.
Immunocytochemistry (ICC) Assay
To further ascertain the differentiation of SH-SY5Y cells into full neuronal cells by retinoic acid (RA), ICC was conducted according to the protocol described by Jaafaru et al. [31].The cells were differentiated as previously mentioned after being seeded in 24-well plates at a density of 2 × 10 4 cells/well.The differentiated cells were washed three times with cold phosphate buffer saline pH 7.4, at 25 • C followed by incubation with 300 µL fixation solution (4% Paraformaldehyde, 1M NaOH and PBS) at 25 • C for 30 min and washed with PBS thereafter.Permeation solution (1% Triton X-100 and 99% PBS) and blocking (0.3% bovine serum albumin, 10% goat serum, 10% tween 20 and PBS) solution were incubated with the cells at 25 • C for 15 min and 30 min, accompanied with washing at each stage.Antibody for class III β-tubulin (Tuj-1), a cytoplasmic neuron-specific protein, was added in ratio of 1:200 blocking solution with subsequent overnight incubation at 4 • C. The cells were washed with PBS the following day and incubated with Alexa fluoropore-488 secondary antibody conjugate (1:200) in the dark at 25 • C for 2 h.Then, the cells were incubated with nuclear counterstaining dye (DAPI dye) for 10 min before images were taken using an inverted light fluorescence microscope (Zeiss Axio Vert A1, Germany) with an image acquisition system (AxioCam MRm, Göttingen, Germany).
Cytotoxicity of PMEE on the SH-SY5Y Cells
The effect of PMEE on cell viability on differentiated SH-SY5Y cells were assessed using the MTT reduction assay, as modified by Jaafaru et al. [31].In a 96-well plate, 1 × 10 4 SH-SY5Y cells were seeded, and they underwent a seven-day period of differentiation process as outlined in Section 3.4.The cells were treated with serially diluted concentrations of PMEE (0.5-1000 µg/mL) for 24, 48, and 72 h to determine how PMEE affected cell viability.The plate was incubated in the dark for four hours after 20 µL addition of MTT solution and then 200 µL of DMSO was added after removal of cell medium to dissolve the formazan that had formed in the wells.Absorbance was measured immediately at 540 nm using a microplate reader.Similar analysis was conducted for H 2 O 2 cytotoxic effect, in which 1000 µM concentration was serial diluted to 7.8 µM and the optical density was used to evaluate the IC 50 of H 2 O 2 used in the present study.
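As a companion to this protocol, percent viability is typically computed from the background-corrected absorbance of treated wells relative to untreated controls, and an approximate IC50 can then be read off the dose-response curve. The snippet below is a minimal, illustrative sketch: the absorbance values and the log-scale interpolation are assumptions for demonstration, not data from this study.

```python
import numpy as np

# Hypothetical background-corrected A540 readings (triplicate means).
concentrations_uM = np.array([7.8, 15.6, 31.25, 62.5, 125, 250, 500, 1000])
a540_treated = np.array([0.92, 0.88, 0.81, 0.70, 0.55, 0.41, 0.30, 0.22])
a540_control = 0.95  # untreated cells

# Viability as a percentage of the untreated control.
viability_pct = 100.0 * a540_treated / a540_control

# Linear interpolation of the dose-response curve on a log-concentration
# scale to estimate the concentration giving 50% viability (rough IC50).
log_c = np.log10(concentrations_uM)
ic50 = 10 ** np.interp(50.0, viability_pct[::-1], log_c[::-1])
print(f"estimated IC50 ~ {ic50:.0f} uM")
```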
Neuroprotection of PMEE on the SH-SY5Y Cells
Differentiated SH-SY5Y cells were pre-treated with serial dilutions of PMEE to determine the neuroprotective activity of the PMEE in time-dependent manner prior to 4 h challenged by 220 µM (IC 50 ) H 2 O 2 , followed by addition of 20 µL and 200 µL of MTT and DMSO reagent, respectively.Curcumin was used as positive control.The absorbance reading was measured immediately at 540 nm using a microplate reader.
PMEE Pre-Treatment and H 2 O 2 Exposure
The differentiated neuronal cells were seeded in T25 flasks at a density of 1 × 10 3 cells/mL and underwent differentiation as outlined in Section 3.4.For 48 h, the cells were pre-treated separately with PMEE (6.25 µg/mL) or curcumin (3.13 µg/mL).Prior to bioassay analyses, the pre-treated cells were exposed to 220 µM H 2 O 2 for 4 h.
Gene Expression Study of PMEE-Treated SH-SY5Y Cells
After differentiation and treatment, total RNA was extracted using an RNA extraction kit (NucleoSpin RNA Plus, Macherey-Nagel, Düren, Germany) in accordance with the manufacturer's instructions. The concentration and purity of the isolated RNA were evaluated using a Nanodrop spectrophotometer (Thermo Scientific NanoDrop, NanoDrop Technologies, Wilmington, DE, USA). cDNA was synthesized from 1 µg of RNA using the HiScript III First Strand cDNA Synthesis Kit (+gDNA wiper) (R312-02, Vazyme, Nanjing, China). Meanwhile, the qPCR was conducted using Maxima SYBR Green qPCR Master Mix (Q712-02, Vazyme) according to the manufacturer's instructions. The nucleotide primer sequences used in this study are presented in Table 3. The primers were synthesized by Bio3 Scientific Sdn. Bhd. (Puchong, Malaysia). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), a housekeeping gene, was used as an internal reference (forward 5′-GTCATCCCTGAGCTGAACGG-3′, reverse 5′-AAGTGGTCGTTGAGGGCAAT-3′). Each gene was amplified in triplicate using RT-qPCR. The amplification parameters were as follows: 95 °C for 30 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 31 s. The quantification values were subsequently calculated and analyzed using the 2^−ΔΔCq method. The ratio in untreated cells (negative control) was assigned a value of 1.
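To illustrate the 2^−ΔΔCq calculation referred to above, the following Python sketch computes a relative expression value for one target gene normalized to GAPDH and to the untreated control group. The Cq numbers and the gene name are invented for illustration only and are not values from this study.

```python
import numpy as np

def fold_change(cq_target, cq_gapdh, cq_target_ctrl, cq_gapdh_ctrl):
    """Relative expression by the 2^-ddCq (Livak) method."""
    d_cq_sample = cq_target - cq_gapdh             # normalize to housekeeping gene
    d_cq_control = cq_target_ctrl - cq_gapdh_ctrl  # same for the untreated control
    dd_cq = d_cq_sample - d_cq_control
    return 2.0 ** (-dd_cq)

# Hypothetical triplicate Cq values for one gene (e.g., Nrf2) in one group.
cq_nrf2_treated = np.array([24.1, 24.3, 24.0])
cq_gapdh_treated = np.array([18.0, 18.2, 17.9])
cq_nrf2_control = np.array([25.6, 25.4, 25.7])
cq_gapdh_control = np.array([18.1, 18.0, 18.2])

fc = fold_change(cq_nrf2_treated.mean(), cq_gapdh_treated.mean(),
                 cq_nrf2_control.mean(), cq_gapdh_control.mean())
print(f"fold change vs. untreated control: {fc:.2f}")  # control is defined as 1
```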
Molecular Docking
A molecular docking study was conducted to test the binding affinity of PMEE's identified compounds to AChE enzyme residues. AChE (PDB ID: 4EY6) was retrieved as a PDB file from the RCSB Protein Data Bank (http://www.rcsb.org/pdb/, accessed on 15 August 2023). AutoDock Tools (version 1.5.7) was used to prepare the protein: crystallographic waters were removed, and polar hydrogens were added to the macromolecule along with Kollman charges. To obtain the best conformational docking state, a grid box covering the active site residues of the target protein was created. Using AutoDock Tools 1.5.7, the docking search space was established where ligands could explore potential binding interactions with AChE [54]. The center of the 3D cuboidal AutoGrid box was set to (x: 12.3199, y: 42.071, z: 28.832), and its dimensions were set to 24 × 20 × 20 points (x, y, z).
The molecular docking runs were carried out using command prompt.The AutoDock Vina software, target receptor and ligand pdbqt files, configuration text file, and intended destination of output data were all supplied in the docking command line.The resulting AutoDock Vina output files in pdbqt format contained the generated poses as well as text data listing the relevant poses' binding energies [55].The binding affinity measured in terms of binding energy (kcal/mol) and the visualization of binding conformation for each docking mode using PyMOL were the two results from molecular docking.
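As a rough illustration of how such a run is typically set up from the command line, the sketch below writes a Vina-style configuration file using the grid parameters quoted above and then invokes AutoDock Vina via Python. The file names, output name, and the decision to rely on Vina's default exhaustiveness are assumptions for illustration and are not taken from the study.

```python
import subprocess
from pathlib import Path

# Hypothetical file names; the receptor/ligand PDBQT files would come from
# the AutoDock Tools preparation step described in the text.
receptor = "4EY6_receptor.pdbqt"
ligand = "quercitrin.pdbqt"

# Grid box center and size as quoted in the Methods (sizes interpreted as
# the box dimensions along x, y, z).
config = f"""receptor = {receptor}
ligand = {ligand}
center_x = 12.3199
center_y = 42.071
center_z = 28.832
size_x = 24
size_y = 20
size_z = 20
out = quercitrin_out.pdbqt
"""
Path("config.txt").write_text(config)

# Run AutoDock Vina; binding energies (kcal/mol) for each pose are reported
# in the terminal output and written to the out file.
subprocess.run(["vina", "--config", "config.txt"], check=True)
```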
Statistical Analysis
Data are presented as the mean ± standard deviation, and differences between the means of each group were determined by one-way analysis of variance (ANOVA) with Tukey's multiple comparison test, using GraphPad Prism 9 (GraphPad Software, Inc., San Diego, CA, USA). A 95% confidence interval was considered; thus, p < 0.05 signified statistical significance.
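For readers without access to GraphPad Prism, an equivalent analysis can be reproduced in Python; the sketch below applies one-way ANOVA and Tukey's HSD post hoc test with SciPy and statsmodels. The group labels and triplicate values are made up purely to demonstrate the workflow.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate measurements for three treatment groups.
groups = {
    "control":   [100.0, 98.5, 101.2],
    "H2O2":      [52.3, 49.8, 55.1],
    "PMEE+H2O2": [78.4, 81.0, 76.9],
}

# One-way ANOVA across the groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD post hoc test for pairwise comparisons (alpha = 0.05).
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```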
Conclusions
This study revealed the ability of PMEE to halt ROS generation due to oxidative stress induced by H 2 O 2 .The findings showed that the demonstrated effects were coordinated through the Nrf2/ARE, NF-κB/IκB, and MAPK signaling pathways, thus concluding that PMEE confers neuroprotection against oxidative stress in differentiated SH-SY5Y cells.Quercitrin had the best docking score compared to the other compounds found in PMEE, which had lower docking scores.The present study suggests that PMEE may be a potential therapeutic agent for the treatment of neurodegenerative disorders associated with oxidative stress.The results of our study provide a justification for further investigation into the application of PMEE in animal models of neurodegenerative disorders, in order to assess their safety and effectiveness.Additionally, this would serve as a fundamental basis for subsequent clinical investigations.
Figure 1 .
Figure 1.Liquid chromatography-mass spectrophotometry (LC-MS/MS) analysis of PMEE in the positive mode (a) and the negative mode (b).
Figure 3 .
Figure 3. Cytotoxicity of PMEE on differentiated SH-SY5Y cells.(a) Viable cells after 24 h, (b) after 48 h, and (c) after 72 h.The values are the means of three independent trials (n = 3) and means with asterisks differed significantly (* p < 0.05, ** p < 0.01) with the untreated control.
Figure 4 .
Figure 4. Cytotoxicity of curcumin on differentiated SH-SY5Y cells.(a) Cell viability after 24 h, (b) 48 h and (c) 72 h.The values are the means of three independent trials (n = 3) and means with asterisks differed significantly (* p < 0.05, ** p < 0.01) with the untreated control.
Figure 5 .
Figure 5. PMEE has a concentration-dependent neuroprotective effect. The differentiated SH-SY5Y cells were pre-treated with PMEE (0.5-1000 µg/mL) for (a) 24, (b) 48, and (c) 72 h before being exposed to 220 µM H2O2 for 4 h. (d) A total of 62.5 µg/mL of PMEE and (e) 3.13 µg/mL of curcumin plus 4 h of exposure to 220 µM of H2O2. The values are the means of three independent trials (n = 3), and means with different letters differ significantly (p < 0.05).
Figure 6 .
Figure 6.The mRNA expression of (a) Nrf2, (b) GST, (c) NQO1, (d) GCLC, (e) SOD1, (f) SOD2, (g) HO-1 and (h) catalase in the Nrf2/ARE signaling pathway.Control: untreated cells; H 2 O 2 : cells induced with 300 µM hydrogen peroxide for 4 h; PMEE + H 2 O 2 : cells pre-treated with PMEE for 48 h + 4 h exposure to H 2 O 2 ; CUR+H 2 O 2 : cells pre-treated with curcumin for 48 h + 4 h exposure to H 2 O 2 ; PMEE: cells treated with only PMEE; CUR: cells treated with only curcumin.The value represents fold changes between control (untreated cells) and treatment groups.Data were expressed as the mean ± SD of triplicate experiments; values with different letters alphabets are significantly different from one another and vice versa (p < 0.05).
Figure 7 .
Figure 7.The mRNA expression of (a) NF-κB, (b) IκB, (c) BACE1, (d) APP and (e) MAPT in NF-κB/IκB signaling pathway.Control: untreated cells; H 2 O 2 : cells induced with 300 µM hydrogen peroxide for 4 h; PMEE+H 2 O 2 : cells pre-treated with PMEE for 48 h + 4 h exposure to H 2 O 2 ; CUR+H 2 O 2 : cells pre-treated with curcumin for 48 h + 4 h exposure to H 2 O 2 ; PMEE: cells treated with only PMEE; CUR: cells treated with only curcumin.The value represents fold changes between control (untreated cells) and treatment groups.Data were expressed as the mean ± SD of triplicate experiments; means with different letters denote significant differences with one another and vice versa (p < 0.05).
Figure 8 .
Figure 8.The mRNA expression of (a) JNK, (b) p38, (c) MKP1, (d) PP2A, (e) PP5 and (f) AKT in MAPK signaling pathway.Control: untreated cells; H 2 O 2 : cells induced with 300 µM hydrogen peroxide for 4 h; PMEE+H 2 O 2 : cells pre-treated with PMEE for 48 h + 4 h exposure to H 2 O 2 ; CUR+H 2 O 2 : cells pre-treated with curcumin for 48 h + 4 h exposure to H 2 O 2 ; PMEE: cells treated with only PMEE; CUR: cells treated with only curcumin.The value represents fold changes between control (untreated cells) and treatment groups.Data were expressed as the mean ± SD of triplicate experiments; means with different letters denote significant difference (p < 0.05).
Figure 9 .
Figure 9. Acetylcholine (ACH) level in the supernatant of differentiated SH-SY5Y cells.The ACH level of cells induced with H 2 O 2 increased in both PMEE and CUR-treated cells.Control: untreated cells; H 2 O 2 : cells induced with 300 µM hydrogen peroxide for 4 h; PMEE+H 2 O 2 : cells pre-treated with PMEE for 48 h + 4 h exposure to H 2 O 2 ; CUR+H 2 O 2 : cells pre-treated with curcumin for 48 h + 4 h exposure to H 2 O 2 ; PMEE: cells treated with only PMEE; CUR: cells treated with only curcumin.The value represents fold changes between control (untreated cells) and treatment groups.Data were expressed as the mean ± SD of triplicate experiments; means with different letters denote significant differences (p < 0.05).
Figure 11 .
Figure 11.Three-dimensional (3D) interactions of AChE and selected PMEE's identified compounds.Ligands are illustrated in green, AChE protein in dark blue, and hydrogen bonds are depicted in yellow dots.(A) Interactions of quercitrin and AChE.(B) Interactions of aloe-emodin and AChE.(C) Interactions of afzelin and AChE.(D) Interactions of citreorosein and AChE.
Table 1. List of compounds identified in PMEE by LC-MS/MS analysis.
Table 3. Gene name, accession number, and forward and reverse primer sequences used in the real-time PCR analysis.
Specker's Parable of the Over-protective Seer: A Road to Contextuality, Nonlocality and Complementarity (PLUS AN ERRATUM)
In 1960, the mathematician Ernst Specker described a simple example of nonclassical correlations, the counterintuitive features of which he dramatized using a parable about a seer who sets an impossible prediction task to his daughter's suitors. We revisit this example here, using it as an entrée to three central concepts in quantum foundations: contextuality, Bell-nonlocality, and complementarity. Specifically, we show that Specker's parable offers a narrative thread that weaves together a large number of results, including: the impossibility of measurement-noncontextual and outcome-deterministic ontological models of quantum theory (the 1967 Kochen-Specker theorem), in particular the recent state-specific pentagram proof of Klyachko; the impossibility of Bell-local models of quantum theory (Bell's theorem), especially the proofs by Mermin and Hardy and extensions thereof; the impossibility of a preparation-noncontextual ontological model of quantum theory; and the existence of triples of positive operator valued measures (POVMs) that can be measured jointly pairwise but not triplewise. Along the way, several novel results are presented, including: a generalization of a theorem by Fine connecting the existence of a joint distribution over outcomes of counterfactual measurements to the existence of a measurement-noncontextual and outcome-deterministic ontological model; a generalization of Klyachko's proof of the Kochen-Specker theorem from pentagrams to a family of star polygons; a proof of the Kochen-Specker theorem in the style of Hardy's proof of Bell's theorem (i.e., one that makes use of the failure of the transitivity of implication for counterfactual statements); a categorization of contextual and Bell-nonlocal correlations in terms of frustrated networks; a derivation of a new inequality testing preparation noncontextuality; and lastly, some novel results on the joint measurability of POVMs and the question of whether these can be modeled noncontextually. Finally, we emphasize that Specker's parable of the over-protective seer provides a novel type of foil to quantum theory, challenging us to explain why the particular sort of contextuality and complementarity embodied therein does not arise in a quantum world.

In the field of quantum foundations, the mathematician Ernst Specker is rightly famous for introducing, with co-author Simon Kochen, the concept of a noncontextual hidden variable model and proving that such a model cannot underlie quantum theory. This 1967 result, known as the Kochen-Specker theorem [1], continues to be an active subject of research today (see Ref. [2] for a bibliography). One finds precursors to this result in the 1957 work of Gleason [3] and Bell's 1966 review article on hidden variable models (which refers to Gleason's result) [4], but also in a 1960 paper by Specker entitled "The logic of propositions that are not simultaneously decidable" [5] (footnote 1). This article studied logical features of quantum theory, in particular the question of the consistency of counterfactual propositions concerning the values of observables that are not co-measurable (footnote 2). One of the points of the paper was to show that it is possible to conceive of an implication relation that is not transitive. The idea is illustrated with a parable wherein an overprotective seer sets a simple prediction task to his daughter's suitors.
The challenge cannot be met because the seer asks the suitors for a noncontextual assignment of values but measures a system for which the statistics are inconsistent with such an assignment. The present article considers the parable anew and seeks to connect it with modern developments in quantum foundations. In particular, we explore the extent to which the sorts of correlations instantiated in the seer's prediction game can be achieved in a quantum world. Although the precise correlations that are required by the seer do not occur in quantum theory, the prediction game is found to be a good pump for quantum intuitions. It leads quite naturally to proofs of nonlocality and contextuality, to a novel kind of complementarity and to the notion of stronger-than-quantum correlations. Indeed, it provides a narrative thread that is able to weave together a great number of important modern results. That so much can be gleaned from this little prediction game is a testament to the depth of Specker's work. We offer this article as a small tribute to him on the occasion of his 90th birthday.
[Footnote 1: It should be noted, however, that Bell's 1964 proof [6] of quantum nonlocality is also a proof of contextuality using only a finite set of observables; unlike the Kochen-Specker proof, it is state-specific, the first example of this kind.]

[Footnote 2: Specker did not use the modern term "counterfactual", but instead referred to "infuturabilities", which had been discussed in a scholastic context in connection with the problem of whether God's omniscience extended to knowing the truths of propositions concerning what would have occurred if some event which did not happen had in fact happened.]

[Footnote 3: Our translation is an amalgam of those provided by Stairs [5] and Seevinck [7].]

A. The parable of the over-protective seer

We begin by reproducing Specker's parable of the over-protective seer (footnote 3), with clarifications by us in square brackets:

At the Assyrian School of Prophets in Arba'ilu in the time of King Asarhaddon [(681-669 BCE)], there taught a seer from Nineva. He was a distinguished representative of his faculty (eclipses of the sun and moon) and aside from the heavenly bodies, his interest was almost exclusively in his daughter. His teaching success was limited; the subject proved to be dry and required a previous knowledge of mathematics which
was scarcely available. If he did not find the student interest which he desired in class, he did find it elsewhere in overwhelming measure. His daughter had hardly reached a marriageable age when he was flooded with requests for her hand from students and young graduates. And though he did not believe that he would always have her by his side, she was in any case still too young and her suitors in no way worthy. In order that the suitors might convince themselves of their unworthiness, he promised them that she would be wed to the one who could solve a prediction task that was posed to them. Each suitor was taken before a table on which three little boxes stood in a row, [each of which might or might not contain a gem], and was asked to predict which of the boxes contained a gem and which did not. But no matter how many times they tried, it seemed impossible to succeed in this task. After each suitor had made his prediction, he was ordered by the father to open any two boxes which he had predicted to be both empty or any two boxes which he had predicted to be both full [in accordance with whether he had predicted there to be at most one gem among the three boxes, or at least two gems, respectively]. But it always turned out that one contained a gem and the other one did not, and furthermore the stone was sometimes in the first and sometimes in the second of the boxes that were opened. But how can it be possible, given three boxes, to neither be able to pick out two as empty nor two as full?
The daughter would have remained unmarried until the father's death, if not for the fact that, after the prediction of the son of a prophet [whom she fancied], she quickly opened two boxes herself, one of which had been indicated to be full and the other empty, and the suitor's prediction [for these two boxes] was found, in this case, to be correct. Following the weak protest of her father that he had wanted two other boxes opened, she tried to open the third. But this proved impossible whereupon the father grudgingly admitted that the prediction, being unfalsified, was valid. [The daughter and the suitor were married and lived happily ever after.]
B. Contextuality and Complementarity
Specker's parable presents us with apparently impossible correlations; as he says, "But how can it be possible, given three boxes, to neither be able to pick out two as empty nor two as full?" Indeed, if a suitor reasons classically, then he expects that even if he chooses a configuration of gems at random from among the eight possibilities, it will be the true configuration one time out of eight, and when he opens two boxes he has marked both empty or both full, his prediction will be found to be correct one time out of four. The fact that no suitor manages to succeed after many trials suggests that this reasoning must be flawed and that whichever two boxes are opened, one will be found full and the other empty. Such correlations are contextual in the sense that if one wishes to explain the measurements (opening a box) as revealing a pre-existing property, then one must imagine that the outcome of a measurement (or equivalently, the property that is measured) is context-dependent: whether a gem is seen or not in the first box depends on whether that box was opened together with the second or together with the third. The seer's challenge cannot be met by the suitors because he asks them for a noncontextual assignment of outcomes (i.e. a specification of whether a gem will be found or not in each box, independent of which other box is opened with it) but measures a system for which the statistics are inconsistent with such an assignment. (footnote 4)

To imagine a world wherein the parable might occur, Specker must effectively posit the existence of a system that exhibits a particular kind of complementarity: the system must be such that three distinct measurements can be implemented upon it, any pair of which can be measured jointly, but where a joint measurement of all three is not possible. To see this, one need only note that if all three binary-outcome measurements could be implemented jointly, some pair would necessarily be found to have correlated outcomes.
We now ask the obvious question: Can the parable be implemented in quantum theory? The reader is urged to pause and give this question some thought before reading on.
There is of course a trivial sense in which the parable can be implemented in a quantum world, namely the same way that it can be implemented in a classical world: through a hidden mechanism under the seer's table and under his control, which inserts and removes gems from the closed boxes at his will. Such a mechanism would allow the seer to enforce complementarity and contextuality "by hand", so to speak. However this is clearly not what Specker had in mind, because had that been the case, the seer would not have been so easily stymied by his daughter's trick, as there would have been no reason why the third box could not have been opened. Rather, the seer seems to be in possession of a set of "magic" boxes that have particular, rather than arbitrary, correlations. Thus in asking the question whether the parable can be implemented in quantum theory, we mean: does quantum theory allow for this sort of "magic", which would be truly surprising for a naive suitor familiar only with classical theories, which do not incorporate contextuality and complementarity at a fundamental level?
Certainly, both complementarity and contextuality are required at a fundamental level in quantum theory: there exist measurements that cannot be implemented jointly, and correlations that cannot be explained by noncontextual pre-existing properties (see Ref. [8] for a review). But what about the particular correlations of the Specker parable? To get this kind of contextuality, it is necessary to find a situation wherein there are very specific sorts of limitations on joint measurability: there must exist a triple of measurements that can only be implemented jointly in pairs. For projective measurements in quantum theory, this sort of limitation on joint measurability does not occur. The reason is as follows. Two Hermitian operators can be jointly measured if and only if they are jointly diagonalizable. But if we have three Hermitian operators A_1, A_2 and A_3, and each pair of operators is jointly diagonalizable, then all three are jointly diagonalizable. This is true for any number of Hermitian operators: one can implement all of them jointly if and only if one can implement every pair jointly.
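This closure property is easy to see numerically. The following sketch (our illustration, not part of the paper; the particular commuting family is chosen arbitrarily) checks that the eigenbasis of a generic linear combination of pairwise-commuting Hermitian matrices diagonalizes all of them at once:

# Minimal numpy sketch (not from the paper): pairwise-commuting Hermitian
# operators admit a common eigenbasis, i.e. they are jointly diagonalizable.
import numpy as np

rng = np.random.default_rng(0)

# Build three pairwise-commuting Hermitian matrices as polynomials of a
# single random Hermitian matrix H (a simple way to generate examples).
d = 4
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (G + G.conj().T) / 2
ops = [H, H @ H, 3 * H - H @ H @ H]

# Verify pairwise commutativity.
for i in range(3):
    for j in range(i + 1, 3):
        assert np.allclose(ops[i] @ ops[j], ops[j] @ ops[i])

# Diagonalize a generic real linear combination; generically its spectrum is
# non-degenerate, so its eigenbasis must diagonalize every member of the family.
combo = 1.0 * ops[0] + np.pi * ops[1] + np.e * ops[2]
_, V = np.linalg.eigh(combo)
for A in ops:
    D = V.conj().T @ A @ V
    assert np.allclose(D - np.diag(np.diag(D)), 0, atol=1e-8)
print("All three operators are diagonal in the common eigenbasis.")

The same check works for any number of pairwise-commuting Hermitian operators, which is why projective measurements cannot reproduce the pairwise-but-not-triplewise structure of the seer's boxes.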
Nonetheless, one can imagine modifying the parable in various different ways to obtain something for which an analogue can be found in quantum theory, and these different modifications are the topics of the different sections of our article. In the following we outline each of them in turn.
C. Outline
We begin by providing, in Sec. II, a formalization of the original parable as well as some refinements and elaborations thereof, together with definitions of the key concepts. We then present the four different themes inspired by the parable, with an interlude on frustrated networks.
A double-query n-box system allowing only adjacent queries (Sec. III). The seer could have a set of n boxes, arranged in a ring, for which only adjacent pairs of boxes can be opened jointly. For n odd, classical intuition leads one to expect that there must exist at least one adjacent pair of boxes that are either both full or both empty, but we can imagine that the seer has a special system wherein, regardless of which adjacent pair of boxes is opened, it is always the case that one is found full and the other empty. The n = 3 case, which corresponds to the original parable, is exceptional because the adjacent pairs constitute all the pairs. For n > 3, this is not the case, and so there is no longer any obstacle to finding a set of projective measurements that have the same pattern of joint measurability as these boxes. Indeed, one can find such sets. There are then two ways of trying to obtain a quantum analogue of the new parable.
i) Klyachko's proof of contextuality. Find a quantum state that yields a nonzero probability of anti-correlation for every adjacent pair. When the overall probability is higher than one could account for classically, we arrive at a Klyachko-type proof of quantum contextuality [9]. ii) A new variant of Klyachko's proof of contextuality. Find a quantum state that supports the implication from one outcome to the opposite outcome for every adjacent pair in the ring and that assigns a non-zero probability to the first outcome in the sequence of inferences. In conjunction with the transitivity of implication (a consequence of noncontextuality), and the fact that the ring contains an odd number of boxes, this gives rise to a contradiction, thereby demonstrating the contextuality of quantum theory.
A separated pair of single-query 3-box systems (Sec. IV). One can imagine that the seer's three-box system is such that only a single box (rather than a pair of boxes) can be opened at any given time, but that it is possible to prepare a pair of three-box systems such that by opening a single box on each element of the pair, one reproduces the seer's correlations. Specifically, if the same box is opened on each member of the pair, they are always found to be both full or both empty, while if different boxes are opened on the two systems, one is always found full and the other empty. (Classically, one would expect that some pair of boxes on a given wing are both full or both empty, and by the assumed perfect correlation between the wings, the same pair is similarly configured on the other wing, implying that it is impossible to open different boxes on the two systems and always find anti-correlation rather than correlation.) Here, we are postulating six distinct measurements (three on each wing) only certain pairs of which can be implemented jointly, namely, pairs that have one member from each wing. So again, there is no obstacle to finding a set of projectors having this pattern of joint measurability. There are once again two ways of trying to obtain a quantum analogue of the new parable.
i) Mermin's proof of Bell-nonlocality. Find a quantum state that yields perfect correlation when the same measurement is implemented on the two wings. Demonstrate that the extent to which it can yield anti-correlation when different boxes are opened is greater than is possible in a Bell-local model [10].
ii) Hardy's proof of Bell-nonlocality. Find a chain of choices of measurement, alternating between the two parties, and find a quantum state that yields implications connecting particular outcomes of all but one measurement within this chain. Demonstrate that there is a nonzero probability for the kind of correlation exhibited by the last pair in the chain to be opposite to what one would expect by the transitivity of implication [11].
We also consider generalizations of these nonlocality proofs to rings of n measurements where only adjacent members can be implemented jointly.
Interlude on frustrated networks (Sec. V). By representing correlations between binary-valued observables by frustrated networks, we provide a simple categorization of some of the contextual and Bell-nonlocal correlations outlined above.
A diachronic pair of single-queries of a 3-box system (Sec. VI). In this case, the seer's three-box system is modified so that only a single box can be opened at any given time, but that it is possible to implement two consecutive measurements in such a way that if the same box is opened at the two times, then the result of the measurement is always reproduced faithfully, while if different boxes are opened at the two times, then the results are always different. In addition, we impose the constraint that no measurement at the second time can yield any information about the choice of the measurement at the first time.
Now it is natural for a suitor to assume that statistical indistinguishability among a set of choices implies that they are equivalent at the level of an ontological model. This assumption is known as preparation noncontextuality [12]. It can be shown that no such preparation-noncontextual model can reproduce the diachronic (two-time) correlations stated above. But in quantum mechanics (which violates preparation noncontextuality [12]), there are sets of measurements for which these correlations can be approximated even though the quantum state after the first measurement reveals no information about the identity of this measurement.
Joint measurability of POVMs (Sec. VII). A final path to a quantum analogue of the overprotective seer (OS) parable is to ignore the counter-intuitive correlations, and rather concentrate on the complementarity exhibited by the three boxes. As discussed above, the pairwise but not triplewise joint measurability of three observables cannot exist in quantum mechanics for traditional (projective) measurements. However, this does not rule out the possibility that there exists a triple of generalized measurements, described by POVMs (positive operator valued measures), that can be jointly measured pairwise but not triplewise. Indeed, we will exhibit two specific examples of such a triple of nonprojective measurements. This thread connects with some recent results on joint measurability of POVMs [13][14][15][16]. We demonstrate that this example is not useful for approximating the OS correlations, nor for proving the contextuality of quantum theory.
A. Joint measurability
We wish to flesh out the original parable by being more specific about the nature of the correlations posited therein. We shall do this within the context of operational theories. This is natural because the OS parable was originally presented by Specker as a "toy theory" with similarities to quantum theory, not as a scenario that arises within quantum theory. We thus need a unified framework to compare the OS theory both with quantum theory, and with classical theories (i.e. theories without contextuality or complementarity). Also, to make the most of the OS parable we need to embellish the narrative (in a formal way) by adding extra assumptions, and this requires considering measurements and preparations beyond those discussed by Specker. Finally, we note that in the fields of quantum foundations and quantum information, there is currently considerable interest in operational "foil" theories such as Popescu-Rohrlich (PR) boxes [17] and the toy-bit theory [18].
An operational theory is one that specifies the probabilities of each possible outcome X of each possible measurement procedure M given each possible preparation procedure P. We denote these probabilities by p(X|M; P). It will be important for the later discussion of contextuality to distinguish between a measurement procedure M, which is a specification of a list of instructions of what to do in the laboratory, and an equivalence class M of measurement procedures, where two procedures are equivalent if they yield the same statistics for all preparation procedures. For instance, the equivalence class associated with a particular measurement procedure M_1 is

M_1 ≡ {M | ∀P : p(X|M; P) = p(X|M_1; P)}.   (1)

We will refer to this equivalence relation over procedures as operational equivalence. We will refer to the equivalence classes as simply measurements, and denote them by calligraphic font, while the measurement procedures will be denoted by italic font. Similarly, we define equivalence classes of preparation procedures. For instance, the equivalence class associated with a particular preparation procedure P_1 is

P_1 ≡ {P | ∀M : p(X|M; P) = p(X|M; P_1)}.   (2)

Given that probabilities of outcomes of measurements depend only on the equivalence classes of the preparation and the measurement procedures, we typically condition only on the latter and write p(X|M; P). We begin by providing an operational definition of joint measurability (footnote 5). We consider only measurements with a discrete set of outcomes.
Joint measurability of a set of N measurements can be defined (recursively) as follows.

Definition 1 (joint measurability). The measurements in a set {M_1, M_2, ..., M_N} are said to be jointly measurable if there exists a single measurement M, with outcomes labeled by the tuple (X_1, X_2, ..., X_N), such that for every preparation P the statistics of every subset of {M_1, ..., M_N} (and in particular of each M_k individually) are recovered as marginals of p(X_1, X_2, ..., X_N | M; P).
Clearly joint measurability of all n-tuples implies joint measurability of all (n − 1)-tuples, but not vice-versa.
Finally, we shall sometimes say that the measurements in a set {M_1, M_2, ..., M_N} exhibit complementarity if they are not jointly measurable.
We can now be precise about the nature of the correlations in the overprotective seer's prediction game.
Abstracting from the story of boxes and gems, the parable posits that there are three distinct measurement procedures, which we shall denote by M_1, M_2 and M_3 (corresponding to the choice of box). A key assumption that is not explicit in Specker's description of the prediction game is that these three measurement procedures are not operationally equivalent. That is, for every pair, there is a preparation procedure that distinguishes them, i.e., some P such that p(X|M_1; P) ≠ p(X|M_2; P). Making this assumption, we see that the game assumes the existence of three distinct equivalence classes of measurement procedures, which we denote by M_1, M_2 and M_3. Furthermore, it is assumed that these are pairwise jointly measurable. It follows that there exist three joint measurements, which we shall denote by M_12, M_13 and M_23 and which, by virtue of the definition of joint measurability, must have statistics that reproduce the statistics of M_1, M_2 and M_3 as marginals. Note that, as the notation suggests, M_12, M_13 and M_23 correspond to distinct equivalence classes of measurement procedures, a fact that follows from the operational distinguishability of M_1, M_2 and M_3.
Note also that within the equivalence class of measurement procedures M_1, there are procedures that involve implementing a joint measurement of M_1 and M_2 and discarding the outcome of the M_2 measurement, and procedures that involve implementing a joint measurement of M_1 and M_3 and discarding the outcome of the M_3 measurement. Which of these two sorts of procedures is implemented may be relevant in a contextual hidden variable model, as we will see.
The seer's trick also requires that there is at least one preparation, call it P*, that yields perfect negative correlation for the joint measurement of any pair of M_1, M_2 and M_3. Perfect negative correlation for a single joint measurement of M_1 and M_2 does not imply that one must have equal probability for X_1 = 0, X_2 = 1 and X_1 = 1, X_2 = 0 (the two ways of achieving perfect negative correlation). However, this equality does follow from demanding perfect negative correlation for all three joint measurements, as we show in Appendix A. Consequently, the correlations are of the form

p(X_i, X_j | M_ij; P*) = 1/2 if X_i ≠ X_j, and 0 if X_i = X_j, for all pairs (i, j) ∈ {(1,2), (1,3), (2,3)}.   (5)

We call these the overprotective seer correlations, or OS correlations. Note that it follows from this definition that individual measurements have a uniformly random outcome,

p(X_i = 0 | M_i; P*) = p(X_i = 1 | M_i; P*) = 1/2.

The question of joint measurability concerns what is physically possible, not what is logically possible. If a physical theory postulates measurements that cannot be jointly implemented, it could still be that there is a joint probability distribution over the outcomes of these measurements that yields each measurement's statistics as a marginal.
It is worth noting that within a given theory, the nonexistence of a joint distribution for some set of measurements implies the physical impossibility of a joint measurement of these. This follows from the fact that if a joint measurement is possible, then there must exist a joint distribution over the outcomes. However, the converse implication need not hold. For instance, there are theories, such as the toy theory of Ref. [18], which postulate the physical impossibility of certain joint measurements, but for which a joint distribution over outcomes (effectively a hidden variable model) does exist.
The feature of the OS correlations that is at the root of their peculiarities is the fact that they do not admit of a joint distribution.
Lemma 4 (no joint distribution for OS correlations).
There is no distribution p(X_1, X_2, X_3) on the three binary variables X_1, X_2 and X_3 such that the marginals over pairs of these are of the form of Eq. (5). (Proof: such marginals would require X_1 ≠ X_2, X_2 ≠ X_3 and X_1 ≠ X_3 to hold simultaneously with probability 1, which is impossible for three binary variables.)
Given the discussion above, this result has immediate (negative) consequences for the possibility of implementing a triplewise joint measurement of M 1 , M 2 and M 3 .
Corollary 5. Measurements M 1 , M 2 and M 3 that can be pairwise jointly measured and that achieve the OS correlations of Eq. (5) cannot be triplewise jointly measured.
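Lemma 4 and Corollary 5 can also be verified mechanically: asking for a joint distribution with the OS marginals is a linear-programming feasibility problem over the eight deterministic assignments, and it is infeasible. A minimal sketch using numpy and scipy (our illustration; the encoding of the constraints is ours, not the paper's):

# LP feasibility check (illustration): no joint distribution p(X1,X2,X3)
# has perfectly anti-correlated marginals on all three pairs (Lemma 4).
import itertools
import numpy as np
from scipy.optimize import linprog

assignments = list(itertools.product([0, 1], repeat=3))  # 8 deterministic points

# Equality constraints: total probability 1, and for each pair (i,j) the
# probability of the "correlated" event X_i == X_j must be exactly 0.
A_eq = [np.ones(8)]
b_eq = [1.0]
for i, j in [(0, 1), (1, 2), (0, 2)]:
    A_eq.append([1.0 if x[i] == x[j] else 0.0 for x in assignments])
    b_eq.append(0.0)

res = linprog(c=np.zeros(8), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * 8, method="highs")
print("feasible?", res.success)  # False: no joint distribution exists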
C. Measurement-noncontextual ontological models
In this article, we will make use of the generalized notion of noncontextuality introduced in Ref. [12], which is operational insofar as it is defined for ontological models of any operational theory, not just quantum theory. An ontological model of an operational theory specifies: (i) a set Λ of ontic (i.e. real, physical) states λ; (ii) for each preparation procedure P, a distribution p(λ|P) describing the probability that the ontic state of the system subsequent to the preparation procedure P is λ; (iii) for each measurement procedure M, a response function p(X|M; λ) describing the conditional probability of obtaining outcome X given ontic state λ. Finally, one must recover the statistics of the operational theory as follows:

p(X|M; P) = Σ_λ p(X|M; λ) p(λ|P).

Here we have taken λ to be a discrete variable for simplicity. An ontological model is said to be measurement-noncontextual if any two measurement procedures M and M′ that are operationally equivalent [in the sense of Eq. (1)] are represented similarly in the model:

∀λ ∈ Λ : p(X|M; λ) = p(X|M′; λ).   (9)

Equivalently, the condition is that the response function for a measurement procedure M depends only on its operational equivalence class M, so that we may write it as p(X|M; λ). An ontological model is said to be outcome-deterministic for a measurement procedure M if the outcome is uniquely determined for every ontic state,

∀λ ∈ Λ : p(X|M; λ) ∈ {0, 1}.   (11)
The traditional notion of a noncontextual ontological model of quantum theory incorporated both the assumption of measurement noncontextuality and that of outcome determinism for projective measurements. Here, we will follow Ref. [12] and distinguish these assumptions so as not to conflate issues about determinism with issues about noncontextuality. To avoid terminological confusion, we shall say that an ontological model of quantum theory is traditionally-noncontextual if it is both measurement-noncontextual [in the sense of Eq. (9)] and outcome-deterministic for projective measurements. Any proof of the impossibility of a traditionally-noncontextual model of quantum theory will be called a proof of the Kochen-Specker theorem.
As it turns out, there is a close connection between the existence of a joint distribution and noncontextuality:

Theorem 6. For a given set of measurements, if there exists a measurement-noncontextual and outcome-deterministic ontological model, then there exists a joint distribution over their outcomes.
The proof is provided in Appendix B. This is a slight generalization of half of a theorem by Fine [19]. Combining this theorem with the nonexistence of a joint distribution for the OS correlations (Lemma 4), we have:

Corollary 7. There is no measurement-noncontextual and outcome-deterministic ontological model of the OS correlations of Eq. (5).
It is also possible to write down inequalities which must be satisfied by the experimental statistics if these are to admit of an explanation in terms of a measurement-noncontextual and outcome-deterministic model. We will call these Kochen-Specker inequalities. For the case of the OS correlations, if we imagine such a model, then each box must be either empty or full. Consequently, if we choose a pair of boxes uniformly at random, at most two of the three pairs could exhibit anti-correlation, so that the probability of obtaining anti-correlated outcomes is bounded above by 2/3. More precisely, if p(X_i ≠ X_{i⊕1} | M_{i,i⊕1}; P) denotes the probability of obtaining anti-correlated outcomes in a joint measurement of M_i and M_{i⊕1}, where ⊕ denotes addition modulo 3, then the average probability of success is

R_3 ≡ (1/3) Σ_{i=1}^{3} p(X_i ≠ X_{i⊕1} | M_{i,i⊕1}; P),

and it satisfies

R_3 ≤ 2/3.

This is a Kochen-Specker inequality. It is sometimes useful to express Kochen-Specker inequalities in an algebraic form. We define new variables X̄_i = (−1)^{X_i}, so that X̄_i = +1 (−1) when X_i = 0 (1).
Using angle brackets to denote averages, we consider the following combination of correlation functions,

S_3 ≡ ⟨X̄_1X̄_2⟩ + ⟨X̄_2X̄_3⟩ + ⟨X̄_3X̄_1⟩.

Then the inequality takes the form

S_3 ≥ −1.

The OS correlations, however, require ⟨X̄_iX̄_{i⊕1}⟩ = −1 for all i, and hence S_3 = −3, clearly violating the bound.
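Both forms of the bound can be confirmed by brute force over the eight deterministic noncontextual assignments; a small sketch of ours:

# Brute-force check (illustration): over all deterministic noncontextual
# assignments Xbar_i in {+1,-1}, S_3 = sum of pairwise products is >= -1.
import itertools

best = min(sum(x[i] * x[(i + 1) % 3] for i in range(3))
           for x in itertools.product([+1, -1], repeat=3))
print(best)  # -1, so S_3 >= -1; the OS correlations demand S_3 = -3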
With the correlations in this form, one can also express a proof of the impossibility of an outcome-deterministic noncontextual model in the algebraic manner introduced by Mermin [20]. Assuming that each X̄_i ∈ {+1, −1} has a value independent of context, the OS correlations require that these values satisfy the following algebraic relations:

X̄_1X̄_2 = −1,   X̄_2X̄_3 = −1,   X̄_1X̄_3 = −1.
However, these relations cannot be satisfied, because the product of the left-hand sides is X̄_1²X̄_2²X̄_3² = +1, while the product of the right-hand sides is −1.
Any theory that realizes the OS correlations fails to admit of a measurement-noncontextual and outcome-deterministic ontological model. However, as explained in the introduction, the kind of complementarity one requires to achieve these correlations (three measurements that are pairwise but not triplewise jointly measurable) cannot arise for projective measurements in quantum theory. In Sec. III, we turn to the modifications of the parable that do have a counterpart in quantum theory.
D. Preparation noncontextuality
The notion of measurement noncontextuality defined in Eq. (9) is motivated by a kind of equivalence principle: in the absence of observable differences between measurement procedures (i.e. differences in their statistics), one should not posit differences in their representations in the ontological model. In Ref. [12] it was argued that the same principle should lead one to an assumption of noncontextuality for preparation procedures. Specifically, an ontological model is said to be preparation-noncontextual if any two preparation procedures P and P′ that are operationally equivalent [in the sense of Eq. (2)] are represented equivalently in the model:

∀λ ∈ Λ : p(λ|P) = p(λ|P′).

Preparation noncontextuality can also be characterized as the condition that the distribution for a preparation procedure P depends only on its operational equivalence class P, so that we may write it as p(λ|P). Given their similar motivations, someone who endorses measurement noncontextuality ought also to endorse preparation noncontextuality just as enthusiastically. One should endorse both notions or neither. Therefore, it is most natural to ask about the possibility of an ontological model that is both preparation-noncontextual and measurement-noncontextual. We will call such models generalized-noncontextual (footnote 6). In this paper, we will consider suitors faced with the seer's prediction problem who are committed to the kind of equivalence principle described above and therefore to generalized noncontextuality.
Inequalities that must be satisfied by the experimental statistics if these are to admit of a generalizednoncontextual model will be called simply noncontextuality inequalities. Note that our terminology distinguishes such inequalities from the Kochen-Specker inequalities of the previous section: Kochen-Specker inequalities express constraints on statistics when one assumes outcome determinism in addition to measurement noncontextuality, while noncontextuality inequalities rely on no such assumption of determinism. An example of a noncontextuality inequality will be provided in Sec. VI.
E. Justifying outcome determinism
Note that a commitment to the kind of equivalence principle described above does not obviously provide any grounds for assuming outcome determinism for measurements, Eq. (11). Thus, faced with the OS correlations and Corollary 7, a suitor might simply deny outcome determinism to salvage measurement noncontextuality. For instance, seeing the correlations in the seer's prediction game, a clever suitor might hypothesize that they are explained by the following sort of model. There is an ontic variable that flags when the preparation P* was implemented and, if it was, the measurements M_12, M_13 and M_23 each generate the outcomes (0, 1) and (1, 0) uniformly at random. Such an ontological model would violate outcome determinism, but would preserve measurement noncontextuality.
On the other hand, the assumption of outcome determinism can sometimes be shown to be a consequence of preparation noncontextuality. If such a justification is forthcoming, then the OS correlations cannot be explained by any ontological model that is generalized-noncontextual. For instance, in quantum theory, the assumption of outcome determinism for projective measurements can be derived from preparation noncontextuality, as shown in Ref. [12]. Therefore, in quantum theory the conjunction of measurement noncontextuality and outcome determinism for projective measurements (i.e. the assumption of traditional noncontextuality of an ontological model) is implied by the assumption of generalized noncontextuality, and all the no-go theorems for the former are no-go theorems for the latter. In Sec. III, we will provide proofs of the failure of traditional noncontextuality in quantum theory using a generalization of the OS correlations. Given the result just mentioned, such proofs also demonstrate the failure of generalized noncontextuality.
Much of this article makes statements about correlations that are not found in quantum theory but can easily be imagined to occur in more general operational theories. In such theories, a natural analogue of the notion of a projective measurement can be defined. The question thus arises of whether the assumption of preparation noncontextuality might imply outcome determinism for such measurements for an ontological model of a general operational theory. The question is currently open, but we conjecture that it has a positive answer.
Fortunately, we can still draw some negative conclusions about generalized noncontextuality in operational theories without settling this conjecture. Specifically, in Sec. VI, we will demonstrate how a slight modification of the seer's game yields a set of correlations that fails to admit of a preparation-noncontextual ontological model.
III. NO-GO THEOREMS FOR MEASUREMENT-NONCONTEXTUAL AND OUTCOME-DETERMINISTIC MODELS
A. A double-query n-box system allowing only adjacent queries

One way to generalize Specker's parable is to consider n > 3 boxes, and allow only certain pairs to be opened jointly. In particular, one can imagine the boxes to be arranged in a ring with adjacent pairs being the only ones that can be opened jointly. The resulting pattern of joint measurability can be reproduced in quantum theory because there exist ordered sets of n > 3 projectors for which adjacent elements commute (where adjacency is determined modulo n). If n is odd, then for every deterministic and noncontextual assignment of gems to boxes that the suitor might make, there must exist at least one adjacent pair of boxes that are either both full or both empty. Indeed, given any assignment of gems to boxes, if we choose an adjacent pair of boxes uniformly at random, the probability of obtaining anti-correlated outcomes is bounded above by (n − 1)/n. We then imagine that the seer has a special system such that, regardless of which adjacent pair of boxes is opened, it is always the case that one is found full and the other empty. (footnote 7)
Let us consider this situation more carefully. We are imagining an odd number n ≥ 5 of measurements, {M_a | a = 1, ..., n}, such that for all a, M_a and M_{a⊕1} are jointly measurable by a measurement M_{a,a⊕1} (here ⊕ denotes addition modulo n), and that there is at least one preparation, call it P*, such that the outcomes of all of these pairs of measurements are anti-correlated. By a generalization of the argument provided in Appendix A, the correlations must be of the form

p(X_a, X_{a⊕1} | M_{a,a⊕1}; P*) = 1/2 if X_a ≠ X_{a⊕1}, and 0 otherwise, for all a.

We will call these the double-query n-box OS correlations.
By an argument analogous to the one proving lemma 4 one can show that there is no joint distribution over all the X a that reproduces these correlations as marginals. It then follows from theorem 6 that there is no measurement-noncontextual and outcome-deterministic ontological model of these correlations.
Indeed, if we choose an adjacent pair of boxes uniformly at random, the probability

R_n ≡ (1/n) Σ_{a=1}^{n} p(X_a ≠ X_{a⊕1} | M_{a,a⊕1}; P)

of obtaining anti-correlated outcomes is clearly bounded above,

R_n ≤ (n − 1)/n,   (21)

(because at most n − 1 pairs can be anti-correlated if n is odd). The double-query n-box OS correlations yield R_n = 1, maximally violating this Kochen-Specker inequality. We may equivalently state the restriction as follows. Following the convention established in Sec. II C, we define X̄_a = (−1)^{X_a} ∈ {+1, −1}. For all measurement-noncontextual and deterministic assignments of the values X̄_a, at most n − 1 of the pairwise products can be −1, so that

S_n ≡ Σ_{a=1}^{n} ⟨X̄_aX̄_{a⊕1}⟩ ≥ −(n − 2),   (22)

whereas the double-query n-box OS correlations give S_n = −n.
Again, a simple algebraic way of manifesting the fact that the double-query n-box correlations do not admit of a measurement-noncontextual and outcome-deterministic model is that they require values X̄_a ∈ {+1, −1} such that

X̄_1X̄_2 = −1,   X̄_2X̄_3 = −1,   ...,   X̄_{n−1}X̄_n = −1,   X̄_nX̄_1 = −1,

but the product of the left-hand sides is X̄_1²X̄_2²···X̄_n² = +1, while the product of the right-hand sides is (−1)^n = −1 for odd n.
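For small odd n, the bound S_n ≥ −(n − 2), and the fact that it is tight, can be confirmed by the same sort of enumeration (our sketch; the loop sizes are chosen only to keep the run fast):

# Enumeration (illustration): for odd n, deterministic noncontextual
# assignments satisfy S_n >= -(n-2), and the bound is attained.
import itertools

for n in (3, 5, 7, 9):
    smallest = min(sum(x[a] * x[(a + 1) % n] for a in range(n))
                   for x in itertools.product([+1, -1], repeat=n))
    assert smallest == -(n - 2)
    print(f"n={n}: min S_n = {smallest} = -(n-2)")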
We now consider what values of R_n and S_n can be achieved in quantum theory.
B. Klyachko's proof of the Kochen-Specker theorem
We require n Hermitian observables X̂_1, ..., X̂_n, each having eigenvalues 0 and 1, associated with the n measurements M_1, ..., M_n. As discussed in Sec. I B, for the specific case of n = 3, the pairwise commutativity of X̂_1, X̂_2 and X̂_3 implies their triplewise commutativity, and consequently the existence of a triplewise joint measurement and of a measurement-noncontextual and outcome-deterministic model. (footnote 8) Nonetheless, we can obtain something interesting for odd n greater than 3. We begin with the case of n = 5. A no-go theorem of this sort has recently been given by Klyachko [21] (see also Refs. [9] and [22]). The construction is as follows. We consider a quantum system described by a 3-dimensional Hilbert space, and all of the states we consider require only real-valued coefficients in some basis. Thus the system can be visualized in 3-dimensional Euclidean space. The observables are projectors X̂_a = |l_a⟩⟨l_a|, where the vectors {|l_a⟩ : a = 1, ..., 5} are of the form

|l_a⟩ = (sin θ cos φ_a, sin θ sin φ_a, cos θ), with φ_a = 4πa/5,   (24)

so that the sequence of vectors forms a pentagram, as in Fig. 1. The angle θ is chosen such that vectors adjacent in the sequence are orthogonal, ⟨l_a|l_{a⊕1}⟩ = 0, where ⊕ denotes sum modulo 5. As a result of this orthogonality relation, adjacent observables X̂_a, X̂_{a⊕1} are indeed jointly measurable. It is clear that such a value of θ exists because, as it varies from 0 to π/2, the angle between adjacent vectors varies from 0 to 4π/5. In fact, orthogonality is achieved at cos θ = 1/5^{1/4}. Now consider a preparation of the quantum state |ψ_1⟩ corresponding to the vector lying along the symmetry axis of the pentagram, such that the angle between it and each of the |l_a⟩ is θ. In a measurement of any adjacent pair of observables X̂_a, X̂_{a⊕1}, either just one of them yields the outcome 1, in which case the outcomes are anti-correlated, or both yield the outcome 0. The probability of anti-correlation is 2cos²θ = 2/√5 ≈ 0.894, so that the Kochen-Specker bound of Eq. (21) is violated:

R_5 = 2/√5 > 4/5.

Equivalently, in terms of the observables 2X̂_a − 1̂, where 1̂ is the identity operator, the state |ψ_1⟩ achieves S_5 = 5 − 4√5 ≈ −3.944, violating the bound S_5 ≥ −3. The value 2/√5 is in fact the maximum possible quantum violation of this Kochen-Specker inequality. We show this in Appendix C with the help of the converging hierarchy of semidefinite programming (SDP) tools discussed in Ref. [23] [see also Eq. (26) and Eq. (27) below].
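All of these numbers are straightforward to confirm; the following numpy sketch (ours, not from the paper) builds the pentagram vectors of Eq. (24), checks the adjacent orthogonality, and recovers 2/√5 and 5 − 4√5:

# Numerical check (illustration) of Klyachko's pentagram construction.
import numpy as np

n = 5
theta = np.arccos(5 ** -0.25)             # cos(theta) = 1/5^(1/4)
phi = 4 * np.pi * np.arange(1, n + 1) / n
l = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.full(n, np.cos(theta))], axis=1)   # the five unit vectors |l_a>

# Adjacent vectors are orthogonal, so adjacent projectors commute.
assert np.allclose([l[a] @ l[(a + 1) % n] for a in range(n)], 0)

psi1 = np.array([0.0, 0.0, 1.0])          # state on the symmetry axis
p1 = (l @ psi1) ** 2                      # p(X_a = 1) = cos^2(theta) for each a

# For orthogonal projectors the outcome pair (1,1) never occurs, so the
# anti-correlation probability for an adjacent pair is p_a + p_{a+1}.
p_anti = p1[0] + p1[1]
print(p_anti, 2 / np.sqrt(5))                    # both ~0.8944 > 4/5
print(n * (1 - 2 * p_anti), 5 - 4 * np.sqrt(5))  # S_5 ~ -3.944 < -3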
Note that unlike the no-coloring proofs of the Kochen-Specker theorem, this is a state-specific proof [24,25]. (footnote 9) In fact, for a 3-dimensional quantum system, this is a state-specific proof that involves the smallest set of vectors {|l_a⟩} satisfying the orthogonality relation ⟨l_a|l_{a⊕1}⟩ = 0 [9]. (footnote 10)

1. Generalization to all odd n
One can generalize Klyachko's no-go result to all odd n as follows. Define n observables by the projectors onto vectors {|l_a⟩ : a = 1, ..., n} defined as in Eq. (24) but with φ_a = (n−1)πa/n and with θ chosen such that ⟨l_a|l_{a⊕1}⟩ = 0, where ⊕ denotes sum modulo n. This is achieved when cos²θ = cos(π/n)/(1 + cos(π/n)). This set of n vectors forms what is known as an {n/k} star polygon with k = (n−1)/2 [26]. The {5/2}, {7/3} and {9/4} star polygons are depicted in Fig. 2. Again, preparing the quantum state on the symmetry axis of the star polygon, the probability of anti-correlation for adjacent observables violates the Kochen-Specker bound of Eq. (21), with

R_n = 2cos²θ = 2cos(π/n)/(1 + cos(π/n)) > (n − 1)/n,   (26)

or equivalently, the Kochen-Specker bound of Eq. (22), with

S_n = n(1 − 4cos²θ) = n[1 − 3cos(π/n)]/[1 + cos(π/n)] < −(n − 2).   (27)

As with the n = 5 case, these values also represent the strongest possible quantum violation of these Kochen-Specker inequalities, as is shown in Appendix C. At large n, the quantum probability approaches unity quadratically, 1 − R_n ≈ π²/(4n²), in contrast to the linear approach to unity, 1 − R_n = 1/n, of the Kochen-Specker bound. It is worth emphasizing that by using the quantum correlations for n measurements, the seer can achieve something very close to the ends he achieved in the original parable. Specifically, the seer can construct a prediction game such that suitors who reason classically think the game is fair (i.e. they think it is highly likely that some suitor will win) when in fact it is not (because classical reasoning does not apply and it is actually highly unlikely that any suitor will win).
The prediction game that meets the seer's ends is as follows. The suitor is asked to pick an adjacent pair of boxes that he believes to be both empty or both full and to open those. If his prediction for those two boxes is correct, the suitor wins; otherwise he loses. With what probability will a suitor who reasons classically expect to win? We presume that he knows the seer to be adversarial, and so he reasons that the seer has prepared a classical configuration which makes his [the suitor's] task as difficult as possible. He reasons therefore that the configuration is one wherein only one adjacent pair of boxes is both full or both empty (by his classical lights, he knows that there must be at least one such pair for an odd number of boxes). Thus the suitor expects his probability of winning to be the probability that he has guessed correctly which of all the n pairs is the correlated one, times the probability that he has guessed their contents correctly: overall, a probability of 1/(2n). In fact, the probability of the suitor's prediction coming true is only of order 1/n² in the quantum scheme described above. Let us say the number of suitors is l, assumed large. Then if the seer chooses the number of boxes n such that n ≪ l ≪ n², the suitors believe it to be very likely that one of them will win when in fact it is very likely that none of them will win.
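Under the assumptions above (star-polygon measurements and the symmetry-axis state), the mismatch between the suitors' classical expectation and the actual quantum odds is easy to tabulate; a small sketch of ours:

# Illustration: the classically expected winning probability 1/(2n) versus the
# actual quantum probability ~pi^2/(4n^2) in the seer's n-box game.
import numpy as np

for n in (5, 11, 51, 101):
    cos2 = np.cos(np.pi / n) / (1 + np.cos(np.pi / n))  # cos^2(theta)
    p_both_empty = 1 - 2 * cos2   # quantum prob. an adjacent pair is both empty
    print(f"n={n:4d}: classical expectation 1/(2n) = {1/(2*n):.5f}, "
          f"quantum win prob. = {p_both_empty:.5f} ~ {np.pi**2/(4*n**2):.5f}")

(The suitor's best bet is a pair predicted both empty, since for orthogonal projectors the both-full outcome never occurs.)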
C. A proof of the Kochen-Specker theorem based on the failure of transitivity of implication
Specker's intent in introducing his parable was to demonstrate the logical possibility of a failure of the transitivity of implication. The idea is straightforward. Suppose s_1, s_2 and s_3 are propositions that assert the presence of a gem in boxes 1, 2 and 3 respectively, and ¬s_1, ¬s_2 and ¬s_3 assert their negations. We have s_1 ⟹ ¬s_2 (because boxes 1 and 2 are never found both full), and ¬s_2 ⟹ s_3 (because boxes 2 and 3 are never found both empty). If implication were transitive, then we could conclude that s_1 ⟹ s_3. But in fact we have s_1 ⟹ ¬s_3 (because boxes 1 and 3 are never found both full). Therefore, assuming a gem is sometimes found in box 1, transitivity fails.
Specker's 1960 article was framed within the tradition of quantum logic, and although some researchers have proposed that quantum theory might require us to abandon some of the rules of classical logic as rules of right-reasoning (see, for example, Ref. [27]), we will not consider this possibility here. Indeed, if we incorporate the context of a measurement in the propositions, so that we distinguish s_1, finding a gem in box 1 in the context of measuring box 1 with box 2, from s′_1, finding a gem in box 1 in the context of measuring box 1 with box 3, then the transitivity of implication can be salvaged and there is no challenge to classical logic.
Nonetheless, the failure of the transitivity of implication provides another perspective on how to generate no-go results for measurement-noncontextual outcomedeterministic models. In such models, implications among value assignments of observables are necessarily transitive because these value assignments do not depend on the context of the measurement. A failure of the transitivity of implication therefore implies the impossibility of such a model.
In the case of the double-query n-box OS correlations, if n is odd, the perfect anti-correlations justify the following implications around the ring of boxes:

X_1 = 1 ⟹ X_2 = 0 ⟹ X_3 = 1 ⟹ ··· ⟹ X_{n−1} = 0 ⟹ X_n = 1 ⟹ X_1 = 0.

By the transitivity of implication, we would conclude that X_1 = 1 ⟹ X_1 = 0. Given that X_1 = 1 is sometimes observed, one has a contradiction. Consequently, the observation of the double-query n-box OS correlations implies the impossibility of a measurement-noncontextual outcome-deterministic ontological model.
We now demonstrate the existence of a quantum analogue of this argument in the case of n = 5. Specifically, we demonstrate that for the set of observables in Klyachko's proof, specified in Eq. (24) and depicted in Fig. 1, there is a quantum state such that

X_1 = 1 ⟹ X_2 = 0 ⟹ X_3 = 1 ⟹ X_4 = 0 ⟹ X_5 = 1 ⟹ X_1 = 0.   (30)

First, note that an inference from X_a = 1 to X_{a⊕1} = 0 can be made independently of the quantum state, because for any pair of orthogonal projectors, at most one of them can take the value 1. However, an inference from X_a = 0 to X_{a⊕1} = 1 is only true for certain quantum states, because a pair of projectors may both be assigned the value 0. To ensure that X_a = 0 implies X_{a⊕1} = 1, we must choose a quantum state that lies in the span of the vectors |l_a⟩ and |l_{a⊕1}⟩ in Hilbert space. This way, the vector orthogonal to this span is assigned value 0, such that if |l_a⟩ is assigned value 0, |l_{a⊕1}⟩ must be assigned the value 1. Starting with an assignment of X_1 = 1, we need to make the X_a = 0 to X_{a⊕1} = 1 inference twice in the pentagram: from X_2 = 0 to X_3 = 1 and from X_4 = 0 to X_5 = 1. Consequently, we need a quantum state that lies in the subspace (plane) spanned by |l_2⟩ and |l_3⟩ but also in the subspace spanned by |l_4⟩ and |l_5⟩. Fortunately, these subspaces intersect on a ray (see Fig. 1), and therefore we take the quantum state to be the one associated with that ray, indicated in Fig. 1 as |ψ_2⟩. Therefore, assuming a preparation of the state |ψ_2⟩, we have the sequence of implications of Eq. (30). By the transitivity of implication, we can conclude that X_1 = 1 ⟹ X_1 = 0. Given that X_1 = 1 is assigned non-zero probability by |ψ_2⟩, specifically, p = 1 − 2/√5 ≈ 0.1056, we have derived a contradiction from the assumption of the transitivity of implication, and therefore also from the assumption of an ontological model that is measurement-noncontextual and outcome-deterministic for projective measurements (i.e. traditionally-noncontextual). (footnote 11)

[Footnote 11: A slightly different way of seeing the contradiction is that transitivity of implication specifies that X_1 = 1 implies X_5 = 1, whereas by a joint measurement of X_1 and X_5, we would infer that X_1 = 1 implies X_5 = 0.]
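The geometry underlying this Hardy-style argument can be checked directly; the numpy sketch below (ours) constructs |ψ_2⟩ as the intersection of the two planes and recovers p(X_1 = 1) = 1 − 2/√5:

# Numerical check (illustration) of the Hardy-style pentagram argument.
import numpy as np

n = 5
theta = np.arccos(5 ** -0.25)
phi = 4 * np.pi * np.arange(1, n + 1) / n
l = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.full(n, np.cos(theta))], axis=1)  # |l_1>,...,|l_5>

chi = np.cross(l[1], l[2])    # normal to span{|l_2>,|l_3>}
chi_p = np.cross(l[3], l[4])  # normal to span{|l_4>,|l_5>}
psi2 = np.cross(chi, chi_p)   # intersection ray of the two planes
psi2 /= np.linalg.norm(psi2)

# |psi_2> lies in both planes, enabling the inferences
# X_2 = 0 => X_3 = 1 and X_4 = 0 => X_5 = 1.
assert np.isclose(psi2 @ chi, 0)
assert np.isclose(psi2 @ chi_p, 0)

print((psi2 @ l[0]) ** 2, 1 - 2 / np.sqrt(5))  # p(X_1 = 1) ~ 0.1056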
The state-specific Kochen-Specker proof we have just presented turns out to be related to Clifton's 8-ray Kochen-Specker proof [24]. The latter makes use of the famous 8-vertex subgraph of the original 117-vertex Kochen-Specker proof [1]. Clifton's proof also has an interesting connection with the pre- and post-selection effect known as the "three-box paradox" [28], as shown in Ref. [29]. A connection between Klyachko's Kochen-Specker proof and the 8-ray proof (as well as Hardy's nonlocality proof) has also been noted previously in Ref. [30].
To see how our proof is related to Clifton's, let us denote the vector orthogonal to the span of |l_2⟩ and |l_3⟩ by |χ⟩ and the one orthogonal to the span of |l_4⟩ and |l_5⟩ by |χ′⟩. The orthogonality relations of the eight vectors {|l_1⟩, |l_2⟩, |l_3⟩, |l_4⟩, |l_5⟩, |χ⟩, |χ′⟩, |ψ_2⟩} are then summarized by the diagram in Fig. 3 (where nodes represent rays and the presence of an edge represents orthogonality). In an outcome-deterministic measurement-noncontextual model, every vector must receive a value 0 or 1, with exactly one member of every orthogonal triple receiving the value 1, and no more than one member of an orthogonal pair receiving the value 1. Clifton's proof can then be phrased as follows. Given a preparation of |ψ_2⟩, the vector |ψ_2⟩ (considered as a measurement outcome) must be assigned the value 1, and the vector |l_1⟩ has a nonzero probability of being assigned the value 1. We denote the value assigned to vector |φ⟩ by v(|φ⟩). From v(|ψ_2⟩) = 1 we infer v(|χ⟩) = v(|χ′⟩) = 0, and from v(|l_1⟩) = 1 (which happens with nonzero probability) we infer v(|l_2⟩) = v(|l_5⟩) = 0. One then concludes from v(|χ⟩) = 0 and v(|l_2⟩) = 0 that v(|l_3⟩) = 1, and from v(|χ′⟩) = 0 and v(|l_5⟩) = 0 that v(|l_4⟩) = 1. However, v(|l_3⟩) = 1 and v(|l_4⟩) = 1 is a contradiction, since |l_3⟩ and |l_4⟩ are orthogonal. This is the standard way of deriving a contradiction for the eight rays in Clifton's proof; however, one could equally well use the fact that v(|χ⟩) = v(|χ′⟩) = 0 and v(|l_1⟩) = 1 to justify anti-correlation across every edge around the ring {|l_1⟩, |l_2⟩, |l_3⟩, |l_4⟩, |l_5⟩}, which is just the proof we have presented above.
A. A separated pair of single-query 3-box systems
In this section, we consider another variation on Specker's parable. The seer has a novel 3-box system which allows only a single box to be opened, rather than two. To distinguish the two types of three-box systems, we call the former a single-query system and the latter a double-query system. We also assume that the seer can prepare a pair of single-query systems that mimic the behavior of the double-query system as follows: if the same box is opened on one system as is opened on the other, one obtains the same result (both are always found to be full, or both empty); if different boxes are opened, then one obtains different results (one is always full and the other empty). For the benefit of skeptical suitors, the seer allows for the queries of the two different systems to be implemented at space-like separation. We imagine that they are transported to different corners of the Assyrian empire: one to Abydos and the other to Babylon. The suitor dispatches two of his trusted classmates, one to each of these two cities, and instructs them to choose a box at random.
We are therefore imagining a situation wherein two observables are measured jointly by first preparing a pair of systems in a perfectly correlated state and measuring one observable on each.
As we will demonstrate below, this version of the Specker parable allows us to establish a simple proof of nonlocality in the same spirit as that presented by Mermin in Ref. [10]. Let us denote the choices made by the two classmates by a and b respectively, taking values in the set {1, 2, 3}, corresponding to the choice of box. Further, we denote the results of box a at Abydos and box b at Babylon, respectively, by A_a and B_b, taking values in {1, 0} corresponding to the observations {full, empty}. Then we can express the condition that the outcomes must satisfy in this two-wing version of the Specker parable as

δ_{A_a,B_b} = δ_{a,b},   (32)

where δ denotes the Kronecker delta function. To quantify the extent to which these correlations are realized, let us define R_3 as the weighted sum, assuming a and b are chosen uniformly at random, of the probability of achieving perfect correlation when a = b, and the probability of achieving perfect negative correlation when a ≠ b. That is,

R_3 ≡ (1/9) [ Σ_{a=b} p(A_a = B_b | M_a, M_b; P) + Σ_{a≠b} p(A_a ≠ B_b | M_a, M_b; P) ],   (33)

where p(A_a = B_b | M_a, M_b; P) refers to the probability of finding A_a = B_b conditioned on box a being opened at Abydos and box b being opened at Babylon; likewise for p(A_a ≠ B_b | M_a, M_b; P).
The OS correlations described in the two-wing Specker parable can be summarized as

p(A_a = B_b | M_a, M_b; P) = δ_{a,b}.   (33)

The assumption that M_a and M_b are jointly measurable in the sense of definition (1) implies that they must satisfy a condition of no superluminal signaling [17,31], namely,

Σ_{B_b} p(A_a, B_b | M_a, M_b; P) = p(A_a | M_a; P),   Σ_{A_a} p(A_a, B_b | M_a, M_b; P) = p(B_b | M_b; P),   (34)

which asserts that the conditional marginal probabilities p(A_a | M_a; P) obtained by summing over B_b are independent of the choice of the distant measurement procedure M_b, and likewise for p(B_b | M_b; P). It is simple to show, as we do in Appendix D, that by imposing the no-signaling condition, the correlations are constrained to be of the following form:

a = b : p(0, 0 | M_a, M_b; P) = p(1, 1 | M_a, M_b; P) = 1/2,
a ≠ b : p(0, 1 | M_a, M_b; P) = p(1, 0 | M_a, M_b; P) = 1/2.   (35)

We will henceforth call these the nonlocal OS correlations. The winning probability for Eq. (32) is unity for these correlations, i.e.,

R_3^{NLOS} = 1.   (36)

They are the only non-signaling correlations that can win this prediction game deterministically. This implies, in particular, that the nonlocal OS correlations represent an extreme point of the convex set of non-signaling correlations [32], very much like the archetypical PR-box correlations [17] (see footnote 12) for the scenario where a, b only run from 1 to 2. Although these correlations do not allow for superluminal signaling, they do violate Bell's assumption of local causality [33], as we now demonstrate. In order to enforce perfect positive correlations when the suitor's two classmates make the same measurement, the Babylonian system must be prepared with an answer for each possible query that matches the answer that the Abydosian system is prepared to provide. It follows that there are deterministic noncontextual hidden variables determining the outcome on the Babylonian system. This step is familiar from Bell's original derivation of his theorem [6]: locality together with the assumption of perfect correlations implies the existence of deterministic noncontextual values for each system. Given such values, it is easy to see that the overall probability of winning the game in a locally causal model is at most 7/9. That is,

R_3^{local} ≤ 7/9.   (37)

This is a Bell inequality. The fact that R_3^{NLOS} = 1 for the seer's system is a violation of this Bell inequality and a proof that no locally causal model of the nonlocal OS correlations is possible.
The Bell inequality (37) can also be written in terms of the more conventional correlation function, or the so-called two-party correlator ⟨Ā_a B̄_b⟩, where Ā_a, B̄_b take on values {+1, −1} as usual:

⟨Ā_a B̄_b⟩ ≡ Σ_{Ā_a, B̄_b} Ā_a B̄_b p(Ā_a, B̄_b | M_a, M_b, P).   (38)

The two-party correlator is simply the average value of the product of the result in Abydos when box a was chosen, multiplied by the result in Babylon when box b was chosen. Together with the normalization condition Σ_{Ā_a, B̄_b} p(Ā_a, B̄_b | M_a, M_b, P) = 1, we can now reexpress the winning probability as R_3 = (1/18) S_3 + 1/2, where

S_3 ≡ Σ_a ⟨Ā_a B̄_a⟩ − Σ_{a≠b} ⟨Ā_a B̄_b⟩.   (39)

In these notations, it is again easy to verify that if the variables Ā_a, B̄_b admit pre-existing values ±1 (i.e. are determined by hidden variables), then S_3 ≤ 5 [cf. Eq. (37)].

[Footnote 12: The terminology "box" is, in the present circumstances, unfortunate. It refers to a "black box" (i.e. an unexplained, indeed inexplicable, source of correlations) between two distant parties, just as in our above scenario.]
(As is now well-known [19], the same bound also applies to any locally causal model wherein the values of the variables are determined stochastically by hidden variables.) Specker's correlations require S_3 = 9, thus clearly violating the Bell inequality.
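Since the locally causal bound is attained on deterministic strategies (stochastic local models are convex mixtures of these), the bounds R_3 ≤ 7/9 and S_3 ≤ 5 can be confirmed by enumerating all 64 deterministic assignments. The following Python sketch is ours, not part of the original proof:

# Brute-force check that deterministic local assignments give R_3 <= 7/9
# and S_3 <= 5 for the prediction game of Eq. (32).
from itertools import product

best_R, best_S = 0.0, -9
for A in product([0, 1], repeat=3):          # predetermined outcomes A_1..A_3
    for B in product([0, 1], repeat=3):      # predetermined outcomes B_1..B_3
        wins = sum((A[a] == B[b]) == (a == b)    # correlate iff a = b
                   for a in range(3) for b in range(3))
        S = sum(1 if A[a] == B[a] else -1 for a in range(3)) \
            - sum(1 if A[a] == B[b] else -1
                  for a in range(3) for b in range(3) if a != b)
        best_R, best_S = max(best_R, wins / 9), max(best_S, S)
print(best_R, best_S)   # -> 0.7777... (= 7/9) and 5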
B. Mermin's proof of Bell's theorem
What about correlations allowed in quantum theory? We know from a celebrated theorem by Cleve et al. (Theorem 5.12, Ref. [34]) that there is no quantum strategy that can give unit winning probability. While it is not possible to realize the over-protective seer parable as formulated above, it is nevertheless possible to demonstrate, using quantum mechanics, correlations that approximate the desired correlations better than any locally causal model can. As it turns out, the largest winning probability allowed by quantum theory is (see Appendix E for details)

R_3^{quantum} = 5/6,   (40)

and hence S_3^{quantum} = 6 (which exceed the Bell-local bounds of 7/9 and 5 respectively). That quantum theory allows such non-trivial correlations can be verified by considering the two-qubit maximally entangled state (1/√2)(|0⟩|0⟩ + |1⟩|1⟩) (in the σ̂_z basis) and letting Ā_1, Ā_2, and Ā_3 be the results of measuring the three Pauli operators equally spaced in the ẑ-x̂ plane, defined by

Â_a = cos[(2π/3)(a − 1)] σ̂_z + sin[(2π/3)(a − 1)] σ̂_x,   (41)

likewise for the B̄_b, which are defined identically. Thus quantum mechanics allows us to move towards the extremal nonlocal correlations in our formulation of the parable. This proof that quantum theory violates Bell locality (Bell's theorem) is in fact the one popularized by David Mermin [10].
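The quantum values quoted above are easy to check numerically. The following Python sketch is ours; it builds the maximally entangled state and the observables of Eq. (41) explicitly:

# Numerical check that the maximally entangled two-qubit state with the
# trine measurements of Eq. (41) gives S_3 = 6 and hence R_3 = 5/6.
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
phi = lambda a: 2 * np.pi / 3 * (a - 1)            # angles of Eq. (41)
A = [np.cos(phi(a)) * sz + np.sin(phi(a)) * sx for a in (1, 2, 3)]

psi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
corr = lambda a, b: np.real(psi.conj() @ np.kron(A[a], A[b]) @ psi)

S3 = sum(corr(a, a) for a in range(3)) \
     - sum(corr(a, b) for a in range(3) for b in range(3) if a != b)
print(S3, S3 / 18 + 0.5)    # -> 6.0 and 0.8333... (= 5/6)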
Generalization to all odd n
It is straightforward to generalize this new parable to the case of n boxes for all odd n ≥ 5. Specifically, posit a separated pair of n-box rings such that if the box that is opened in Abydos is the same as the one opened in Babylon, the outcomes agree, while if the index a of the box opened in Abydos differs by 1 from the index b of the one opened in Babylon, that is, if b = a ⊕ 1 or a = b ⊕ 1 (where ⊕ here denotes cyclic addition on {1, …, n}), then the outcomes disagree. As it turns out, the correlations must be of the form (see footnote 14)

a = b : p(0, 0 | M_a, M_b; P) = p(1, 1 | M_a, M_b; P) = 1/2,
b = a ⊕ 1 or a = b ⊕ 1 : p(0, 1 | M_a, M_b; P) = p(1, 0 | M_a, M_b; P) = 1/2.   (42)

We do not specify the nature of the correlation for other values of a and b.

[Footnote 14: The proof of this proceeds analogously to the one given in Appendix D for the specific case of n = 3.]
We can define the average probability of success as

R_n ≡ (1/(3n)) [ Σ_a p(A_a = B_a | M_a, M_a; P) + Σ_a p(A_a ≠ B_{a⊕1} | M_a, M_{a⊕1}; P) + Σ_a p(A_{a⊕1} ≠ B_a | M_{a⊕1}, M_a; P) ].   (43)

It is evident that with a local strategy, if one has perfect correlation when a = b, then when a = b ⊕ 1 or b = a ⊕ 1, one can have perfect anti-correlation with probability at most (n − 1)/n. Furthermore, no local strategy can do any better than this. Consequently, given that the conditions a = b, a = b ⊕ 1, and b = a ⊕ 1 arise with probability 1/3 each, the winning probability with a local strategy is upper bounded by (see footnote 15)

R_n^{local} ≤ (1/3) [1 + 2(n − 1)/n] = 1 − 2/(3n).   (44)

[Footnote 15: It is worth noting that none of the following Bell inequalities is facet-inducing (following the terminology of Ref. [35]), or tight (following the terminology of Refs. [36,37]). That is, they do not correspond to a boundary of the set of locally causal correlations with maximal dimension.]

Quantum theory can violate this inequality. Using the same entangled state as above, we generalize Eq. (41) to

Â_a = cos φ_a σ̂_z + sin φ_a σ̂_x, where φ_a = ((n − 1)/n) π (a − 1),   (45)

and likewise for the B̂_b (Fig. 4). There are 3n kinds of measurement statistics that appear in R_n. We consider each in turn. For the n terms wherein a = b, we obtain perfect correlation with probability 1, while for the n terms wherein a = b ⊕ 1 and the n terms wherein b = a ⊕ 1, we obtain anti-correlated outcomes
with probability cos²(π/2n). In all then, we find the corresponding probability of success to be

R_n^{quantum} = (1/3) [1 + 2 cos²(π/2n)].   (46)

Once again, for a large number of suitors, the seer can choose n, the number of measurement settings, to ensure that with very high probability all of the suitors will lose, despite their classically founded expectation that one of their number is very likely to win.
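A short tabulation (ours) of Eqs. (44) and (46) shows that the quantum value exceeds the local bound for every odd n, with both tending to 1 as n grows:

# Compare the local bound of Eq. (44) with the quantum value of Eq. (46).
import numpy as np

for n in (3, 5, 7, 9, 21, 101):
    local = 1 - 2 / (3 * n)
    quantum = 1 / 3 + (2 / 3) * np.cos(np.pi / (2 * n)) ** 2
    print(n, round(local, 5), round(quantum, 5), quantum > local)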
C. Connection to previous work
An analogous game is discussed by Vaidman [38] who considers a slightly different narrative device: a necklace having an even number n of beads each of which can be one of two colors and such that one finds all adjacent beads to be of different colors except for the first and last beads which are of the same color. It is clear that by replacing the first and last beads by a single bead, we have precisely the correlations considered above. Another variation of the game was considered by Braunstein and Caves [39]: there, perfect correlation is required for all adjacent pairs of measurements except that between the first and the last, in which case perfect anti-correlation is required. This game gives rise to the so-called "chained Bell inequalities" [39].
The problem of maximizing the winning probability R n is also relevant to the strength of a two-prover interactive proof system of the type described by Cleve et al. [34]. The two provers are taken to be two agents of the seer (one sent to Abydos and the other to Babylon), while the suitor is the verifier. The provers' task is to convince the verifier that a cyclic graph with an odd number n of vertices is 2-colorable (despite the fact that it is not). The verifier sends the name of a vertex to each prover such that the two vertices are either the same or adjacent. The provers, who cannot communicate with one another, must each respond with a color. The existence of systems generating the nonlocal OS correlations would provide the provers with a perfect winning strategy.
Cleve et al. have analyzed a two-player interactive proof, called the odd cycle game, which is very similar to the one we consider here. The odd cycle game is another natural generalization of Specker's parable to a pair of systems where for a given measurement on the Abydosian system, there are two rather than three options for the measurement on the Babylonian system: it is the same, i.e. b = a, or it has index one higher, i.e. b = a ⊕ 1. The possibility of a = b ⊕ 1, which is allowed in the game we have considered, and ensures symmetry between the two players, is excluded in the odd cycle game. 16
D. From OS correlations to PR-box correlations
Another way of generalizing the single-query 3-box OS correlations to a separated pair of parties is to imagine that each party has a 3-box system, but the first party only ever opens the first or second box, while the second party only ever opens the second or third box. If we imagine that there is correlation when they both open the second box and anti-correlation otherwise, then this set of measurements is already sufficient to obtain a contradiction with a local model. Specifically, writing Ā_1, Ā_2 for the first party's boxes 1 and 2, and B̄_1, B̄_2 for the second party's boxes 2 and 3, the local deterministic values must satisfy

Ā_1 B̄_1 = −1,  Ā_1 B̄_2 = −1,  Ā_2 B̄_1 = +1,  Ā_2 B̄_2 = −1,   (47)

[Footnote 16: The upper bound on the winning probability with a local strategy for the odd cycle game is clearly R_n^{local} ≤ 1/2 + (1/2)(n − 1)/n = 1 − 1/(2n). The maximal quantum violation, which is determined in Ref. [34], is achieved if the measurements on Alice's system are the spin operators Â_a in Eq. (45), while the measurements on Bob's system are a rotation by an angle of π/4n of the spin operators B̂_b in Eq. (45). In this case, for the n terms wherein a = b, we have correlation with probability cos²(π/4n), and for the n terms wherein b = a ⊕ 1, we have anti-correlation with probability cos²(π/4n), such that R_n^{quantum} = cos²(π/4n) ≃ 1 − π²/(16n²).]
but the product of the left-hand sides is Ā_1² Ā_2² B̄_1² B̄_2² = +1, while the product of the right-hand sides is −1. The correlations of Eq. (47) are precisely the PR-box correlations [17] that have been extensively studied in recent years.
E. Hardy-type no-go theorems for Bell-local models
In outcome-deterministic ontological models that are local or noncontextual, implications among value assignments of observables are transitive because these value assignments do not depend on the context (local or remote) of the measurement. The failure of the transitivity of implication therefore implies the impossibility of such models. Again, we find that this conclusion has been reached before in the literature on nonlocality. Specifically, the Hardy-type proof of nonlocality [11] can be expressed in this fashion [40], a fact that was first noted by Stapp [41] (for a simplified account, see Refs. [42,43]).
We begin by presenting Hardy's proof of nonlocality in its standard form. It uses a pair of binary-outcome observables on each wing of the experiment. Hardy demonstrated a way of choosing these observables such that for any partially entangled pure state, the correlations between these observables satisfy:

whenever A_1 = 1, also B_1 = 1,   (48)
whenever B_2 = 1, also A_2 = 1,   (49)

while sometimes (A_1 = 1 and B_2 = 1) (i.e. with probability p_Hardy ≡ p(A_1 = 1 and B_2 = 1) > 0), (50) and never (A_2 = 1 and B_1 = 1). (51)
We can express this as a failure of the transitivity of implication as follows. From Eqs. (48), (51) and (49) (in its contrapositive form), we infer respectively

A_1 = 1 =⇒ B_1 = 1,   (52)
B_1 = 1 =⇒ A_2 = 0,   (53)
A_2 = 0 =⇒ B_2 = 0,   (54)

which we summarize graphically by

(A_1 = 1) =⇒ (B_1 = 1) =⇒ (A_2 = 0) =⇒ (B_2 = 0).

If transitivity held, then these three inferences would imply that

A_1 = 1 =⇒ B_2 = 0.   (55)

However, this contradicts Eq. (50) and consequently transitivity must fail. More explicitly, taking =⇒ to be material implication, the negation of Eq. (55) is the conjunction of A_1 = 1 and B_2 = 1,

¬(A_1 = 1 =⇒ B_2 = 0) = (A_1 = 1 and B_2 = 1),   (56)

so that the probability p_Hardy ≡ p(A_1 = 1 and B_2 = 1) quantifies the frequency with which the transitivity of implication fails. We now consider the status of this sort of proof for the PR box. By relabeling the outcomes of the standard PR box, one can obtain correlations of the form

always (A_1 = B_1),   (57)
always (A_1 = B_2),   (58)
always (A_2 ≠ B_1),   (59)
always (A_2 = B_2),   (60)

with marginals of the form p(A_1 = 0) = p(A_2 = 0) = p(B_1 = 0) = p(B_2 = 0) = 1/2. Eqs. (57), (59) and (60) imply the inferences of Eqs. (52), (53), and (54) respectively. Meanwhile, Eq. (58), together with the fact that p(A_1 = 1) = 1/2, implies that sometimes A_1 = 1 and B_2 = 1, or equivalently, that sometimes Eq. (55) fails, so that we have a contradiction with transitivity. Indeed, the probability of this occurring is p_Hardy = p(A_1 = 1 and B_2 = 1) = 1/2. Actually, p_Hardy only quantifies the probability for one particular kind of contradiction, which requires A_1 = 1 to get going. In the rest of the cases, where A_1 = 0, we still obtain a contradiction because Eqs. (57), (59) and (60) also imply inferences of the form of Eqs. (52), (53), and (54) under the substitutions A_a → A_a ⊕ 1 and B_b → B_b ⊕ 1. Transitivity then implies that A_1 = 0 =⇒ B_2 = 1, while Eq. (58) contradicts this. So one obtains a contradiction with certainty for the PR box.
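The PR-box reasoning above is easy to verify mechanically. In the Python sketch below (ours), the probability table implements the relabeled PR box of Eqs. (57)-(60); the convention that the (2,1) pair is the anticorrelated one is fixed by Eq. (59):

# Verify the relabeled PR-box correlations and compute p_Hardy.
from itertools import product

def p(a, b, A, B):
    # A_a = B_b with probability 1 except (a, b) = (2, 1), which is
    # perfectly anticorrelated; all marginals are uniform
    anti = (a, b) == (2, 1)
    return 0.5 if (A != B) == anti else 0.0

# normalization of each joint distribution
assert all(abs(sum(p(a, b, A, B) for A, B in product([0, 1], repeat=2)) - 1)
           < 1e-12 for a, b in product([1, 2], repeat=2))
print("p(A1=B1):", sum(p(1, 1, A, A) for A in (0, 1)))        # 1.0, Eq. (57)
print("p(A1=B2):", sum(p(1, 2, A, A) for A in (0, 1)))        # 1.0, Eq. (58)
print("p(A2!=B1):", sum(p(2, 1, A, 1 - A) for A in (0, 1)))   # 1.0, Eq. (59)
print("p_Hardy:", p(1, 2, 1, 1))                              # 0.5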
There is another aspect of these PR-box implications that cannot be emulated by quantum theory, which has recently been pointed out by Fritz [44]: if one supplements the implications in Eqs. (52)-(54) with the implication B_2 = 1 =⇒ A_1 = 1, or with either of the two reverse implications, that is, A_2 = 0 =⇒ B_1 = 1 or B_2 = 0 =⇒ A_2 = 0, then the resulting set of constraints cannot be satisfied by any quantum state and set of projective measurements.
As discussed in the introduction, and rehearsed in Sec. III C, Specker introduced his parable of the overprotective seer in order to demonstrate the possibility of a logic wherein there is a failure of the transitivity of implication. One therefore expects that the nonlocal OS correlations from Sec. IV A, which are based on Specker's parable, ought to provide a proof of nonlocality via such a failure of transitivity. This is indeed the case, as we now show. The nonlocal OS correlations, cf. Eq. (31), imply the following chain of implications:

(A_1 = 1) =⇒ (B_2 = 0) =⇒ (A_2 = 0) =⇒ (B_3 = 1).

If the transitivity of implication held, we would have

A_1 = 1 =⇒ B_3 = 1.   (61)

However, Eq. (31), together with the fact that p(A_1 = 1) = 1/2, cf. Eq. (35), implies that sometimes A_1 = 1 and B_3 = 0, which contradicts Eq. (61). Indeed, we achieve this contradiction with probability p_Hardy = p(A_1 = 1 and B_3 = 0) = 1/2. As with the PR box, one can obtain a contradiction with certainty also in the cases where A_1 = 0.
Although the nonlocal OS correlations cannot be achieved in quantum theory, it is interesting to ask whether the particular contradiction constructed above might be achieved with some nonzero probability for some choice of state and observables.
Indeed, this is possible. In particular, this can be achieved with p_Hardy = 144/(27 + √3)² ≈ 0.17443 by using the quantum state |ψ⟩ = (1 + η²)^{−1/2} (|0⟩|0⟩ − η|1⟩|1⟩) and projectors defined in terms of the parameters κ_a = η^{(a+1 mod 3) + 1/2}, with η = √3. The above Hardy-type proof of nonlocality via the failure of the transitivity of implications is entirely equivalent to the proof of nonlocality due to Boschi et al. [40]. Note that a slightly stronger contradiction with p_Hardy ≈ 0.17455 can be obtained with a different choice of η [40]. Moreover, this latter value of p_Hardy is only marginally different from the quantum-mechanical upper bound p_Hardy ≤ 0.17456 obtained from the tools of Ref. [23]. This suggests that the strongest contradiction in this scenario may already be achievable using a two-qubit partially entangled pure state. [Footnote 17: … it may require infinite-dimensional Hilbert space to achieve the strongest correlations allowed by quantum mechanics even though the two-qubit correlations are only marginally different from the quantum-mechanical upper bound derived from the tools of Ref. [23].] It is also worth noting that by considering a similar setup that involves an increasing number of boxes, and hence a longer chain of intransitive implications, quantum theory actually provides a contradiction with increasing p_Hardy that asymptotes to 50% [40].
We end this section with a demonstration that there is a particular kind of failure of transitivity that one does not find in quantum theory. We begin by noting that with a PR box, we can get a contradiction with the transitivity of implication in a manner which is different from that of Hardy's proof, and in some ways more striking. In addition to deriving Eqs. (52), (53) and (54) from Eqs. (57), (59) and (60), we can derive from Eq. (58)

B_2 = 0 =⇒ A_1 = 0.   (62)

Graphically, the chain of inferences is

(A_1 = 1) =⇒ (B_1 = 1) =⇒ (A_2 = 0) =⇒ (B_2 = 0) =⇒ (A_1 = 0).

Were transitivity of implication to hold, we would conclude that A_1 = 1 =⇒ A_1 = 0, which, together with the fact that p(A_1 = 1) = 1/2, yields a contradiction. This sort of proof is also available for the nonlocal OS correlations. It can be characterized as providing a sequence of inferences about values of observables wherein the consequent of the last inference contradicts the antecedent of the first inference. The question is whether this sort of contradiction can be achieved in quantum theory. To address this question, we recall how such inferences arise for a bipartite pure state |Ψ⟩ on AB. Taking {|i⟩} to be an orthonormal basis of H, ρ to be a density operator, 𝟙 to be the identity operator and U to be a unitary operator, we can always write |Ψ⟩ in the form

|Ψ⟩ = (√ρ ⊗ U) Σ_i |i⟩ ⊗ |i⟩.

Now suppose that one measures system A with the POVM {|φ⟩⟨φ|, 𝟙 − |φ⟩⟨φ|} and one obtains the |φ⟩⟨φ| outcome. This leads to an updating of the description of the state of system B to

|χ⟩ = (1/N_χ) U (√ρ)^T |φ*⟩,

where |φ*⟩ denotes the complex conjugate of |φ⟩ in the basis {|i⟩} and N_χ is a normalization factor. Consequently, a subsequent measurement on system B of the POVM {|χ⟩⟨χ|, 𝟙 − |χ⟩⟨χ|} will yield the |χ⟩⟨χ| outcome with certainty.
Next, consider the experiment wherein {|φ⟩⟨φ|, 𝟙 − |φ⟩⟨φ|} is not measured on A, but the measurement {|χ⟩⟨χ|, 𝟙 − |χ⟩⟨χ|} is made on B and the outcome |χ⟩⟨χ| is obtained. One then updates the description of the state of system A to

|φ′⟩ = (1/N_φ′) √ρ U^T |χ*⟩,

where N_φ′ is a normalization factor. A subsequent measurement on system A of the POVM {|φ′⟩⟨φ′|, 𝟙 − |φ′⟩⟨φ′|} will then yield the |φ′⟩⟨φ′| outcome with certainty.
The state |χ⟩ on B is called the relative state to |φ⟩ on A given |Ψ⟩ on AB [46]. Similarly, |φ′⟩ on A is the relative state to |χ⟩ on B given |Ψ⟩ on AB. If we find a particular state on one system, then we are certain to find the relative state on the other should we measure for it. Consequently, we can consider an arbitrary chain of such pairs of measurements, and at every step in the chain we can make a perfect inference from the positive outcome of one to the positive outcome of the other.
We pause at this point in the proof to note that this analysis provides a particularly simple way of understanding Hardy's proof of nonlocality. Using reasoning analogous to that above, the relative state to |φ′⟩ is |χ′⟩ where |χ′⟩ ∝ U ρ^T U† |χ⟩ (note that there clearly exist choices of ρ and U such that |χ′⟩ ≠ |χ⟩). If the transitivity of implication held, then by this sequence of perfect inferences, we would conclude that whenever |φ⟩⟨φ| is found on A, it would be the case that |χ′⟩⟨χ′| is necessarily found on B. However, this conclusion is false because the relative state to |φ⟩ is |χ⟩, so that the probability of finding |χ′⟩⟨χ′| on B is |⟨χ′|χ⟩|², which is less than one if |χ′⟩ ≠ |χ⟩. Thus transitivity must fail.
We now show that a quantum proof of nonlocality cannot be constructed in terms of a sequence of inferences wherein the consequent of the last inference contradicts the antecedent of the first. We define a set of N observables on A, each of which is projective, namely {|φ^(i)⟩⟨φ^(i)| : i = 1, …, N}, and a set of N similar observables on B, {|χ^(i)⟩⟨χ^(i)| : i = 1, …, N}. Here, |χ^(i)⟩ is the relative state to |φ^(i)⟩ and |φ^(i+1)⟩ is the relative state to |χ^(i)⟩. This implies that we can infer from finding |φ^(i)⟩⟨φ^(i)| on A to the necessity of finding |χ^(i)⟩⟨χ^(i)| on B, and from finding |χ^(i)⟩⟨χ^(i)| on B to the necessity of finding |φ^(i+1)⟩⟨φ^(i+1)| on A. If transitivity of implication held, then we could chain these inferences together such that from finding |φ^(1)⟩⟨φ^(1)| on A, we would infer the necessity of finding |φ^(N)⟩⟨φ^(N)| on A. The question is whether we can ever have such a chain where |φ^(N)⟩ is orthogonal to |φ^(1)⟩. Note from the analysis above that each round of the chain, A to B and back to A, amounts to applying ρ, so that |φ^(N)⟩ ∝ ρ^{N−1} |φ^(1)⟩. Therefore the condition for orthogonality is

⟨φ^(1)| ρ^{N−1} |φ^(1)⟩ = 0.

Given the non-negativity of ρ (and hence of ρ^{N−1}), this condition is only satisfied if ρ |φ^(1)⟩ = 0, but, since ρ is the reduced state of A, this would imply that the probability of finding |φ^(1)⟩⟨φ^(1)| on the bipartite state |Ψ⟩ vanishes. In other words, the only bipartite state for which we can have a chain of inference wherein the final consequent denies the initial antecedent is one that denies the initial antecedent. Therefore, such a contradiction cannot be achieved. This is in contrast to what occurs in the case of PR boxes and the nonlocal OS correlations, and is therefore a feature which distinguishes quantum theory from these foil theories. It is interesting to note, however, that in quantum proofs of contextuality one can find a chain of inferences where the final consequent denies the initial antecedent and the initial antecedent is sometimes true, as shown in Sec. III C.
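The relative-state chain is straightforward to explore numerically. The following Python sketch (ours) builds a generic bipartite state, iterates |φ⟩ → |χ⟩ → |φ′⟩ → …, and confirms that for this generic (full-rank) state the final state is never orthogonal to the initial one; the dimension, seed and chain length are arbitrary choices:

# Iterate the relative-state chain and check the overlap with the start.
import numpy as np

rng = np.random.default_rng(0)
d, N = 3, 8
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Psi = (M / np.linalg.norm(M)).reshape(d * d)       # generic |Psi> on AB

def relative_on_B(phi):   # |chi>, found with certainty on B given phi on A
    chi = phi.conj() @ Psi.reshape(d, d)
    return chi / np.linalg.norm(chi)

def relative_on_A(chi):   # |phi'>, found with certainty on A given chi on B
    phi = Psi.reshape(d, d) @ chi.conj()
    return phi / np.linalg.norm(phi)

phi1 = rng.normal(size=d) + 1j * rng.normal(size=d)
phi1 /= np.linalg.norm(phi1)
phi = phi1
for _ in range(N):
    phi = relative_on_A(relative_on_B(phi))
print("overlap |<phi1|phiN>|:", abs(phi1.conj() @ phi))   # strictly > 0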
V. FRUSTRATED NETWORKS
It is instructive to consider a network representation of the various correlations that we have considered thus far. The bit associated with the outcome of a binary-outcome measurement (this is the only type of measurement we've considered) is associated with a node. Perfect positive correlation between outcomes of distinct measurements is represented by a solid line between the nodes, perfect negative correlation by a dashed line. Such representations of correlations have been discussed before in the context of nonlocality proofs, in particular by Mitchell, Popescu and Roberts [47], in the Ph.D. thesis of Collins [48], and by Schmidt [49]. Fig. 5 provides network representations of the extremal correlations that were used in the no-go theorems for measurement-noncontextual outcome-deterministic models. The triangular network represents the OS correlations in Specker's parable; the square network represents the PR-box correlations (understood as a proof of contextuality, i.e. where the four measurements are considered to be implemented in one spatial location); the pentagonal network represents the extremal version of the correlations in Klyachko's no-go theorem; the hexagonal network represents the kind of correlations described by Vaidman in Ref. [38]. Fig. 6 provides network representations of the extremal correlations that were used in proofs of nonlocality. We have labeled the nodes to highlight the spatial region in which each of the outcomes occurs. The network on the left, which is graph-isomorphic to the square network above, represents the correlations generated by a PR box [17]. The network on the right depicts the correlations found in the separated pair of single-query 3-box systems of Sec. IV. Let the bit describing whether there is an even or an odd number of dashed lines along a path be called the parity of the path. We shall say that a network is frustrated if for some pair of nodes, there exist paths with different parities connecting those nodes. Clearly, each of the networks in Fig. 5 is frustrated. It is this frustration which captures the impossibility of an outcome-deterministic measurement-noncontextual model of these correlations. For the networks given in Fig. 6, this impossibility also gives rise to a simple proof of nonlocality of the depicted correlations.
For any network, we can determine whether or not it is frustrated by looking only at its cycles. This is because frustration occurs when there are two paths with differing parities and this fact will reveal itself by examining the cycle consisting of that pair of paths. Thus, to see the ways in which a network can be frustrated, it suffices to consider the ways in which cycles can be frustrated. For any integer number of nodes, it is straightforward to find all the frustrated cycles with that number of nodes. For two nodes, there is only a single path and therefore no possibility for frustration. At 3 nodes, the frustrated networks are those indicated in Fig. 7. The case of two correlations and one anti-correlation corresponds, in the imagery of Specker's parable, to a case where if boxes 1 and 2 or boxes 1 and 3 are opened, one finds the same outcome, but if boxes 2 and 3 are opened, the outcomes always differ. Note, however, that these different networks are equivalent up to a relabeling of the outcomes and consequently represent essentially the same correlations. Indeed, all the frustrated networks with a given number of nodes can be obtained one from another by a relabeling of the outcomes. It therefore suffices to consider a single representative of the equivalence class of frustrated networks with a given number of nodes.
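The parity criterion lends itself to a simple algorithm: attempt to 2-color the nodes consistently with the edge parities and report frustration on any failure. The following Python sketch is ours, and so is the graph encoding:

# Decide whether a correlation network is frustrated by 2-coloring each
# connected component and checking edge parities.
from collections import deque

def frustrated(n_nodes, edges):
    # edges: list of (u, v, parity), parity 0 = correlation (solid line),
    # parity 1 = anticorrelation (dashed line)
    adj = {u: [] for u in range(n_nodes)}
    for u, v, p in edges:
        adj[u].append((v, p)); adj[v].append((u, p))
    color = {}
    for start in range(n_nodes):
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, p in adj[u]:
                if v not in color:
                    color[v] = color[u] ^ p
                    queue.append(v)
                elif color[v] != color[u] ^ p:
                    return True        # two paths with different parities
    return False

triangle = [(0, 1, 1), (1, 2, 1), (2, 0, 1)]                # OS correlations
square = [(0, 1, 0), (1, 2, 0), (2, 3, 0), (3, 0, 1)]       # PR box
print(frustrated(3, triangle), frustrated(4, square))       # True True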
It is also possible to have a similar graphical representation for some of the no-go theorems for noncontextuality and locality that are based on a failure of transitivity of implication. We represent a set of implications among the values of binary-outcome observables by a directed graph with decorated edges. The implications of interest are of the form: X 1 = x =⇒ X 2 = y where x, y ∈ {0, 1} and either y = x or y = x⊕1. We depict this by inserting a directed edge (i.e. an arrow) from the node for X 1 to the node for X 2 and decorating the base of the arrow with the value x; the directed edge is solid if y = x and dashed if y = x⊕1. Note that this implication can also be written in its contrapositive form as X 2 = y ⊕ 1 =⇒ X 1 = x ⊕ 1. Therefore, we can always represent the same implication with an arrow in the opposite direction. When reversing an arrow, the value decorating the arrow stays the same if the arrow is solid and flips if the arrow is dashed.
If the parity is odd around a closed loop in such a directed graph, then the antecedent of the first implication is denied by the consequent of the last implication. Therefore, as long as the antecedent has non-zero probability, we have a failure of the transitivity of implication. Such a directed network is said to be frustrated.
In the introduction, we described how Specker's parable implies a failure of the transitivity of implication (under the assumption that value-assignments to observables are context-independent). Letting s i denote the proposition that X i = 1 (box i contains a gem), the set of implications are: s 1 =⇒ ¬s 2 , ¬s 2 =⇒ s 3 , and s 3 =⇒ ¬s 1 . These are represented by the directed network of Fig. 8(a), which is clearly frustrated. The set of implications that are used in the transitivity-based no-go theorem of Sec. III C are represented by the pentagonal version of this directed network, Fig. 8(b), which is also frustrated.
Unlike the undirected frustrated networks, which are composed of a set of correlations some or all of which are only approximated by the quantum correlations, the directed frustrated network of Fig. 8(b) is an exact specification of implications one finds in quantum theory, specifically, those described in the proof of Sec. III C. The only sense in which one could imagine a theory being "more contextual", according to this sort of proof, is by assigning a higher probability to the contradiction-generating valuation of the first observable in the chain. An extremal version of such a proof would be one wherein both possible valuations of the first observable yielded a contradiction. We conjecture that such a proof cannot be found in quantum theory. Finally, the "striking" form of the PR-box correlations, presented in Sec. IV E and associated with the set of implications below Eq. (62), is represented by the frustrated directed network in Fig. 9(a), and the generalization of this to the case of the nonlocal OS correlations is represented in Fig. 9(b). As was shown at the end of Sec. IV E, it is not possible to find a quantum state and a set of observables that instantiates such a set of implications while assigning a nonzero probability to the contradiction-generating valuation of the first observable.
VI. NO-GO THEOREMS FOR PREPARATION-NONCONTEXTUAL MODELS
So far, in all of our quantum analogues of Specker's parable, the correlations examined were between the outcomes of pairs of measurements that could be implemented jointly. In this section, we consider the possibility of achieving these correlations between the outcomes of pairs of measurements that are implemented consecutively (see footnote 18).
It is important to recognize that one need not rule out the possibility of consecutive measurements to ensure the impossibility of joint measurements. The original version of the Specker parable is misleading in this respect. It asks us to imagine that after opening two boxes, one is simply unable to open the third (as if its lid were glued shut with an unbreakable seal). The literal generalization to arbitrary measurements M 1 , M 2 and M 3 that can be measured jointly pairwise but not triplewise would seem to be that if M 1 and M 2 have been implemented, a mysterious force prevents us from carrying out the instructions that correspond to implementing M 3 . However, this conclusion does not follow from a denial of joint measurability as it is defined in Sec. II A. One can always implement M 3 following a measurement of M 1 and M 2 on a preparation P. It is just that the statistics of outcomes of M 3 that one thereby obtains is not the same as one would have obtained if M 3 were implemented on P directly. To be precise, if the joint statistics of outcomes of a pair of measurements M and M ′ are independent of the order in which they are implemented, then the consecutive implementation of the two measurements constitutes a joint measurement of M and M ′ by the definition of Sec. II A. Consequently, a denial of joint measurability implies a denial of the invariance of statistics under a reordering of the measurements. This way of interpreting a lack of joint measurability is precisely the one that is familiar from the quantum theory of projective measurements.
To see how the OS correlations might obtain for consecutive measurements, we present a new parable. We consider a single-query 3-box system, that is, one where only a single box can be opened at a time. A pair of boxes can be opened consecutively, but the second boxopening need not reproduce the statistics of outcomes that would have been observed had it been opened first. In this sense, the measurements associated with opening distinct boxes cannot be implemented jointly.
We now get to the specifics of the correlations, which are inspired by the original Specker parable. We assume that there is a special preparation P * of the 3-box system, such that if the same box is opened at the two times, then the same outcome is found, while if different boxes are opened at the two times, then different outcomes are found.
So far, there is nothing in this set of correlations that prohibits their being explained by a generalized-noncontextual ontological model. Because no two measurements are ever implemented jointly in this parable, there is no sense in which any measurement has a nontrivial context upon which its ontological representation might depend. [Footnote 18: Because implementing the first measurement and selecting a particular outcome constitutes a preparation, one can equally well describe this section as a consideration of the possibility of achieving analogues of the OS correlations between preparations and measurements. This is discussed further below.] Indeed, there are ontological models that explain the correlations easily. They need only posit that the first measurement disturbs the ontic state of the three-box system in order to enforce the appropriate correlations. For instance, suppose that three bits specify the gem occupation numbers of the three boxes and completely characterize the ontic state. It could be that finding a 0 (1) for a box forces the other two boxes to have occupation number 1 (0). (Indeed, if the suitor is opening boxes on a table, this kind of disturbance to the ontic state might be enforced by having a hidden mechanism under the table that automatically inserts or removes gems from the two boxes that were not opened.) To obtain a set of correlations that can challenge the assumption of generalized noncontextuality, we need to modify the thought experiment slightly by adding the following assumption: in addition to the correlations described, it is the case that after the early measurement is complete, for every possible subsequent measurement (the theory may well allow more than the three measurements that are used in the protocol), it is impossible to obtain any information about the identity of the early measurement. We call this the trit-obliviousness condition (this terminology has its precedent in Ref. [50]).
Note that implementing the early measurement procedure and selecting a particular outcome constitutes a preparation. For each of the three possible measurement procedures, M_1, M_2 and M_3, and each of the outcomes 0 and 1, we obtain a distinct preparation procedure. We denote these by P_{1,0}, P_{1,1}, P_{2,0}, P_{2,1}, P_{3,0} and P_{3,1} in an obvious notation. We can also define the preparations that result when one chooses not to condition on the outcome of the measurement procedure. We denote these by P_1, P_2 and P_3. Finally, we denote the probability of obtaining outcome 0 when the first measurement M_t is implemented on the special preparation P* by w_{t,0} ≡ p(0|M_t; P*), and we define w_{t,1} ≡ 1 − w_{t,0}. The statistics for the unconditional preparations are then given by

p(X|M; P_t) = w_{t,0} p(X|M; P_{t,0}) + w_{t,1} p(X|M; P_{t,1}).
We have demonstrated that the simple ontological model suggested earlier to explain the two-time OS correlations cannot also satisfy the condition of trit-obliviousness while preserving preparation noncontextuality. In the next subsection, we will show that no ontological model that can explain the OS correlations and the trit-obliviousness condition can be preparation-noncontextual. In this sense, a suitor who is committed to generalized noncontextuality should be surprised if he sees the specified two-time correlations after having confirmed the trit-obliviousness condition.
It is useful to summarize the correlations that we have described above.
A. Diachronic pair of single-query 3-box OS correlations
There are six possible preparation procedures, denoted P_{t,b} where t ∈ {1, 2, 3} (t for trit) and b ∈ {0, 1}, and three possible measurement procedures, denoted M_y where y ∈ {1, 2, 3}. For simplicity, we assume the prior over each of t, b and y to be uniform. The outcome X of the measurement procedure M_y given a preparation procedure P_{t,b} is the following function of t, b and y:

c_y(t, b) = b if y = t,  c_y(t, b) = b ⊕ 1 if y ≠ t,   (83)

that is, the correlations are such that

p(X = c_y(t, b)|M_y; P_{t,b}) = 1.   (84)

The values of c_y(t, b) are tabulated below:

(t, b)   c_1  c_2  c_3
(1, 0)    0    1    1
(1, 1)    1    0    0
(2, 0)    1    0    1
(2, 1)    0    1    0
(3, 0)    1    1    0
(3, 1)    0    0    1
Finally, defining the effective preparation procedure P_t as the mixture of P_{t,0} and P_{t,1}, it is assumed that no measurement can reveal any information about which of P_1, P_2 or P_3 was implemented,

∀M : p(X|M; P_1) = p(X|M; P_2) = p(X|M; P_3).   (85)

This is the trit-obliviousness condition.
Defining the average probability of success as

R_3 ≡ (1/18) Σ_{t,b,y} p(X = c_y(t, b)|M_y; P_{t,b}),   (86)

we can also characterize the two-time OS correlations as those achieving R_3 = 1. Using the trit-obliviousness condition, we shall see that the assumption of preparation noncontextuality places a bound on the average probability of success, namely,

R_3 ≤ 7/9.   (87)

We refer to this bound as a noncontextuality inequality. [Footnote: … the measurement M_y simply reveals the value of the yth bit, that is, p(X = λ_y|M_y; (λ_1, λ_2, λ_3)) = 1. It follows that Σ_λ p(X = 0|M_y; λ) p(λ|P_t) = 1 …]
The proof is as follows. For any measurement M, the probability of outcome X given preparation P_t is simply

p(X|M; P_t) = Σ_λ p(X|M; λ) p(λ|P_t).

Similarly, the probability of the ontic state λ given an implementation of P_t is simply

p(λ|P_t) = w_{t,0} p(λ|P_{t,0}) + w_{t,1} p(λ|P_{t,1}).

Given the trit-obliviousness condition, Eq. (85), and the assumption of preparation noncontextuality, Eq. (17), we infer that p(λ|P_1) = p(λ|P_2) = p(λ|P_3), which states that mixed preparations corresponding to different values of the trit t are not only indistinguishable at the operational level, but at the ontic level as well. Therefore, even if one knew λ, the posterior probabilities for t = 1, t = 2 and t = 3 would be the same, that is, one would know nothing about the trit t. The argument so far can be summarized as follows: for preparation-noncontextual models, trit-obliviousness at the operational level implies trit-obliviousness at the ontic level. The ontic state λ provides a classical encoding of (t, b), but one that does not contain any information about t.
To finish the argument, we take note of all the functions of t and b that contain no information about t. [Footnote 21: In the sense that for any given value of the function f(t, b), the conditional probability p(t|f(t, b)) = 1/3 for all t.] These are equivalent, up to an affine transformation (i.e. up to a scalar multiple and an additive constant), to one of the following four functions:

f(t, b) = b, c_1(t, b), c_2(t, b), or c_3(t, b),

where c_y(t, b) is defined in Eq. (83). In an ontological model that respects preparation noncontextuality and the trit-obliviousness condition, the ontic state must be given by one of these four functions, that is, p(λ|P_{t,b}) = δ_{λ,b} or δ_{λ,c_1(t,b)} or δ_{λ,c_2(t,b)} or δ_{λ,c_3(t,b)}. Note that in each case, the ontic state space is a single bit.
In the case of an ontological model wherein λ = b, the best the measurement device can do is to always output b ⊕ 1, because with probability 2/3, y ≠ t and c_y(t, b) = b ⊕ 1, while with probability 1/3, y = t and c_y(t, b) = b. Thus, for this ontological model, the average success probability is 2/3.
In the case of an ontological model wherein λ = c_1(t, b), the best the measurement device can do is to output c_1(t, b) when y = 1 and c_1(t, b) ⊕ 1 when y ≠ 1. Note that c_1(t, b) ⊕ 1 = c_2(t, b) for 2/3 of the values of (t, b), and c_1(t, b) ⊕ 1 = c_3(t, b) also for 2/3 of the values of (t, b).
(To see this, it suffices to take the negation of the c_1(t, b) column of the table and compare it with the c_2(t, b) and c_3(t, b) columns.) So we see that this choice of output generates the right correlations 2/3 of the time for y ≠ 1. Thus, for this ontological model, the overall success probability is (1/3)(1) + (2/3)(2/3) = 7/9.
By symmetry, the cases of λ = c_2(t, b) and λ = c_3(t, b) also achieve a success probability of at most 7/9. Therefore, the probability of success in a preparation-noncontextual ontological model is bounded above by 7/9.
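The case analysis above can be confirmed by brute force over all deterministic response functions (probabilistic responses, being convex mixtures, cannot do better). The following Python sketch is ours, with our own encoding of the response function g(λ, y):

# For each of the four admissible one-bit encodings lambda = f(t, b),
# optimize the response g(lambda, y) and confirm the best success is 7/9.
from itertools import product

def c(y, t, b):                      # Eq. (83)
    return b if y == t else b ^ 1

encodings = {"b": lambda t, b: b,
             "c1": lambda t, b: c(1, t, b),
             "c2": lambda t, b: c(2, t, b),
             "c3": lambda t, b: c(3, t, b)}

overall_best = 0.0
for name, f in encodings.items():
    best = 0.0
    for g in product([0, 1], repeat=6):      # g indexed by (lambda, y)
        wins = sum(g[f(t, b) * 3 + (y - 1)] == c(y, t, b)
                   for t in (1, 2, 3) for b in (0, 1) for y in (1, 2, 3))
        best = max(best, wins / 18)
    print(name, best)                        # b: 2/3; c1, c2, c3: 7/9
    overall_best = max(overall_best, best)
print("bound:", overall_best)                # -> 0.7777... (= 7/9)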
B. Quantum case
We now consider to what extent one can achieve the diachronic OS correlations in quantum theory. The following is a protocol that uses a single qubit. The three measurements correspond to the three operators Â_t of Eq. (41), corresponding to directions equally spaced in an equatorial plane of the Bloch sphere. The positive and negative eigenvalues are mapped onto outputs X = 0 and X = 1 respectively. The preparation procedures P_{t,0} and P_{t,1} correspond to the two eigenstates of Â_t, with positive and negative eigenvalues mapped onto b = 0 and b = 1 respectively. We denote these states by the Hilbert space vectors |φ_{t,b}⟩. The Bloch sphere representation of these states and measurements is provided in Fig. 10. When y = t, the preparation corresponds to an eigenstate of the observable being measured, and the outcome X equals the bit b. Thus, X = c_y(t, b) with probability 1 in this case. When y ≠ t, the probability of obtaining X = b is |⟨φ_{t,b}|φ_{y,b}⟩|² = cos²(π/3) = 1/4, while the probability of obtaining X = b ⊕ 1, and thus X = c_y(t, b), is 3/4. We have y ≠ t in 2/3 of cases, so that the overall probability of success is

R_3^{quantum} = (1/3)(1) + (2/3)(3/4) = 5/6.

Meanwhile, no information about t can be obtained by any quantum measurement, given that the mixtures associated with different values of t are represented by the same density operator:

(1/2)|φ_{1,0}⟩⟨φ_{1,0}| + (1/2)|φ_{1,1}⟩⟨φ_{1,1}| = (1/2)|φ_{2,0}⟩⟨φ_{2,0}| + (1/2)|φ_{2,1}⟩⟨φ_{2,1}| = (1/2)|φ_{3,0}⟩⟨φ_{3,0}| + (1/2)|φ_{3,1}⟩⟨φ_{3,1}| = 𝟙/2.

Thus we have a violation of the noncontextuality inequality of Eq. (87). Note that the OS correlations are useful for achieving the following two-party secure computation, which is a kind of multiplexing. Let the two parties be called Alice and Bob. Alice has as input a trit t ∈ {1, 2, 3} and a bit b ∈ {0, 1}, each chosen uniformly at random. Bob has as input a trit y ∈ {1, 2, 3} chosen uniformly at random. Bob outputs a bit c, and the goal of the task is for Bob to output c = c_y(t, b), that is, Bob should output b if y = t and the negation of b otherwise. Alice can send a system to Bob encoding information about her input; however, there is a cryptographic constraint: no information about the trit t can be transmitted to Bob, which is to say that the protocol must be trit-oblivious. This information-theoretic manner of characterizing the correlations provides a connection with the discussion of preparation noncontextuality found in Ref. [50].
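Both the success probability and the trit-obliviousness of this qubit protocol are easy to verify numerically. The following Python sketch is ours; the parametrization of the states by Bloch angles in the ẑ-x̂ plane follows the construction above:

# Verify R_3 = 5/6 and that the three unconditioned mixtures are all I/2.
import numpy as np

def ket(theta):                 # real qubit state at Bloch angle theta (z-x plane)
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

phis = {(t, b): ket(2 * np.pi / 3 * (t - 1) + np.pi * b)
        for t in (1, 2, 3) for b in (0, 1)}

R3 = np.mean([abs(phis[(y, c)] @ phis[(t, b)]) ** 2
              for t in (1, 2, 3) for b in (0, 1) for y in (1, 2, 3)
              for c in [(b if y == t else b ^ 1)]])
print("R3:", R3)                # -> 0.8333... (= 5/6)

for t in (1, 2, 3):             # antipodal pure states mix to I/2
    rho = 0.5 * (np.outer(phis[(t, 0)], phis[(t, 0)])
                 + np.outer(phis[(t, 1)], phis[(t, 1)]))
    print(np.allclose(rho, np.eye(2) / 2))   # -> True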
C. Justifying preparation noncontextuality by locality
As discussed in Ref. [12], it is sometimes possible to justify an assumption of preparation noncontextuality using Bell's assumption of local causality [33]. This is the case for the assumptions of preparation noncontextuality that appear in the derivation of the noncontextuality inequality of Eq. (87). It suffices to note that if one implements a measurement procedure on half of a correlated pair of systems and one conditions upon its outcome, then this procedure can also be considered a preparation procedure for the other half of the correlated pair. Indeed, given the separated pair of single-query 3-box systems considered in Sec. IV A, every measurement procedure M t on the 3-box system in Abydos chosen from t ∈ {1, 2, 3} and yielding outcome b ∈ {0, 1} corresponds to a preparation procedure P t,b for the 3-box system in Babylon. If M t is measured in Abydos but one does not condition on the outcome, then this corresponds to a preparation procedure P t of the system in Babylon. In this case, the probability of observing an outcome X for a measurement of M y in Babylon given a preparation P t,b is precisely equal to the probability of observing an outcome X for a measurement of M y in Babylon given an outcome b for M t in Abydos . There is an isomorphism between the diachronic pair of single-query 3-box systems and the separated pair. Now suppose that the Abydosian and Babylonian measurements are space-like separated. In this case, the no-signaling constraint ensures that the choice of t in Abydos cannot influence the outcome statistics of any measurement in Babylon and consequently that the three preparation procedures P 1 , P 2 and P 3 are operationally equivalent, that is, ∀M : p(X|M ; P 1 ) = p(X|M ; P 2 ) = p(X|M ; P 3 ). This is the condition of trit-obliviousness.
Furthermore, an assumption of local causality implies that the choice of measurement in Abydos also cannot influence the distribution over ontic states for the 3-box system in Babylon. Denoting the ontic state of the Babylonian system by λ, local causality implies p(λ|P 1 ) = p(λ|P 2 ) = p(λ|P 3 ). But this is precisely the content of the assumption of preparation noncontextuality for the operationally equivalent procedures P 1 , P 2 and P 3 . Therefore local causality justifies this assumption.
This reasoning also shows that any local strategy for winning the prediction game for the separated pair of single-query 3-box systems implies a preparation-noncontextual strategy for winning the prediction game for the diachronic pair with the same winning probability. It follows that another way to derive the local bound of 7/9 for the probability of achieving the OS correlations for the separated pair, Eq. (37), is to appeal to this implication and the fact that the optimal preparation-noncontextual strategy achieves a winning probability of 7/9 for the diachronic pair, as shown in Eq. (87).
VII. JOINT MEASURABILITY OF POVMS
As we showed early on, we cannot find a triple of projective measurements in quantum theory that are jointly measurable pairwise but not triplewise. However, not all measurements in quantum theory are projective. The most general measurement is one that is associated with a positive operator valued measure (POVM). A POVM is a set of operators {E_X : X ∈ S} such that E_X ≥ 0 and Σ_X E_X = 𝟙. The parameter X labels the outcomes of the measurement, which we assume form a discrete set. If the preparation procedure preceding the measurement is represented by the density operator ρ, then the probability of outcome X is given by Tr(ρ E_X).
In this section, we consider the question of whether one could find a triple of non-projective measurements in quantum theory that are pairwise but not triplewise jointly measurable. As it turns out, this is indeed possible.
First, we adapt the definition of joint measurability to the case of POVMs. A pair of measurements associated with POVMs {E¹_{X_1}} and {E²_{X_2}} are jointly measurable iff there exists a third POVM {F_{X_1,X_2}} such that E¹_{X_1} = Σ_{X_2} F_{X_1,X_2} and E²_{X_2} = Σ_{X_1} F_{X_1,X_2}. It is worth noting that the problem of mathematically characterizing jointly measurable observables when these are not projective is a subject of ongoing research [13-16].
We will consider two examples of such triples of POVMs such that any pair can be implemented jointly, but the triple cannot. They both make use of noisy spin observables. The three measurements we consider, labelled by an integer k ∈ {1, 2, 3}, are associated with

E^k_± ≡ (1/2) 𝟙 ± (1/2) η σ · n̂_k,   (92)

where σ = (σ_x, σ_y, σ_z) is the vector of Pauli spin operators, whilst n̂_1, n̂_2 and n̂_3 are the three axes along which the spin is measured. Note that the POVM {E^k_+, E^k_−} can be written as a convex combination of the projective spin measurement along n̂_k, associated with the projectors Π^k_± ≡ (1/2) 𝟙 ± (1/2) σ · n̂_k, and the trivial measurement {𝟙/2, 𝟙/2}. That is,

E^k_± = η Π^k_± + (1 − η) (1/2) 𝟙.   (93)

This is the sense in which we can consider {E^k_+, E^k_−} with η < 1 to be a noisy version of the observable σ · n̂_k.
A. Orthogonal spin axes
Our first example of such a triple of nonprojective measurements uses noisy versions of spin operators along three orthogonal axes:

n̂_1 = ẑ,  n̂_2 = x̂,  n̂_3 = ŷ.   (94)

For this triple, each pair is jointly measurable iff η ≤ 1/√2, while the triple is jointly measurable iff η ≤ 1/√3. In other words, the condition 1/√3 < η ≤ 1/√2 is necessary and sufficient for the triple to be pairwise jointly measurable but not triplewise jointly measurable. This result is proven in Ref. [14], but for completeness, we provide an independent proof in Appendix F. For pedagogical reasons, we also provide a geometric picture in the Bloch sphere of the measurements that saturate these inequalities. To this end, defining the index set I ⊆ {1, 2, 3}, we introduce the (unnormalized Bloch) vectors

m_{{X_k}_{k∈I}} ≡ Σ_{k∈I} X_k n̂_k,

where X_k ∈ {−1, +1}, and write the respective unit vectors as m̂_{{X_k}_{k∈I}}. The POVM that measures a noisy spin observable along the ẑ-axis jointly with the one along the x̂-axis, and that saturates η ≤ 1/√2, is of the form

F_{X_1 X_2} ≡ (1/2) Π_{X_1 X_2},   (96)

where the projectors {Π_{X_1 X_2}} are associated with Bloch vectors {m̂_{X_1 X_2}} forming the vertices of a square in the ẑ-x̂ plane, depicted in Fig. 11. Coarse-graining over X_2 yields the POVM {F¹_±}, which is to say, a measurement of the η-sharp spin observable along the ẑ axis with η = 1/√2, depicted in Fig. 11. Similarly, coarse-graining over X_1 yields a noisy spin observable associated with Bloch vectors s²_± = ±(1/√2) x̂, which is to say along the x̂ axis with η = 1/√2. Joint measurements of every other pair of spin axes are described similarly. The POVM that measures noisy spin observables along axes ẑ, x̂ and ŷ jointly, and that saturates η ≤ 1/√3, is of the form F_{X_1 X_2 X_3} ≡ (1/4) Π_{X_1 X_2 X_3}, where the projectors {Π_{X_1 X_2 X_3}} are associated with the Bloch vectors {m̂_{X_1 X_2 X_3}} forming the vertices of a cube, depicted in Fig. 12. Coarse-graining over X_2 and X_3 yields the POVM F¹_± ≡ (1/2) 𝟙 + (1/2) σ · s¹_±, where s¹_± = ±(1/√3) ẑ, which is to say an η-sharp spin observable along the ẑ axis with η = 1/√3, also depicted in Fig. 12. Similarly, coarse-graining over X_1 and X_3 yields a noisy spin observable associated with Bloch vectors s²_± = ±(1/√3) x̂, while coarse-graining over X_1 and X_2 yields one associated with s³_± = ±(1/√3) ŷ. It is clear from these geometric representations that the reason there is a gap between the noise required for jointly measuring a pair and that required for jointly measuring the triple is that the length of the edge of a cube inscribed in a sphere is less than that of a square inscribed in an equatorial plane of that sphere.
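The square construction of Eq. (96) can be checked directly. The following Python sketch is ours; it confirms positivity, normalization, and the η = 1/√2 marginals, using an (x, z) Bloch-vector convention of our own:

# Sanity check of F_{X1 X2} = (1/2) Pi_{X1 X2}: a valid joint POVM whose
# marginals are the eta = 1/sqrt(2) noisy spin observables along z and x.
import numpy as np

I2 = np.eye(2); sz = np.diag([1.0, -1.0]); sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def proj(m):                                    # rank-1 projector, unit Bloch m
    return 0.5 * (I2 + m[0] * sx + m[1] * sz)   # m = (x, z) components

F = {(X1, X2): 0.5 * proj((X2 / np.sqrt(2), X1 / np.sqrt(2)))
     for X1 in (+1, -1) for X2 in (+1, -1)}     # m-hat = (X1 z + X2 x)/sqrt(2)

assert np.allclose(sum(F.values()), I2)
assert all(np.linalg.eigvalsh(f).min() > -1e-12 for f in F.values())
eta = 1 / np.sqrt(2)
assert np.allclose(F[(+1, +1)] + F[(+1, -1)], 0.5 * (I2 + eta * sz))  # z marginal
assert np.allclose(F[(+1, +1)] + F[(-1, +1)], 0.5 * (I2 + eta * sx))  # x marginal
print("square joint POVM OK")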
Joint measurements of observables along orthogonal spin axes are not very useful for approximating the OS correlations. Indeed, defining R_3 to be the probability of obtaining anti-correlated outcomes when a pair of the nonprojective measurements is implemented jointly, averaged uniformly over the three pairs,

R_3 ≡ (1/3) Σ_{i<j} p(X_i ≠ X_j | M_{ij}; P),

we find the following result.
Proposition 9. For the triple of measurements defined by Eqs. (92) and (94), that is, noisy spin observables along three orthogonal axes, the quantum probability of anti-correlation when a pair is measured jointly, averaged uniformly over the three pairs, is R_3 = 1/2 (independent of the quantum state).
Proof. The intuitive reason is that each pair of spin observables is unbiased. More precisely, if we coarse-grain over the effects in the joint POVM {F_{X_1 X_2}} of Eq. (96) with outcomes corresponding to anti-correlation, we get

F_{+−} + F_{−+} = (1/2)(Π_{+−} + Π_{−+}) = (1/2) 𝟙.

Therefore, for all quantum states, the probability of finding anti-correlated results is 1/2.
There is consequently no bias towards anti-correlation and therefore this triple of measurements is not helpful for approximating the OS correlations.
B. Trine spin axes
Our second example consists of noisy versions of spin observables along three axes equally separated in a plane (i.e. separated by a trine, or an angle of 120°):

n̂_1 = (0, 0, 1),  n̂_2 = (√3/2, 0, −1/2),  n̂_3 = (−√3/2, 0, −1/2).   (100)

These are depicted in Fig. 13. As shown in Appendix F, each pair of this triple is jointly measurable whenever η ≤ √3 − 1, whereas the triple is jointly measurable only when η ≤ 2/3.
In other words, the condition 2/3 < η ≤ √ 3 − 1 is sufficient for the triple to be pairwise jointly measurable but not triplewise jointly measurable.
Again, the proof is provided in Appendix F, but we can understand the result geometrically. The trine directions n̂_1, n̂_2 and n̂_3 of Eq. (100) are indicated in Fig. 13. The POVM that measures a noisy spin observable along the n̂_1-axis jointly with the one along the n̂_3-axis, and that saturates η ≤ √3 − 1, is of the form

F_{X_1 X_2} ≡ w_{X_1 X_2} Π_{X_1 X_2},   (101)

where

w_{++} = w_{−−} = (√3 − 1)/2,  w_{+−} = w_{−+} = (3 − √3)/2,

and where the projectors {Π_{X_1 X_2}} are associated with Bloch vectors {m̂_{X_1 X_2}} forming the vertices of a square, depicted in Fig. 13. Coarse-graining over X_2 yields the POVM F¹_± ≡ (1/2) 𝟙 + (1/2) σ · s¹_± with s¹_± = ±(√3 − 1) n̂_1, depicted in Fig. 13. Similarly, coarse-graining over X_1 yields a noisy spin observable associated with Bloch vectors s³_± = ±(√3 − 1) n̂_3. Joint measurements of every other pair of spin axes are described similarly. The POVM that measures noisy spin observables along axes n̂_1, n̂_2 and n̂_3 jointly, and that saturates η ≤ 2/3, is of the form {F_{X_1 X_2 X_3} ≡ w_{X_1 X_2 X_3} Π_{X_1 X_2 X_3}}, where w_{+++} = w_{−−−} = 0 (implying that one never obtains a triplewise coincidence in the joint measurement) while w_{+−−} = w_{−++} = w_{+−+} = w_{−+−} = w_{−−+} = w_{++−} = 1/3, and where the projectors {Π_{X_1 X_2 X_3}} are associated with Bloch vectors {m̂_{X_1 X_2 X_3}} forming the vertices of a hexagon for the six values of X_1 X_2 X_3 such that w_{X_1 X_2 X_3} ≠ 0, as depicted in Fig. 14. Coarse-graining over X_2 and X_3 yields the POVM F¹_± ≡ (1/2) 𝟙 + (1/2) σ · s¹_±, where s¹_± = ±(2/3) n̂_1, depicted in Fig. 14. Similarly, coarse-graining over X_1 and X_3 yields a noisy spin observable associated with Bloch vectors s²_± = ±(2/3) n̂_2, while coarse-graining over X_1 and X_2 yields one associated with s³_± = ±(2/3) n̂_3. Note that, unlike in the three previous examples, the Bloch directions of the fine-grained (saturating) POVM elements coincide with the Bloch directions of the coarse-grained POVM elements. This is a peculiarity of the geometry, and is a feature also seen in the dual problem of identifying pure-state ensembles that saturate the bounds of so-called EPR-steering inequalities [52]. Given the discussion in Sec. IV B, one might expect the trine spin observables to instantiate a better approximation of the OS correlations. Indeed, we have the following proposition that supports this intuition.
Proposition 11. For the triple of measurements defined by Eqs. (92) and (100), that is, a triple of noisy spin observables along trine axes, the quantum probability of anti-correlation when a pair is measured jointly, averaged uniformly over the three pairs, is R_3 = (3 − √3)/2 ≈ 0.63397 (independent of the quantum state).
Proof. If, in the joint measurement of Eq. (101), we coarse-grain the two effects that correspond to anti-correlation, we obtain

F_{+−} + F_{−+} = ((3 − √3)/2)(Π_{+−} + Π_{−+}) = ((3 − √3)/2) 𝟙,

from which the result follows trivially.
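The weights in Eq. (101) and the state-independent anti-correlation probability can be verified numerically. The following Python sketch is ours; the weight values are those quoted above:

# Check the trine pair construction: the stated weights reproduce the
# eta = sqrt(3)-1 marginals, and anticorrelation occurs with probability
# (3 - sqrt(3))/2 for every state.
import numpy as np

I2 = np.eye(2); sz = np.diag([1.0, -1.0]); sx = np.array([[0.0, 1.0], [1.0, 0.0]])
bloch = lambda v: 0.5 * (I2 + v[0] * sx + v[1] * sz)   # v = (x, z) components

n1 = np.array([0.0, 1.0])                    # n-hat_1 = z
n3 = np.array([-np.sqrt(3) / 2, -0.5])       # n-hat_3, 120 degrees away
w = {(+1, +1): (np.sqrt(3) - 1) / 2, (-1, -1): (np.sqrt(3) - 1) / 2,
     (+1, -1): (3 - np.sqrt(3)) / 2, (-1, +1): (3 - np.sqrt(3)) / 2}
F = {k: w[k] * bloch((k[0] * n1 + k[1] * n3)
                     / np.linalg.norm(k[0] * n1 + k[1] * n3)) for k in w}

eta = np.sqrt(3) - 1
assert np.allclose(sum(F.values()), I2)
assert np.allclose(F[(+1, +1)] + F[(+1, -1)],
                   0.5 * (I2 + eta * (n1[0] * sx + n1[1] * sz)))
assert np.allclose(F[(+1, +1)] + F[(-1, +1)],
                   0.5 * (I2 + eta * (n3[0] * sx + n3[1] * sz)))
print("anticorrelation weight:", np.trace(F[(+1, -1)] + F[(-1, +1)]) / 2)
# -> (3 - sqrt(3))/2 = 0.63397..., independent of the state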
Can we explain this degree of anti-correlation within a generalized-noncontextual ontological model? Given that the measurements involved are nonprojective, we need not represent them as assigning deterministic outcomes for every ontic state. Indeed, as discussed in Sec. II D, for nonprojective measurements, one is not warranted in assuming outcome determinism. It follows that the bound of 2/3 on the probability of anti-correlation, Eq. (12), which we derived under the assumption of measurements being projective, need not apply. Conceivably, the bound implied by generalized noncontextuality could be smaller for nonprojective measurements, and the quantum degree of anti-correlation might therefore still violate it. As it turns out, however, the bound is actually larger for nonprojective measurements, and therefore the quantum degree of anti-correlation is entirely consistent with an ontological model that is measurement-noncontextual and outcome-deterministic for projective measurements. We show this now.
C. Generalized-noncontextual models for joint measurements of POVMs
Each measurement that is modeled by a POVM of the form of Eq. (92) can be considered as a convex combination of a projective measurement and a measurement of the trivial two-outcome POVM {𝟙/2, 𝟙/2}, as seen in Eq. (93). In Ref. [18], it is proven that within any ontological model, the response function that represents a convex combination of measurement procedures is simply the convex combination of the associated response functions. Ref. [18] also contains a proof that within a measurement-noncontextual model, the response function that represents each outcome of the trivial two-outcome POVM {𝟙/2, 𝟙/2} is the uniform function 1/2, i.e., regardless of the value of λ in the ontological model, the two outcomes occur with equal probability. We also recall from Sec. II E that in models of quantum theory, preparation noncontextuality implies outcome determinism for projective measurements. From these facts, we obtain the following result.
Lemma 12. In an ontological model that is generalized-noncontextual, the response function for the η-sharp spin observable of Eq. (92), denoted by M_k, is

p(X|M_k; λ) = η [X_k(λ)] + (1 − η)(1/2),

where [X_k(λ)] denotes the response function p(X|λ) = 1 if X = X_k(λ) and 0 otherwise, X_k(λ) being the deterministic outcome assigned to the projective measurement of σ · n̂_k by the ontic state λ.
This yields a strong constraint on the response function for the joint measurement, denoted M_12, of η-sharp spin observables along distinct axes. The joint response function p(X_1, X_2|M_12; λ) must yield p(X_1|M_1; λ) when averaged over X_2, and p(X_2|M_2; λ) when averaged over X_1. The most general form that can recover these marginals is

p(X_1, X_2|M_12; λ) = α [X_1(λ)][X_2(λ)] + β [X_1(λ)](1/2) + γ (1/2)[X_2(λ)] + δ (1/2) δ_{X_1,X_2} + ε (1/2) δ_{X_1,X_2⊕1},

where α, β, γ, δ, ε ≥ 0 and α + β + γ + δ + ε = 1, and where the marginals are

p(X_1|M_1; λ) = (α + β)[X_1(λ)] + (γ + δ + ε)(1/2),
p(X_2|M_2; λ) = (α + γ)[X_2(λ)] + (β + δ + ε)(1/2),

so that we require α + β = α + γ = η. We infer that β = γ.
In order to give the model the best chance of reproducing the operational statistics, we consider which values of α, β, γ, δ and ε achieve the largest possible amount of anti-correlation. The δ term always yields correlation, while the β and γ terms yield correlation as often as anti-correlation. Only the α and ε terms can have anti-correlation more frequently than correlation. Thus, to maximize the amount of anti-correlation, one sets β = γ = δ = 0. It then follows that α = η and ε = 1 − η.
The same reasoning applies for the joint measurements of M_1 and M_3 and of M_2 and M_3, so that for all i, j ∈ {1, 2, 3} such that i ≠ j,

p(X_i, X_j|M_ij; λ) = η [X_i(λ)][X_j(λ)] + (1 − η)(1/2) δ_{X_i,X_j⊕1}.

As in the projective case, at most two of the three pairs of deterministic assignments X_1(λ), X_2(λ), X_3(λ) can yield anti-correlation, so the probability of anti-correlation for the η term is at most 2/3. Meanwhile, the 1 − η term always yields anti-correlation. Therefore,

R_3 ≤ η (2/3) + (1 − η) = 1 − η/3.

One might have expected that the ability to add noise to the response function in the ontological model would not help explain a high degree of anti-correlation, but such an expectation fails to take into account the fact that the noise can itself be anti-correlated and thereby explain more anti-correlation in the statistics. Thus, rather than only being able to explain a probability of anti-correlation of 2/3 in a generalized-noncontextual model, we can explain a probability of anti-correlation of 1 − η/3, which is always greater than 2/3 because η ≤ 1. For instance, for η = 1/√2, the upper bound on R_3 is 1 − 1/(3√2) ≈ 0.76430, while for η = √3 − 1, it is (4 − √3)/3 ≈ 0.75598. Because the degree of anti-correlation we found in quantum theory was less than 2/3 in both examples, there is no problem providing a generalized-noncontextual model. More precisely, the degree of quantum anti-correlation obtained in the example with orthogonal spin axes can be explained noncontextually because R_3^{quantum} = 1/2 < 0.76430, and the degree obtained in the example with the trine spin axes can be explained noncontextually because R_3^{quantum} = 0.63397 < 0.75598.
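The two ingredients of this bound, namely that deterministic assignments anticorrelate at most two of three pairs and the resulting numerical values of 1 − η/3, amount to a two-line check. A Python sketch of ours:

# Deterministic bit assignments anticorrelate at most 2 of the 3 pairs,
# giving the bound R_3 <= eta*(2/3) + (1 - eta) = 1 - eta/3.
from itertools import product

max_pairs = max(sum(X[i] != X[j] for i, j in [(0, 1), (0, 2), (1, 2)])
                for X in product([0, 1], repeat=3))
print(max_pairs)                       # -> 2
for eta in (1 / 2 ** 0.5, 3 ** 0.5 - 1):
    print(eta, 1 - eta / 3)            # -> 0.76430... and 0.75598...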
Is it the case that for all triples of nonprojective quantum measurements that can be implemented pairwise but not triplewise, the strength of anti-correlations can be explained by a generalized-noncontextual ontological model? The question remains open, but we expect a positive answer.
VIII. CONCLUDING REMARKS
There has been a lot of work in recent years on "foils to quantum theory", operational theories that one studies not primarily as competitors to quantum theory, but as useful tools for getting a handle on the principles underlying it. Only by situating quantum theory in a landscape of possible theories does it make sense to speak of the principles that pick it out, to answer Wheeler's question: "how come the quantum?". Specker's parable provides an interesting new kind of foil, because the kind of complementarity it exhibits -- three measurements that can be implemented jointly pairwise but not triplewise -- is something that is not found among projective measurements in quantum theory. This prompts the question: why does quantum theory not have this sort of complementarity? It might be interesting, for instance, to deduce the information-processing power of a foil theory incorporating such correlations. Furthermore, even if we consider a kind of complementarity that can be accommodated in quantum theory, such as five measurements that can be measured in adjacent pairs, there is an interesting question about why the correlations exhibited by quantum theory are not stronger. Why is quantum theory not more contextual or more nonlocal [17,53-62]?
The same sort of question arises for quantum examples of triples of nonprojective measurements that can be implemented pairwise but not triplewise. Why can these not yield the strength of anti-correlations required to obtain a no-go theorem for generalized noncontextuality? We hope that these questions might provide a new angle on the problem of deriving the structure of quantum theory from within a landscape of operational foil theories.
IX. ACKNOWLEDGEMENT
This project was inspired by Ernst Specker's talk at the workshop "Information Primitives and Laws of Nature" which took place at ETH, Zürich in May 2008. On the topic of the joint measurability of POVMs, we thank Robin Blume-Kohout for a motivating discussion and David Pegg for helpful comments. On the topic of nonlocal OS correlations, we acknowledge useful discussions with Ben Toner and Jean-Daniel Bancal. We also thank Allen Stairs for comments on a draft of this article and Lucien Hardy for bringing Ref. [40] to our attention and for suggesting the connection between Specker's parable and the failure of transitivity of implication in proofs of nonlocality. Finally, we thank Adán Cabello for pointing out the connection between our Kochen-Specker proof based on the failure of transitivity of implication and Clifton's proof. YCL and HMW acknowledge funding from the Australian Research Council. Part of this work was conducted during visits by YCL and HMW to Perimeter Institute, and RWS to Australia, through the PIAF (Perimeter Institute Australia Foundations) collaboration. Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. YCL was also supported by the Swiss NCCR "Quantum Photonics" and the European ERC-AG QORE.
Appendix E: Maximum quantum violation of the n-box-set Bell-Mermin inequality

In general, the problem of determining the maximal quantum violation of a Bell inequality is highly nontrivial (see, for example, Refs. [23,63,64] and references therein). Here, we will show that $R_n^{\mathrm{quantum}}$ defined in Eq. (46) is indeed the maximal winning probability, cf. Eq. (43), allowed in quantum mechanics. To this end, it suffices to show that the winning probability $R_n$ is upper bounded by $R_n^{\mathrm{quantum}}$ in quantum theory. For convenience, we will show this in terms of the operator $\bar{B}^{[n]}_{\mathrm{NLOS}}$ defined in Eq. (E3). Following a procedure very similar to that described in Sec. III of Ref. [66] (see also Ref. [23]), one finds that for arbitrary Hermitian observables $\bar{A}_k$ and $\bar{B}_k$, this operator admits a sum-of-squares decomposition, Eq. (E4), in terms of
$$v_{a\pm} = \frac{1}{\sqrt{2n}}\sum_{k=1}^{n}\omega_n^{ak}\left(\bar{A}_k \pm \bar{B}_k\right), \qquad \lambda_a = 1 - 2\cos\frac{2\pi a}{n}, \tag{E5}$$
where $\omega_n = e^{-i2\pi/n}$.
Thus, Eq. (E4) implies that whenever the constraints $\bar{A}_a^2 = \mathbb{1}$ and $\bar{B}_b^2 = \mathbb{1}$ are satisfied for all $a, b \in \{1, 2, \ldots, n\}$, the right-hand side of Eq. (E4) becomes a sum of squares of polynomials of Hermitian operators, and hence
$$\left[\frac{n}{4\cos^2\frac{\pi}{2n}} - 1\right]\mathbb{1} - \bar{B}^{[n]}_{\mathrm{NLOS}} \ge 0.$$
As a result, the maximal quantum-mechanical expectation value of $\bar{B}^{[n]}_{\mathrm{NLOS}}$ is upper bounded by $\frac{n}{4\cos^2\frac{\pi}{2n}} - 1$, and so is the maximal value of $S_n$ allowed in quantum theory.
Equivalently, it follows from Eq. (E2) that in quantum theory the maximal winning probability $R_n$ is upper bounded by $R_n^{\mathrm{quantum}}$ of Eq. (46), as claimed.

Appendix F: Joint measurability of η-sharp spin observables

Lemma 13. A set of η-sharp spin observables along the axes $\{\hat{n}_k\}_{k=1}^{n}$ is jointly measurable if and only if
$$\eta \le \frac{2^n}{\sum_{X \in \{-1,+1\}^n}\left|\sum_{k=1}^{n} X_k \hat{n}_k\right|}. \tag{F3, F4}$$

Proof. Clearly, any POVM $\{G_X\}$ that jointly measures the set must return the marginals $\sum_{X : X_k\ \mathrm{fixed}} G_X = \tfrac{1}{2}\left(\mathbb{1} + \eta X_k\, \hat{n}_k \cdot \vec{\sigma}\right)$. But given that this equality holds for both values of $X_k$ and for all $k$, we have a constraint on the positive operators $G_X$, which yields the necessary condition on η, Eq. (F3).
To derive the sufficient condition, Eq. (F4), we construct a POVM that jointly measures a set of spin observables with the value of η saturating the inequality. Any set of observables with smaller η can then be jointly measured by simply adding uniformly random noise to this POVM.
The simulating POVM is
$$G_X = \frac{\left|\sum_k X_k \hat{n}_k\right|}{\sum_{X'}\left|\sum_k X'_k \hat{n}_k\right|}\left(\mathbb{1} + \hat{m}_X \cdot \vec{\sigma}\right), \qquad \hat{m}_X \equiv \frac{\sum_k X_k \hat{n}_k}{\left|\sum_k X_k \hat{n}_k\right|}. \tag{F13}$$
Each $G_X$ is manifestly positive, the set sums to the identity because $\sum_X \sum_k X_k \hat{n}_k = 0$, and the marginals are the η-sharp spin observables with η saturating Eq. (F4). This establishes the sufficient condition.
Corollary 14. The necessary and sufficient conditions for joint measurability of a set of spin observables are: for a pair of orthogonal spin axes, $\eta \le 1/\sqrt{2}$; for a triple of orthogonal spin axes, $\eta \le 1/\sqrt{3}$; for a pair of trine spin axes, $\eta \le \sqrt{3} - 1$; and for a triple of trine spin axes, $\eta \le 2/3$. To saturate each of these inequalities, it suffices to implement the POVM defined in Eq. (F13).
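The construction in Eq. (F13) is easy to check numerically. The sketch below is our own verification code (variable names are ours, not from the paper); it builds the joint POVM for a pair of trine spin axes and confirms positivity, completeness, and η-sharp marginals with η = √3 − 1.

```python
import itertools
import numpy as np

# Pauli matrices and identity; axes live in the x-z plane
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(n):
    """n . sigma for a vector n = (nx, nz) in the x-z plane."""
    return n[0] * SX + n[1] * SZ

# Two trine axes, 120 degrees apart
axes = [np.array([0.0, 1.0]),
        np.array([np.sin(2 * np.pi / 3), np.cos(2 * np.pi / 3)])]
n = len(axes)

outcomes = list(itertools.product([+1, -1], repeat=n))
m_vecs = {X: sum(Xk * nk for Xk, nk in zip(X, axes)) for X in outcomes}
norm = sum(np.linalg.norm(m) for m in m_vecs.values())

# Eq. (F13): G_X proportional to |m_X| (1 + m_hat_X . sigma)
G = {X: (np.linalg.norm(m) / norm) * (I2 + spin(m / np.linalg.norm(m)))
     for X, m in m_vecs.items()}

eta = 2 ** n / norm
print("eta =", eta, " sqrt(3)-1 =", np.sqrt(3) - 1)   # both ~0.7320508

assert np.allclose(sum(G.values()), I2)               # completeness
assert all(np.linalg.eigvalsh(g).min() > -1e-12 for g in G.values())  # positivity

# Marginals are the eta-sharp spin observables (1/2)(1 + eta X_k n_k . sigma)
for k in range(n):
    for val in (+1, -1):
        marg = sum(g for X, g in G.items() if X[k] == val)
        assert np.allclose(marg, 0.5 * (I2 + eta * val * spin(axes[k])))
print("Eq. (F13) reproduces the eta-sharp marginals.")
```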
Oritavancin as sequential therapy for Gram-positive bloodstream infections
Background Oritavancin, a long-acting lipoglycopeptide approved for use in acute bacterial skin and skin structure infections, has limited data evaluating use in serious infections due to Gram-positive organisms. We aimed to assess the effectiveness and safety of oritavancin for consolidative treatment of Gram-positive bloodstream infections (BSI), including infective endocarditis (IE). Methods We conducted a retrospective cohort study evaluating adult patients admitted to University of Colorado Hospital from March 2016 to January 2022 who received ≥ 1 oritavancin dose for treatment of Gram-positive BSI. Patients were excluded if the index culture was drawn at an outside facility or if they were > 89 years of age. The primary outcome was a 90-day composite failure (clinical or microbiological failure) in those with 90-day follow-up. Secondary outcomes included individual components of the primary outcome, acute kidney injury (AKI), infusion-related reactions (IRR), and institutional cost avoidance. Results Overall, 72 patients were included. Mean ± SD age was 54 ± 16 years, 61% were male, and 10% had IE. Organisms most commonly causing BSI were Staphylococcus aureus (68%, 17% methicillin-resistant), followed by Streptococcus spp. (26%) and Enterococcus spp. (10%). Patients received standard-of-care antibiotics before oritavancin for a median (IQR) of 11 (5–17) days. Composite failure in the clinically evaluable population (n = 64) at 90 days occurred in 14% and comprised clinical and microbiological failure, which occurred in 14% and 5% of patients, respectively. Three patients (4%) experienced AKI after oritavancin, and two (3%) experienced an IRR. Oritavancin utilization resulted in earlier discharge for 94% of patients, corresponding to an institutional cost avoidance of $3,055,804 (mean $44,938/patient) from 1,102 hospital days saved (mean 16 days/patient). Conclusions The use of oritavancin may be an effective sequential therapy for Gram-positive BSI to facilitate early discharge, resulting in institutional cost avoidance. Supplementary Information The online version contains supplementary material available at 10.1186/s12879-023-08725-8.
Introduction
Bloodstream infections (BSI), including infective endocarditis (IE), due to Gram-positive organisms often require prolonged intravenous (IV) antimicrobial therapy, resulting in considerable hospital length of stay (LOS) and healthcare costs [1,2]. Several treatment modalities for patients requiring prolonged courses exist, including continued inpatient stay for IV antibiotics, outpatient parenteral antimicrobial therapy (OPAT), or discharge with oral antibiotics [3]. However, not all patients are eligible for these approaches owing to psychosocial factors or housing instability [2,4]. Continued hospitalization and OPAT both confer significant risks of complications (e.g., thrombophlebitis, infection, line dysfunction) and require substantial healthcare resources (e.g., lab monitoring, care coordination, and personnel) [5-8].
Oritavancin is a long-acting lipoglycopeptide antimicrobial with in vitro activity against a variety of Gram-positive organisms, including Staphylococcus aureus (methicillin-susceptible [MSSA] and methicillin-resistant [MRSA]), Enterococcus spp. (including vancomycin-resistant enterococci [VRE] due to VanA), and Streptococcus spp. Oritavancin has received FDA approval to treat acute bacterial skin and skin structure infections (ABSSSI) due to MSSA, MRSA, beta-hemolytic Streptococci, the Streptococcus anginosus group, and vancomycin-susceptible Enterococcus faecalis [9]. Given a prolonged half-life of ~245 h, oritavancin maintains a free plasma concentration above the minimum inhibitory concentration for many Gram-positive organisms for several weeks [10]. These characteristics position oritavancin as an attractive option to extend the treatment of Gram-positive BSI beyond discharge. Preliminary studies evaluating oritavancin for the treatment of complicated Gram-positive infections appear promising; however, data for the treatment of BSI remain limited [11-14]. Therefore, this study aimed to assess the effectiveness and safety of oritavancin as sequential therapy for BSI, including IE, due to Gram-positive organisms.
Methods
We conducted a retrospective, observational cohort study evaluating adult patients at the University of Colorado Hospital from March 2016 to January 2022. Included subjects received at least one dose of oritavancin for the treatment of a BSI due to any Gram-positive organism. Patients were excluded if they were > 89 years of age (as required by the local Institutional Review Board [IRB]), or if they received oritavancin or had an index blood culture drawn at an outside hospital. This study received IRB approval before study initiation.
The electronic health record (EHR; Epic, Verona, WI) was queried for oritavancin administrations within the study period. The oritavancin product in use at the institution was the original formulation (Orbactiv). Data extracted from the EHR included patient demographics, comorbidities, infection and treatment information, length of stay, adverse events, and outcomes, including hospital readmission, re-infection, and mortality. The index organism(s) was the Gram-positive organism(s) referred to in the EHR by a treating physician as the etiology of the BSI. Infections in which more than one pathogenic organism was identified (including Gram-negatives and anaerobes) were classified as polymicrobial. Common commensal organisms such as coagulase-negative Staphylococcus spp. were considered contaminants and excluded if cultured in only one of two blood culture sets. S. aureus BSI was defined as complicated by one or more of the following: lack of defervescence by 72 h after initiating antibiotic therapy, metastatic sites of infection, repeat positive blood cultures with the same organism after 48 h of therapy, presence of implanted prostheses/devices, previous S. aureus BSI within 90 days, prior IE, active immunosuppression including neutropenia or a prior organ transplant, or catheter-related BSI without catheter removal within the first 72 h after positive blood cultures [15,16]. IE was categorized as definitive or possible according to the modified Duke criteria [17]. Acute kidney injury (AKI) was defined as meeting at least stage I injury according to the 2012 Kidney Disease: Improving Global Outcomes Clinical Practice Guideline for Acute Kidney Injury [18].
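For illustration only, the complicated S. aureus BSI definition above is a simple any-of rule over chart-review flags; the sketch below is ours and the field names are hypothetical, not taken from the study's data dictionary.

```python
from dataclasses import dataclass

@dataclass
class SabReview:
    """Hypothetical chart-review flags for one S. aureus BSI episode."""
    fever_beyond_72h: bool
    metastatic_infection: bool
    positive_cultures_after_48h: bool
    implanted_prosthesis: bool
    prior_sab_within_90d: bool
    prior_ie: bool
    immunosuppressed: bool
    crbsi_catheter_retained_72h: bool

def is_complicated(r: SabReview) -> bool:
    # Complicated if one or more of the listed criteria are met
    return any([
        r.fever_beyond_72h, r.metastatic_infection,
        r.positive_cultures_after_48h, r.implanted_prosthesis,
        r.prior_sab_within_90d, r.prior_ie,
        r.immunosuppressed, r.crbsi_catheter_retained_72h,
    ])
```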
Outcomes
The primary endpoint was 90-day composite failure, comprising clinical or microbiological failure within 90 days from the index culture in the clinically evaluable (CE) patient population. The CE population includes those with follow-up within the healthcare system at 90 days. Clinical failure was defined as the initiation of a Gram-positive antibiotic after oritavancin administration, infection-related readmission due to the index infection, or all-cause mortality. Microbiological failure was defined as identification of a new BSI with the same species as the index organism. Secondary endpoints were the individual components of the primary endpoint, incidence of AKI, and incidence of infusion-related reactions. Adjudication of effectiveness outcomes was performed by an ID physician (M.K.).
Cost analysis
In patients discharged early (i.e., whose documented antibiotic end date was after their discharge date), the number of hospital days saved associated with oritavancin use was calculated by subtracting the date of discharge from the documented antibiotic end date. The cost avoidance of reduced hospital days per patient was calculated by multiplying the hospital days avoided by the average cost of an inpatient stay in Colorado ($3,047/day) [19]. Total institutional cost avoidance was determined by subtracting the average wholesale price (AWP) of oritavancin for each dose administered from the cost avoidance achieved from reduced hospital days. The AWP used for oritavancin at the time of the study was $2,626 and $3,939 for 800 mg and 1,200 mg doses, respectively [20].
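A minimal sketch of this arithmetic (ours): the per-day cost and AWP figures are the ones quoted above, while the patient in the example is invented.

```python
COST_PER_DAY = 3047            # average Colorado inpatient day, per the text
AWP = {800: 2626, 1200: 3939}  # oritavancin AWP per dose, per the text

def cost_avoidance(discharge_day, abx_end_day, doses_mg):
    """Days saved x daily cost, minus drug acquisition cost."""
    days_saved = abx_end_day - discharge_day
    drug_cost = sum(AWP[d] for d in doses_mg)
    return days_saved, days_saved * COST_PER_DAY - drug_cost

# Hypothetical patient: discharged on hospital day 12 with 14 days of
# therapy remaining, covered by a single 1,200 mg dose.
days, net = cost_avoidance(discharge_day=12, abx_end_day=26, doses_mg=[1200])
print(f"{days} hospital days saved, net cost avoidance ${net:,}")
# -> 14 hospital days saved, net cost avoidance $38,719
```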
In total, 8 patients were lost to follow-up, leaving 64 patients in the CE population. Composite failure at 90 days in the CE population was 14% (n = 9/64, Table 2). Clinical and microbiological failures occurred in 14% (n = 9/64) and 5% (n = 3/64), respectively. Two patients with microbiological failure died within 90 days, and one was started on a Gram-positive agent for presumed treatment failure. All-cause mortality occurred in 13% of patients. Ninety-day infection-related readmission was observed in 11% of patients, only one of which was due to recurrence with the index organism. Three patients experienced AKI, occurring between 4 and 23 days after oritavancin administration. Two of the three patients met KDIGO stage 3 criteria. Only two patients had an infusion-related reaction. Complete case descriptions of patients who met the composite failure definition are presented in Additional file 1: Table S1.
Oritavancin utilization resulted in earlier discharge for 94% of patients in the overall cohort (n = 68/72). Eighty-one doses of oritavancin led to 1,102 hospital days saved (mean 16 days/patient), corresponding to an estimated total institutional cost avoidance of $3,055,804 over the 6-year study period (mean $44,938/patient).
Discussion
To our knowledge, this is the largest retrospective cohort evaluating the use of oritavancin solely in BSI, including complicated BSI. This study of patients undergoing treatment for Gram-positive BSI with oritavancin demonstrated favorable rates of clinical and microbiological cure. Overall, clinical failure in the CE population was low (14%) and in line with prior studies evaluating conventionally used therapies, vancomycin or daptomycin [21,22]. Likewise, microbiological failure and infection-related readmission were uncommon, with only one patient experiencing infection-related readmission due to the index infection. Oritavancin use allowed for earlier discharge in most patients, resulting in significant cost avoidance, while adverse drug events following oritavancin administration were infrequent. Overall, our findings suggest that oritavancin may be a reasonable alternative to standard therapies when used as sequential therapy after blood culture clearance.
Although limited, real-world use of oritavancin for complex infections has generally demonstrated promising effectiveness outcomes with a favorable safety profile [11-14]. Schulz and colleagues previously demonstrated success or improvement in all 17 patients treated with oritavancin for documented or presumed osteomyelitis or intravascular infections caused by Gram-positive organisms [14]. Another series of 10 patients with invasive Gram-positive infections demonstrated 70% treatment success with oritavancin after initial standard-of-care antimicrobials [13]. Nonetheless, limited data exist describing oritavancin utilization for the primary management of BSI. None of the patients in the registrational trials had BSI. Although the real-world CHROME registry evaluated 446 patients with ABSSSI and other Gram-positive infections, only seven patients had BSI, and their outcomes were not directly reported [11,23,24]. Despite limited data with oritavancin in BSI, other long-acting lipoglycopeptides have also shown promising early findings. A retrospective study evaluating dalbavancin in BSI and IE suggested favorable outcomes [25]. An analysis of sequential dalbavancin compared with standard-of-care therapy at our institution suggested similar effectiveness between the two approaches [26]. Further, the use of dalbavancin was associated with reduced central catheter utilization and shorter length of stay. Although a direct comparison to that study is not feasible, the overall low rate of overt clinical failure in both studies adds to the existing literature supporting an expanded role of long-acting lipoglycopeptide antimicrobials in treating invasive infection following clearance of BSI [25,26]. Results from the Dalbavancin as an Option in the Treatment of Staphylococcus Aureus Bacteremia (DOTS) trial are eagerly anticipated to further define the role of long-acting lipoglycopeptides in this setting (ClinicalTrials.gov identifier: NCT04775953).
Similar to the total cost savings reported in this cohort (average $44,938/patient), multiple studies have shown the financial benefit of oritavancin use through reduced hospital length of stay or admission avoidance [5,6,27]. A study by Brownell and colleagues evaluated 75 patients with ABSSSI treated with oritavancin and reported a per-patient average cost avoidance of $4,708. Similarly, a cost-minimization model comparing inpatient vancomycin to outpatient oritavancin for treatment of uncomplicated ABSSSI estimated cost savings between $1,752 and $6,475 per patient, depending on the number of patient comorbidities. In that analysis, budget neutrality was maintained with modeled readmission rates of up to 38%, demonstrating the insensitivity of cost avoidance with respect to readmission [6]. As in this study, we have previously shown that dalbavancin used as sequential therapy results in reduced hospital length of stay, corresponding to an average cost avoidance of $17,204 per patient [28]. The difference in cost avoidance between the current study and those mentioned prior may be attributed to a larger proportion of diseases requiring longer treatment courses. Additionally, the median (IQR) days on antibiotics before oritavancin was shorter in the current study [11 (5-17) vs. 13 (7-24.5)], possibly influenced by earlier implementation of long-acting lipoglycopeptides as sequential therapy at our institution. Despite recent promising data with long-acting agents for treating severe Gram-positive infections, formal pharmacoeconomic comparisons have yet to be performed. Additional studies are needed to determine the optimal long-acting lipoglycopeptide, the timing of therapy, and whether combination antimicrobial therapy for S. aureus and Enterococcus spp. can expedite BSI clearance and provide a lower incidence of complications and earlier readiness for patient discharge. This study should be interpreted with consideration of several limitations. The retrospective, non-comparative, single-center design may limit the generalizability of these data. The treatment of patients with oritavancin reflects our institutional practice and may select for patients with relatively uncomplicated BSI. Although most patients were treated for MSSA infection, all were pre-treated with standard-of-care antimicrobials, and oritavancin was reserved for consolidation therapy after blood culture clearance. Given the population described in this cohort, administration of one to two doses of oritavancin can ensure the completion of therapy in patients who may not follow up with care after discharge.
This study suggests an expanded role of oritavancin as consolidation therapy for Gram-positive BSI in select patients.In addition, oritavancin appears to have a favorable safety profile and can result in significant institutional cost avoidance.
Mouse zygote-specific proteasome assembly chaperone important for maternal-to-zygotic transition
Summary During the maternal-to-zygotic transition (MZT), maternal proteins in oocytes are degraded by the ubiquitin–proteasome system (UPS), and new proteins are synthesized from the zygotic genome. However, the specific mechanisms underlying the UPS at the MZT are not well understood. We identified a molecule named zygote-specific proteasome assembly chaperone (ZPAC) that is specifically expressed in mouse gonads, and expression of ZPAC was transiently increased at the mouse MZT. ZPAC formed a complex with Ump1 and associated with precursor forms of 20S proteasomes. Transcription of ZPAC genes was also under the control of an autoregulatory feedback mechanism for the compensation of reduced proteasome activity similar to Ump1 and 20S proteasome subunit gene expression. Knockdown of ZPAC in early embryos caused a significant reduction of proteasome activity and decrease in Ump1 and mature proteasomes, leading to accumulation of proteins that need to be degraded at the MZT and early developmental arrest. Therefore, a unique proteasome assembly pathway mediated by ZPAC is important for progression of the mouse MZT.
Introduction
After fertilization, erasure of the oogenic program and reprogramming by establishing the embryonic program in totipotent zygotes are coordinately regulated (Pellettieri et al., 2003; DeRenzo and Seydoux, 2004; Stitzel and Seydoux, 2007). This process is called the maternal-to-zygotic transition (MZT) and is accompanied by degradation of maternal mRNAs and proteins and transcription of zygotic genes (Keshet et al., 1988; Evsikov and Marín de Evsikova, 2009). Oocyte-derived mRNAs are degraded shortly after fertilization, and ~90% of RNAs stored in the oocyte are degraded by the 2-cell stage, which is an essential process for embryogenesis (Stitzel and Seydoux, 2007). Degradation of maternal proteins is also suggested to be an essential component of the MZT (Mendez et al., 2002; DeRenzo and Seydoux, 2004; Huo et al., 2004).
Two major pathways for bulk degradation of intracellular proteins exist in eukaryotic cells, one of which is autophagy-mediated lysosomal degradation. Recently, the importance of autophagy for preimplantation development has been highlighted in studies using mice (Tsukamoto et al., 2008). They reported that oocyte-specific Atg5-knockout mice exhibited early embryonic arrest at the 4-cell or 8-cell stage, indicating that autophagy is important for the overt morphological changes in these stages.
Another proteolytic pathway is ubiquitin-proteasome-mediated degradation. Unlike autophagy, protein degradation by the UPS occurs in a selective manner, owing to ubiquitin ligases that specifically recognize substrate proteins and attach polyubiquitin chains to them as a degradation signal for the proteasome (Coux et al., 1996; Baumeister et al., 1998). The UPS is essential for the maintenance of cellular homeostasis in eukaryotic cells (Varshavsky, 2005; Ciechanover, 2006; Tai and Schuman, 2008).
Involvement of the UPS in the degradation of stored maternal proteins after fertilization has already been reported (Solter et al., 2004), and in general, degradation by the UPS is carefully regulated. However, the mechanisms underlying the structure and functions of the UPS at the maternal-to-zygotic transition are not well understood.
In this study, we identified a molecule that we named zygote-specific proteasome assembly chaperone (ZPAC), which is specifically expressed in the mouse gonads and zygote. In the early mouse embryo, expression of ZPAC is transiently augmented at the MZT and plays an important role in the removal of maternal proteins by enhancing the biogenesis of the 20S proteasome.
Identification of ZPAC as an Ump1 interacting protein
To understand the mechanisms governing the transition of oocytes to totipotent zygotes during the MZT of early mouse embryos, we identified genes whose expression was specifically changed at the MZT using an mRNA differential display analysis comparing embryos at the late 1-cell stage with oocytes at the MII stage (supplementary material Fig. S1A). Of the genes identified, one was a gene that we named ZPAC (supplementary material Fig. S1E,F). The official symbol of the ZPAC gene is E330034G19Rik (GenBank AAI39084), which was not functionally characterized but was predicted to be preferentially expressed in mouse oocytes and early embryos according to its EST profile described in the UniGene database.
To elucidate the function of ZPAC, we performed a yeast two-hybrid screen of a mouse ovary cDNA library using ZPAC as bait and identified Ump1 as a protein interacting with ZPAC (supplementary material Fig. S2). Ump1 is known as an assembly chaperone that facilitates the formation of 20S proteasomes and is especially required for the initiation of β-ring formation. Ump1 is also degraded upon generation of the 20S proteasome (Hirano et al., 2006; Hoefer et al., 2006; Fricke et al., 2007; Murata et al., 2009). ZPAC protein interacted with Ump1 protein via its N-terminal region (Fig. 1A).
Next, we examined the expression of ZPAC in various tissues in mice. While Ump1 mRNA was ubiquitously expressed, ZPAC mRNA was specifically expressed in the testis and ovary (Fig. 1B), consistent with the EST profile. We raised an antibody against recombinant ZPAC protein derived from its cDNA sequence and ascertained that ZPAC protein was expressed in these specific tissues (Fig. 1C). Ump1 mRNA and protein were also ubiquitously and constitutively expressed in all examined mouse tissues (Fig. 1B,C), but the expression levels of Ump1 protein in testes and ovaries were higher than those in other tissues except liver and heart (Fig. 1C; supplementary material Fig. S6). This suggests that the specific expression of ZPAC has some correlation with Ump1 protein levels, at least in testes and ovaries (Fig. 1C).
ZPAC mRNA and ZPAC protein were detected in spermatogonia of mouse testes and fully-grown oocytes of mouse ovaries (Fig. 1D). This observation was confirmed in transgenic mice carrying integrated mouse ZPAC promoter-driven enhanced green fluorescent protein (EGFP) cDNA (supplementary material Fig. S3). Relatively intense signals for Ump1 proteins were also detected in spermatogonia and oocytes using immunohistochemical staining, similar to ZPAC protein (Fig. 1E), showing that both ZPAC and Ump1 are highly expressed in the germ cells.
Since ZPAC is specifically expressed in mouse gonads and Ump1 is ubiquitously expressed in mouse tissues, we examined whether ZPAC and Ump1 interact with each other in mouse gonads. Proteins extracted from testes and ovaries were subjected to immunoprecipitation using anti-ZPAC antibody. This showed that ZPAC and Ump1 form a complex in these cells (Fig. 1F). The association of ZPAC with the proteasome assembly chaperone Ump1 led us to examine whether the mouse gonad-specific protein ZPAC collaborates with Ump1 in the ubiquitin-proteasome system.
Unique expression of ZPAC during the MZT
Since ZPAC was expressed specifically in germ cells including oocytes, we examined the gene expression profiles and subcellular localization of ZPAC as well as Ump1 during early mouse embryogenesis. In early mouse embryos, the ZPAC gene showed a unique expression profile, in which the levels of ZPAC mRNA and protein transiently increased at the early 2-cell stage and then drastically decreased by the late 2-cell and 8-cell stages (Fig. 2A), indicating that the identification of ZPAC in the differential display screen does not fully reflect the true behavior of this transcript. The amount of Ump1 mRNA was highest in oocytes and began to decrease as early as the 1-cell stage. However, the abundance of Ump1 protein was maintained from the 1-cell stage to the 4-cell stage, even after its mRNA level was markedly decreased, and it appears that the levels of ZPAC and Ump1 proteins were coordinately regulated (Fig. 2A). We calculated the ratio of ZPAC/Ump1 proteins at the early embryonic developmental stages (supplementary material Fig. S6). This analysis showed a nearly constant ratio of ZPAC/Ump1 protein during early zygote development, suggesting that ZPAC may form a stoichiometric complex with Ump1.
Consistent with the immunoblot analysis, intense signals of both ZPAC and Ump1 proteins were diffusely detected in the cytoplasm and nucleus at the 1-cell and 2-cell stages (supplementary material Fig. S4). However, as development proceeded, the ZPAC and Ump1 signals decreased at the 4-cell stage and were barely detectable at the 8-cell stage (supplementary material Fig. S4). In oocytes and 1-cell to 2-cell embryos, ZPAC and Ump1 proteins were partially co-localized in nuclear dot-like structures (Fig. 2B, left). Also, ZPAC co-localized with α3, a 20S proteasome subunit, in nuclear dot-like structures (Fig. 2B, right), which are quite similar to the nuclear dot-like structures in which ZPAC and Ump1 co-localized, but outside the nucleoli (Fig. 2B, left), suggesting that they work co-operatively in the cells.
Interestingly, polyubiquitinated proteins accumulated from the oocyte stage to the early 2-cell stage and then rapidly decreased in 4-cell embryos (Fig. 2C); polyubiquitinated proteins disappeared most acutely between 24 and 36 hpi, when the proteasomal chymotrypsin-like activity in crude lysates of oocytes or early embryos was significantly upregulated (Fig. 2D). Accumulation of polyubiquitinated proteins was also observed in 2-cell embryos after treatment with the proteasome inhibitor MG132 from 24 to 36 hpi (Fig. 2E). Consistently, the transient augmentation of ZPAC expression coincides with a transient increase in proteasome activity at the 2-cell stage and a decrease in polyubiquitinated proteins after the late 2-cell stage.
Taken together, these results raise the possibility that ZPAC cooperates with Ump1 in the ubiquitin-proteasome mediated protein degradation during the MZT of early mouse embryos, although upregulation of chymotrypsin-like activity does not directly indicate an increase in the level of the 20S proteasome.
ZPAC is important for the development of fertilized zygotes
To uncover the role of ZPAC during early embryo development, ZPAC-knockdown zygotes were generated by pronuclear injection of ZPAC antisense DNA with an internal ribosomal entry site (IRES)-EGFP cassette as a marker for successful expression (supplementary material Fig. S5). A nearly complete loss of ZPAC protein and mRNA was confirmed in the ZPAC antisense DNA-injected embryos, and a substantial fraction of these embryos arrested at the 1-cell stage (Fig. 3A; supplementary material Table S2). Knockdown of Ump1 is known to severely impair biogenesis of the 20S proteasome and cause cell death in mammalian cells (Heink et al., 2005; Hirano et al., 2005). In accordance with this, most of the embryos treated with Ump1 antisense DNA failed to develop beyond the 1-cell stage; only 9% and 2% of the knockdown embryos developed to the 2-cell and 4-cell stages, respectively (Fig. 3B; supplementary material Table S3). However, we observed no cell death until at least 60 hpi in the 1-cell embryos arrested by ZPAC or Ump1 knockdown, in agreement with our previous observation that no apoptosis is seen in 1-cell embryos arrested by MG132 treatment until the same time.
Next, we examined whether the developmental defect observed in ZPAC-knockdown embryos was associated with proteasome activity in cells. The proteasomal chymotrypsin-like activity in crude lysates of ZPAC-knockdown arrested 1-cell embryos was significantly decreased, by 77% or 83% compared with that of 1-cell or 2-cell embryos, respectively, although the downregulation of activity in ZPAC-knockdown embryos was slightly less pronounced than in Ump1-knockdown and MG132-treated embryos (Fig. 3C). Meanwhile, embryos arrested at the 1-cell stage at 24 hpi by treatment with the DNA replication inhibitor Aphidicolin (Poueymirou and Schultz, 1987) showed activity similar to that of 2-cell embryos at 24 hpi. These results indicate that ZPAC, as well as Ump1, is important for regulation of proteasome activity in early mouse embryos.
To confirm that downregulation of proteasome activity led to inefficient protein degradation, ZPAC-knockdown embryos were subjected to immunoblot analysis for ubiquitin along with Ump1-knockdown embryos and embryos treated with MG132 or the DNA replication inhibitor Aphidicolin. As expected, a significant accumulation of polyubiquitinated proteins was observed in MG132-treated and Ump1-knockdown embryos, but not in Aphidicolin-treated embryos (Fig. 3D); ZPAC-knockdown embryos likewise accumulated polyubiquitinated proteins (Fig. 3D). Importantly, we confirmed that the accumulation of polyubiquitinated proteins in ZPAC- and Ump1-knockdown embryos was not a secondary effect of the developmental arrest.
Furthermore, to investigate the function of ZPAC in protein degradation in MII oocytes and fertilized zygotes, stages at which an antisense DNA vector cannot be expressed because transcription does not occur until G2 phase of the mouse 1-cell stage (Matsumoto et al., 1999), ZPAC or Ump1 antisense RNA was injected into the cytoplasm of MII oocytes and fertilized zygotes with second polar body extrusion (Fig. 4). Thus, these results indicate that ZPAC does indeed play an important role in the development of fertilized zygotes and suggest that ZPAC is involved in proteasome-mediated protein degradation in unfertilized oocytes and early embryos.
ZPAC-Ump1 complex specifically associates with assembly intermediates of the 20S proteasome in early mouse embryos
To clarify the mechanism by which ZPAC is involved in protein degradation, we examined the association of the ZPAC-Ump1 complex with the proteasome in early mouse embryos.
The assembly of the mammalian 20S proteasome starts from α-ring formation assisted by the PAC1 (Psmg1)-PAC2 (Psmg2) and PAC3 (Psmg3)-PAC4 (Psmg4) complexes, followed by recruitment of β1-β6, some of which are in immature forms with propeptides, onto the α-ring with the assistance of Ump1, resulting in half-proteasomes, which then dimerize upon incorporation of β7 to form 20S proteasomes, accompanied by cleavage of β-subunit propeptides and degradation of Ump1 (Hirano et al., 2005; Hirano et al., 2008).
Since 2-cell embryos at 24 hpi were found to have high expression of ZPAC and Ump1 proteins (Fig. 2A) and showed significantly higher proteasomal chymotrypsin-like activity than other early embryonic stages (Fig. 2D), we used extracts from 2-cell embryos for immunoprecipitation with anti-ZPAC or anti-Ump1 antibodies followed by immunoblot analysis. Anti-ZPAC antibody precipitated Ump1, α-subunits, and unprocessed precursor β-subunits of the 20S proteasome, but neither mature β-subunits nor Rpt6, a subunit of the 19S regulatory particle, was co-precipitated with ZPAC. ZPAC was also co-precipitated with PAC1 (Psmg1) and PAC3 (Psmg3), which are known to be specifically associated with assembly intermediates of the 20S proteasome (Murata et al., 2009) (Fig. 5A).
Anti-Ump1 immunoprecipitation reproduced essentially the same results, as expected from previous studies (Hirano et al., 2005) (Fig. 5A). As shown by the localization of ZPAC and Ump1 (Fig. 2B) and by the supernatant fraction in the immunoprecipitation analysis (Fig. 5A), free ZPAC and Ump1 proteins that are not associated with each other seem to exist in oocytes and early embryos, but the significance of these free ZPAC proteins remains to be elucidated.
It has been reported that treatment with a proteasome inhibitor elevates the levels of precursor forms of 20S proteasome β-subunits, leading to increased de novo proteasome biogenesis (Meiners et al., 2003). To ascertain the differences in the amounts of precursor and mature β-subunits between mouse embryos and mammalian somatic cell lines, immunoblot analysis was performed using anti-β1, anti-β2, anti-β5, and anti-α3 antibodies on equal amounts of lysates from mouse embryonic fibroblast (MEF) cells, human embryonic kidney (HEK) 293T cells, and 2-cell mouse embryos. As shown in Fig. 5B, compared with MEF and HEK293T cells, extraordinarily abundant precursor forms of the three examined β-subunits were observed in 2-cell embryos.
Overall, these data demonstrate that the ZPAC-Ump1 complex associates specifically with precursor forms of 20S proteasomes and strongly suggest that ZPAC plays a role in the assembly of 20S proteasomes in early mouse embryos.
ZPAC facilitates assembly of 20S proteasomes by stabilizing Ump1 protein levels in early mouse embryos

The amount of proteasomes in mammals is primarily regulated at the transcriptional level under the control of an autoregulatory feedback mechanism that allows for the compensation of reduced proteasome activity (Meiners et al., 2003). Therefore, we first performed mRNA expression analysis of the ZPAC, Ump1, five 20S subunit (α3/PSMA4, α4/PSMA7, β1/PSMB6, β2/PSMB7 and β5/PSMB5), and one 19S subunit (Rpt6/PSMC5) genes in ZPAC-knockdown, Ump1-knockdown, and MG132-treated embryos by quantitative RT-PCR with normalization to G3PDH mRNA levels. Treatment of embryos with MG132 resulted in a 1.3-1.6-fold or 2.6-3-fold induction of the Ump1, α3/PSMA4, α4/PSMA7, β1/PSMB6, β2/PSMB7, β5/PSMB5, and Rpt6/PSMC5 genes compared with 1-cell embryos at 12 hpi or 2-cell embryos at 24 hpi, respectively, whereas there was a 2.5-fold or 1.4-fold increase in ZPAC mRNA in MG132-treated arrested 1-cell embryos compared with 1-cell embryos at 12 hpi or 2-cell embryos at 24 hpi, respectively (Fig. 6A). These results indicate that RNA expression of ZPAC, similar to that of Ump1 and the six examined components of the standard proteasome, is regulated under the control of a positive autoregulatory feedback system of proteasome activity. Furthermore, consistent with the proteasome inhibitory effects shown in Fig. 2E, the transcriptional upregulation of the ZPAC, Ump1, and six examined proteasomal subunit genes for the compensation of reduced proteasome activity was lower in ZPAC- or Ump1-knockdown embryos than in MG132-treated embryos (Fig. 3C and Fig. 6A). Importantly, we confirmed that the mRNA expression levels of Ump1 or ZPAC in ZPAC- or Ump1-knockdown embryos at 24 hpi, respectively, were upregulated to the levels seen in 1-cell embryos at 12 hpi, and that the transcript levels of the six examined proteasomal subunit genes in ZPAC- or Ump1-knockdown embryos at 24 hpi were also upregulated to the levels seen in 1-cell embryos at 12 hpi (Fig. 6A). This is consistent with the previous report that silencing of different proteasomal genes using siRNA in Drosophila S2 cells results in the reduction of mRNA levels of the targeted proteasomal subunits and elevation of mRNA levels of several non-targeted proteasomal subunits (Wójcik and DeMartino, 2002).
Next, to examine whether ZPAC is indeed involved in 20S biogenesis, we subjected cell extracts from ZPAC-knockdown embryos as well as Ump1-knockdown embryos to immunoblot analysis. Interestingly, knockdown of ZPAC significantly reduced Ump1 protein, and likewise knockdown of Ump1 caused an almost complete loss of ZPAC protein (Fig. 6B), in spite of nearly unchanged mRNA expression levels relative to 1-cell embryos (Fig. 6A). This is consistent with the observed reduction of Ump1 or ZPAC in MII oocytes and fertilized oocytes injected with ZPAC or Ump1 antisense RNA, respectively (Fig. 4). These data suggest that the amount of Ump1 protein is greatly increased by association with ZPAC and vice versa in oocytes and early embryos, even though a certain amount of Ump1 protein remains despite a complete absence of ZPAC (Fig. 6B). This may also explain why a milder effect on the viability of embryos is observed with ZPAC knockdown as opposed to Ump1 knockdown (Fig. 3A,B). In addition, Ump1 knockdown caused a significant reduction in the 20S subunits α3, α4, β1, β2 and β5, while the amount of Rpt6 (a 19S subunit) was comparable to control embryos (Fig. 6B), consistent with its well-established function as a 20S assembly chaperone (Hirano et al., 2005; Hirano et al., 2006). Likewise, ZPAC-knockdown embryos also showed reduced amounts of α3, α4, β1, β2 and β5 and a normal amount of Rpt6, quite similar to the observations in Ump1-knockdown embryos, although the decrease was less severe than in Ump1 knockdown (Fig. 6B).
As the reduction of proteasome activity results in de novo formation of proteasomes, the newly formed proteasomes enhance proteasomal degradation for a short period in a compensatory response (Meiners et al., 2003). To investigate the mechanism underlying the observed reduction of 20S subunits in ZPAC- and Ump1-knockdown embryos, we performed immunoblot analyses on extracts from embryos cultured in the presence of MG132. As depicted in Fig. 6C, accumulation of ZPAC, Ump1, α3, β1, and Rpt6 was observed in ZPAC- and Ump1-knockdown embryos in the presence of MG132, as well as in embryos treated with MG132 alone. These results indicate that the reduced amounts of Ump1 or ZPAC, as well as of α3, β1, and Rpt6 proteins, in 1-cell embryos arrested by ZPAC or Ump1 knockdown (Fig. 6B) are not the result of secondary effects of cell arrest that reduce protein translation, but rather of induced de novo formation of mature proteasomes in response to the proteasome inhibition caused by ZPAC or Ump1 knockdown. Interestingly, ZPAC or Ump1 proteins accumulated in ZPAC- or Ump1-knockdown embryos treated with MG132, respectively, possibly resulting from gradual accumulation of ZPAC and Ump1 proteins, whose expression is also enhanced by proteasome inhibition.
Since Ump1 is a protein with a short half-life that is degraded by newly assembled 20S proteasomes in human cell lines (Hirano et al., 2005), we tested whether ZPAC is also a short-lived protein like Ump1 in early mouse embryos. Embryos at 7 hpi were treated with or without cycloheximide (CHX) in the presence or absence of MG132 and then subjected to immunoblot analysis using anti-ZPAC and anti-Ump1 antibodies. In embryos without CHX in the presence of MG132, the amounts of Ump1 and ZPAC proteins were increased compared with embryos with CHX in the presence of MG132 (Fig. 6D; supplementary material Fig. S6), indicating that de novo translation of ZPAC and Ump1 proteins occurs in arrested 1-cell embryos at least until 24 hpi as a result of proteasome inhibition. Ump1 protein disappeared before 48 hpi and was stabilized by MG132 treatment (Fig. 6D), indicating that the fate of Ump1 protein in early embryos is similar to that in human cell lines (Hirano et al., 2005; Hirano et al., 2006). Correspondingly, ZPAC also disappeared by 48 hpi and is likewise a short-lived protein that is likely degraded by the proteasome, as suggested by its stabilization by MG132 (Fig. 6D).
Taken together, our data demonstrate that ZPAC is specifically associated with precursor forms of the 20S proteasome and is important for assembly of the 20S proteasome in early mouse embryos, probably by stabilizing Ump1 protein.
Discussion
The maternal-to-zygotic transition (MZT) is the first major developmental transition that occurs following fertilization (Schultz, 2002;Schier, 2007). The transition includes the degradation of many maternal mRNAs and proteins, and the beginning of zygotic gene expression resulting in a dramatic reprogramming of gene expression that is responsible for the normal development of early embryos. Our data provide evidence that the cell-type specific ubiquitin-proteasome system plays an important role in the degradation of maternal proteins during the mouse MZT. Notably, we identify ZPAC as a novel assembly chaperone for 20S proteasome at the mouse MZT, which is not found in somatic cells but is specifically expressed in germ cells and zygotes, and provide evidence that supports the role of ZPAC in specific mechanisms underlying the ubiquitin-proteasome system at the mouse MZT.
Our present findings lead us to propose a model for cell-type-specific assembly of 20S proteasomes in early mouse embryos (supplementary material Fig. S7). Somatic cell-type assembly of the 20S proteasome is generally assisted by dedicated chaperones, like the PAC1 (Psmg1)-PAC2 (Psmg2) complex, the PAC3 (Psmg3)-PAC4 (Psmg4) complex, and Ump1, where Ump1 plays an important role in the assembly of immature β-subunits into β-rings (Ramos and Dohmen, 2008; Murata et al., 2009). In early mouse embryos, another assembly chaperone, ZPAC, is specifically expressed. ZPAC interacts with Ump1 and increases the stability of Ump1. The ZPAC-Ump1 complex promotes assembly of immature β-subunits, which are already produced and present in excess in early mouse embryos, eventually leading to the formation of half-proteasome precursor complexes. The ZPAC-Ump1 complex is then degraded upon generation of the mature 20S proteasome. In this model, we emphasize that correct assembly of 20S proteasomes in early mouse embryos is achieved by the general proteasome assembly factors in cooperation with an additional cell-type-specific assembly factor.
Why is ZPAC exclusively expressed in oocytes and early embryos but not in other tissue cells, if it is potentially advantageous for 20S proteasome assembly? There appear to be striking differences in strategies for 20S proteasome biogenesis between oocytes/early embryos and other tissue cells. In most tissue cells other than oocytes and early embryos, and also in rapidly proliferating cells such as cancer cell lines, precursor forms of β-subunits are hardly or only faintly detectable (Meiners et al., 2003). Indeed, we observed that early embryos have extraordinarily abundant precursor forms of β-subunits compared with the mature forms, in contrast to the MEF primary cell line and the tumor cell line HEK293T (Fig. 5B). Consistent with this observation, immunoprecipitation with anti-ZPAC or anti-Ump1 antibodies did not deplete precursor β-subunits (Fig. 5A, "supernatant" lane). The propeptides of precursor β-subunits have roles in facilitating their own folding or molecular assembly, acting as intramolecular chaperones (Murata et al., 2009). Availability of Ump1 could be the rate-limiting step in the biogenesis of 20S proteasomes in the presence of excess precursor β-subunits. Thus, increasing the stability of Ump1 protein via ZPAC, possibly by stabilizing Ump1 before or during the assembly process of the 20S proteasome, might be the most effective way to increase the amount of assembled 20S proteasomes in early mouse embryos. The higher complexity of eukaryotic 20S proteasomes requires additional factors to ensure their efficient and correct assembly compared with prokaryotic 20S proteasomes (Ramos and Dohmen, 2008; Murata et al., 2009). Proteasome chaperones are thus suggested to be involved in a "quality control" mechanism during the assembly of the more complex eukaryotic 20S proteasome (Le Tallec et al., 2007; Li et al., 2007; Kusmierczyk and Hochstrasser, 2008). Therefore, our findings in this study add an additional layer of regulation to 20S proteasome biogenesis. With regard to homologs of ZPAC, we were unable to find any in other species by BLAST homology searches using either the full-length ZPAC or the N-terminal Ump1-interacting sequence as queries. Although we do not know whether early embryos in other species also use a similar mechanism to degrade maternal proteins through the ubiquitin-proteasome system, it is possible that a functional homolog of ZPAC exists that does not have significant similarity in its primary amino acid sequence but may structurally resemble ZPAC. This principle is seen in the relationship between mammalian PAC1 (Psmg1) and yeast Pba1, or mammalian PAC3 (Psmg3) and yeast Pba3, both of which play similar roles in 20S assembly and are structurally very close to each other while having very low sequence homology (Murata et al., 2009).
Besides oocytes and early zygotes, we also observed ZPAC in male germ-line cells in this study (Fig. 1D) and confirmed the coimmunoprecipitation of Ump1 with ZPAC in the testis (Fig. 1F). Spermatogenesis is known to be a complex process that originates in a small population of stem cells (Kanatsu-Shinohara et al., 2003). As the UPS is involved in the regulation of fundamental processes in mammalian stem and progenitor cells of embryonic, neural, hematopoietic, and mesenchymal origin (Naujokat and Sarić, 2007), we speculate that specific formation of proteasomes with ZPAC would be necessary for generation of male gametes.
In the course of our analysis, we noticed that the expression of the ZPAC gene is profoundly upregulated under the control of a positive autoregulatory feedback system that senses cellular proteasome activity, compared with general proteasome assembly factors including Ump1 (Fig. 6A). In addition, whereas the mRNA levels of Ump1 and the other examined five 20S proteasome subunits (α3/PSMA4, α4/PSMA7, β1/PSMB6, β2/PSMB7, and β5/PSMB5) and one 19S subunit (Rpt6/PSMC5) decreased from 1-cell to 2-cell embryos, only ZPAC gene expression was transiently elevated in this period (Fig. 6A). Mammalian cells respond to various stimuli by controlling the expression of proteasome genes, which allows the cell to cope with changing demands for protein degradation (Meiners et al., 2003). Indeed, expression of the RIKEN cDNA E330034G19 gene (gene symbol E330034G19Rik; named ZPAC in this study) has been described in a gene array analysis of lungs from mechanically ventilated knockout mice for the Nrf2 gene, which encodes a transcription factor that regulates the induction of several antioxidant enzymes (Papaiahgari et al., 2007), suggesting that ectopic expression of the ZPAC gene in the lung could be induced by oxidative stress. Therefore, these results indicate that the unique expression profile of the ZPAC gene is directed by transcriptional activation as a cellular response to increased demands for proteasomal degradation of proteins.
The mammalian fully-grown or ovulated oocyte is the largest single cell, in which large amounts of maternal mRNAs and proteins are stored. Degradation of enormous amounts of maternal mRNA and protein after fertilization contributes to the dynamic change from the oogenic program to the embryonic program (Solter et al., 2004; Pelczar et al., 2007; Li et al., 2010). A lack of this degradation and regulation would be harmful to embryonic development (Stitzel and Seydoux, 2007). What is more, damaged or misfolded proteins produced by oxidative stress during ovulation, and mistakes in translation of stored maternal mRNA, also need to undergo quality control by the UPS (Agarwal et al., 2005; Evsikov et al., 2006). Against this background, we have demonstrated the dynamic function of the 20S proteasome in the oocyte and early embryo. In this context, the unique cell-type-specific 20S proteasome assembly catalyzed by the ZPAC-Ump1 complex plays a pivotal role in the dynamic function of the UPS in early mouse embryos. In other words, as the demand for an increased capacity of protein degradation by the UPS arises during the MZT, the upregulation of ZPAC gene expression and 20S proteasome biogenesis can be regarded as an adaptive response to this demand. Indeed, most of the 20S proteasome assembly is likely to be associated with the ZPAC-Ump1 complex in early mouse embryos, as suggested by our data concerning proteasomal activity (Fig. 3C). At present, it remains to be seen whether there are any functional or compositional differences between 20S proteasomes assembled by the ZPAC-Ump1 complex and 20S proteasomes assembled by Ump1 alone. In particular, further study is needed to investigate the capacity of each proteasome to recognize and degrade substrates.
In mammalian embryogenesis, zygotic gene activation occurring after fertilization is one of the critical events that govern the MZT for embryonic development (Li et al., 2010). The onset of zygotic gene activation is initially directed by stored maternal RNAs and proteins, and most maternal transcripts are replaced by new products of zygotic transcription. Also, the correct regulation of the onset of zygotic gene activation is an important factor for remodelling of an oocyte into a totipotent zygote. More recently, we have demonstrated that transient proteasome inhibition from 1 to 9 hpi allows fertilized oocytes to delay the onset of zygotic gene activation, indicating that proteasomal degradation of maternal proteins is implicated in the establishment of the embryonic program during the MZT. These findings would explain the effect of maternal protein degradation on maternal RNA decay and zygotic gene activation during the MZT.
In this study, we also observed an increase in 20S proteasomes accompanying the transient increase of ZPAC expression in cooperation with Ump1 at the MZT (Fig. 2). This might appear to conflict with the enhanced degradation of Ump1 that accompanies increased assembly of 20S proteasomes. However, the protein level of Ump1 is also affected by its transcription: in early zygotes, a much higher level of Ump1 transcripts (nearly 100-fold higher) was observed in oocytes and 1-cell embryos than in embryos at later stages (Fig. 2A). Thus, while 20S assembly is increased by the effect of ZPAC, which stabilizes Ump1 protein, the rate of Ump1 supply exceeds the rate of Ump1 degradation by 20S biogenesis. As a result, Ump1 protein accumulates in early mouse embryos.
Understanding the function of the cell-type specific 20S proteasomes assembled by the ZPAC-Ump1 complex in the degradation of maternal RNAs and proteins, and in zygotic gene activation during the mouse MZT, helps elucidate the molecular mechanisms governing the remodelling of the oocyte into the totipotent zygote and may also have implications for the regulation of pluripotency.
Materials and Methods
Fluoro differential display (FDD)

Differential display was performed with the Hieroglyph mRNA profiling system (TMR-fluorescent anchored primer adaptor kit, Genomyx, Beckman Coulter) according to the manufacturer's instructions. In brief, DNase-treated mRNA prepared from 15,000 MII oocytes or 1-cell embryos at 15 hpi was used for reverse transcription with 9 anchored primers (dT12NN(-T) AP). The resulting cDNA mixture was amplified by PCR using one of the 20 TMR-labeled arbitrary anchored primers (M13r-ARP). PCR products were electrophoresed for 5-5.5 hours at 3,000 V on a 5.6% denaturing gel (Genomyx HR-1000, Genomyx). After electrophoresis, the gel was dried on glass plates and scanned to collect the gel images. Bands including target fragments of cDNA were excised from the dried gel and eluted. Re-amplified FDD PCR products were then cloned by TA cloning and subjected to sequencing.
Animals, collection of oocytes, in vitro fertilization and embryo culture
All mice were purchased from Kiwa Experimental Animals (Wakayama, Japan) and maintained in light-controlled and air-conditioned rooms. All animal procedures conformed to the Guidelines of Kinki University for the Care and Use of Laboratory Animals. Collection of oocytes and fertilized embryos was essentially performed as described previously (Ho et al., 1995; Matsuoka et al., 2008).
Yeast two-hybrid screening
Yeast two-hybrid screening was performed according to the previously described protocol. The mouse full-length ZPAC cDNA fragment was amplified by PCR and subcloned into the pGilda vector (Takara Bio). A mouse ovarian cDNA library in the vector pB42AD was screened. The EGY48 yeast strain used for the screening assay contained both LEU2 and lacZ reporter genes under the control of LexA-responsive upstream activation sites. For the assay, bait and library plasmids were used to simultaneously transform yeast using the lithium acetate procedure. Double-transformant cells grown on Ura−, His−, Trp− and Leu− plates were incubated for five days at 30˚C. Positive colonies were picked and assayed for the LacZ phenotype. Putative positives were detected and then further tested by assaying the colonies for β-galactosidase activity. Following confirmation of the specificity of the interaction, ZPAC-binding partners were identified by sequence analysis. Transformation with the pGilda vector alone was used as a negative control. To identify the interaction domain of ZPAC with Ump1, partial ZPAC cDNA fragments (amino acids (aa) 1-88, aa 89-176, aa 174-264, aa 265-351 and aa 1-351) were PCR-amplified and subcloned into the pGilda LexA vector, and the full-length ORF sequence of mouse Ump1 was PCR-amplified and subcloned into the pB42AD vector.
RT-PCR and quantitative RT-PCR analyses
RT-PCR and quantitative RT-PCR analyses were performed as described (Amano et al., 2009). Total RNA was isolated from oocytes and embryos using the RNAqueous micro kit (Ambion), and from adult tissues using the TRIzol reagent (Invitrogen). cDNA was synthesized from 1 µg of total RNA using Superscript III RT (Invitrogen). Prepared cDNA samples were amplified and analyzed by RT-PCR and quantitative RT-PCR using the primer sets listed in supplementary material Table S1. The primers for G3PDH were described previously. Amplifications were run in a 7300 ABI Prism Sequence Detector (Applied Biosystems).
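To make the relative-quantification step concrete, the following is a minimal sketch of the standard 2^(−ΔΔCt) evaluation commonly applied to such quantitative RT-PCR data; the Ct values and gene roles in the example are purely illustrative and not taken from this study.

```python
# Minimal sketch of relative quantification by the 2^(-ddCt) method,
# normalizing a target gene (e.g. ZPAC) to a reference gene (e.g. G3PDH).
# All Ct values below are illustrative placeholders, not measured data.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Fold change of the target in the sample vs. the calibrator."""
    d_ct_sample = ct_target_sample - ct_ref_sample              # dCt (sample)
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator  # dCt (calibrator)
    dd_ct = d_ct_sample - d_ct_calibrator                       # ddCt
    return 2.0 ** (-dd_ct)

# Hypothetical example: 1-cell embryo vs. MII-oocyte calibrator.
fold = relative_expression(24.1, 18.0, 27.3, 18.2)
print(f"Relative expression: {fold:.2f}-fold")
```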
Generation of anti-ZPAC antiserum
The design of the synthetic peptide was based on the relative hydrophilicity and flexibility of regions analyzed by a computer program (GENETYX-Mac Ver. 12.0.3, GENETYX). A synthetic peptide (LKQENRRIWGR, residues 124-134), designed from the deduced amino acid sequence of the ZPAC protein spanning exon 3, was synthesized and purified. The region used for the synthetic peptide has high hydrophilicity and no putative modification site. Anti-ZPAC antiserum was obtained by injection of the peptide-KLH (keyhole limpet hemocyanin) complex, followed by booster injections at one-week intervals, six times in total, into New Zealand White rabbits (Kitayama Labs). ELISA was used to compare the serum titer of the rabbits before and after immunization with the ZPAC peptide. Finally, the anti-ZPAC antiserum was fractionated with 40% ammonium sulfate and used throughout this study.
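As an illustration of this kind of hydrophilicity screening, the sketch below computes a sliding-window Kyte-Doolittle hydropathy profile; this is a generic stand-in for the GENETYX analysis used here, and the window size is an arbitrary choice (strongly negative scores indicate hydrophilic, likely surface-exposed regions).

```python
# Sliding-window Kyte-Doolittle hydropathy scan, analogous to the
# hydrophilicity screening used to pick an antigenic peptide.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def hydropathy(seq, window=9):
    """Mean Kyte-Doolittle score of each window along the sequence."""
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

# The peptide used in this study (residues 124-134 of ZPAC):
print(hydropathy("LKQENRRIWGR", window=5))
```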
Immunohistochemical staining
The procedures for immunohistochemical staining were essentially the same as those reported previously (Mizuno et al., 2006). In brief, the sample slides were incubated with anti-ZPAC (1:5,000) or anti-Ump1 (Biomol, 1:1,000) antibodies in Block Ace (Dainippon Pharm). After incubation, the slides were reacted with a biotinylated donkey anti-rabbit secondary antibody (Funakoshi, 1:10,000) for 1 hour at room temperature and then incubated with streptavidin for 1 hour at room temperature. Signals were visualized using alkaline phosphatase (Promega).
Immunocytochemical staining
Immunocytochemical staining was performed as described. In brief, oocytes and embryos were fixed in 4% PFA (Nacalai Tesque) in phosphate-buffered saline (PBS) for 30 minutes at room temperature, and the permeabilized samples were then incubated in PBS containing 0.1-0.2% Triton X-100 (Nacalai Tesque) overnight at 4˚C. The samples were then incubated with anti-ZPAC (1:10,000) and/or anti-Ump1 (Santa Cruz Biotechnology, 1:500) antibodies in PBS containing 30 mg/ml BSA overnight at 4˚C. After incubation, the samples were reacted with Alexa Fluor 594-labeled goat anti-rabbit IgG and/or Alexa Fluor 488-labeled rabbit anti-goat IgG secondary antibodies (Invitrogen, 1:1,000) for 1 hour at room temperature. To prevent cross-reaction between secondary antibodies, oocytes and embryos were treated with the secondary antibodies separately. Samples were then mounted on glass slides in Vectashield mounting medium (Vector Laboratories) containing 2-5 µg/ml DAPI (Invitrogen). The fluorescence images of oocytes and embryos were obtained using a fluorescence microscope (BIOREVO BZ-9000; Keyence).
In situ hybridization and immunohistochemistry
In situ hybridization was performed as described previously with some modifications (Matsumoto et al., 1999). ZPAC sense and antisense RNA probes were synthesized from ZPAC cDNA (spanning bases 506-1032 of the mouse ZPAC cDNA sequence) cloned into the pGEM-T-Easy vector (Promega) with digoxigenin-labeled UTP according to the manufacturer's protocol (Boehringer Mannheim). After hybridization, the hybrids were visualized with Western Blue stabilized substrate for alkaline phosphatase (Promega).
Production of transgenic (Tg) mice expressing EGFP under the control of ZPAC promoter
Transgenic mice carrying the EGFP gene regulated by the ZPAC promoter (−4482/−1; 4482 bp upstream from the start codon) were produced by standard procedures. In brief, the purified DNA fragment (ZPAC promoter/EGFP/SV40 terminator) was microinjected into the male pronuclei of zygotes collected from C57BL/6 mice (C57BL/6J, Charles River Laboratory). At 24 hours after DNA injection, morphologically normal zygotes that had developed to the 2-cell stage were transferred into the oviducts of pseudopregnant female mice (MCH:ICR, CLEA Japan Inc.) on Day 1, the day on which a vaginal plug was recognized. Four sublines of the heterozygous transgenic mice were crossed with C57BL/6J mice for two generations before use in this study. For analysis of EGFP expression in the transgenic tissues, testes and ovaries were fixed in 4% paraformaldehyde overnight, embedded and sectioned.
Treatment of inhibitor
MG132 (carbobenzoxy-L-leucyl-L-leucyl-L-leucinal) was purchased from Sigma-Aldrich. To inhibit proteasome activity in early embryos, embryos were cultured in KSOM medium containing 5 µM MG132. As a control, the same protocol was used without MG132. For inhibition of protein synthesis, zygotes were treated with 1 µg/ml cycloheximide (CHX) (Sigma-Aldrich). Aphidicolin (A0781; Sigma Chemical) was used at a concentration of 1.0 µg/ml to inhibit DNA replication at the 1-cell stage. 1-cell embryos were incubated with each of these chemicals from 7 to 24 hours after in vitro fertilization.
Microinjection of antisense expression vectors
The procedure for microinjection was essentially as described previously (Tsunemoto et al., 2008). To investigate the effects of knockdown of ZPAC or Ump1 on the development of early embryos, a ZPAC antisense expression vector (pβ-actin promoter/antisense ZPAC/IRES/EGFP/SV40) with bicistronic expression of both ZPAC antisense RNA and EGFP, or an Ump1 antisense expression vector (pCAG promoter/antisense Ump1/IRES/luc+/SV40) with bicistronic expression of both Ump1 antisense RNA and the humanized, firefly codon-optimized luciferase (luc+) gene, was injected into the pronucleus of zygotes at 7 to 9 hpi. Injected zygotes showing EGFP or luciferase activity at 15 hours after microinjection were selected and then cultured to examine the effect of antisense DNA expression on subsequent embryonic development to the blastocyst stage. In these experiments, pβ-actin promoter/luc+/IRES/EGFP/SV40 or pCAG promoter/IRES/luc+/SV40 was used as a control expression vector.
In vitro RNA synthesis and microinjection of antisense RNA
ZPAC and Ump1 RNA amplification was performed using the Ampliscribe T7 Transcription Kit (Epicentre Technologies) from the pGEM-T-EASY/antisense ZPAC and pGEM-T-EASY/antisense Ump1 vectors. For efficient translation of the proteins in embryos or oocytes, the 5′ end of each RNA was capped using the RNA Cap Analog kit (Epicentre Technologies), according to the manufacturer's protocol. To investigate ZPAC or Ump1 function in unfertilized or just-fertilized eggs until 6 hpi, ZPAC or Ump1 RNA was injected into the cytoplasm of mouse MII oocytes or of fertilized oocytes at 1 hpi in which extrusion of the second polar body had been confirmed. Dilution buffer was used as a negative control.
Co-immunoprecipitation
Co-immunoprecipitation was performed according to previous reports (Hirano et al., 2006). For co-immunoprecipitation, we used the ZPAC and Ump1 (Biomol) antibodies.
Densitometric quantification analysis
Densitometric quantification analysis of the immunoblot bands was performed using a Molecular Imager FX with Quantity One software (Bio-Rad).
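As a sketch of what such a densitometric measurement involves, the snippet below integrates the pixel intensities of a rectangular band region and subtracts a local background estimate; the array and region coordinates are placeholders, and the actual Quantity One procedure may differ in its background model.

```python
# Minimal sketch of densitometric band quantification: integrate pixel
# intensities inside a band box and subtract a local background estimate.
import numpy as np

def band_density(image, y0, y1, x0, x1):
    """Background-subtracted integrated intensity of a rectangular band ROI."""
    roi = image[y0:y1, x0:x1].astype(float)
    # Estimate background from the ROI border (median of edge pixels).
    border = np.concatenate([roi[0, :], roi[-1, :], roi[:, 0], roi[:, -1]])
    return float(roi.sum() - np.median(border) * roi.size)

# Placeholder blot image and band coordinates for illustration only.
blot = np.random.default_rng(2).normal(100, 5, (200, 300))
print(band_density(blot, 80, 120, 50, 150))
```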
First observed interaction of the circumstellar envelope of an S-star with the environment of Sgr A*
Several publications highlight the importance of observations of bow shocks to learn more about the surrounding interstellar medium and radiation field. We revisit the most prominent dusty and gaseous bow shock source, X7, close to the supermassive black hole, Sgr A*, using a multiwavelength analysis. For the purpose of this study, we use SINFONI (H+K-band) and NACO (L′- and M′-band) data sets between 2002 and 2018 with additional COMIC/ADONIS+RASOIR (L′-band) data of 1999. By analyzing the line maps of SINFONI, we identify a velocity of ∼200 km/s from the tip to the tail. Furthermore, a combination of the multiwavelength data of NACO and SINFONI in the H-, K-, L′-, and M′-band results in a two-component black-body fit that implies that X7 is a dust-enshrouded stellar object. The observed ongoing elongation and orientation of X7 in the Brγ line maps and the NACO L′-band continuum indicate a wind arising at the position of Sgr A* or at the IRS16 complex. Observations after 2010 show that the dust and the gas shell seem to be decoupled in projection from their stellar source S50. The data also imply that the tail of X7 is thermally heated by the presence of S50. The gas emission at the tip is excited because of the related forward scattering (Mie scattering), which will continue to influence the shape of X7 in the near future. In addition, we find excited [FeIII] lines, which underline, together with the recently analyzed dusty sources and the Brγ-bar, the uniqueness of this source.
INTRODUCTION
In the center of our Galaxy, the prominent variable radio source Sgr A* is located (Balick & Brown 1974). This source emits across a broad range of wavelengths, from the radio up to the X-ray domain, with a peak at submillimeter wavelengths (see e.g. Genzel et al. 2010, and references therein). Although Sgr A* is a low-luminosity source, its monitoring has been of high interest because of order-of-magnitude flares in the near-infrared and X-ray domains (Witzel et al. 2012; Do et al. 2019). Because of its nonthermal radiative properties, compact nature, variability, and position at the Galactic center, it has been associated with a supermassive black hole (SMBH) since its discovery (Lynden-Bell & Rees 1971), with most of the alternatives being ruled out based on the current observational data.
Sgr A* is also the only SMBH to date around which we can detect and monitor orbiting stars. Some of them are located inside the S-cluster and are hence called S-stars. These stars show pericentre distances of several 100 AU (Gillessen et al. 2009; Parsa et al. 2017; Ali et al. 2020). Recently discovered stars push this distance an order of magnitude closer to the SMBH (Peißker et al. 2020a,d). These S-stars are widely covered by many publications. For example, Eckart & Genzel (1996) derived a direct mass estimate of Sgr A* from the stellar proper motions. In addition, Ghez et al. (2002) and Eckart et al. (2002) found stellar accelerations based on the orbital curvature. Genzel et al. (2000) derived the velocity dispersion as a function of the distance of S-stars and found values of up to several hundred km/s. One of the controversial but also interesting sources in the field of view (FOV) is the Galactic center (GC) gas cloud G2 (Gillessen et al. 2012; Eckart et al. 2013; Valencia-S. et al. 2015; Shahzamanian et al. 2017; Zajaček et al. 2017; Peißker et al. 2020b), also known as the Dusty S-cluster Object (DSO). This object was found on its way approaching Sgr A* in the Doppler-shifted Brγ maps of SINFONI, a near-infrared (NIR) instrument mounted at the Very Large Telescope (VLT, Chile/Paranal). In combination with the observed dust emission in the L′-band (3.8 µm) with NACO (also operating in the NIR, mounted at the VLT), the authors of Gillessen et al. (2012, 2013) and Pfuhl et al. (2015) claimed that the object would get disrupted during or after its periapse passage. Later on, Plewa et al. (2017) stated that the density of the ambient medium of Sgr A* is too low to cause a disruptive event. Even more, they excluded the possibility of a drag force acting on the DSO. In contrast, Gillessen et al. (2019) reported a drag force that influenced the observed Doppler-shifted Brγ line shape. This underlines the ongoing confusion about the nature of the source. However, in Peißker et al. (2020b) we present a spectral energy distribution (SED) derived from the H-, K-, and L′-band data of NACO and SINFONI. This SED consists of a dusty and a stellar component and shows that the DSO is more likely a young stellar object than a coreless ∼3 M⊕ cloud that moves on a Keplerian orbit around a 4.1 × 10^6 M⊙ supermassive black hole. Clénet et al. (2003) and Clénet et al. (2005) reported for the first time two comet-shaped sources, namely X3 and X7. These dusty objects can be found in the mid-infrared (MIR) but also show a NIR counterpart. In close projected distance to X7, another line-emitting source is located that we call X7.1 (G5 in Ciurlo et al. 2020).
The identification of these objects is still challenging, as is manifested in Fig. 1 of Peißker et al. (2020b). The small and potentially time-variable projected separation of X7 and X7.1/G5 can lead to confusion in the identification without spectroscopic analysis. It is, for instance, not clear why the dusty object X7.1/G5, with an approximate L-band magnitude of 14.11 mag (∼0.57 mJy), can be observed neither in the NACO (L′-band) data presented in this work nor in the 3.8 µm continuum data shown in Ciurlo et al. (2020) (see extended data Fig. 2 in the related publication). A dust-enshrouded source with a stellar counterpart should be detectable in the L′-band, as presented in Peißker et al. (2020b). A reliable approach is spectroscopic analysis in combination with multiwavelength observations. This underlines the need for broad observation programs. Following the example of the DSO and X7.1, we emphasize a multiwavelength analysis of these (potentially) dust-enshrouded stars. With the observational coverage of different bands in combination with spectroscopy, the confusion about the nature of these objects can be minimized (see Zajaček et al. 2017).
However, Mužić et al. (2010) analysed the X7 source in detail and showed the connection to a possible nuclear wind that arises at the position of Sgr A*. This wind is also mentioned in several observational and theoretical publications (Mužić et al. 2007; Zajaček et al. 2016; Yusef-Zadeh et al. 2017b; Peißker et al. 2020b,c; Yusef-Zadeh et al. 2020). In this regard, Peißker et al. (2019) reported a new bow shock source in the central arcsecond that the authors call X8 (G6 in Ciurlo et al. 2020) because of its close projected distance to X7. These two objects are the closest bow shock sources that could be used to determine the properties of a possible wind that arises at the position of Sgr A*.
In this work, we update the analysis of X7 by Mužić et al. (2007, 2010) with the help of SINFONI integral field spectroscopy and NACO continuum data that cover almost 16 years. Additionally, we use L′-band continuum COMIC/ADONIS+RASOIR data of 1999 to extend the analysis of X7 to about 20 years. This work is part of a broader investigation that is split into two publications. Here, we emphasize the observational results and give an outlook on the second part, where we investigate the observed source X7 theoretically. In the second part of this survey, we will apply two models to describe an open and a closed bow shock based on the work of Wilkin (1996, 2000) and Christie et al. (2016). The spectroscopic capabilities of SINFONI give us access to the velocity along the bow shock source, which could help to describe the interaction of the nuclear and stellar winds as well as prominent Doppler-shifted emission lines. Furthermore, we will investigate the close projected distance of S50 to X7. This S-cluster star can be associated with the stellar counterpart of X7 and seems to interact with the dust tail of the bow shock source. In the multiwavelength analysis, we also model a two-component SED of X7. We also witness the ongoing decoupling, in projection, of the dusty and gaseous shell of S50 that is associated with X7.
In the following Sec. 2, we introduce the instruments used and the analysis techniques. The results of the analysis are presented in Sec. 3. Section 4 summarizes the results and provides an outlook for future observations. In the Appendix (Sec. 4.6), we give some supplementary information regarding the analysis and a possible scenario. In addition, we list the SINFONI and NACO data that were used for the analysis.
DATA & ANALYSIS
In this section, we give a brief overview of the instruments used, the data reduction, and the analysis tools applied.
SINFONI & NACO
The Spectrograph for Integral Field Observations in the Near Infrared (SINFONI) was mounted on the VLT and is currently undergoing an upgrade (for further information, see Kuntschner et al. 2014; Marchetti et al. 2014; Pearson et al. 2016). SINFONI operates in the NIR and provides observations in the J- (1.10−1.40 µm), H- (1.45−1.85 µm), K- (1.95−2.45 µm), and H+K-band (1.45−2.45 µm). The output files of the ESO pipeline have the shape of a 3-dimensional data cube. This data cube consists of 2 spatial dimensions and 1 spectral dimension. The components of the data cubes are described as spaxels (pixels containing a spectrum, see Hörtner et al. 2012) rather than pixels. With SINFONI, we are able to isolate single emission lines in the H+K-band to create channel (line) maps. In comparison, the NACO instrument works in the J-, L′-, and M′-band (Lenzen et al. 2003; Rousset et al. 2003). Since dust can be traced at longer wavelengths, the L′-band setup of NACO is favored for the search for the dusty bow shock source. In both cases, we apply the usual data reduction steps, e.g., dark- and flat-field corrections. We also apply the mandatory sky correction to the adaptive optics (AO) corrected data. Additional correction steps are described in detail in Peißker et al. (2019, 2020a), where the data analyzed here is also used. Please consider also Appendix E for a detailed overview of the used data. We also note that a part of this data was used in Parsa et al. (2017). The authors describe the Schwarzschild precession of S2, which was independently confirmed by Gravity Collaboration et al. (2020). The collaboration used GRAVITY, an interferometric instrument with a resolution almost an order of magnitude better than NACO. This underlines the robustness and validity of the NACO data.
COMIC/ADONIS+RASOIR
The NIR camera COMIC was installed at the 3.6 m telescope in La Silla/Chile and used the AO system ADONIS+RASOIR (Beuzit et al. 1994; Lacombe et al. 1998). It operated in the J-, H-, K-, K′-, L′-, and M-band with two different plate scales (35 mas/pixel and 100 mas/pixel). It was optimized for L- and M-band observations and was decommissioned in 2001 (Pasquini & Weilenmann 1996). For the data presented here, the exposure time was set to 10 seconds. The observational pattern was chosen to be s-o-s (sky-object-sky) followed by flat and dark exposures. For combining the data, we use the shift-and-add algorithm to maximize the signal-to-noise (S/N) ratio. This is followed by rebinning the data to smooth sharp edges caused by the resolution. The COMIC/ADONIS+RASOIR data analyzed in this work was first published in Clénet et al. (2001).
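The shift-and-add step can be sketched as follows; the frames, offsets, and interpolation settings are placeholders, and in practice the offsets would be measured from the centroid of a bright reference star.

```python
# Sketch of shift-and-add co-addition: each sky-subtracted exposure is
# shifted onto a common reference and the stack is averaged, then the
# result is interpolated onto a finer grid ("rebinned") to smooth edges.
import numpy as np
from scipy.ndimage import shift, zoom

def shift_and_add(frames, offsets):
    """Average a list of 2D frames after shifting each by (dy, dx) pixels."""
    stack = [shift(f, offs, order=3, mode='constant', cval=0.0)
             for f, offs in zip(frames, offsets)]
    return np.mean(stack, axis=0)

def rebin(image, factor=2):
    """Interpolate the image onto a finer grid to smooth pixelated edges."""
    return zoom(image, factor, order=3)

# Placeholder frames and offsets for illustration only.
rng = np.random.default_rng(0)
frames = [rng.normal(0, 1, (128, 128)) for _ in range(10)]
offsets = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in frames]
combined = rebin(shift_and_add(frames, offsets))
```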
High-pass filter
Depending on the scientific goal, a suitable frequency pass filter can increase the amount of accessible image information. In the case of an elongated source like X7, the Lucy-Richardson algorithm (Lucy 1974) is not the most satisfying option for the L′-band NACO data. However, a high-pass filter like the smooth-subtract algorithm can reveal the stellar component of the bow shock source in the K-band SINFONI data if the object is confused with nearby S-stars. For that, we subtract a smoothed version of the input image. The size of the Gaussian used for the smoothing should be of the order of the related image point spread function (PSF). The resulting smooth-subtracted image should then be re-smoothed with a Gaussian PSF that can be 10−20% smaller than the image PSF. With this technique, the influence of overlapping PSF wings can be minimized.
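A minimal sketch of this smooth-subtract filter, assuming a Gaussian PSF model and a re-smoothing kernel about 15% smaller than the image PSF (both placeholder choices):

```python
# Smooth-subtract high-pass filter: subtract a PSF-scale smoothed version
# of the image, then re-smooth with a slightly smaller Gaussian kernel.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_subtract(image, psf_sigma):
    """High-pass filter an image as described in the text."""
    high_pass = image - gaussian_filter(image, sigma=psf_sigma)
    return gaussian_filter(high_pass, sigma=0.85 * psf_sigma)  # ~15% smaller

# Example on synthetic data; for real NACO frames, psf_sigma would be set
# from the measured PSF FWHM (sigma = FWHM / 2.355).
image = np.random.default_rng(1).normal(0, 1, (256, 256))
filtered = smooth_subtract(image, psf_sigma=4.0)
```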
RESULTS
This section shows the results of the survey of the X7/S50-system between 2002 and 2018. We present the line map and continuum detection of the bow shock source X7 and show the ongoing implied decoupling of the shell from its central star, S50. Furthermore, we compare the observations with the published COMIC/ADONIS+RASOIR data of 1999 and apply a photometric analysis to the NACO images.
Line map and velocity gradient detection of X7
Throughout the SINFONI data between 2005 and 2018, the source X7 can be at least partially observed.
A key parameter is the FOV. Hence, the SINFONI data of 2006, 2008, 2014, 2015, and 2018 can be used for a detailed analysis. By analyzing continuum-subtracted line maps, we find the length from the tip to the tail of the detected Doppler-shifted Brγ emission to be around 0.23" in 2006. Furthermore, we measure a bow shock length of about 0.35" in 2018.
Because of the high S/N ratio of the SINFONI data in 2018 (see Appendix E) at the spatial position of the Doppler-shifted Brγ emission of X7, we use this data set to investigate the velocity gradient of the bow shock source. For this purpose, we fit a Gaussian to the blue-shifted Brγ line in the related spectral range (2.16 ± 0.04 µm). Afterwards, the related spaxel carrying the velocity information is copied to the same position in a new array that is as big as the input file. We manually mask the close-by source X7.1/G5 (see Ciurlo et al. 2020; Peißker et al. 2020b) and nonlinear pixels. In Fig. 1, the resulting velocity gradient is shown. We find a difference from the tip to the tail along the projected bow shock structure of ≈190 ± 20 km/s. Considering a spatial pixel scale of 12.5 mas in the H+K-band in the highest plate-scale setting of SINFONI and the measured projected bow shock length of 349 mas, we get a linear gradient of ≈7.1 ± 1.0 km/s/pixel in 2018. Furthermore, we find several prominent emission lines that indicate the presence of ionized gas (see Table 1). In several data sets, an H2 emission triplet can be found (Appendix B, Fig. 10). Due to crowding and the resulting possibility of confusion, we limit the analysis of the H2 emission triplet to the data of 2018.
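A sketch of this per-spaxel fitting procedure is given below; the cube layout, wavelength window, and masking threshold are placeholder assumptions rather than the exact pipeline used here.

```python
# Per-spaxel velocity-map construction: fit a Gaussian to the blue-shifted
# Br-gamma line in each spaxel and convert the fitted centroid into a
# line-of-sight velocity.
import numpy as np
from scipy.optimize import curve_fit

BRG_REST = 2.1661   # Br-gamma rest wavelength in micron
C_KMS = 299792.458  # speed of light in km/s

def gauss(lam, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2) + offset

def velocity_map(cube, lam, lam_lo=2.12, lam_hi=2.20, snr_min=5.0):
    """cube: (n_lambda, ny, nx) continuum-subtracted spectra; lam in micron."""
    sel = (lam >= lam_lo) & (lam <= lam_hi)
    vmap = np.full(cube.shape[1:], np.nan)
    for y in range(cube.shape[1]):
        for x in range(cube.shape[2]):
            spec = cube[sel, y, x]
            noise = np.std(spec)
            if noise == 0 or spec.max() / noise < snr_min:
                continue  # mask faint or unreliable spaxels
            try:
                p0 = (spec.max(), lam[sel][np.argmax(spec)], 0.001, 0.0)
                popt, _ = curve_fit(gauss, lam[sel], spec, p0=p0)
                vmap[y, x] = C_KMS * (popt[1] - BRG_REST) / BRG_REST
            except RuntimeError:
                pass  # fit did not converge; leave spaxel masked
    return vmap
```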
Continuum detection of X7
L′-band observations of the bow shock source X7 at its small projected distance from the S-cluster show that the object is always one of the most prominent sources in the close vicinity of the SMBH (see Fig. 2). The L′-band brightness and elongated shape of X7 underline the unique character of the object. After 2002, X7 becomes increasingly brighter than most of its nearby stars, e.g., S1, S2, S61, and S71. The bow shock shape is clearly noticeable in the NACO L′-band (green circles indicate its position in Fig. 2). After 2010, the source shows a more elongated shape with an approximate projected length of 333 mas in 2016. This is almost three times the extent of the L′-band dust emission detected with NACO in 2002 (112 mas). Compared with the line emission area of the SINFONI data in 2006, 2008, 2009, 2013, 2014, 2015, and 2018, we find matching values for the gaseous emission (for a detailed list, see Table 2). Hence, the size of the projected area of the ionized gas coincides with the L′ continuum dust emission of X7.
Since we observe a clearly increasing projected size of X7 between 2002 and 2018, we investigate the L′-band data of 1999 to see if this trend is also present in data before 2002. For this purpose, we use the COMIC/ADONIS+RASOIR L′-band data of 1999 investigated by Clénet et al. (2001). We apply a high-pass filter to reduce the influence of overlapping PSFs. Afterwards, we apply a Gaussian that is about 50% of the size of the initial smoothing kernel (Appendix D, Fig. 12) to the resulting high-pass filtered image. In addition to some prominent members of the S-cluster, we identify at the expected position of S50 a spherical L′-band emission several magnitudes above the noise level. By comparing the closest NACO L′-band data to verify the COMIC/ADONIS+RASOIR identifications of 1999, we find matching positions for almost all stars/features.
S50
Mužić et al. (2010) reported that the stellar counterpart of X7 could be associated with the S-cluster star S50. In Fig. 12 (right side), we present the orbit plots of S50 based on the analysis presented in Ali et al. (2020). Throughout the available data covering the related spatial area, we find without confusion that the bow shock source X7 moves along with S50 (see Fig. 3 and Appendix, Fig. 9) until 2009. S2 (K-band) and S65 (L′-band), as the two brightest and therefore most prominent members of the S-cluster, can always be observed in the same FOV as S50. Hence, we use these two S-stars for a photometric analysis to investigate the magnitude of S50 and X7 in various bands (see Table 3). In combination with the published SHARP data (Schödel et al. 2002), we find a constant K-band magnitude of S50 of m_K ≈ 16 mag. We find a similar magnitude with NACO (VLT) data of 2007 and SINFONI (VLT) data of 2019. Based on the data covering almost 20 years, we conclude that the S-cluster member S50 does not show variable K-band emission. However, this is not the case for the L′-band continuum emission, which seems to vary between 2008 and 2018. We will elaborate on this point in detail in Sec. 3.5.

(Figure 1 caption fragment: For the velocity map, a Gaussian is fitted to the spectrum of every spaxel in order to create a confusion-free velocity map. In the lower panel, the spectrum of X7 is shown with prominent lines marked; the spectrum is integrated over all pixels shown in the top right panel ('Velocity map'). The telluric emission between 1.80−1.93 µm is clipped. The most prominent blue-shifted emission lines are Brδ at 1.9414 µm, HeI at 2.0545 µm, Brγ at 2.1619 µm, [FeIII] 3G5−3H6 at 2.2144 µm, and [FeIII] 3G5−3H5 at 2.2344 µm. Next to the blue-shifted HeI and Brγ lines, we observe a red-shifted emission that is related to X7.1/G5. This source is in projection spatially close to X7 (Peißker et al. 2020b).)
Decoupling of X7 from S50?
Based on the L′-band observations, we find a noticeable elongation of the source X7 that becomes increasingly prominent after 2009 (see Fig. 2). Comparing the NACO L′-band images with the SINFONI line maps, we find that the symmetric distribution of gas and dust cannot be observed after 2009. Compared to the SINFONI data of 2006−2008 and 2010−2018, the data shows a rather compact gas emission in 2009 (see Fig. 3). Whereas the data of 2006−2008 shows a symmetrical gas-to-dust distribution with respect to S50 and X7, we find that this symmetry of the S50-X7 system is broken in the observations of 2010−2018. Furthermore, we observe that the distance of the gaseous front (i.e., head) of X7 with respect to S50 increases year by year (Fig. 4). In contrast, the back (or tail) of the Brγ gas emission does not show a comparable behavior after 2009. As previously described, this leads to an asymmetric distribution of the gas around the central stellar source S50. This broken symmetry between the shell and the star can also be observed in the NACO L′-band data (see Fig. 5). Hence, the data implies that the gas and dust shell starts to detach in projection after 2010. This process can be tracked throughout the available NACO and SINFONI data beginning in 2009 and is indicated in Fig. 4. Furthermore, we find that the intensity maximum of the dust is located at a distance of less than 13 mas from the position of S50 (Fig. 2). As a result, the tail of X7 gets increasingly brighter when comparing the data between 2002 and 2018.

(Table 2 caption fragment: (SINFONI, 2009.47). We indicate the time of the pre- and post-event (which shows a discontinuous behavior of the increasing elongation of the X7/S50 system) with the horizontal lines before and after 2009 and 2010, respectively. To cover statistical variations, reading errors, background effects, and detector irregularities, we adopt a spatial uncertainty of ±10 mas. For the position angle, which is measured with respect to Sgr A*, an uncertainty of ±2° is given. The asterisk of the position angle measurement of 2018 indicates 60° as a lower limit. This lower limit is justified because X7 is not aligned towards Sgr A* in 2018.)
Photometric analysis of X7
The photometry was done in the H-, K-, L′-, and M′-band. As shown in Fig. 2, Fig. 5, Fig. 12, and Fig. 4, the dusty bow shock of X7 gets elongated between 1999 and 2018. After 2007, the projected elongated size of the L′-band emission exceeds a spatial coverage of two PSFs (≈0.20"), and we divide the source into a front (i.e., head) and a back part (i.e., tail). For this analysis, we focus on the tail of X7, since deriving the emission area of the faint L′-band head magnitude is not free of confusion. For the photometric analysis, we use S65 because of its well-known stable magnitude of about 10.96 mag (Hosseini et al. 2020). For the magnitude of X7, we use the peak emission of the L′-band dust emission (see Fig. 2). The magnitude of X7 is derived from the peak intensity and can be related to the tail of the source after 2008. For every dataset, a one-pixel aperture is used. No background subtraction is applied because of the high S/N ratio, which exceeds the intensity of the surroundings by several orders of magnitude. The fit presented in Fig. 6 supports two points. Regarding point 1, the COMIC/ADONIS+RASOIR and NACO L′-band data between 1999 and 2006 do not show a magnitude variation. Additionally, point 2 underlines a slightly variable L′-band tail magnitude of the bow shock at the K-band position of S50. These variations of the L′-band magnitude of X7 coincide with the discontinuous shape evolution observed in the Brγ line maps (see Fig. 3 and Fig. 5). By investigating several datasets of the GC that cover the individual bands, we find an increasing flux towards longer wavelengths (from the H- to the M-band, see Table 3) for X7. Using the magnitude values, we derive the SED with a two-component fit for the emission of S50 (H and K) and X7 (L′ and M′). This indicates a dust-dominated emission source with a multiwavelength appearance. Since the commonly observed dust temperature in the GC is about 200 K (Cotera et al. 1999), the envelope, with a derived temperature of 450 K, must be heated by the internal stellar source S50.
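A minimal sketch of such a two-component black-body model is shown below; apart from the ~450 K dust temperature quoted above, the stellar temperature and the solid-angle normalizations are placeholder values, not fitted results.

```python
# Two-component black-body SED: a hot stellar photosphere (dominating H, K)
# plus a warm dust envelope (dominating L', M').
import numpy as np

H_PLANCK, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam_m, temp):
    """Spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    return (2 * H_PLANCK * C_LIGHT**2 / lam_m**5 /
            np.expm1(H_PLANCK * C_LIGHT / (lam_m * K_B * temp)))

def sed(lam_um, t_star, omega_star, t_dust, omega_dust):
    """Sum of two black bodies scaled by their solid angles."""
    lam = lam_um * 1e-6
    return omega_star * planck(lam, t_star) + omega_dust * planck(lam, t_dust)

# Hypothetical evaluation at the H, K, L', M' central wavelengths (micron).
bands = np.array([1.65, 2.2, 3.8, 4.8])
model = sed(bands, t_star=5000.0, omega_star=1e-20,
            t_dust=450.0, omega_dust=5e-17)
```

In practice, the two normalizations and the stellar temperature would be varied until the model reproduces the measured H-, K-, L′-, and M′-band fluxes of Table 3.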
DISCUSSION AND CONCLUSION
In this section, we will discuss the results and the implications for future observations of the X7/S50 system. We will also speculate about some possible interpretations regarding the increasing position angle and the implied decoupling of X7 and S50.
The shape of X7
From the survey of X7 over two decades with all publicly available SINFONI and NACO data, we have shown that the shape of the bow shock changes significantly over time. Even when we consider different weather and background scenarios, the findings presented here underline a dynamical star-envelope setup.

(Figure 4 caption fragment: As in Fig. 3, we distinguish between two processes responsible for the evolution of the dust shell X7, which is reflected in the two fits. The overall trend is indicated with a transparent blue fit. On the right, the head (red), the tail (green), and S50 (blue) are shown with their positions with respect to Sgr A*. Again, the trend shows that the head is moving towards Sgr A* and further away from S50. Typical uncertainties of about 1 px are not included to preserve the readability of the plots. One pixel [px] corresponds to 12.5 mas.)
As shown in Table 2 and Fig. 3, the shape of X7 undergoes a transition: we find an almost constant position angle and magnitude with a linearly increasing bow shock size, both in gas and dust, until 2009. Based on Mužić et al. (2010), this setup for X7 is expected because S50, as the stellar counterpart, is located close to the front tip of the bow shock X7. As theoretically described by Wilkin (1996, 2000) and observed by Mužić et al. (2010), we can confirm that the S-star S50 is always located at the position of the maximum peak intensity of the observed L′-band emission of the bow shock X7. This L′-band intensity peak can be found close to the apex of the bow shock at a stand-off distance of R_0 = [ṁ_w v_w / (Ω ρ_a v_a²)]^(1/2) ≈ 2.5 × 10^15 cm (Mužić et al. 2010) until 2009. Here, ṁ_w describes the mass-loss rate of the star, v_w is the stellar wind velocity, Ω is a dimensionless parameter that controls the shape of the bow shock (Ω = 4π for an isotropic stellar wind), ρ_a is the density of the ambient medium, and v_a is the relative stellar velocity in a non-stationary medium.
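For illustration, the stand-off radius can be evaluated numerically as below; the wind and ambient-medium parameters are placeholders, not the values adopted by Mužić et al. (2010).

```python
# Evaluate the bow shock stand-off radius
#   R_0 = sqrt(mdot_w * v_w / (Omega * rho_a * v_a**2))
# in CGS units. All input values here are illustrative placeholders.
import math

def standoff_radius(mdot_w, v_w, rho_a, v_a, omega=4 * math.pi):
    """All quantities in CGS: g/s, cm/s, g/cm^3, cm/s -> returns cm."""
    return math.sqrt(mdot_w * v_w / (omega * rho_a * v_a**2))

M_SUN_G, YR_S = 1.989e33, 3.156e7
r0 = standoff_radius(mdot_w=1e-7 * M_SUN_G / YR_S,  # ~1e-7 Msun/yr
                     v_w=5e7,                        # 500 km/s stellar wind
                     rho_a=1e-19,                    # ambient density
                     v_a=3.5e7)                      # 350 km/s relative speed
print(f"R_0 ~ {r0:.2e} cm")
```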
Between 2009.47 and 2010.49, we observe a discontinuous process, since the Brγ and L′-band size decreased by almost 30% compared to the observation of 2009.26 (NACO). After 2010, not only do the Brγ and L′-band continuum sizes expand, but the position of S50 also seems to change with respect to the shell. Hence, R_0 is no longer a fixed value and seems to change year by year. Because the stellar position with respect to its dusty envelope does not follow any simple stationary model, we will speculate about some possible interpretations. A scenario of dust waves and bow waves (in which the dust grains are decoupled and detached from the star) harbors the problem that these processes (including the trajectories of the dust grains) take up to several 1000 years, as proposed by Henney & Arthur (2019). We have shown that the gas distribution coincides with the dust emission (see Table 2 and Fig. 5). In 2008, we find a matching size of the emission of about 230 mas. The NACO data of 2009.26 seems to follow the linear evolution of the observed emission size in 2008. For the SINFONI data of 2009.47, we observe an unexpected source size. Because of these timescales, we see a reduced chance for the possibility of dust and bow waves as a suitable explanation for the discontinuous evolution.
Another possibility is projection effects. Considering the possibility that S50 might not be related to X7 at all and just moves on a random orbit that coincides in projection with X7 opens a new set of questions. In the following, we discuss these questions independently, setting aside the already complete discussion of Mužić et al. (2010), where the authors exclude the possibility of a random encounter based on the matching proper motion of S50 and X7.
The most obvious question concerns the statistical probability of a random orbit that stays oriented along the trajectory of X7 over time. As derived by Sabha et al. (2012) and Eckart et al. (2013), the probability of such an event is of the order of 10^-4 to a few percent for a consecutive observation of 3 years. The probability for the outer region of the S-cluster should therefore be in a comparable range, since we observe S50 along with X7 between 2002-2009 (NACO) and in 1999 with COMIC/ADONIS+RASOIR (Appendix, Fig. 12).
As shown in Fig. 2 and Fig. 5, the L′-band intensity maximum shifted towards the tail follows the projected position of S50. Based on the L′-band magnitude derived year by year, the temperature of X7 is always well above 200 K, which can only be achieved by an internal heating source. Hence, we conclude that the tail of X7 is heated by S50. Alternatively, a wind that originates south-west of the position of Sgr A* could be responsible for the increased tail emission in 2018. However, this does not explain the Wilkinoid (Wilkin 1996) bow shock between 2002 and 2009 that is observed throughout the NACO and SINFONI data. In combination with the continuum and line emission data of 2006 and 2008 (see Fig. 5), we will not discuss the possibility of another wind coming from the south-east any further, especially considering the observed footprint of a wind that originates at the position of Sgr A* or IRS16 in the mini-cavity (see, e.g., Lutz et al. 1993).
A more suitable explanation of the observed gas and dust emission of X7 is forward scattering as described by Mie theory. In this scattering mechanism, dust grains act as emitters with the mentioned forward scattering. Single- and multi-scattering events occur where, e.g., dust emits and transmits stellar light, which is re-emitted by close-by grains. As long as S50 is embedded in the dusty shell X7, the ionized and blue-shifted Brγ emission is symmetrically distributed following the aligned dust grains. After 2009, the peak of the L′-band emission is observed closer to the tail of X7, whereas the gaseous tip becomes more prominent.
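The forward-scattering asymmetry invoked here can be illustrated with the Henyey-Greenstein phase function, a common analytic approximation to the full Mie phase function of dust grains; the asymmetry parameter g below is a placeholder, not a value fitted to X7.

```python
# Henyey-Greenstein phase function as an analytic stand-in for the Mie
# phase function: g > 0 puts most scattered light into the forward direction.
import math

def henyey_greenstein(cos_theta, g):
    """Phase function p(theta); normalized to integrate to 1 over the sphere."""
    return (1 - g**2) / (4 * math.pi * (1 + g**2 - 2 * g * cos_theta)**1.5)

g = 0.6  # placeholder asymmetry parameter, broadly typical of interstellar dust
for theta_deg in (0, 45, 90, 180):
    p = henyey_greenstein(math.cos(math.radians(theta_deg)), g)
    print(f"theta = {theta_deg:3d} deg : p = {p:.4f}")
```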
Overall, we conclude that a projection scenario describing a random encounter between S50 and X7 is highly unlikely but not excluded.

4.2. Two observed processes: the change of the position angle between X7 and Sgr A*

Besides the observed decrease of the projected source size in 2009-2010, we find that the position angle (with respect to Sgr A*) increases faster than expected for a shell of S50 aligned towards the SMBH (Table 2). Even though a change of the position angle is expected, since the proper motion of the X7/S50 system is directed towards the north (Mužić et al. 2010), the gas and dust shell points to a position 0.45" north of the SMBH (see Figs. 2, 5) in 2018. Comparing the position angles of 2006 and 2018 shows a growth of about 40%. If S50 were located close to the position of the tip of the bow shock at a distance R_0, a growth of around 12% would be expected by 2018. However, allowing for reading uncertainties, the position angle of 60° between Sgr A* and X7 in 2018 marks a lower limit. The observations and the measured properties suggest distinguishing the description of X7 into pre-2009.26 and post-2009.46 epochs, since the object shows a discontinuous development as a function of time. Summing up the observational results leads to two assumptions: either X7 is a tidally stretched object whose head is on its way towards Sgr A* (A), or the dust and gas shell is being ripped apart by an unknown interaction (B).
A) The trajectory of the head, as shown in Fig. 4, shows a clear trend towards Sgr A*. The distance between the SMBH and the gaseous head of X7 decreased by around 20% over almost two decades. Taking into account the proper motion of the S50/X7 system, this is expected. Even though a clear trend can be observed, projection effects could also play a role because of the orbit of S50 (see Appendix, Fig. 12). Studying the projected positions of the head, the tail, and S50 with respect to Sgr A* (Fig. 4) implies that the R.A. distance of the head stays almost constant. If the head were attracted by Sgr A*, we would not observe a preserved dusty shell of X7, because the front would simply accelerate towards the SMBH with respect to S50 and the tail. Hence, the shape of the Brγ emission in 2018 might be explained by the forward (and backward) single- and multi-scattered stellar light of S50. If upcoming observations can confirm the observed decoupling of the head from S50 and its tail, it might trigger flaring activity of Sgr A* above the statistical level (Witzel et al. 2012). Please consider also the Appendix (Fig. 11) for a possible outlook.
B) As discussed before, the Brγ line map of 2009 (Fig. 3), but also the size of the L′-band continuum detection (Table 2 and Fig. 2), marks a noticeable step in the discontinuous evolution of X7. Adding the growth of the position angle of X7, the increasing distance between the head and S50, and the relative position of the shell and the S-star to the picture suggests that we are observing a dissolving event. Since the overall shape of the dust shell as observed with NACO seems to be preserved, even though a clearly increased elongation is observed, it is safe to assume that the shell stays intact. Hence, clear evidence for the scenario of a destroyed shell cannot be given.
Considering the observational results discussed here leads to the problem of the ongoing spatial misplacement of S50 with respect to X7 and the growing position angle. We will elaborate on this in the following subsections.
Unexpected event around 2010
Recently, Vorobyov et al. (2020) modeled the behavior of gas and dust features of protoplanetary disks that move with supersonic motion through a dense ambient medium. Considering the Brγ emission of 2009 in Fig. 3 in combination with the related L′-band emission size (Table 2), we conclude that there might be a prominent decoupling of gas and dust as discussed by Henney & Arthur (2019). As discussed, however, the timescales of the cited work do not fit the observation. Hence, the observations suggest the presence of a disturbing event. We speculate that this event has been caused by the close fly-by (in projection) of S33, which would at least partially explain the almost compact Brγ line-map emission in 2009 and the discontinuous evolution of the projected L′-band size of X7 (see Table 2). A critical parameter of this speculative scenario is the 3-dimensional distance and therefore the position of S50/X7 and S33 with respect to each other.
To estimate the 3-dimensional distance between S33 and S50, we use the related proper motion (v_t) and line-of-sight velocity (v_r). For v_r, we use a lower limit of around 500 km/s (Mužić et al. 2010). For deriving the LOS velocity, an averaged value of the observed H2 Q(1) (transition v=1-0 Q(1)) and H2 Q(3) (transition v=1-0 Q(3)) absorption lines is used. For v_t of S50, we derive a value of around 350 km/s in 2018 (see Appendix, Fig. 10 and Table 1). This results in an approximate 3-dimensional velocity of (v_r² + v_t²)^(1/2) ≈ 600 km/s, and in an approximate distance d towards Sgr A* of d_S50 ≈ 0.047 pc ≈ 1.19". From Ali et al. (2020), we use the 3-dimensional position of S33 based on their presented orbit plots. We find that the 3-dimensional distance of S33 in 2009 with respect to Sgr A* is about 1.2". Because the 3-dimensional distance of S50 with respect to Sgr A* is a lower limit, we set the distance of S33 to S50 at about 0.01" or 120 AU. Considering the derived 3-dimensional distance between S33 and S50, the modeled interaction between an intruder and a host star with an envelope as presented in Vorobyov et al. (2020) could be a possibility. A detailed model should answer the question about the stellar-wind interaction with the ambient wind (Yusef-Zadeh et al. 2020) but exceeds the scope of this work.
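The velocity bookkeeping above can be reproduced as follows; the ~8 kpc distance to the Galactic center used for the arcsecond conversion is an assumption of this sketch.

```python
# Reproduce the 3D velocity and the arcsecond conversion quoted in the text.
import math

v_r = 500.0                  # km/s, line-of-sight lower limit (Muzic et al. 2010)
v_t = 350.0                  # km/s, proper motion of S50 in 2018
v_3d = math.hypot(v_r, v_t)  # (v_r^2 + v_t^2)^(1/2)
print(f"3D velocity ~ {v_3d:.0f} km/s")         # ~610 km/s, consistent with ~600

R0_PC = 8000.0                                   # assumed distance to Sgr A* in pc
PC_PER_ARCSEC = R0_PC * math.radians(1 / 3600.0) # ~0.039 pc per arcsec
d_s50 = 0.047                                    # pc, 3D distance of S50 to Sgr A*
print(f"d(S50) ~ {d_s50 / PC_PER_ARCSEC:.2f} arcsec")  # ~1.2 arcsec, as quoted
```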
Furthermore, it should be mentioned that O'Gorman et al. (2015) and Wallström et al. (2017) presented ALMA observations that do not show a symmetrical dust/gas distribution of the envelope with respect to the host star (which in both cases happens to be a giant). Wallström et al. (2017) observed a so-called 'Spur', an asymmetric gas feature related to the host star. This 'Spur' can be compared to the dust and gas shell X7 of S50. Wallström et al. (2017) argue that this 'Spur' might be created by a sporadic eruption event of the host star. Nevertheless, Zajaček et al. (2020) recently modeled the depletion of red giants and showed that the detached and shocked envelope of a host star can suffer from the interaction with Sgr A*. Even though Schartmann et al. (2018) used stellar winds to model the S2 pericenter passage, it is shown that the presence of an SMBH results in an asymmetric mass distribution. If the gas/dust shell became detached and its length scale increased beyond the stellar Hill radius, the gravitational influence of Sgr A* would dominate the evolution of X7, as described by Eckart et al. (2013) and numerically modelled by Zajaček et al. (2014).
The nature of the source X7/S50
From the multiwavelength analysis with NACO and SINFONI in the H-, K-, L′-, and M′-band, and from the modeled SED, we find that the X7/S50 system consists of a stellar component in combination with an internally heated dusty envelope (Fig. 7). Comparing the SINFONI Brγ line maps of 2006 and 2018 with the NACO L′-band continuum observations shows that the gas-to-dust component ratio is around 1:2-1:3, which are typical values for HAe/Be or T Tauri stars (Mannings & Sargent 2000). The weak H2 absorption lines (Appendix, Fig. 10) underline the possibility that we are observing a YSO, as discussed in Mužić et al. (2010). The theoretical modeling of the dust and gas of X7 strengthens the possibility of a YSO. Additionally, Rivinius et al. (1997) reported wind variations for early-B hypergiants with mass-loss rates of several 10^-6 M⊙ yr^-1. These variations are also investigated by Muratorio et al. (2002). In both cases, the P-Cygni profiles of highly excited [FeIII] multiplets/lines are indicators of a complex wind interaction with the stellar source. Even though we do not find a prominent P-Cygni profile in the spectrum, a non-detection can be explained by the high sky emission-line variations, which lead to over-/undersubtraction effects as shown by Davies (2007). Finding a P-Cygni feature would increase the complexity of the X7 system, since there would be wind-wind accretion processes that should be part of the mentioned model. The wind launched at the position of Sgr A* would be accompanied by the stellar winds of S50. Therefore, the S50 dust and gas accretion would be influenced by the aforesaid wind-wind process.
Furthermore, the origin of the excited [FeIII] lines is still not clear (Peißker et al. 2019, 2020b), even though we speculate that the detection could be linked to the area of the Brγ-bar (Schödel et al. 2011; Peißker et al. 2020c). However, Wolf & Stahl (1985) mention that higher excited [FeIII] lines could be pumped by HeI lines. In the spectrum of X7, we find a strong blue-shifted HeI line at 2.058 µm (transition 2p 1P0 − 2s 1S) with a matching LOS velocity. Hence, we consider the pumping of the forbidden Fe lines a possible explanation. For the sake of completeness, we note that each of the four most prominent emission lines in the K-band spectrum in Fig. 1 is accompanied by a less intense line that is related to the source X7.1/G5. In addition, we do find a red-shifted H2 line (about 650 km/s) at 2.228 µm (transition v=1-0 S(0)). Because of the direction of the Doppler shift of this H2 line, the emission may be related to another species. From the results shown here and the discussed scenarios, we conclude that the stellar source of X7 can be associated without any doubt with the S-cluster star S50, which confirms the analysis of Mužić et al. (2010). As implied by the H2 absorption lines, the LOS velocity of the star is blue-shifted. Hence, the Doppler-shifted direction of the stellar LOS velocity matches the emission lines of the surrounding envelope, which also shows a blue-shifted motion. The shape of the bow shock in 2002 is almost spherical and Wilkinoid.
With the presented COMIC/ADONIS+RASOIR data of 1999, we find evidence that L′-band data earlier than 2002 confirms the trend of a 'growing' dusty envelope. The two distinct observed processes, the LOS velocity, and the star/envelope evolution underline the prominent dynamical process that highlights the uniqueness of the X7/S50 system. Along the X7/S50 source, we observe a strong and prominent velocity gradient in 2018. Considering the existence of a wind formed at the position of Sgr A* or IRS16, we assume this might be the origin of the gradient. In 2009, the envelope seems to start interacting with the nearby S-cluster star S33, since we trace indications of this possible interaction in the same year (Fig. 3). The NACO L′-band data shows that the tail of X7 gets brighter between 2010 and 2018. We predict that this gain in brightness will likely continue in the future. We also speculate that the ongoing interaction of S33 and Sgr A* with the shell of S50 could lead to the partial destruction of the bow shock.
Sporadic or stellar winds?
As we have observed and presented in Fig. 2, but also listed in Table 2, the shell of S50 points in projection above Sgr A*. As proposed by Wardle & Yusef-Zadeh (1992), strong stellar winds arising from the IRS16 complex are responsible for the creation of the mini-cavity. The authors discuss an observed 2.217 µm emission line at the position of the mini-cavity (see also Lutz et al. 1993), which can most likely be related to the [FeIII] multiplet observed in several dusty sources west of Sgr A* (Ciurlo et al. 2020; Peißker et al. 2020b). The ionized iron multiplet can also be observed for X7/S50, as shown in Fig. 1. If we exclude the possibility of a wind arising at the position of Sgr A*, the excitation of the iron as well as the position angle (Table 2) could be linked to stellar winds from IRS16. The supermassive black hole would be responsible for refocusing the wind (Fig. 8), and sources that are leaving the 'slip-stream' of Sgr A* would suffer from this interaction. This dynamical evolution of the gaseous and dusty shell of the X7/S50 system underlines the need for a constant survey of the GC region in various bands.

(Figure 8 caption fragment: Wind scenario following Wardle & Yusef-Zadeh (1992). The [FeIII] emission is also observed by Lutz et al. (1993). The position of X7 and the observed position angle of 2018 are indicated with the red object. Sgr A* is located at the black dot.)
If a wind responsible for the alignment and evolution of the X7/S50 system is indeed arising at the position of Sgr A*, the apparent change of the position angle with respect to Sgr A* is unexpected. Since we clearly observe the evolution of the elongation of the X7/S50 system, it may be explained by a temporarily active wind phase of Sgr A*, as indicated by Morris & Serabyn (1996). Speculatively, this could contribute to the 'Paradox of Youth' (Ghez et al. 2003), where star formation is 'allowed' for a short period of time. Nevertheless, in combination with the X7 proper motion (Mužić et al. 2010) directed towards the north, the alignment angle of X7/S50 may have been imprinted on the system before 2009. After 2009, the wind activity may have decreased while the position angle increased (Table 2) because of the proper motion of the X7/S50 system.
Future observations with the Extremely Large Telescope and the James Webb Space Telescope

Near- and mid-infrared instruments will play a key role in investigating the evolution of the X7/S50 system. The prominent detection of X7 in the L′- and M′-band promises successful observations with MIRI (James Webb Space Telescope, see Bouchet et al. 2015; Rieke et al. 2015; Ressler et al. 2015), METIS (Extremely Large Telescope, see Brandl et al. 2018), and MICADO (Extremely Large Telescope, see Trippe et al. 2010). MIRI and METIS will be able to finalize the investigation of the possible clumpiness of X7, which could be used for theoretical models (e.g., the filling factor, see Peißker et al. 2020c). With a more accurate result, we will be able to precisely determine the density and therefore the mass of the dusty shell. Furthermore, we will be able to search for more complex emission lines in the local line-of-sight ISM such as, for example, NH3. Additionally, gas emission lines such as CO and HCN can provide a more detailed description of the nature of the X7/S50 system. These gas- and ice-absorption lines can also be used as an additional probe for a stellar disk and a possible YSO. Moultaka et al. (2006) and Moultaka et al. (2009) showed that these lines are useful to determine local extinction values for the interstellar medium (see also Schödel et al. 2010; Peißker et al. 2020c). Even though we have shown that S50 can be associated with the stellar counterpart of X7, a hidden star at a distance of R_0 from the apex of the bow shock should be detectable with MICADO (see the simulated view of the GC with MICADO in Davies & Genzel 2010). As we have presented in Fig. 2, investigating the GC with a wider FOV in the mentioned bands should also reveal more (elongated) sources that might suffer from the wind that is formed at the position of Sgr A* or at IRS16. We conclude that the upcoming observations of the GC with the ELT will be able to manifest the dynamical influence of the nuclear wind. We can safely assume that the X7/S50 system will not be the only source in the GC which is undergoing a dynamical influence. Yusef-Zadeh et al. (2017a) already showed that YSOs with bipolar outflows can be observed in the environment of the SMBH. Even though we cannot finally answer the question about the nature of the X7/S50 system, we see some weak traces that point towards a YSO nature. If the theoretical models reveal matching parameters of the X7/S50 system with a YSO, the origin of these sources is still not clear. However, the implication of a population of YSOs promises an important cornerstone in the investigation of the direct vicinity of the nearest SMBH that resides in our Galaxy.

A. IDENTIFICATION OF S50 IN THE K- AND L′-BAND

Here we show the relation between the K-band detection of S50 and the L′-band emission of X7 (Fig. 9) observed with NACO. To compare the projected on-sky distances, we rebin the L′-band data to the same pixel scale as the K-band data, i.e., two pixels correspond to 27 mas. By using the stellar position of S50 in the K-band, we pinpoint the stellar location in the L′-band (see Fig. 9).

(Figure 9 caption: Galactic center observed with NACO in the L- and K-band in 2002. In the upper left and right panels, Sgr A* is indicated with a green ×; the white arrow points towards the position of X7 (L-band) and S50 (K-band). As in Fig. 12, S65 can be used as a reference source for the identification. With the combination of the L′- and K-band data, we derive the position of the stellar source S50 with respect to X7 (lower panel, see the green dot inside the dusty emission).)
For the interested reader, we note that the K-band image also demonstrates the high asymmetrical stellar distribution of the S-cluster in projection.
ACKNOWLEDGMENTS
we pinpoint the stellar location in the L -band (see Fig. 9). This procedure is similar to the steps for the SINFONI detection with the difference that we are using data cubes. In the final mosaic data cube of a related year, we select the 2.0 − 2.2 µm range to extract the related K-band image. Then, we compare the position of S50 in the extracted K-band image with the continuum subtracted Brγ line maps that are constructed from the related data cube (Fig. 5).
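To make the rebinning step concrete, a minimal sketch is shown below. The pixel scales follow the text (a 27 mas/px L-band frame resampled onto the 13.5 mas/px K-band grid, so that two K-band pixels span 27 mas); the library choice and interpolation order are our assumptions, not the actual reduction pipeline.

```python
import numpy as np
from scipy.ndimage import zoom

def rebin_to_scale(image, scale_in_mas, scale_out_mas):
    """Resample an image from its native pixel scale to a target pixel scale.

    The zoom factor is scale_in / scale_out; order=1 uses bilinear
    interpolation (a flux-conserving resampler would be preferable for
    photometry, but this suffices for positional comparisons).
    """
    return zoom(image, scale_in_mas / scale_out_mas, order=1)

# Illustrative use: 27 mas/px L-band frame onto the 13.5 mas/px K-band grid.
l_band = np.zeros((128, 128))                  # placeholder image
l_on_k_grid = rebin_to_scale(l_band, 27.0, 13.5)
print(l_on_k_grid.shape)                       # (256, 256)
```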
B. H₂ EMISSION OF S50
For investigating the spectrum of S50, two main conditions have to be fulfilled:

1. a maximized data quality,

2. an individual detection of S50.
Regarding point 1, a high number (> 20) of single exposures with a satisfying quality (FWHM < 6.5 pixels in the x- and y-directions) results in an increased S/N ratio. Using the SINFONI data of 2018 (Table 8) fulfills this first requirement. The second point is limited by nature: using data where S50 coincides with its shell could lead to a confused and blended spectrum. However, studying the projected position of the stellar counterpart of the dusty and gaseous shell X7 reveals that the data of 2018 match the needed conditions (see Fig. 5). For the spectrum presented in Fig. 10, we use a PSF-sized aperture. Furthermore, we fit a Gaussian to the detected H₂ triplet with a measured uncertainty of about ±35 km/s. As pointed out by, e.g., Arulanantham et al. (2017) and Hoadley et al. (2017), H₂ lines can be used as a tracer for protoplanetary disks of YSOs. Considering the analysis of Mužić et al. (2010), the proposed nature of S50 as a T Tauri or Herbig Ae/Be star seems to be a reasonable connection. However, we would like to point out that future observations in combination with theoretical models will be needed to confirm or reject this claim.
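The velocity measurement above follows from the centroid of a Gaussian fitted to the line profile; an uncertainty like the quoted ±35 km/s corresponds to the centroid error expressed as a Doppler shift. Below is a minimal sketch under our own naming (the function and variable names are illustrative, not the pipeline's), using the H2 1-0 S(1) rest wavelength of 2.1218 µm as an example.

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299_792.458  # speed of light in km/s

def gauss(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def line_velocity(wave, flux, lam_rest):
    """Fit a Gaussian to an emission line; return Doppler velocity and error.

    wave: wavelength axis (same units as lam_rest); flux: extracted spectrum.
    """
    p0 = [flux.max() - np.median(flux),            # amplitude guess
          wave[np.argmax(flux)],                   # centroid guess
          (wave[-1] - wave[0]) / 20.0,             # width guess
          np.median(flux)]                         # continuum offset
    popt, pcov = curve_fit(gauss, wave, flux, p0=p0)
    v = (popt[1] - lam_rest) / lam_rest * C_KMS
    v_err = np.sqrt(pcov[1, 1]) / lam_rest * C_KMS
    return v, v_err

# e.g., for the H2 1-0 S(1) line in a K-band spectrum:
# v, dv = line_velocity(wave_um, flux, 2.1218)
```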
C. X7, A TIDALLY STRETCHED FEATURE

As a rather speculative scenario, we briefly discuss the possibility that X7 is a tidally stretched gas and dust feature (as proposed by Randy Campbell, UCLA, at GCWS 2019; proceedings in prep.). Isolating the observation of the X7/S50 system in 2018 could indeed lead to the assumption that the source is a tidally stretched gas and dust feature. Even though this scenario promises a wide range of useful scientific implications, observations of comparable objects have shown that a tidally stretched object is rather unlikely (Gillessen et al. 2012; Eckart, A. et al. 2013; Valencia-S. et al. 2015). Considering Fig. 4 (left side), we do find an increasing distance of the head from S50. However, the overall trend of the X7/S50 system seems not to be affected by Sgr A* (Fig. 4, right side). Even with the observed and detected asymmetry of the stellar position with respect to its gaseous and dusty shell X7, the system follows the proper motion found by Mužić et al. (2010). As pointed out several times, a long-term survey of the evolution of X7/S50 is required.
D. COMIC/ADONIS+RASOIR DATA OF 1999

In Fig. 12, we present the results of the long-term survey of X7 in the L-band with COMIC/ADONIS+RASOIR (1999) in combination with the NACO data (2002; the 2003-2018 data are shown in Fig. 2). For the image presented in Fig. 12, which was observed with COMIC/ADONIS+RASOIR, we use a high-pass filter to highlight features of the S-cluster. In both images, we clearly detect the structure of the S-cluster (Fig. 12). Even though the resulting COMIC/ADONIS+RASOIR image of 1999 suffers from a decreased magnitude sensitivity, we are still able to identify several (isolated) sources, including the spherically shaped bow-shock source X7 at the position of S50. As indicated by the orbital plots presented on the right-hand side of Fig. 12, we identify the nearby S-cluster stars S33, S71/S72, S65, and S87 and mark them accordingly. Moreover, we include the K-band-based orbit of S50 (red dot) in the presented COMIC/ADONIS+RASOIR data of 1999 (red ellipse). The data used here were also employed in Peißker et al. (2019, 2020a); these publications underline the robustness of the data. For the sake of completeness, it should be noted that Parsa et al. (2017) derived the gravitational redshift of S2 caused by the SMBH with the data used here. This was later independently confirmed by Gravity Collaboration et al. (2018) and indicates the quality of the data reduction process applied to the data.

[Figure 12. The green × marks the approximate position of Sgr A*. In 1999 and 2002, the positions of S2 and Sgr A* are confused because of their close proximity to each other. Some re-identified S-stars are marked with a light green circle. In 1999, the orbital spatial positions of the S-cluster stars S71 and S72 coincide, which results in the bright spot marked with a light green circle. The right-hand side shows the orbits of the S-stars S33 (marked), S50, S71/S72 (marked), and S87 (marked). The position of S65 can be used for orientation in these plots (please see Fig. 2).]
Computerized Cognitive Training in Cognitively Healthy Older Adults: A Systematic Review and Meta-Analysis of Effect Modifiers
Michael Valenzuela and colleagues systematically review and meta-analyze the evidence that computerized cognitive training improves cognitive skills in older adults with normal cognition. Please see later in the article for the Editors' Summary
Introduction
Cognitive decline and impairment are amongst the most feared and costly aspects of aging [1]. The age-specific incidence of cognitive impairment is approximately double that of dementia [2,3] and can be expected to affect 15%-25% of older individuals [2,4]. Direct medical costs for older adults with mild cognitive impairment (MCI) are 44% higher than those for non-impaired older adults [5]. Because cognitive decline and impairment are essential criteria for dementia and often require informal care [5], interventions aimed at prevention or attenuation of such decline may have a substantial health and economic impact [3].
Several studies have now established strong and independent links between engagement in cognitively stimulating activities throughout the life span and enhanced late-life cognition, compression of cognitive burden, and reduced risk of cognitive impairment and dementia [6][7][8]. Intense interest has therefore focused on the potential of cognition-based interventions in older adults, especially computerized cognitive training (CCT) [9]. CCT involves structured practice on standardized and cognitively challenging tasks [10], and has several advantages over traditional drill and practice methods, including visually appealing interfaces, efficient and scalable delivery, and the ability to constantly adapt training content and difficulty to individual performance [9,11]. Sales of commercial CCT packages may soon reach US$1 billion per year [12], but the evidence base for such products, at least in older adults, remains unclear [13].
Prior systematic reviews of generic cognitive interventions in healthy older adults [9,[14][15][16][17][18] have noted limitations, especially lack of supporting evidence from active-control trials and lack of replication due to inconsistent or indeterminate methodology. Importantly, these reviews pooled data from studies of CCT along with studies of other cognition-based interventions such as mnemonics or cognitive stimulation that can be as simple as reading newspapers or participating in group discussion [15][16][17][18]. It is therefore perhaps unsurprising that these reviews reached inconclusive results. A more recent systematic review in healthy older adults [9] was not restricted to randomized controlled trials (RCTs) and included CCT studies along with other computerized interventions such as classes in basic computer use.
The effectiveness of CCT in enhancing cognitive performance in healthy older adults is therefore currently unclear, and the impact of design and implementation factors on efficacy has yet to be systematically analyzed. Using data from RCTs of narrowly defined CCT, we aimed to quantitatively evaluate the efficacy of CCT with respect to multiple cognitive outcomes in healthy older adults. Furthermore, we aimed to test the moderating effect of several key study features in order to better inform future CCT trial design and clinical implementation.
Methods
This work fully complies with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [19] (see Checklist S1). Methods of analysis and inclusion criteria were specified in advance and are documented in Protocol S1.
Eligibility Criteria
Types of studies. Eligible studies were published, peer-reviewed articles reporting results from RCTs of the effects of CCT on one or more cognitive outcomes in healthy older adults.
Types of participants. Eligible studies had mean participant age ≥ 60 y and participants who lacked any major cognitive, neurological, psychiatric, and/or sensory impairments. Studies with MCI as an inclusion criterion were excluded, as cognitive performance in this population may vary substantially, particularly with respect to variability in the diagnostic criteria of MCI [20].
Types of interventions. Eligible trials compared the effects of ≥ 4 h of practice on standardized computerized tasks or video games with clear cognitive rationale, administered on personal computers, mobile devices, or gaming consoles, versus an active or passive control condition. Lab-specific interventions that did not involve interaction with a computer were excluded.
Types of outcome measures. Outcomes included performance on one or more cognitive tests that were not included in the training program (i.e., untrained), administered both before and after training. This review is limited to change in performance from baseline to immediately post-training on tests of global cognition, verbal memory, nonverbal memory, working memory (WM), processing speed, attention, language, visuospatial skills, and executive functions. Both primary and secondary outcomes were included. Long-term outcomes, subjective measures (e.g., questionnaires), noncognitive outcomes (e.g., mood, physical), imaging data, and activities of daily living outcome measures were excluded from the analysis.
Information Sources and Search Strategy
We searched Medline, Embase, and PsycINFO using the search terms "cognitive training" OR "brain training" OR "memory training" OR "attention training" OR "reasoning training" OR "computerized training" OR "computer training" OR "video game" OR "computer game", and by scanning reference lists of previous reviews. No limits were applied for publication dates, and non-English papers were translated. The first search was conducted on 2 December 2013. An updated search was conducted on 9 July 2014.
Study Selection
Two reviewers (A. L. and H. H.) independently screened search results for initial eligibility based on title and abstract. Full-text versions of potentially eligible studies and those whose eligibility was unclear based on title and abstract were assessed by A. L. and H. H., who also contacted authors when eligibility was unclear based on the full report. Disagreements regarding study eligibility were resolved by consulting with M. V., who approved the final list of included studies.
Data Collection and Coding
Coding of outcome measures into cognitive domains was done by two reviewers (A. L. and H. H.) based on accepted neuropsychological categorization [21] or by consensus, and approved by M. V. Table S1 provides the coding of outcomes by cognitive domains. Data were entered into Comprehensive Meta-Analysis (CMA) version 2 (Biostat, Englewood, New Jersey). Data from most studies were entered as means and standard deviations (SDs) for the CCT and control groups at baseline and follow-up, with test-retest correlation set to 0.6. In a few instances, data were entered as post-training mean change [22][23][24] or raw mean difference with a 95% confidence interval [25]. CMA allows for each of these different study outcomes to be flexibly entered into the model. When data could not be extracted from study reports, we contacted the authors requesting raw summary data.
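To illustrate how pre/post means and SDs with an assumed test-retest correlation feed the effect size calculation, the sketch below applies the conventional change-score formulas (cf. the Cochrane Handbook). The actual computation is handled inside CMA; this is a minimal illustration under the review's r = 0.6 assumption, with numbers invented for the example.

```python
import math

def change_score_sd(sd_pre, sd_post, r=0.6):
    """SD of the pre-to-post change, given test-retest correlation r."""
    return math.sqrt(sd_pre**2 + sd_post**2 - 2.0 * r * sd_pre * sd_post)

def raw_change_difference(pre_t, post_t, pre_c, post_c):
    """Difference in mean change between training and control arms."""
    return (post_t - pre_t) - (post_c - pre_c)

# Illustrative numbers only: a 2-point gain over control with SD(change)
# of about 4.5 yields a standardized difference of roughly 0.44 before the
# small-sample (Hedges) correction.
sd_change = change_score_sd(5.0, 5.2, r=0.6)
smd = raw_change_difference(20.0, 22.5, 20.2, 20.7) / sd_change
```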
CCT programs were divided into five content types: speed of processing (SOP) training, WM training, attention training, multidomain training, and video games. Video games were defined as computer programs that were distributed for entertainment purposes before they were tried as cognitive interventions [26].
When studies presented data for both active and passive control groups, only the active control group was used as a comparison to the CCT group. When studies presented data from both young and older adults, only data from the older group were analyzed.
Risk of Bias in Individual Studies and Study Appraisal
Risk of bias in individual studies was assessed using the items recommended in the Cochrane Collaboration's risk of bias tool [27]: sequence generation; allocation concealment; blinding of participants, personnel, and outcome assessors; incomplete outcome data; selective outcome reporting; and other sources of bias. However, because blinding of therapists and participants in CCT trials is impractical, we considered only blinding of assessors when determining risk of bias for the blinding item. We considered trials that did not include assessor blinding or did not perform intention-to-treat analyses to be at high or unclear risk of bias; all other trials were considered to be at low risk of bias. Authors were contacted when study details were unclear.
In addition, we used the Physiotherapy Evidence Database (PEDro) scale to assess study quality. The PEDro scale is an 11-item scale designed to assess the methodological quality and reporting of RCTs, and is reliable for rating trials of non-pharmacological interventions [28]. As with the risk of bias tool, we did not consider two items in the PEDro scale (blinding of therapists and participants), and therefore the maximum possible PEDro score for studies in this review was 9. All assessments were conducted by H. H. and additional external assessors (see Acknowledgments), and were subsequently reviewed by A. L.
Data Analysis
The primary outcome was standardized mean difference (SMD, calculated as Hedges' g) of post-training change between CCT and control groups. Analyses were conducted for all cognitive results combined, as well as for each of the following cognitive domains: verbal memory, nonverbal memory, WM, processing speed, attention, visuospatial skills, and executive functions (planned analyses of global cognition and language were not performed because of insufficient numbers of studies reporting these outcomes). Precision of the SMD was calculated for each trial by the 95% CI. A positive SMD implies better therapeutic effects over time in the CCT group compared to the control group.
When studies presented data from more than one cognitive test, these were combined in two ways. First, all test results were combined to produce a single SMD per study, following established procedure [29]. Second, tests were classified on their main neuropsychological competency (see Table S1), such that each study could contribute to one or more cognitive-domain-specific SMDs. When outcomes from a given study were combined, the effect estimate was the mean amongst the related tests, and the estimate's variance was scaled up based on an assumed intercorrelation between the tests of 0.7 [30,31]. All analyses were performed using CMA.
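The within-study combination step can be made concrete: with m correlated outcome SMDs, the composite estimate is their mean, and its variance includes covariance terms driven by the assumed intercorrelation of 0.7 [30,31]. A minimal sketch, with function and variable names of our own choosing:

```python
import numpy as np

def combine_outcomes(effects, variances, r=0.7):
    """Combine several outcome SMDs from one study into a composite SMD.

    The point estimate is the mean of the effects; the variance of that
    mean includes covariance terms driven by the assumed intercorrelation
    r between the outcome measures.
    """
    g = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    m = len(g)
    sd = np.sqrt(v)
    cov_sum = sum(r * sd[i] * sd[j]
                  for i in range(m) for j in range(m) if i != j)
    return g.mean(), (v.sum() + cov_sum) / m**2

# e.g., three memory tests from a single trial:
g_c, v_c = combine_outcomes([0.30, 0.18, 0.25], [0.04, 0.05, 0.04], r=0.7)
```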
Because we expected studies to report multiple cognitive outcomes and display methodological variability [9,13], our analyses were planned in three stages. First, in our main analysis we combined all outcomes from each study and pooled these to determine the overall efficacy of CCT in enhancing cognition. Second, we performed domain-specific meta-analyses, in which only studies that reported outcomes on a specified cognitive domain were included, using one combined SMD per study. Third, to examine between-study variability and identify design elements that may moderate observed efficacy, we performed subgroup meta-analyses. In the first and second stages, the overall and domain-specific meta-analyses were performed using a random-effects model. Using the same convention for description of Cohen's d effect sizes applied to Hedges' g, SMDs of ≤ 0.30, > 0.30 and < 0.60, and ≥ 0.60 were considered small, moderate, and large, respectively. Heterogeneity across studies was assessed using the I² statistic with 95% confidence (uncertainty) intervals [32,33]. I² values of 25%, 50%, and 75% imply small, moderate, and large heterogeneity, respectively [33]. Forest plots were also used to visually characterize heterogeneity.
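For readers who want to see the pooling mechanics, the sketch below implements a standard DerSimonian-Laird random-effects model with Cochran's Q and the I² statistic. It is an illustrative reimplementation of the textbook formulas, not the exact routine used by the CMA package.

```python
import numpy as np

def dersimonian_laird(g, v):
    """Random-effects pooling of study SMDs g with within-study variances v.

    Returns the pooled SMD, its standard error, and I² (in percent).
    """
    g, v = np.asarray(g, float), np.asarray(v, float)
    w = 1.0 / v                                    # fixed-effect weights
    g_fe = (w * g).sum() / w.sum()
    q = (w * (g - g_fe) ** 2).sum()                # Cochran's Q
    df = len(g) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    g_re = (w_re * g).sum() / w_re.sum()
    se_re = np.sqrt(1.0 / w_re.sum())
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return g_re, se_re, i2
```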
In the third stage, subgroup analyses were based on a mixed-effects model, which uses a random-effects model to generate within-subgroup variance and a fixed-effects model to compare effects between subgroups [34]. Between-subgroup heterogeneity was tested using Cochran's Q statistic [27] and was considered significant at the p < 0.05 level. The following moderating factors were included in our analysis plan: type of CCT program (i.e., cognitive content of training), delivery format (group- or home-based training), session length, session frequency, total duration of the program (dose), control condition (active or passive control), and risk of bias (high or low risk of bias as defined above).
Risk of Bias across Studies
In order to assess risk of publication bias, funnel plots for overall outcomes as well as for each cognitive domain were inspected for asymmetry (i.e., SMDs charted against their standard errors) [35]. When ten or more studies were pooled in a given meta-analysis, we formally tested funnel plot asymmetry using Egger's test of the intercept [36]. A positive intercept implies that smaller studies tended to report more positive results than larger trials. When the test found notable asymmetry (p < 0.1), we report primary outcomes based on a fixed-effects model along with a random-effects model, as the former gives more weight to larger trials and helps to counterbalance a possible inflation of the therapeutic effect [35]; in these cases we discuss the more conservative effect estimate.
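Egger's test itself is a regression of the standardized effect on precision, with the intercept capturing small-study asymmetry. A minimal sketch (statsmodels is our choice here; the original analysis used CMA):

```python
import numpy as np
import statsmodels.api as sm

def egger_test(g, se):
    """Egger's regression test for funnel-plot asymmetry.

    Regress the standardized effect (g/se) on precision (1/se); the
    intercept's two-sided p-value is the asymmetry test, and a positive
    intercept suggests smaller studies report larger effects.
    """
    g, se = np.asarray(g, float), np.asarray(se, float)
    X = sm.add_constant(1.0 / se)          # intercept + precision
    fit = sm.OLS(g / se, X).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its p-value
```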
Sensitivity Analyses
For the main analysis (efficacy across all cognitive outcomes), we tested the robustness of our results to parametric variation of the following assumptions: test-retest correlation (set at 0.6 and tested from 0.5 to 0.7), within-study multiple outcome intercorrelation (set at 0.7 and tested from 0.6 to 0.8), inclusion of passive controls instead of active controls in studies with multiple controls (k = 3), and use of a fixed-effects model instead of a random-effects model. These results are reported in Table S5.
Results

Study Selection
After duplicate search results were removed, 6,294 studies were initially screened for eligibility, of which 5,974 were excluded based on abstract and title. Three hundred twenty full-text articles were assessed for eligibility, of which 45 were deemed potentially eligible. After consulting with authors, three studies were excluded because they did not use randomized assignment [37][38][39], and a further two studies because authors did not provide necessary data [40,41]. The resulting 40 studies from the electronic search were supplemented by 11 studies [42][43][44][45][46][47][48][49][50][51][52] obtained by scanning reference lists of previous reviews and consulting with researchers, providing a total of 51 articles included in the analysis (Figure 1). Data from one article [53] were split into two studies, resulting in a final number of 52 datasets cited in this review (for a detailed description of groups selected from each study, see Table S2).
An active control group was used in 26 studies (50%), and assessor blinding was confirmed in 24 (46.2%) of the studies. The average PEDro score was 6.2/9 (SD = 1.35), and 35 (66.6%) studies were found to have a high risk of bias (Table S4). As expected, risk of bias and study quality were connected: significant differences in PEDro scores were found for studies with high risk of bias (mean PEDro score = 5.69, SD = 1.08) compared to studies with low risk of bias (mean PEDro score = 7.18, SD = 1.33; t(50) = −4.324, p < 0.001).
Type of CCT varied considerably across studies (Table 1). Twenty-four studies used multidomain training, nine used SOP training, nine used WM training, six used attention training, and four were video games. Group (center-based) training was conducted in 32 (61.5%) of the studies, and 19 (36.5%) provided training at home. A study by Berry et al. [55] combined data from participants who trained at home with others who trained in research offices, and was therefore excluded from our subgroup analysis of delivery mode. In a study by Shatil et al. [84], 50 participants received group-based CCT and ten trained at home; data for the latter ten participants were excluded from the analysis (raw data for this study are provided in Table S2).
Visuospatial skills. Eight studies reported visuospatial outcomes. The combined effect size was small and statistically significant (g = 0.22, 95% CI 0.15 to 0.29, p = 0.01; Figure 10). Heterogeneity across studies was moderate (I² = 42.66%, 95% CI 0% to 74.65%). The funnel plot revealed potential asymmetry, suggesting a greater effect in smaller studies (Figure S1), but formal testing was not conducted because of the small number of studies.
Global cognition and language. Planned analyses of global cognition and language were not performed as these outcomes were reported in only three studies each ([24,50,88] and [49,72,75], respectively).
A similar sequence of moderator analyses for each cognitive domain can be found in Figures S2, S3, S4, S5, S6, S7, S8. A summary of these outcomes is visually presented in Figure 12, a matrix that shows color-coded SMDs for each cognitive domain by each moderating factor. From this figure it is evident that there is no positive evidence for the efficacy of training involving WM (based on either all studies or by subgroup), nor for training administered more than three sessions per week, for any of the cognitive outcomes in this review. At the domain-specific level, evidence for the efficacy of CCT training at home, training only once per week, or in sessions shorter than 30 min is weak.
Discussion
CCT research involving healthy older participants has now matured into a substantial literature, encompassing 51 RCTs of reasonable quality. When examined en masse, CCT is effective at enhancing cognitive function in healthy older adults, but small effect sizes are to be expected. By definition this result pertains to the theoretical "average" older person: it is currently not possible to predict whether a given individual's cognitive abilities will improve beyond normal practice effects. More importantly, the efficacy of CCT depends on particular design choices as well as the cognitive outcome of interest. Moderator analyses revealed the inefficacy of home-based training compared to group-based training, as well as of training more than three times a week. Domain-specific analyses found evidence of efficacy for nonverbal memory, processing speed, WM, and visuospatial outcomes, but not for attention or executive functions. Equally important, we found consistent evidence for the likely inefficacy of WM training and the use of brief training sessions.
Evidence of possible publication bias was found only for reports of verbal memory outcomes. In this case a more conservative fixed-effects model was used and found that CCT efficacy in this domain is weak at best (g = 0.08, 95% CI 0.01 to 0.15). Somewhat atypically, the funnel plot for SOP outcomes showed that the largest trials tended to find the largest effect sizes. Given that more than half of all participants in this systematic review undertook speed-based training [47,[50][51][52][53][54][55]59,69], whose efficacy does not generalize beyond speed-based outcomes (Figure 12), it is possible this is a peculiarity of studies focused on speed training and testing.
Analyses of verbal memory and executive outcomes were sufficiently powered, encompassing 23 and 29 trials, respectively, yet yielded negligible effects. Whilst we recognize that no universal consensus is possible when classifying cognitive tests to particular domains, we consulted a widely cited textbook [21] for this task (see Table S1), and so the negative results for verbal memory and executive outcomes likely represent deficits in the efficacy of CCT in healthy older individuals. Further research aimed at assessing the therapeutic responsiveness of these two key cognitive domains is required, along with development of new and better targeted CCT technology. Consideration should also be given to combining CCT with other effective interventions, such as physical exercise for executive functions [89] and memory strategy training for verbal memory [90].
At the same time, the therapeutic value of several commonly implemented CCT design choices comes under question. We found that WM training alone was not effective in healthy older adults, similar to the limited effects reported in a recent meta-analysis in children and young adults [91]. The Finnish Geriatric Intervention Study to Prevent Cognitive Impairment and Disability (FINGER) [92] is a major trial in progress that involves WM training along with other lifestyle-based interventions, and may shed light on the utility (or lack thereof) of this kind of CCT.
One of the attractions of home-based (often Internet-delivered) CCT is the ability to administer a customized and adaptive intervention in the individual's home, with potential for decreased implementation cost [9] and the facility to target the frail and immobile. However, our formal moderator analysis (based on the conservative Q statistic) revealed a significant interaction between delivery setting and therapeutic outcome, whereby group-based delivery was effective (g = 0.29, 95% CI 0.21 to 0.38) and home-based delivery was not (g = 0.09, 95% CI −0.02 to 0.21). A high degree of consistency amongst group-based training studies suggests that this conclusion is robust (Figure 11). If translated to Mini-Mental State Examination scores, this group-based CCT effect may approximate an average relative improvement of one point [93]. Potentially relevant practice variables when conducting group-based CCT include direct supervision by a trainer to help ensure adherence, treatment fidelity, and compliance; provision of motivational support and encouragement to master challenging tasks that are otherwise easy to avoid; problem solving of IT issues; and nonspecific factors such as social interaction. Indeed, a meta-analysis of memory training in older adults also found that group-based administration was a moderating factor [94]. When conducting CCT, group setting may therefore represent a key therapeutic consideration. Conversely, the popular model of purely home-based training is unlikely to result in cognitive benefits in unimpaired older adults. Future studies may wish to investigate the value of combining initial group-based administration with more long-lasting home-based CCT, as well as test emerging technologies that allow remote clinical supervision and interaction via social media.

[Figure 11. Subgroup analyses of moderators of overall efficacy of CCT in older adults. (a) Q-test for between-group heterogeneity, mixed-effects model. (b) One study that combined data from both home- and group-based training [55] was excluded from this analysis. (c) Total number of training hours. (d) Session length could not be determined for one study.]

We also found interesting evidence for the importance of correct CCT dose. The results suggested that short sessions of less than 30 min may be ineffective, possibly because synaptic plasticity is more likely after 30-60 min of stimulation [95]. By contrast, our analysis clearly identified that training more than three times per week neutralizes CCT efficacy (Figure 11). It is possible that there is a maximal dose for CCT, after which factors such as cognitive fatigue [96] may interfere with training gains. This might not be unique to older persons, as comparative studies in children [97] and young adults [98] have linked spaced training schedules with greater CCT efficacy.
Limitations
To our knowledge, this is the first quantitative meta-analysis of RCTs in the defined field of CCT in cognitively healthy older adults. As opposed to previous reviews that included various cognitive interventions and research designs [9,[14][15][16][17][18], we employed strict eligibility criteria, allowing comparison of results across cognitive domains as well as testing of the impact of design factors. However, by way of limitation, our results do not necessarily generalize to cognitively impaired older persons, especially the high-risk MCI population, where results appear to be mixed [99,100]. This review also focused on change in neuropsychological measures immediately after the end of training; it therefore provides no indication about the durability of the observed gains, nor their transfer into real-life outcomes such as independence, quality of life, daily functioning, or risk of long-term cognitive morbidity. Because individual RCTs typically report multiple cognitive test results for a particular cognitive domain, these were combined statistically (as per prior practice [30,31]), but this approach is blind to the relative psychometric merits of the individual tests. More sophisticated analyses may therefore need to be developed that incorporate test-specific weightings when combining test outcomes. Finally, whilst the CCT literature is now substantive in terms of the number of RCTs (k = 51), the typical trial was modest in size (median N = 45). Future studies incorporating supervised group-based delivery and a session frequency of 2-3 sessions per week can anticipate an approximate effect size of g = 0.29, suggesting that a sample of 87 is sufficient to achieve power of 0.8 and allow for 15% attrition.
Conclusions
Discussion of CCT tends to focus on whether it "works" rather than on what factors may contribute to efficacy and inefficacy [13,101]. This systematic review indicates that its overall effect on cognitive performance in healthy older adults is positive but small, and that it is ineffective for executive functions and verbal memory. Accurate individual predictions are not possible. More importantly, our analysis shows that efficacy varies by cognitive outcome and is to a large extent determined by design choices. In general, group-based CCT is effective but home-based CCT is not, and training more than three times a week is counterproductive. Consistently ineffective design choices should therefore be avoided. Improving executive functions or verbal memory may require development of new technology or combined interventions. There remains great scope for additional research to further enhance this non-pharmacological intervention for older individuals.

Supporting Information

Figure S4. Moderators of efficacy of CCT for working memory. (a) Q-test for between-group heterogeneity, mixed-effects model. (b) One study that combined data from both home- and group-based training [55] was excluded from this analysis. (c) Total number of training hours. (TIF)

Figure S5. Moderators of efficacy of CCT for processing speed. (a) Q-test for between-group heterogeneity, mixed-effects model. (b) One study that combined data from both home- and group-based training [55] was excluded from this analysis. (c) Total number of training hours. (d) Session length could not be determined for one study [48]. (TIF)

Figure S6. Moderators of efficacy of CCT for executive function. (a) Q-test for between-group heterogeneity, mixed-effects model. (b) Total number of training hours. (c) Session length could not be determined for one study [48]. (TIF)

Figure S7. Moderators of efficacy of CCT for attention. (a) Q-test for between-group heterogeneity, mixed-effects model. (b) Total number of training hours. (TIF)

Figure S8. Moderators of efficacy of CCT for visuospatial skills. (a) Q-test for between-group heterogeneity, mixed-effects model. (b) Total number of training hours. (TIF)

Checklist S1. PRISMA checklist. (DOC)

Dataset S1. Raw effect size and moderator data for overall (combined) and domain-specific results. (XLSX)

Protocol S1. Study protocol. (DOCX)

Editors' Summary

Background. As we get older, we notice many bodily changes. Our hair goes grey, we develop new aches and pains, and getting out of bed in the morning takes longer than it did when we were young. Our brain may also show signs of aging. It may take us longer to learn new information, we may lose our keys more frequently, and we may forget people's names. Cognitive decline (developing worsened thinking, language, memory, understanding, and judgment) can be a normal part of aging, but it can also be an early sign of dementia, a group of brain disorders characterized by a severe, irreversible decline in cognitive functions. We know that age-related physical decline can be attenuated by keeping physically active; similarly, engaging in activities that stimulate the brain throughout life is thought to enhance cognition in later life and reduce the risk of age-related cognitive decline and dementia. Thus, having an active social life and doing challenging activities that stimulate both the brain and the body may help to stave off cognitive decline.
Why Was This Study Done? "Brain training" may be another way of keeping mentally fit. The sale of computerized cognitive training (CCT) packages, which provide standardized, cognitively challenging tasks designed to "exercise" various cognitive functions, is a lucrative and expanding business. But does CCT work? Given the rising global incidence of dementia, effective interventions that attenuate age-related cognitive decline are urgently needed. However, the impact of CCT on cognitive performance in older adults is unclear, and little is known about what makes a good CCT package. In this systematic review and meta-analysis, the researchers assess whether CCT programs improve cognitive test performance in cognitively healthy older adults and identify the aspects of cognition (cognitive domains) that are responsive to CCT, and the CCT design features that are most important in improving cognitive performance. A systematic review uses pre-defined criteria to identify all the research on a given topic; meta-analysis uses statistical methods to combine the results of several studies.
What Did the Researchers Do and Find? The researchers identified 51 trials that investigated the effects of more than four hours of CCT on nearly 5,000 cognitively healthy older adults by measuring several cognitive functions before and after CCT. Meta-analysis of these studies indicated that the overall effect size for CCT (compared to control individuals who did not participate in CCT) was small but statistically significant. An effect size quantifies the difference between two groups; a statistically significant result is a result that is unlikely to have occurred by chance. So, the meta-analysis suggests that CCT slightly increased overall cognitive function. Notably, CCT also had small to moderate significant effects on individual cognitive functions. For example, some CCT slightly improved nonverbal memory (the ability to remember visual images) and working memory (the ability to remember recent events; short-term memory). However, CCT had no significant effect on executive functions (cognitive processes involved in planning and judgment) or attention (selective concentration on one aspect of the environment). The design of CCT used in the different studies varied considerably, and "moderator" analyses revealed that home-based CCT was not effective, whereas center-based CCT was effective, and that training sessions undertaken more than three times a week were not effective. There was also some weak evidence suggesting that CCT sessions lasting less than 30 minutes may be ineffective. Finally, there was no evidence for the effectiveness of working memory training by itself (for example, programs that ask individuals to recall series of letters).
What Do These Findings Mean? These findings suggest that CCT produces small improvements in cognitive performance in cognitively healthy older adults but that the efficacy of CCT varies across cognitive domains and is largely determined by design aspects of CCT. The most important result was that "do-it-yourself" CCT at home did not produce improvements. Rather, the small improvements seen were in individuals supervised by a trainer in a center and undergoing sessions 1-3 times a week. Because only cognitively healthy older adults were enrolled in the studies considered in this systematic review and meta-analysis, these findings do not necessarily apply to cognitively impaired individuals. Moreover, because all the included studies measured cognitive function immediately after CCT, these findings provide no information about the durability of the effects of CCT or about how the effects of CCT on cognitive function translate into real-life outcomes for individuals such as independence and the long-term risk of dementia. The researchers call, therefore, for additional research into CCT, an intervention that might help to attenuate age-related cognitive decline and improve the quality of life for older individuals.
Impact of collaborative physician-pharmacist stewardship strategies on prophylactic antibiotic practices: a quasi-experimental study
Background An effective use of surgical antibiotic prophylaxis (SAP) appears essential to prevent the development of infections linked to surgery, while inappropriate and excessive prescriptions of prophylactic antibiotics increase the risk of adverse effects, bacterial resistance and Clostridium difficile infections. In this study, we aimed to analyze SAP practices in an acute secondary hospital in Belgium during the years 2016-2021 in order to evaluate the impacts of combined stewardship interventions, implemented thanks to a physician-pharmacist collaboration.

Methods A quasi-experimental study on SAP practices was conducted during 5 years (2016-2021) in a Belgian University Hospital. We first performed a retrospective observational transversal study on a baseline group (2016.1-2016.4). Then, we constituted a group of patients (2017.1-2017.4) to test a combined stewardship intervention strategy which integrated the central role of a pharmacist in the antibiotic stewardship team and in the pre-operative delivery of nominative kits of antibiotics adapted to patient factors. After this test, we collected patient data (2018.1-2018.4) to evaluate the sustained effects of the stewardship interventions. Furthermore, we evaluated SAP practices (2019.1-2019.4) after the diffusion of a computerized decision support system. Finally, we analyzed SAP practices in the context of the COVID-19 pandemic (2020.1-2020.4 and 2021.1-2021.4). The groups were compared from year to year in terms of compliance to institutional guidelines, as evaluated from seven criteria (χ² test).

Results In total, 760 surgical interventions were recorded. The observational study within the baseline group showed that true penicillin allergy, certain types of surgery and certain practitioners were associated with non-compliance (p < 0.05). Compared with the baseline group, compliance was significantly increased in the test group for all seven criteria assessed (p < 0.05). However, the effects were not fully sustained after discontinuation of the active interventions. Following the diffusion of the computerized decision support system, compliance to guidelines was not significantly improved. Finally, the COVID-19 pandemic did not appear to affect practices in terms of compliance to guidelines.

Conclusions This study shows that optimization of SAP practices is achievable within a proactive multidisciplinary approach including real-time pharmaceutical interventions in the operating area and in the care units practicing SAP.
Background
The Lancet Commission on Global Surgery identified that 313 million surgical procedures are performed worldwide each year and that at least 4.2 million people worldwide die within 30 days of surgery each year. This number of postoperative deaths accounts for 7.7% of all deaths globally, making it the third greatest contributor to deaths in the world [1]. In a study of the University Hospital of Charleroi in Belgium, the post-operative mortality rate within 30 post-operative days was 1.1% [2]. In this hospital, risk factors of post-operative mortality were identified as the absence of an anesthesia nurse, American Society of Anesthesiologists scores (ASA scores) > 2, emergency, duration of surgery and rate of admission to a critical care unit [2,3]. Surgical site infections (SSIs) have been shown to account for up to 20% of all healthcare-associated infections [5]. Appearing in at least 5% of patients undergoing a surgical procedure, SSIs are an important source of morbidity and increase postoperative mortality [4,5]. The use of surgical antibiotic prophylaxis (SAP) is an effective measure to prevent the development of SSIs. In addition, it appears that inappropriate and excessive prescriptions of prophylactic antibiotics increase the risk of adverse effects, bacterial resistance and Clostridium difficile infections, but also increase the length of stay and the costs of health care [6][7][8].
To evaluate the compliance of SAP practices with guidelines, several criteria for SAP prescriptions can be observed: the indication, the antibiotic agent, the antibiotic dose, the route of administration, the timing of administration, the number of administrations and the duration of the prophylaxis [6]. In our study, we mainly focused on surgical interventions for which surveillance is recommended by the Belgian Antibiotic Policy Coordination Committee (BAPCOC): hip prosthesis, coronary artery bypass grafting, colorectal intervention and endoscopic prostate resection [9].
Recommended tools for an action plan include: (A) an antibiotic stewardship multidisciplinary team, (B) local guidelines, (C) implementation of guidelines, (D) specific prescriptions and stop orders, and (E) expert systems and audits. Indeed, according to the literature, individual and educational barriers could be overcome by local consensus (tool B) and education [10]. Education is considered essential in a stewardship strategy, but studies confirm that education alone, without the incorporation of active actions, is not significantly effective in improving the frequency of compliance with practices [11][12][13][14].
Previously published papers showed that SAP practices could be optimized by the implementation of isolated strategies such as the pre-operative delivery of nominative kits of antibiotics [15], the implementation of a computer-based prescription system [16], and pharmacist interventions [17][18][19].
In Belgium, Management Groups of Antibiotics are regulated under Belgian law and have been mandatory since 2007 in all acute hospitals and in all large chronic hospitals (with a minimum of 150 beds). A Belgian Management Group of Antibiotics should be composed of at least the following members: the hospital's antibiotic therapy management delegate, a hospital pharmacist, physicians from different specialties (clinical infectiology and/or medical microbiology, hospital hygiene), and a clinical biologist who can be a physician or a pharmacist. In 2016, the Belgian University Hospital in which this study was conducted was equipped with tools (A) and (E), tool (B) having to be partly updated. To implement updated institutional guidelines (tool C), it was necessary to overcome the three types of barriers to implementation: individual, educational and structural. The structural barrier could be lifted by a structural solution such as the modification of the role of the pharmacy. During the year 2016, it was therefore decided that the updating of the institutional guidelines and the education of the practitioners applying antibiotic prophylaxis would be carried out by three members of the Management Group of Antibiotics of the hospital: a pharmacist, a microbiologist and an infectious disease specialist. These three members constituted an antibiotic stewardship multidisciplinary team dedicated to SAP.
In this study, we aimed to evaluate the impacts on SAP practices of several collaborative physician-pharmacist strategies implemented after the identification of risk factors associated with non-compliance towards updated institutional guidelines. SAP practices were thus studied each year from 2016 to 2021, knowing that the years 2020 and 2021 were affected by the Coronavirus 2019 disease (COVID-19) pandemic. A previous study carried out in our hospital revealed that antibiotics were overused during the COVID-19 pandemic in 2020 [20]. Therefore, our last audits aimed to evaluate the impact of the COVID-19 pandemic on SAP practices.
Design, setting and participants
A retrospective study was performed on a cohort of patients hospitalized in a 600-bed teaching hospital in Charleroi, Belgium. Patients were included if they were at least 18 years old and had undergone one of the following operations: hip prosthesis, coronary artery bypass grafting (CABG), colorectal surgery, transurethral resection of the prostate, or endoscopic retrograde cholangiopancreatography (ERCP). Patients were excluded if they were diagnosed as infected at the time of the surgery. Patients under 18 years old were also excluded.
Six groups of patients were thus constituted during the following time periods of 15 weeks:

• period 0: between January 11, 2016 and April 22, 2016 (baseline group)
• period 1: between January 9, 2017 and April 21, 2017 (test group)
• period 2: between January 8, 2018 and April 20, 2018 (post-test group)
• period 3: between January 7, 2019 and April 19, 2019 (post-computerized tool group)
• period 4: between January 6, 2020 and April 17, 2020 (first group of the COVID-19 period)
• period 5: between January 4, 2021 and April 16, 2021 (second group of the COVID-19 period).
The baseline group was constituted during a pre-intervention stage (period 0) without stewardship. On this baseline group, we performed an observational transversal study in order to identify risk factors associated with non-compliance towards prophylactic antibiotic guidelines.
Then, we conducted a quasi-experimental study to assess the impacts of collaborative physician-pharmacist stewardship strategies on SAP practices from 2016 to 2021. In the test period of 2017, which served as an interventional period, a full-time pharmacist provided guidance to prescribing physicians in order to optimize SAP practices. The test group was constituted during this test period (period 1). Then, the pharmacist retrospectively audited the test group in terms of SAP practices. Real-time pharmaceutical stewardship was discontinued after period 1, and to evaluate the sustained effects of stewardship, SAP practices were also audited in a cohort of patients hospitalized in 2018 after the interventional period (post-test group). The impact of the diffusion of a computerized decision support tool was then assessed by auditing SAP practices in 2019 in the post-computerized tool group. Finally, we evaluated the impact of the COVID-19 pandemic by evaluating SAP practices on two cohorts of patients operated on in 2020 (first group of the COVID-19 period) and in 2021 (second group of the COVID-19 period), respectively.
Criteria
With reference to the Belgian/Luxembourg edition of the Sanford guide to antimicrobial therapy [21] and the recommendations of several American scientific societies summarized in one report [22], the antibiotic stewardship multidisciplinary team of the hospital established institutional guidelines in 2016 which specified, for each type of intervention, the antibiotic prophylaxis regimen to use when indicated. These updated institutional guidelines were also based on the results of the antimicrobial resistance analysis within the hospital. These guidelines cover all surgical disciplines for adult patients within the hospital and promote the rational use of antibiotic prophylaxis starting in the pre-operative period of specific clean and clean-contaminated operations.
To evaluate the use of prophylactic antibiotics in the hospital, for the five interventions audited, seven parameters (Indication, Drug agent(s), Dose(s), Route of administration, Time of pre-operative dose administration, Number of administration(s), Duration of prophylaxis) were assessed against the institutional guidelines (Table 1).
Prophylactic antibiotic regimens recommended in the updated institutional guidelines for the operations included in the study are:
Audits and stewardship interventions
Collaborative physician-pharmacist combined interventions started after the 2016 baseline period and included:

(i) From November to December 2016, the central role of a pharmacist in the antibiotic stewardship multidisciplinary team for the compilation of updated guidelines, audits, feedback of audits and an educational seminar to prescribing physicians;
(ii) From January 9, 2017 to April 21, 2017 (test period), the pharmacist aiming to implement the updated guidelines by making outreach visits to practitioners and delivering pre-operatively nominative kits containing the antibiotics with a written recommendation adapted to the type of intervention and to patient factors (recommendation also made available in the electronic patient record);
(iii) In the period May-August 2017, the pharmacist making audits and feedback to the stewardship team;
(iv) A physician-pharmacist collaboration developing an internal computer-based decision tool (https://db.serv-idb.net/antibioproph) validated by the Management Group of Antibiotics and diffused in the hospital in December 2018 (use recommended but not mandatory);
(v) In the period July 2019-February 2020, the pharmacist making audits and feedback to the Management Group of Antibiotics and to prescribing physicians;
(vi) In the period September-November 2021, the pharmacist finalizing audits.
Data collection and statistical analysis
For each group of the study, data were collected from patients' medical records. Compliance to guidelines was evaluated within each group through the seven items described above. In order to identify risk factors associated with non-compliance, a retrospective observational transversal study was carried out on the baseline group. In this study, the outcome variables were the compliance rates to guidelines in terms of the seven items audited, and the independent variables were as follows: age, obesity, gender, IgE-mediated penicillin (or ciprofloxacin) allergy, multidrug-resistant organisms, ASA score > 2, length of preoperative stay, type of intervention, surgeon or gastroenterologist, anesthetist, presence of a nurse anesthetist during the intervention, duration of the intervention, and blood loss during surgery ≥ 1.5 L. A multivariate statistical analysis (Wald test) was then performed considering variables with p < 0.10 in the univariate analysis. All the tests were bilateral, and the significance level for p-values was 0.05. The overall significance of the model was determined using the χ² test at a significance level of 0.05. Interventions for which the studied outcome variable was missing were excluded (3 excluded for the compliance rate in terms of "time of pre-operative dose administration", 2 excluded for the compliance rate in terms of "duration of prophylaxis"). Interventions for which the studied independent variables were missing were also excluded (only 1 excluded, for the independent variable "ASA score > 2").

Table 1. Institutional criteria for the rational use of antibiotic prophylaxis

Indication for prophylaxis: specific interventions of clean and clean-contaminated operations where the benefit is demonstrated.

Recommended drug(s) (plus alternative drugs for patients with IgE-mediated allergy to the recommended drug(s)): antibiotics active on the bacteria presumed responsible for infections (incision site/surgical site) with the narrowest spectrum of antibacterial activity. The prophylactic regimen should cover MRSA for carriers identified before the intervention.

Dose of anti-infective agent(s): determined by integrating the individual characteristics of the patient (weight and glomerular filtration rate; in renal impairment, the first dose does not require dose adjustment but subsequent doses may need adjustment according to the glomerular filtration rate) and the antimicrobial's specific pharmacokinetic and pharmacodynamic properties.

Number of administration(s): determined by integrating the maximum duration of prophylaxis; the patient's glomerular filtration rate (in renal impairment, the first dose does not require adjustment but the number of subsequent doses may); the half-life of the drug; and the type and duration of the intervention and the volume of blood lost during the intervention.

Route of administration: intravenous route generally; oral route for antibiotics that reach equivalent tissue concentrations when given orally.

Time of administration: the most important administration is that performed before the incision. The timing of this administration depends on the infusion time (i.v. route) or the absorption time (oral route) of the drug agent: the antibiotic must be administered 15-60 min before the incision for antibiotics with rapid i.v. administration (e.g., cefazolin); earlier administration is necessary for i.v. antibiotics which must be administered over a period of ≥ 60 min (e.g., 2 h before the incision for vancomycin) or for oral antibiotics (e.g., 2 h before the operation for ciprofloxacin tablets). A dose will be re-administered intraoperatively (4 h after the initial dose for cefazolin) when the duration of the intervention from initiation of the preoperative dose is greater than twice the half-life of the drug agent. When blood loss is significant (≥ 1.5 L), an additional dose must be re-administered intraoperatively after fluid resuscitation.
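The intraoperative re-dosing rules in Table 1 reduce to two simple checks; a toy helper (ours, not the hospital's software) makes them explicit:

```python
def needs_intraoperative_redose(elapsed_h, half_life_h, blood_loss_l):
    """Apply the two re-dosing triggers from Table 1.

    Re-dose when the time since the preoperative dose exceeds twice the
    agent's half-life, or when blood loss reaches 1.5 L (the extra dose is
    then given after fluid resuscitation). Illustrative helper only.
    """
    return elapsed_h > 2 * half_life_h or blood_loss_l >= 1.5

# Cefazolin (half-life about 2 h): a 4.5-h interval triggers a re-dose,
# consistent with the 4-h re-dosing interval quoted in Table 1.
assert needs_intraoperative_redose(4.5, 2.0, 0.3)
assert not needs_intraoperative_redose(3.0, 2.0, 0.3)
```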
Odds ratios for the relationships between each independent variable and each outcome variable were then determined.
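To illustrate how such odds ratios fall out of a multivariate logistic model with per-coefficient Wald tests, here is a hypothetical sketch using statsmodels; the column names and randomly generated data are ours for illustration, not the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 130  # size of the baseline group
df = pd.DataFrame({
    "non_compliant": rng.integers(0, 2, n),       # outcome variable
    "penicillin_allergy": rng.integers(0, 2, n),  # illustrative predictors
    "asa_gt_2": rng.integers(0, 2, n),
    "colorectal_surgery": rng.integers(0, 2, n),
})
X = sm.add_constant(df[["penicillin_allergy", "asa_gt_2", "colorectal_surgery"]])
fit = sm.Logit(df["non_compliant"], X).fit(disp=0)

ci = fit.conf_int()  # columns 0 and 1 hold the CI bounds
summary = pd.DataFrame({
    "OR": np.exp(fit.params),       # odds ratios from exp(beta)
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "p_Wald": fit.pvalues,          # Wald z-test per coefficient
})
print(summary)
```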
The groups from year to year were then compared in terms of clinical and demographic characteristics and in terms of compliance to guidelines for each of the seven items audited. Data were analyzed using χ² tests for categorical data (sex, number of patients per type of intervention, number of long-duration interventions (> 3 h), number of allergic patients, and compliance to guidelines for each of the seven items audited) and t tests for continuous data (age).
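Each such year-to-year comparison of a compliance item reduces to a 2×2 contingency table. For example (the test-group split approximates the reported 97.5% compliance for the "drug agent(s)" item among 118 interventions; the baseline counts are invented purely for illustration):

```python
from scipy.stats import chi2_contingency

# Rows: baseline (2016) and test (2017) groups; columns: compliant / not.
table = [[96, 34],
         [115, 3]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
```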
Data were entered and subsequently analyzed using Microsoft Excel (version 2016; Microsoft Corporation, Redmond, WA, USA) except for the multivariate statistical analysis that was performed on the programming software R (R 3.2.3, December 2015, R Core Team). The missing data corresponding to an outcome variable were excluded from the statistical analysis (for the variable "Compliance rate in terms of time of preoperative dose administration": 3, 10, 1, 3, 4 and 5 data were excluded from the baseline group, the test group, the post-test group, the post-computerized tool group, the first group of the COVID-19 period and the second group of the COVID-19 period, respectively; for the variable "Compliance rate in terms of duration of prophylaxis": 2 data were excluded from the baseline group and 2 from the second group of the COVID-19 period).
Prophylactic antibiotic costs in the baseline and test groups were compared using a t test.
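Separately, the intraoperative redosing rule stated in the prophylaxis principles above (redose once the elapsed time since the preoperative dose exceeds twice the agent's half-life, or after blood loss ≥ 1.5 L) reduces to a simple decision procedure. The sketch below is a hypothetical illustration: the function name and the cefazolin half-life value are assumptions for the example, not part of the study protocol.

# Hypothetical helper: should an intraoperative redose be given?
# Rule from the prophylaxis principles above: redose when elapsed time
# since the preoperative dose exceeds twice the drug's half-life, or
# when blood loss reaches >= 1.5 L (after fluid resuscitation).
needs_redose <- function(elapsed_h, half_life_h, blood_loss_l) {
  elapsed_h > 2 * half_life_h || blood_loss_l >= 1.5
}

# With an assumed cefazolin half-life of about 2 h, a redose falls due
# roughly 4 h after the initial dose, matching the interval quoted above.
needs_redose(elapsed_h = 4.5, half_life_h = 2, blood_loss_l = 0.3)  # TRUE
needs_redose(elapsed_h = 2.0, half_life_h = 2, blood_loss_l = 1.6)  # TRUE (blood loss)
needs_redose(elapsed_h = 2.0, half_life_h = 2, blood_loss_l = 0.2)  # FALSE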
Results
In total, 760 interventions were recorded within the six groups described in Table 2. The groups were constituted over identical time periods to eliminate any potential seasonal influence.
Identification of risk factors of non-compliance in the baseline group (Period 0)
The baseline group included 130 interventions carried out between January 11, 2016 and April 22, 2016, as indicated in Table 2. The results of compliance to updated guidelines within the baseline group and the results of the multivariate statistical analysis are shown in Tables 3 and 4, respectively.
Improvement of SAP practices in the test group that received real-time pharmaceutical stewardship (Period 1)
Regarding the test group, 118 surgical interventions were included between January 9, 2017 and April 21, 2017. In terms of clinical and demographic characteristics, the test group was similar to the baseline (pre-test) group (Table 2). Compared with the pre-test group, compliance was significantly increased in the test group for all seven criteria audited (p < 0.05). Moreover, as requested by the BAPCOC, the items drug agent(s) (97.5%) and duration of prophylaxis (96.6%) became compliant with local guidelines in more than 90% of cases in the test group.
No economic impact on antibiotic prophylaxis comparing the baseline group with the test group
The mean prophylactic antibiotics cost (mean ± standard deviation) for the patients in the baseline group was 9.2 ± 6.8 € while it was 10.8 ± 10.9 € for the patients in the test group. The statistical analysis did not show a significant difference (p = 0.17) between the two groups in terms of prophylactic antibiotics cost.
Decrease in compliance to guidelines in the post-test group (Period 2)
Between January 8, 2018 and April 20, 2018, 124 surgical interventions were recorded to constitute the post-test group (Table 2). The comparison of antibiotic prophylaxis practices in the 2017 test group (n = 118) versus the 2018 post-test group (n = 124) revealed a significant decrease in compliance for 5 of the 7 items assessed (p < 0.05 for the items drug agent(s), dose(s), time of pre-operative dose administration, number of administration(s), and duration of prophylaxis). The rates of compliance in terms of drug agent(s) (89.5%) and duration of prophylaxis (84.7%) had thus fallen back below 90% in the post-test group.
No statistically significant impact of the computerized decision support tool in terms of compliance to guidelines (Period 3)
The post-computerized tool group was constituted between January 7, 2019 and April 19, 2019 and included 120 surgical interventions (Table 2). The comparison of antibiotic prophylaxis practices in the 2018 post-test group (n = 124) versus the 2019 post-computerized tool group (n = 120) revealed a trend toward increased compliance for 5 of the 7 items assessed (non-significant, p > 0.05), allowing a return above 90% for the two BAPCOC indicators (drug agent(s), 92.5%; duration of prophylaxis, 90%).
No obvious impact of the COVID-19 pandemic in terms of compliance to guidelines (Periods 4 and 5)
The first group of the COVID-19 period was similar to the 2019 post-computerized tool group in terms of demographic characteristics (p > 0.05), except for gender (p < 0.05); regarding clinical characteristics, statistically significant differences were observed for certain types of interventions (a decrease in the rate of transurethral resections of the prostate and an increase in the rate of colorectal procedures, p < 0.05; Table 2). Therefore, a bias, probably linked to the first wave of COVID-19 in Belgium, could not be excluded when comparing the compliance rates between these two groups of 2019 and 2020. Nevertheless, there were no statistically significant differences for any item of compliance except for the variable indication of prophylaxis (p < 0.05). The two groups of the COVID-19 period were, for their part, similar with respect to demographic and clinical characteristics and in terms of compliance rates. A similarity was also observed when comparing the demographic, clinical and compliance characteristics of the second group of the COVID-19 period with those of the 2019 post-computerized tool group (Tables 2 and 3).
Discussion
At the beginning of this work, surgical antibiotic prophylaxis practices were not in line with updated national and international guidelines. Indeed, the first audit, performed on the baseline group, revealed non-compliance with updated guidelines above 13% for six of the seven items audited. Through a multivariate statistical analysis, the present work allowed us to formulate hypotheses regarding the various risk factors associated with this non-compliance: penicillin IgE-mediated allergy, certain types of surgery (colorectal surgery, hip prosthesis surgery, transurethral resection of the prostate) and two anesthesiologists who were frequently associated with transurethral resections of the prostate. Therefore, we cannot exclude some dependence between these two practitioners and the transurethral resections of the prostate.
These data are consistent with the literature, which also identified IgE-mediated penicillin allergy and certain types of surgery, in particular urological and digestive surgery, as risk factors for non-compliance [7,23].
As reported by Muller's team for the pre-operative administrations [7], one of the least respected criteria in the baseline group was the time of pre-operative dose administration. Post-operatively, the number of administrations was frequently non-compliant, the main reason being a number of administrations higher than that recommended. Lack of education and incomplete professional rules were the main barriers associated with the risk factors identified in our study. Our former institutional guidelines, in particular, did not specify intraoperative re-administration or alternative drugs in the event of IgE-mediated allergy.
After confirming the similarity between the groups, statistical analysis indicated a significant difference in compliance between the test group and the baseline group for all audited items (Table 4). These results show a positive impact of our global stewardship strategy on compliance with the updated recommendations for surgical antibiotic prophylaxis. However, this improvement in compliance was not associated with an economic benefit, since we observed no significant difference between the baseline group and the test group in terms of prophylactic antibiotic costs.
To our knowledge, the literature does not mention studies developing a global strategy of this scope to improve SAP practices. Compared with previous reports applying more restrained strategies, our results also confirm a positive impact on SAP practices, in particular with an increase in the rational selection of antibiotics, the appropriate duration of prophylaxis, and the correct time of administration of the first preoperative dose [15,16,19,24]. Moreover, the global strategy developed herein allowed us to reach rates of compliance higher than those obtained in previous reports [15,16,19]. As described in the guidelines written by the Infectious Diseases Society of America [25], this type of strategy with a persuasive aim has certain advantages and disadvantages, which were also observed during the implementation of the action plan in our hospital. For the benefits encountered:
• Increased visibility of the stewardship program thanks to the presence of the pharmacist in the operating room and in the departments involved in the study.
• Establishment of good collegial relationships facilitating consensus (in the various departments involved in the study, presentation of the pharmacist as the referent for "antibioprophylaxis", facilitating collaboration and prescription support).
• Educational benefit and improved adherence to recommendations by prescribers (combinations of persuasive interventions accomplished during the action plan).
For the inconveniences encountered:
• Success depends on the interventional method used. In this work, the dissemination/display of paper recommendations and the delivery by the pharmacist of antibiotic prophylaxis kits incorporating a recommendation sheet adapted to the patient were highly effective and largely contributed to the success of the practice improvement.
• Intensive work and determination are required, especially for the delivery of the kits early in the morning for the very first surgeries and for constant monitoring of an operating schedule that can change from hour to hour.
• Practitioners who may be resistant to change need to be convinced.
After this test phase, we evaluated the sustained effect of the collaborative physician–pharmacist stewardship implemented. The results revealed that, without active stewardship, there was a significant decrease in compliance to guidelines for 5 of the 7 items assessed (p < 0.05, comparing the test group with the post-test group). Despite this, for all items assessed, the rates of compliance in the post-test group were still higher than those measured in the baseline group.
The impact of the computerized decision support tool was non-significant, with similar compliance to updated guidelines between the post-test group and the post-computerized tool group (p > 0.05). There was, however, a slight increase in compliance for 5 of the 7 items assessed (p > 0.05), allowing a return above 90% in terms of compliance rate for the two BAPCOC indicators studied. In our hospital, the computerized decision support tool we implemented also has certain advantages and disadvantages.
For the benefits encountered:
• The tool integrates the guidelines updated and validated by the different actors of antibiotic prophylaxis and allows specific patient criteria to be taken into account.
• The tool allows rapid and efficient decision-making, adapted to the patient's parameters and in compliance with guidelines.
• Recommendations are accessible via a computer link (also from outside the hospital).
• The stewardship strategy is less labor-intensive.
• The tool can sensitize the teams to the importance of antibiotic prophylaxis.
For the inconveniences encountered:
• No connection with the computerized patient record (requiring manual encoding by practitioners).
• Absence of a reminder recalling the pre-operative administration of antibiotics.
• Underuse by practitioners (use not mandatory).
The three points above would need to be developed and implemented in order to positively impact SAP practices. As reported in the literature [16,26–28], computer-based support for clinical decision-making and prescription seems to be a useful tool for surgical antibiotic prophylaxis, but it should be accompanied by direct, regular educational measures.
On 11 March 2020, the Belgian hospital in which this study was conducted admitted for the first time a patient with a positive SARS-CoV-2 reverse transcriptase polymerase chain reaction (RT-PCR) test [20]. The Hospital & Transport Surge Capacity committee in Belgium announced that hospitals had to stop all consultations, examinations and interventions planned from 14 March 2020. On April 30, 2020, the committee communicated guidelines to hospitals for a gradual resumption of regular care [29]. The first group of the COVID-19 period was therefore affected in terms of the distribution of the different interventions studied, with a particular decrease in the rate of transurethral resections of the prostate. In this 2020 group, only compliance rates in terms of indication were significantly decreased compared to the 2019 group. The two groups of the COVID-19 period were, for their part, similar with respect to all demographic, clinical and compliance characteristics. In order to exclude a possible bias linked to the redistribution of interventions during the 2020 first wave of COVID-19, we also compared data collected for the second group of the COVID-19 period with those of the 2019 group. This comparison revealed no statistically significant differences in terms of demographic, clinical and compliance characteristics, confirming that the COVID-19 pandemic did not affect the compliance of SAP practices with institutional guidelines. Out of 147 RT-PCR tests carried out before hospitalization within the second group of the COVID-19 period, 5 were found to be positive, with no obvious impact on compliance (data not shown).
The stewardship carried out in this work appears to be well suited to the objective of renewing antibiotic prophylaxis practices. However, further improvement measures are needed for a long-term effect, in particular: pre-operative computerized and automated prescription of SAP based on computerized patient data; delivery by the pharmacy of nominative SAP kits based on the physician's computerized prescription; and repetition of active interventions and audits in order to maintain the awareness of practitioners, particularly in a university hospital with a high turnover of doctors in training. These quality improvement initiatives require the dedication of specific personnel at all decision-making levels and the release of time dedicated to the maintenance and continuous improvement of quality. The support of the members of the hospital management is therefore essential.
The study we developed presents a series of limitations commonly encountered in observational and quasi-experimental studies. On the one hand, confounding bias linked to the change of practitioners from year to year cannot be ruled out in the quasi-experimental study. Also, in the observational study, some uninvestigated factors could influence non-compliance. For practical, economic and swiftness reasons, convenience sampling was selected; the absence of randomization may therefore introduce a selection bias. Identical seasonal periods and the same length of time were chosen in order to limit the occurrence of this type of bias. On the other hand, the study could not be blinded; practitioners were aware of participating in a study, so biases such as the Hawthorne effect could appear [30]. Also, our data collection procedure resulted in the absence of some data. Collecting a full set of data prospectively would have required such a large amount of work that the IT solution used for retrospective analysis was clearly necessary. The inclusion of several services in this study aids the generalizability of the results, but the limited time frame and monocentric design of the study significantly reduce the number of patients within each category, thereby limiting the prediction power. The limited number of patients within each group and the lack of accessibility of patient health data after discharge from hospital did not allow us to evaluate the evolution of clinical or microbiological outcomes, such as the incidence of surgical site infections or infections with antimicrobial-resistant bacteria. Despite this, the compliance with SAP recommendations that we measured in this work represents an undeniable quality indicator in the prevention process.
Conclusions
Rational use of SAP requires a long-term proactive, collaborative and common approach including SAP prescribers and a multidisciplinary antibiotic stewardship team. Indeed, this study shows that, among all the stewardship strategies implemented to positively impact SAP practices, the most effective strategy clearly appears to be real-time pharmaceutical interventions in the operating area and in the care units concerned by antibiotic prophylaxis. The discontinuation of these active interventions, however, results in a slight decrease in compliance.
Malocclusion and Scoliosis: Is There a Correlation?
Introduction: Scoliosis is a complex three-dimensional malformation of the spine. Although its etiology is still being investigated, it is clear that a number of factors can influence this syndrome. From an etiopathogenetic perspective, the spinal deformity of idiopathic scoliosis can be viewed as a symptom of a complicated condition with a multifactorial etiology. Numerous studies have established its relationship with malocclusion, but it is still unclear how these factors interact. Malocclusion is a change in the physiological alignment of the upper and lower teeth that can be either dental or skeletal in origin. The objective of this study is to assess the relationship between scoliosis and malocclusion. Material and Methods: A total of 646 patients were enrolled (554 females and 92 males), 447 with scoliosis and 199 without, selected in private dental and orthopedic practices where they had undergone dental and orthopedic examinations, to answer an anonymous questionnaire. Twenty-two patients were excluded because of a lack of answers. Participants were given a bilingual survey, in English and Italian, composed of 13 questions formulated specifically for this study, using Google Forms (Google LLC, Mountain View, CA, USA). Results: Univariate analysis of the question "Do you have scoliosis?" showed a significant correlation with the following questions: "Was scoliosis a family issue?" (p < 0.05, OR: 7.30, CI: 3.05–17.46), "Do you have malocclusion?" (p < 0.05, OR: 1.19, CI: 1.0–1.34) and "Was malocclusion a family issue?" (p < 0.01, OR: 1.39, CI: 1.10–1.77). In a multivariate analysis of the same variables, the best predictors of scoliosis were "Was scoliosis a family issue?" (p < 0.001) and "Was malocclusion a family issue?" (p < 0.05), while the question "Do you have malocclusion?" lost significance. Conclusion: This study adds further confirmation that there might be an important connection between malocclusion and scoliosis; it suggests that dentists and orthopedists should check, as early as possible, for the probable presence of both pathologies to avoid a severe progression which, in most cases, may require significant therapy and even surgery.
Introduction
Idiopathic scoliosis is a deformity of the spine that primarily affects previously healthy children, predominantly girls, during a growth spurt (Weinstein et al.) [1]. The evidence that idiopathic scoliosis is a complex three-dimensional deformity of the spine, rather than a simple lateral curvature, has been well described in the following studies: Roaf [2]; Pedriolle and Vidal [3]; Pedriolle, Becchetti, Vidal and Lopez [4]; Deacon, Flood and Dickson [5]; Dickson [6]; Stagnara [7]; and Pedriolle [4]. Those suffering from scoliotic deformities with typical vertebral rotation in the thoracic and lumbar spine showed a significant decrease in thoracic kyphosis and an increase in lumbar lordosis (Inoue et al.) [8]. Lateral views show that the displaced segments of the spine are always in extension, even when kypho-scoliosis is present (Perdriolle et al.) [9].
Recent studies also focus on the significance of the scoliosis component in the sagittal plane and its role in the pathogenetic evolution of idiopathic scoliosis (Schlosser et al.) [10]. Furthermore, morphological changes in the scoliotic vertebrae appear to be related to the sagittal spinal profile in adolescents with idiopathic scoliosis (Pasha et al.) [11].
The main diagnostic criterion is a coronal curvature exceeding 10 degrees on an anterior–posterior X-ray. The severity of scoliosis is expressed by the Cobb angle. This condition has been divided into three types: infantile (presenting from birth to 3 years), juvenile (presenting from 3 to 10 years) and adolescent (presenting from 10 years to skeletal maturity) (Pedriolle et al.) [12].
However, structural scoliosis can be seen with a Cobb angle under 10° (Xiong et al.) [13], with a potential for progression. Progression is more common in girls during the growth spurt at puberty, and such cases are referred to as progressive idiopathic scoliosis (Negrini et al.) [14].
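For illustration, these diagnostic conventions (a coronal Cobb angle above 10 degrees, with age-based subtypes) can be written down as a small R helper. The thresholds come directly from the text above; the function itself is a hypothetical sketch, not a clinical tool.

# Hypothetical helper encoding the diagnostic conventions described above.
classify_scoliosis <- function(cobb_deg, age_years) {
  if (cobb_deg <= 10) return("below the usual diagnostic threshold")
  if (age_years < 3) "infantile"
  else if (age_years < 10) "juvenile"
  else "adolescent"  # from 10 years to skeletal maturity
}

classify_scoliosis(cobb_deg = 25, age_years = 12)  # "adolescent"
classify_scoliosis(cobb_deg = 8,  age_years = 12)  # below threshold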
By definition, idiopathic scoliosis is of unknown etiology: clinical history, clinical analysis and radiological examinations do not provide clear evidence for any specific origin (Machida et al.) [15]. From an etiopathogenetic point of view, therefore, the spinal deformity caused by idiopathic scoliosis may be defined as a sign of a syndrome with a multifactorial etiology (Negrini et al.) [14], confirming what was already described by Brooks et al. in 1975 [16].
If progressive scoliosis remains untreated, it can create several problems, even life-threatening ones, by leading to pulmonary conditions, chronic pain and drastic changes in quality of life [14].
The link between scoliosis and dental malocclusion is still controversial. Several articles have studied this association (Laskowska et al. [17]; Lippold et al. [18]; Saccucci et al. [19]), but there is not enough evidence about the correlation and the etiology (Perez Belloso et al.) [20]. Malocclusion is an abnormal relationship between the teeth of the upper and lower arches (dental) and, in some situations, between the jaws (skeletal). A malocclusion frequently appears during the growth period, especially in Western societies: therefore, a dental patient can simultaneously be an orthopedic patient as well. Sagittal malocclusion is usually defined by Angle's class one (I), class two (II) with divisions 1 and 2, and class three (III). From a skeletal point of view, Angle's class II is characterized by a mandible that is posteriorly located (retrusion) or poorly developed. A dental class II can be divided into two types: division 1 and division 2. Division 1 is when the upper incisors are tilted outwards, creating significant overjet, or when there is a significant sagittal distance between the upper and the lower incisors. Division 2 is characterized by what is called a "deep bite", an excessive vertical overlap of the maxillary central incisors over the mandibular central incisors. A dental class III is characterized by the mandible and lower teeth being in an advanced position compared to the maxillary teeth and can be accompanied by a cross-bite that can be anterior or lateral [21]. In other words, the palatal dental arch always needs to be "larger" than the mandibular dental arch. This proportion is completely inverted in a class III, and when this inversion is partial, monolateral or bilateral but not frontal, it gives rise to cross-bites. A monolateral cross-bite determines, in almost all cases, a dysfunctional movement of the jaw, as the teeth are not properly aligned to slide past each other and to protect each other, resulting in stifled growth and movement of the crossed teeth. Some articles have investigated which malocclusion is more likely linked to scoliosis. It appears that asymmetrical malocclusions could favor or be favored by scoliosis, although the direction of the influence (ascending or descending) is still not clear. It has been reported [22] that a cross-bite, and in particular the monolateral type, is the type of malocclusion most connected to scoliosis. An orthodontist should be able to diagnose malocclusion and correct it, especially while the patient is in the growing phase [23].
Furthermore, according to some authors, it seems that transversal malocclusions are the malocclusions most related to scoliosis and to its worsening. Moreover, a few authors suggest that a deep bite and other vertical abnormalities appear in patients with various spinal pathologies; not just scoliosis, but also in those who have a pelvic tilt and pelvic torsion. Other studies report that a deviated midline and asymmetries of the mandible are tied to the severity of the scoliosis, but it is not clear which is the main etiological factor, or even whether there is a common one that causes both illnesses [24]. Some studies have reported that a class II malocclusion occurs in scoliosis patients more often than in patients with a healthy spine curvature [24,25].
This study analyzes whether there is a link between temporomandibular disorders, scoliosis and malocclusion. Furthermore, the questionnaire asked whether the patients had relatives with scoliosis, to explore a possible genetic predisposition. The patients were asked if they had any previous orthodontic treatment, to see whether there is any biological or timing connection between the pathologies and orthodontics. Moreover, patients were asked whether they had been informed about the possible and probable connection between occlusion, spinal posture and growth. The purpose of this article is to analyze a significant number of patients with and without scoliosis and malocclusion to identify any possible associations between scoliosis and malocclusion, as reported by those patients who filled out the questionnaire.
Material and Methods
A total of 646 patients were enrolled (554 females and 92 males), 447 with scoliosis and 199 without. They were selected in private dental and orthopedic practices where they had undergone dental and orthopedic examinations. Twenty-two patients were excluded because of a lack of answers. Patients were given a bilingual survey, in English and Italian, composed of 13 questions formulated specifically for this study, using Google Forms (Google LLC, Mountain View, CA, USA) and accessible online (Table 1). The questionnaire was shared online through various emails. All collected data were anonymized and identified only by an ID and a time stamp. No reminders were transmitted to patients, to help them feel free to answer. It was specified that the purpose of the questionnaire was to find ways for clinicians to improve their skills in treating patients. It was ensured that each patient provided one answer by controlling the timing and the different kinds of responses. The patients were asked to complete the questionnaire without any possible compensation or benefit in return. The questionnaire was compiled specifically for this study and, due to the contingency of the COVID-19 pandemic waves, pre-testing was not a possible option. All participants signed informed consent and accepted the privacy policy for the protection of their personal data before completing the survey. No personal information that could identify the individuals was collected, and the data were analyzed in aggregate form only. All data points are expressed as absolute frequency (percentage). Dichotomic correlations of data were assessed using the Fisher exact test, while age groups were compared using the Mann–Whitney U test and logistic regression. Considering the worldwide prevalence of malocclusion of 56%, we anticipated a minimum difference of 14% in prevalence, with an alpha error of 0.05 and a beta of 0.2; we thus calculated the sample size for dichotomic variables and established 186 responders per group [26].
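The sample-size computation reported above can be checked with base R's power.prop.test. The sketch below assumes, as stated, a 56% baseline prevalence, a 14-percentage-point minimum detectable difference, a two-sided alpha of 0.05 and a power of 0.8; the Fisher test call uses purely illustrative counts, not the study data.

# Two-group sample size for a difference in proportions (56% vs 70%),
# alpha = 0.05 (two-sided), power = 0.8: n comes out at approximately
# 186 responders per group, matching the figure reported above.
power.prop.test(p1 = 0.56, p2 = 0.56 + 0.14, sig.level = 0.05, power = 0.80)

# Fisher exact test on a hypothetical 2x2 table (illustrative counts only).
fisher.test(matrix(c(180, 90, 80, 110), nrow = 2,
                   dimnames = list(scoliosis    = c("yes", "no"),
                                   malocclusion = c("yes", "no"))))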
Results
A total of 646 patients responded to the survey, but 22 of them were excluded for missing more than four responses. The questions and results of the survey are compiled in Table 2. In Table 3, univariate analysis of the question "Do you have scoliosis?" shows a significant correlation with the following questions: "Was scoliosis a family issue?" (p < 0.05, OR: 7.30, CI: 3.05–17.46), "Do you have malocclusion?" (p < 0.05, OR: 1.19, CI: 1.0–1.34) and "Was malocclusion a family issue?" (p < 0.01, OR: 1.39, CI: 1.10–1.77). Performing a multivariate analysis for the same variables, the best predictors of scoliosis were "Was scoliosis a family issue?" (p < 0.001) and "Was malocclusion a family issue?" (p < 0.05), while the question "Do you have malocclusion?" lost significance. A univariate analysis was also performed considering the questions "Did your orthopedist inform you about the possibility that scoliosis/posture and malocclusion can influence each other?" and "Did your dentist inform you about the possibility that malocclusion could influence the spine and your posture?", correlating them with current age, age at diagnosis and the following questions: "Was malocclusion a family issue?", "Did you suffer from TMJ (mandibular) pain?", "Did you suffer from TMJ pain?" and "Did your scoliosis appear before, after or while the orthodontic therapy?", but we found no correlations.
For the question "What kind of malocclusion do you have?", the answer "Deviated mandible" showed a higher prevalence in patients who reported having scoliosis (p < 0.05, OR: 2.67, CI: 1.01–7.67) compared to those who did not. However, during the analysis of the answers, this question was deemed to have been too confusing in its formulation and was dropped from the final results.
Discussion
Scoliosis is still a disabling disease today, and if not diagnosed early it can lead to serious complications. Early screening for scoliosis is desirable and spares patients longer and more complex treatments and spinal surgery. There are still only a few peer-reviewed studies linking scoliosis to malocclusion. Huggare et al. [22] and Lippold et al. [18] reported the relationship between idiopathic scoliosis and facial asymmetry or malocclusions with a transverse discrepancy, such as cross-bites. A study by Saccucci et al. [19] reported a higher incidence of malocclusions in individuals with scoliosis compared with a group of healthy subjects. According to Laskowska et al. [17], the incidence of malocclusions is greater in children with idiopathic scoliosis than in healthy ones. This result is in accordance with the results of this study. However, according to Langella et al. [27], there is evidence from low-quality studies suggesting an increased prevalence of occlusal dysfunction in patients with known spinal deformity, but the conclusions have a high risk of bias. No evidence of beneficial effects of orthodontic treatment on spinal deformity was found. Lippold et al. [18] reported a predisposition to cross-bites in scoliotic individuals. It was interesting to note that the three most reported types of malocclusion in our study were generic crowded teeth (31%), followed by deep bite (21%) and then overjet (14%), confirming some data in the literature, but cross-bites were mentioned by only 9% of the responders. Unfortunately, this connection is currently not known by most orthodontists, orthopedists and doctors who treat these two kinds of pathologies, although clinicians treating postural problems, myofascial therapists, cranio-osteopathic physicians and those who have some training in these issues have been clinically aware of this link for decades. Does this lack of awareness have an explanation? The mouth and the spine seem to be two distant systems but, in a clinical setting, they are more intertwined than one might think. For example, a recent study [28] focused on the role played by the temporomandibular joint and dental occlusion on the balance of the mother's body and on the muscular forces involved during childbirth labor.
It is possible to believe that the study of the relationship between occlusion and spine, and between occlusion and body posture, should be encouraged, because the topic has great clinical relevance among various health professionals, as it could improve the quality of care for patients with scoliosis and/or malocclusion [28].
The role of the tongue and swallowing also plays a crucial part in the etiopathogenesis of malocclusion. The tongue, along with correct swallowing, shapes the palate and dental arches and, at the neurological level, during swallowing, activates widely distributed receptors of the cortical and subcortical areas [24,25,28]. A lower habitual posture of the tongue and the consequent narrow palate can lead to respiratory problems impacting physiological nasal breathing. To compensate for these respiratory problems, the patient changes their head and neck position. The tongue posture is also very important because the tongue is connected to the hyoid bone, which is itself connected to various cervical muscles. This can, according to different studies, determine a change in body posture [23], and it is therefore important for the orthodontist to correct the tongue position to help patients optimize breathing and posture. Tongue position can even be altered by a restricted frenulum [24]. In these cases, the first approach may be either orofacial myofunctional therapy or surgery (tongue-tie release), or a combination of the two (surgery preceded and followed by orofacial myofunctional therapy). These minor treatments can be of substantial benefit to scoliotic patients.
One of the first steps in helping to reduce or slow the progression of scoliosis and malocclusion is to educate patients and professionals about this specific correlation, in order to reach an early and correct diagnosis. According to the answers to this questionnaire, people are not aware of the connection between scoliosis and malocclusion, but neither are the orthodontists or the specialists in charge of managing the scoliosis, as 42% of responders mentioned having scoliosis before the orthodontic treatment, 83% responded that the orthopedist did not mention connections between scoliosis and malocclusion, and a similar percentage (86%) mentioned that the orthodontist did not mention any connection between orthodontic treatment and scoliosis. Therefore, it is very important to educate patients with scoliosis about the need to consult an orthodontist, and to educate patients with significant malocclusion about the need to see a spine specialist, because it is probable that the same patient will present both issues. An early diagnosis is very important for both problems because it would help avoid the invasive surgical therapies often used to treat scoliosis or severe skeletal malocclusions. It appears from the results of our questionnaire not only that there is a link between malocclusion and scoliosis, but also that patients with scoliosis have a higher possibility of having temporomandibular/orofacial pain disorders as well, as 43% of the responders indicated they did have TMJD/orofacial pain. Moreover, it is useful for patients with TMJD/orofacial pain problems to receive an assessment of their posture and spinal condition along with their dental occlusion. Conversely, it would be very helpful for medical doctors who treat TMJ/orofacial pain disorders to assess spinal posture, which can contribute to and increase the severity of symptoms, because of all the muscular connections between the cervical spine and the temporomandibular area.
For this reason, occlusions like the unilateral cross-bite or the asymmetrical class II should be promptly treated in patients with scoliosis, to avoid worsening of the spinal curvature. The results of this questionnaire suggest that both conditions are present in other family members, even considering the limited awareness the patients might have had about the health history of their family members. This awareness of family history can be helpful for both diagnosis and prevention because, if scoliosis or malocclusions are present in several members of the family, a patient should be advised to be more proactive in order to avoid or minimize the onset of one or both conditions. Since scoliosis is not always evident and is asymptomatic in the beginning, it is prudent to check the spinal posture and alignment before the beginning of any orthodontic treatment, as parents and relatives sometimes mistakenly think that orthodontic therapy can cause scoliosis.
The current study investigated whether patients with scoliosis also presented malocclusion in significant numbers, but future studies will be needed to establish a more specific relationship between the two disorders, or even a causal one, as a cause–effect relationship is not currently defined. In order to establish this relationship, tighter collaboration between orthodontists and orthopedists is needed to gather relevant data.
It makes sense for orthodontists to spearhead this collaboration in search of the specific relationships between malocclusion and scoliosis, because they are the ones who are more likely to see young children. An orthodontist works with children to take advantage of the growth and development spurts. If an orthodontist has easy protocols and tools available to assess scoliosis, then its damaging effects may be prevented or limited.
Conversely, if the orthopedist is aware of the connections between scoliosis and malocclusion, then the two professionals may be able to work in tandem, as both are aware of part of the situation. Cross-education and using common assessment tools would allow them to better serve the young patient, who could be diagnosed and treated in tandem, as opposed to sequentially. Additional professionals may be involved as needed, such as a physical therapist or a posturologist, working during the orthodontic treatment.
One possible explanation of why this collaboration between orthodontists and orthopedists is still not happening is that scoliosis may be difficult to detect and diagnose in its early stages, and postural instruments and tools may be expensive and not widely available, so there is still a need for an inexpensive solution. Currently, the diagnosis of scoliosis (the Cobb angle) requires X-rays, which are controversial and therefore not advised as a first approach.
Ideally, at the very least, there is a need for a common/reciprocal way to assess scoliosis and malocclusion. A proposal for an easy-to-use, multidisciplinary protocol for the assessment of both malocclusion and scoliosis could be the Adam Forward Bending Test (Adobor et al., 2011) [29], a simple and inexpensive method that, although not infallible, is the most used test in scoliosis research worldwide (Komang-Agung, Dwi-Purnomo, and Susilowati, 2017; Gashaw, Janakiraman and Belay, 2021) [30–32], and has been for decades (Wang, Ye & Wu, 1996) [32].
It is reasonable to conduct further studies on the possible importance of the Adam Forward Bending Test as a diagnostic tool for orthodontists. It could help those professionals detect the signs of scoliosis and make the proper referral, while familiarity with Angle's classification of malocclusion on the part of orthopedists could be helpful to the dual approach to these disorders in children and may establish if, and which, one leads to the other.
Overall, it is right to mention some limitations of this study. By its very nature, a questionnaire involves personal perceptions, personal knowledge and opinions on a certain subject, and some people might not have been aware of the significance of some questions, or of the intended meaning of some answers. Moreover, another limitation of the questionnaire was that, among the types of malocclusion listed, the answer "Other" probably replaced the answer "Class III", which was missing among the options. The question "Is scoliosis a family issue?" might have been difficult to understand as well, especially for young respondents.
Conclusions
Considering all the findings of this study and the limitations of a questionnaire, it is still possible to reaffirm the correlation between scoliosis and oral malocclusion. This study suggests the necessity of assessing the spinal condition in patients with a diagnosed malocclusion, as well as checking for certain types of malocclusion in patients with a possible or confirmed presence of scoliosis. This study, which included a significant number of patients, offers an important contribution to the research on this pathological link between two systems that seem distant and disconnected but are more intertwined than was previously assumed.
It seems clear that, since orthodontists and orthopedists are complementary and neither has all the answers, there is a current necessity to include malocclusion and scoliosis assessments in each other's evaluation or assessment protocols, as the collaboration between these professionals is currently missing, to the detriment of the patients' health.
Benign cartilaginous tumors of the hand, a five-year retrospective study
Benign and malignant cartilaginous bone tumors of the hand are rare findings; however, they represent a particular pathology due to their capacity to induce significant functional impairment. Even though a large proportion of tumors of the hand and wrist are benign, these may present destructive characteristics, deforming adjacent structures to the point of compromising function. The most appropriate surgical approach for most benign tumors is intralesional resection. Malignant tumors often require wide excision, up to segment amputation, to obtain tumor control. A five-year retrospective study was performed on patients admitted to our Clinic with benign cartilaginous tumors of the hand: 15 patients were admitted within this period, 10 presenting with enchondroma, four with osteochondroma, and one with chondromatosis. After clinical and imaging evaluation, all the aforementioned tumors were surgically removed. Definitive diagnosis for all bone tumors, either benign or malignant, was established by tissue biopsy and histopathological examination, dictating the therapeutic strategy.
Introduction
Benign cartilaginous bone tumors represent a broad spectrum of entities, including enchondroma, osteochondroma, subungual exostosis, periosteal osteochondroma, chondromyxoid fibroma, periosteal chondroma, chondroblastoma not otherwise specified (NOS) and osteochondromyxoma, as defined by the World Health Organization (WHO) [1,2].
Intermediate or locally aggressive tumors include chondromatosis NOS and atypical cartilaginous tumor. Biopsies of malignant lesions can be falsely negative owing to the sampling procedure, due to the heterogeneity of malignant and benign areas within the same tumor [3,4].
Benign cartilage tumors are frequent in the young population, with a peak occurrence during the 2nd and 3rd decades of life [3]. Osteochondroma and enchondroma are most commonly diagnosed as solitary lesions; however, they can occasionally be present in numerous skeletal regions pertaining to genetic conditions: Ollier disease, multiple osteochondroma, multiple hereditary exostosis (MHE), and Maffucci syndrome [5–7].
Chondrosarcoma is the malignant cartilaginous tumor of bone. It either develops de novo or arises from malignant transformation of pre-existing benign cartilaginous bone tumors. There are four histological chondrosarcoma entities: periosteal, dedifferentiated, mesenchymal, and clear-cell chondrosarcoma [1,12].
Correct preoperative diagnosis is essential for bone tumors, based on clinical findings and mainly on imaging techniques. Radiography remains the most important imaging tool for the diagnosis of bone tumors, allowing characterization of localization and consistency and evaluation of aggressive features [13]. Ultrasound (US) is not accurate in the diagnosis of bone lesions, being more useful in the evaluation of soft tissue masses; however, US can detect tumor extension into soft tissue and the tumor's relationship with surrounding structures, and it is also useful for performing guided biopsies [13,14]. Computed tomography (CT) allows detailed visualization of bone structure and periosteal reaction and detects tumor calcifications, being a useful tool in the differential diagnosis of these tumors [13,15]. Currently, after radiographic evaluation, magnetic resonance imaging (MRI) represents the main test used to evaluate musculoskeletal tumors, including chondroid tumors, being able to define tumor characteristics, local extension and vascularization in detail, and being the preferred method for patient follow-up [13,16]. Figure 1 (A–F) displays the imaging findings in a female patient with an enchondroma of the 4th metacarpal of the right hand.
Figure 1 – Imaging findings in a 42-year-old female patient with an enchondroma of the 4th metacarpal in the right hand: (A) plain frontal view on radiography; (B) longitudinal view of the metacarpal enchondroma; (C) frontal view of the 4th metacarpal enchondroma; (D) sagittal view of the 4th metacarpal enchondroma; (E and F) magnetic resonance imaging – frontal 3D reconstruction of the 4th metacarpal enchondroma.
Definitive diagnosis for all bone tumors, either benign or malignant, was established by tissue biopsy and histopathological (HP) examination, dictating therapeutic strategy [17].
Surgical removal using various techniques is the treatment of choice for bone tumors, although some benign asymptomatic tumors can be kept under observation with regular follow-up. Bone defects resulting after tumor resection may require reconstruction using bone grafts, allografts, prostheses, or alloplastic materials [7,17–19].
Aim
The aim of this retrospective study was to describe the clinical, imaging and pathological findings of benign cartilaginous tumors of the hand, as well as the current therapeutic options, adapted to the particularities of each case. The analysis focused on these benign cartilaginous tumors of the hand, which are rarely encountered but represent a particular pathology due to their capacity to induce significant functional impairment.
Patients, Materials and Methods
A five-year retrospective cross-sectional study was performed on patients admitted with benign cartilaginous tumors to the Emergency Clinical Hospital, Bucharest, Romania, between 2015 and 2019. The inclusion criterion was benign cartilaginous tumors of the hand and wrist. Exclusion criteria were tumors outside of the aforementioned upper limb areas, malignant character, and non-cartilaginous etiology of the tumors. Both written and verbal consent were obtained from each patient at the time of admission. Fifteen patients, aged between 18 and 69 years (nine females and six males), were admitted to the Department of Plastic Surgery, presenting with symptoms of nerve compression syndromes, pathological fractures, and motor deficiency. All patients were evaluated through radiological imaging investigations, and all the aforementioned tumors were sent for HP examination. Data were gathered from the Hospital's physical and digital records archives pertaining to the Departments of Plastic Surgery, Imaging and Histopathology. The results were analyzed using Microsoft Excel and were compared to the existing literature, as drawn from the PubMed database.
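The authors report using Microsoft Excel for this descriptive analysis; purely as an alternative illustration, the same tabulation can be sketched in R using the case counts given in the Results below (the vector is a hypothetical reconstruction of the series, not the raw dataset).

# Hypothetical reconstruction of the case series from the reported counts:
# 10 enchondromas, 4 osteochondromas, 1 chondromatosis.
tumors <- rep(c("enchondroma", "osteochondroma", "chondromatosis"),
              times = c(10, 4, 1))
table(tumors)                                  # absolute frequencies
round(100 * prop.table(table(tumors)), 1)      # percentages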
Results
In the presented study, 15 cases of benign cartilaginous tumoral masses were documented, which exhibited a variety of symptoms, such as nerve compression syndromes, local pain, loss of motor function, pathological fractures, deformities, and poor quality of life. There were 10 cases of enchondroma (one of them being an Ollier disease), four cases of osteochondroma and one case of chondromatosis (Table 2). All the aforementioned tumors were surgically removed. Of the 10 enchondroma cases in this study, with a median age of 40 years, four were localized on the metacarpals, two on the proximal phalanges alone, one on the middle phalanx, one on the distal phalanx, one on both the proximal phalanx and the metacarpal bone, and one on both the proximal and middle phalanges (Ollier disease). Four of them occurred in the 5th finger, three in the 2nd finger, two in the 3rd finger, and the patient with Ollier disease in both the 4th and 5th finger rays (Figure 2, A and B). Seven of the 10 cases of enchondroma required surgical resection followed by bone reconstruction with bone grafts harvested from either the iliac crest or the tibial diaphysis; only one patient required resection alone.
Figure 3 – 45-year-old male patient with an enchondroma of the middle phalanx: (A) oblique radiographic view of the middle phalanx enchondroma of the 3rd finger; (B) frontal radiographic view of the middle phalanx of the 3rd finger; (C) macroscopic view of the middle phalanx enchondroma of the 3rd finger; (D) bone defect after resection of the enchondroma; (E and F) tibial bone graft harvest; (G) bone defect covered with tibial bone graft; (H) final appearance after bone grafting of the defect.
HP results revealed lobulated cartilaginous tissue with uneven chondrocytes, some binucleations, areas of hypercellularity, focal myxoid areas and enchondral ossification. The cartilaginous lobules are partially covered by a thin layer of fibrous tissue (Figures 4 and 5).
Among the four cases of osteochondroma, one presented with compression of the digital nerve caused by an osteochondroma of the 1st metacarpal, the second had a tumor localized in the distal phalanx, the third presented with an osteochondroma of the radius at the radiocarpal joint, and the last patient had a tumor in the proximal phalanx of her 5th finger. All of them were surgically removed through curettage or excision alone, without the need for bone grafting. Osteochondromas are composed of a mature hyaline cartilage cap, without significant cellular atypia, and a bony stalk. The mature cancellous bone of the stalk is contiguous with that of the native bone (Figures 6 and 7).
Lastly, only one patient had chondromatosis, presenting with tumor relapse in multiple locations of both hands and metacarpals, for which simple tumor reduction was performed, as seen on the radiography in Figure 8. The HP result revealed nodules of mature cartilage without significant atypia, with focal points of enchondral ossification, suggestive of chondromatosis (Figure 9).
Discussion
Compared to other locations, bone tumors have a low incidence in the hand, approximately 6%, most of them being benign rather than malignant, but they generate significant symptoms, such as pain and swelling, with either acute or insidious onset. Functional loss is generated by aggressive local destruction and deformation [17,19].
The diagnosis of such tumors begins with the medical history and physical examination. Inspection and examination of the hand should evaluate both surfaces of the hand for proper finger alignment, correct passive and active range of motion, the structure of soft tissues, local deformities and skin modifications, and should include a neurological assessment of the territories of the radial, median and ulnar nerves [17].
Paraclinical investigations should be initiated with radiography of both hands. Benign lesions often present congruent borders, without cortical interruption, but with a degree of expansion. Malignant lesions present interrupted or poorly defined borders, abnormal loss of bone and invasion of adjacent soft tissues. A more detailed characterization of the lesion can be achieved through CT. In case of suspected soft tissue invasion, MRI should be performed to appreciate the extension. Lastly, biopsy of the lesion is the diagnostic procedure, which should be performed carefully so as not to spread possibly malignant cells [16,17,20,21].
The adequate treatment option depends on the estimated loss of form and function, the dimensions, tumor control and recurrence rates. Benign lesions often need curettage or excision and reconstruction. Wide resection or amputation are the most common options for malignant tumors [17,20,21].
Enchondroma
Enchondromas are found with an incidence of 3% among bone tumors, and up to 13% among benign bone tumors. They are the most frequent primary osseous tumors of the hand, accounting for nearly 90% of hand tumors. Such lesions occur most commonly in the 3rd and 4th decades of life [7,17,22,23]. The median age at diagnosis in our study was 40 years, with a wide age range between 18 and 69 years.
Enchondromas are benign cartilaginous tumors composed of mature hyaline cartilage, specifically reported to localize within the ulnar-sided phalanges and metacarpals of the hand. The most commonly afflicted area is the proximal phalanges, followed by the middle phalanges, then the metacarpals, and rarely the distal phalanges [7,17,22,24]. In our study, regarding enchondroma distribution, four were localized on the metacarpals, two on the proximal phalanges alone, one on the middle phalanx, one on the distal phalanx, one on both the proximal phalanx and the metacarpal bone, and one on both the proximal and middle phalanges (Ollier disease). Four of them occurred in the 5th finger, three in the 2nd finger, two in the 3rd finger, and the patient with Ollier disease in both the 4th and 5th finger rays, confirming a predilection for ulnar-side occurrence.
The majority of enchondromas are asymptomatic tumors that develop slowly, and incidental radiological imaging can frequently establish the diagnosis. Pain, swelling and deformity are the most common presenting signs of enchondromas. In some cases, however, pain after modest trauma leading to a pathological fracture may raise suspicion and facilitate the diagnosis of enchondroma [7,19].
Enchondroma is distinguished by a solitary, well-defined radiolucent lesion with lobulated contour in the central metaphysis. Calcifications with increased radiodensity can be seen in the center of this lesion. CT is effective for detecting "arcs and rings" patterns due to enhanced chondroid mineral deposits. MRI is useful in establishing tumor aggressiveness and extension to adjacent tissues [7,19,25].
On HP examination, the enchondroma has an extensive hyaline cartilaginous matrix, which is occasionally calcified. A unique pattern observed in hand enchondromas is the presence of atypia and an abnormal number of cells, which can be signs of malignancy in other skeletal locations [11,26].
The lesions are generally poorly cellularized, with the exceptions being enchondromatosis and the aforementioned hand localization. It is hard to distinguish enchondroma from grade I chondrosarcoma due to similarities in radiographic and HP findings. Hypercellularity and cytological abnormalities describe enchondroma of the hand, and these are also characteristics of chondrosarcoma. Enchondroma in adults is indicated by a calcified chondroid matrix and chondrocyte necrosis. Chondrosarcoma is more likely to be diagnosed if there is entrapment of pre-existing host bone and mucomyxoid matrix alterations [27,28].
The patient's symptoms and the risk of pathological fracture influence the treatment of a solitary enchondroma [29]. Although there are no specific monitoring criteria, close follow-up every 3–6 months is recommended to establish the lesion's stability, followed by annual radiological evaluation for a period of three years for stable lesions [30]. The main surgical aims are validation of the HP diagnosis and prevention of deformation, pathological fractures and malignancy [19,22,24].
Bone curettage is a widely accepted definitive form of treatment for hand enchondromas. The bone cavity can be filled with allogeneic, autogenous or synthetic bone [17,22]. The first step in treating an enchondroma with a pathological fracture is to immobilize the segment until the fracture heals. Afterwards, the enchondroma can be treated surgically [19,22,24].
In most of our cases (seven out of 10 enchondromas), autologous bone grafting harvested from the iliac crest or tibial diaphysis was required, ensuring bone stability and preventing pathological fractures. Fifth ray amputation was imposed in two of the patients due to severe osseous destruction that precluded adequate tumor resection with simultaneous preservation of segmental function.
Enchondromatosis, or Ollier disease (WHO nomenclature), is characterized by an asymmetric distribution of cartilage lesions that can be exceedingly varied (in terms of size, quantity, location, progression of enchondromas, age of onset and diagnosis, and surgical necessity). Ollier disease is a genetic condition with a prevalence of one in 100,000. Enchondromas of the extremities are frequently noticeable on physical examination as masses embedded within the phalanges, metacarpal and metatarsal bones. Long tubular bones, such as the tibia, femur and/or fibula, are usually affected by enchondromas; flat bones, notably the pelvis, can also be compromised. It is necessary to highlight the uneven distribution of lesions, which can be restricted to one limb or one half of the body. When lesions are spread throughout the entire body, one side is usually more afflicted than the other [31,32].
Multiple enchondromas, soft tissue hemangiomas, and vascular anomalies define Maffucci syndrome [17,33]. Malignant tumors at sites other than bone, such as gliomas, pancreatic carcinoma, or ovarian malignancy, are common findings in patients with Maffucci syndrome [34,35]. Malignant transformation to chondrosarcoma occurs in about 40% of cases in both disorders, with Maffucci syndrome carrying a higher risk than Ollier disease (50% vs 35%). There is no medical therapy for enchondromatosis. Complications (pathological fractures, growth defects, and malignant transformation) may necessitate surgery. The overall prognosis is difficult to predict; early-onset variants, as is usually the case, appear to be more severe [17,19,32,36,37].
Osteochondroma
Osteochondroma represents around 9% of all bone tumors and is the most prevalent benign bone tumor, accounting for 20-50% of benign bone tumors [5]. Osteochondromas can present as solitary or multiple (hereditary) lesions [5,38]. The solitary osteochondroma (85%) generally appears by the age of 20. The findings of our study demonstrated a variable age distribution, leaning toward middle-aged adults. Osteochondromas are uncommon in the hand, although prevalent elsewhere in the body. Because isolated lesions in the hand are uncommon, the patient should be examined for osteochondromatosis. When a solitary lesion is present, it usually affects the proximal phalanx, and the bone is frequently shortened [5,39,40].
The etiology of osteochondroma is considered to be herniation of the growth plate through the periosteal layer, generating an osseous spur with a broad or narrow base. Medullary continuity between the osteochondroma and the underlying parent bone is a defining feature of this condition. On radiographic imaging, the metaphysis is the most common location of osteochondromas. The tumoral cortex is connected to the cortex of the originating bone through the marrow cavity. Single asymptomatic osteochondromas can be followed over time. Excision is recommended for symptomatic solitary osteochondromas or if there is a risk of malignant transformation. Osteochondroma has a favorable prognosis, with just 1% of cases becoming malignant; hereditary osteochondromas have a 2-5% chance of turning malignant [5,19,24].
In our study group, osteochondromas showed a homogeneous distribution in the hand, from the radiocarpal joint to the distal phalanx. All the tumors were surgically excised to relieve clinical symptoms and to provide a definitive diagnosis.
Other types of bone tumors must be considered when establishing the differential diagnosis, such as the periosteal chondroma and chondromyxoid fibroma.
Periosteal chondroma

Periosteal chondroma is a benign cartilaginous tumor originating on the periosteal surface of bone and composed of mature hyaline cartilage. The tumor arises from paraosteal connective tissue, located histologically between the periosteum and the cortical bone, which accounts for its increased frequency at tendinous or ligamentous insertions. All age groups are susceptible, and it occurs twice as frequently in males as in females. This lesion, usually measuring less than 6 cm, can be found in tubular bones regardless of their size [3,41].
Periosteal chondroma shows greater cellularity than enchondroma. It can be distinguished from periosteal chondrosarcoma, as the latter is larger and involves the osseous cortex. Excision remains the main therapeutic approach, preferably complete en bloc removal, to reduce the possibility of recurrence [3].
Chondromyxoid fibroma
Chondromyxoid fibroma is a benign cartilaginous bone tumor, mostly found in long bones, particularly in the lower limb near the knee joint, as well as in the small bones of the foot. Radiographic imaging shows an eccentric, lytic metaphyseal lesion with well-contoured borders, either oval or round. Matrix calcification is rare. Histologically, it is distinguished by a chondroid and fibrous matrix with myxoid content. The treatment options for this lesion range from excision to curettage, with curettage carrying a greater chance of recurrence [3,42].
Conclusions
Although their incidence is lower than at other anatomical sites, bone tumors of the hand comprise a broad spectrum of entities, more frequently benign than malignant. Patients must be thoroughly assessed through both clinical and imaging examinations of the hand. Tissue biopsy and HP examination are the most important investigations for establishing the correct diagnosis. Surgical resection through curettage, simple excision, or en bloc removal are the management options for benign tumors. Reconstructive procedures using bone grafts or substitutes may be needed after surgical treatment of benign bone tumors. In the case of severely deforming, lytic, destructive lesions, more drastic surgical measures can be taken, such as wide resection or even amputation of segments. The surgical aim remains complete tumor removal while preserving stability, motion, and aesthetics, or attempting reconstructive procedures to regain function and form.
MRI biomarkers and neuropsychological assessments of hippocampal and parahippocampal regions affected by ALS: A systematic review
Abstract Background and Objective Amyotrophic lateral sclerosis (ALS) is a progressive motor and extra‐motor neurodegenerative disease. This systematic review aimed to examine MRI biomarkers and neuropsychological assessments of the hippocampal and parahippocampal regions in patients with ALS. Methods A systematic review was conducted in the Scopus and PubMed databases for studies published between January 2000 and July 2023. The inclusion criteria were (1) MRI studies to assess hippocampal and parahippocampal regions in ALS patients, and (2) studies reporting neuropsychological data in patients with ALS. Results A total of 46 studies were included. Structural MRI revealed hippocampal atrophy, especially in ALS‐FTD, involving specific subregions (CA1, dentate gyrus). Disease progression and genetic factors impacted atrophy patterns. Diffusion tensor imaging (DTI) showed increased mean diffusivity (MD), axial diffusivity (AD), radial diffusivity (RD), and decreased fractional anisotropy (FA) in the hippocampal tracts and adjacent regions, indicating loss of neuronal and white matter integrity. Functional MRI (fMRI) revealed reduced functional connectivity (FC) between the hippocampus, parahippocampus, and other regions, suggesting disrupted networks. Perfusion MRI showed hypoperfusion in parahippocampal gyri. Magnetic resonance spectroscopy (MRS) found changes in the hippocampus, indicating neuronal loss. Neuropsychological tests showed associations between poorer memory and hippocampal atrophy or connectivity changes. CA1‐2, dentate gyrus, and fimbria atrophy were correlated with worse memory. Conclusions The hippocampus and the connected regions are involved in ALS. Hippocampal atrophy disrupted connectivity and metabolite changes correlate with cognitive and functional decline. Specific subregions can be particularly affected. The hippocampus is a potential biomarker for disease monitoring and prognosis.
| INTRODUCTION
Amyotrophic lateral sclerosis (ALS) is the most common motor neuron disease (MND) and a progressive motor and extra-motor neurodegenerative disease. 1 Although primarily affecting motor functions, ALS also leads to cognitive and behavioral changes, including memory impairment, executive dysfunction, emotional and learning alterations, and language deficits associated with the dysfunction of specific brain regions. 2,3 Among the regions involved in these functions are the hippocampal and parahippocampal regions, which play a crucial role in memory and learning processes and are affected in ALS. 4 The hippocampal region is divided into subregions, including the dentate gyrus, cornu ammonis (CA), and the subiculum.
The fascia dentata and the hilus are included in the dentate gyrus, while the CA is anatomically and functionally separated into the CA1, CA2, CA3, and CA4 subfields. 6-8 The parahippocampal area includes the entorhinal cortex, the perirhinal cortex, and the parahippocampal gyrus (PhG). 9 Another essential aspect of ALS is the co-occurrence of frontotemporal dementia (FTD) in some patients. 2,10-13 The term ALS-FTD spectrum refers to various phenotypes ranging from pure ALS to pure FTD with multiple levels of motor, cognitive, and behavioral impairment. 14,15 The neuropathological link between ALS and FTD is exemplified by the presence of common protein aggregates, particularly those related to transactive response DNA-binding protein (TARDBP). 16 Additionally, genetic mutations have been identified as significant risk factors for ALS, with chromosome 9 open reading frame 72 (C9orf72), the superoxide dismutase 1 gene (SOD1), and TARDBP representing the most common gene mutations. 17 The C9orf72 hexanucleotide repeat expansion has been associated with both familial and sporadic ALS as well as FTD, accounting for a significant proportion of ALS-FTD cases. 18 Mutations in the SOD1 gene, which codes for the enzyme superoxide dismutase 1, have been related to familial ALS. 19,20 Mutations in the TARDBP gene, which codes for TDP-43, have been detected in sporadic and familial ALS cases. 21,22 These genetic factors contribute to the heterogeneity of clinical presentations, disease progression, and cognitive dysfunction observed in ALS. 23,24 Magnetic resonance imaging (MRI) biomarker alterations provide a helpful window into understanding the progression of ALS. 25,26 MRI is a non-invasive and multiparametric technique that can measure structural and functional changes in the hippocampus and adjacent regions in ALS. 27,28,33-35 MRI and neuropsychological assessments are used to assess the structure and function of the hippocampal and parahippocampal regions. 36,37 The purpose of this work is to provide a comprehensive review of MRI biomarkers and neuropsychological assessments of abnormalities of the hippocampal and parahippocampal region in ALS patients, in order to elucidate the role of hippocampal and parahippocampal damage in the clinical course of the disease, as well as to identify gaps and challenges for future research.
| Search strategy and inclusion and exclusion criteria
This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 38 standards and conducted a thorough search of the Scopus and PubMed databases (Figure 1). This systematic review was not registered in PROSPERO or any other prospective register of systematic reviews. The search strategy focused on identifying articles that studied the association between the hippocampus and adjacent regions, such as the parahippocampus, entorhinal cortex, perirhinal cortex, and parahippocampal cortex, with ALS (Figure 2). The search also included studies using MRI techniques to examine structural and functional alterations in the hippocampus and related structures in ALS patients, such as structural morphometry, fMRI, and DTI. The keywords were input as free text or MeSH phrases depending on the database. The search was limited to English items published between January 1, 2000, and July 1, 2023. For further investigation, we manually examined the reference lists of the collected articles.
Imaging studies were included if they reported MRI biomarker findings for hippocampal and parahippocampal regions and neuropsychological assessments associated with MRI measures of the hippocampus in ALS patients. Animal studies, case reports, reviews, letters, commentaries, book chapters, postmortem studies, and studies not written in English were excluded.
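The search logic described above can be made concrete with a small script. The sketch below, using Biopython's Entrez interface, shows one plausible rendering of such a query; the exact terms, field tags, and the e-mail placeholder are illustrative assumptions, not the authors' verbatim search string.

```python
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address; placeholder

# Illustrative Boolean query combining disease, region, and modality terms
query = (
    '("amyotrophic lateral sclerosis"[MeSH Terms] OR ALS[Title/Abstract]) '
    'AND (hippocamp*[Title/Abstract] OR parahippocamp*[Title/Abstract] '
    'OR "entorhinal cortex"[Title/Abstract] OR "perirhinal cortex"[Title/Abstract]) '
    'AND ("magnetic resonance imaging"[MeSH Terms] OR MRI[Title/Abstract] '
    'OR DTI[Title/Abstract] OR fMRI[Title/Abstract])'
)

# Restrict to the review's window: January 1, 2000 to July 1, 2023
handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2000/01/01", maxdate="2023/07/01", retmax=500)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records; first IDs:", record["IdList"][:5])
```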
| Data extraction and analysis methods
During data extraction, eligible studies that met the inclusion criteria were analyzed, and specific information was collected. This information included the first author and publication year, sample sizes, MRI techniques, device characteristics, and the main MRI findings summarized in Table 1. Additionally, Table 2 was used to gather and summarize the main cognitive and behavioral findings and the neuropsychological tests used in each study. The studies were then classified according to their MRI findings, neurophysiological assessment, and the associations between these two techniques.
The systematic review identified several MRI biomarkers and neurophysiological assessment methods used to examine the hippocampal regions in ALS patients, including conventional, advanced, and analysis-based metrics. Quality assessments were independently reviewed and double-checked to ensure accuracy, and any discrepancies were resolved through discussion. The Cochrane Handbook's predefined quality assessment criteria were used to ensure that only high-quality studies were included, 39 which increased the credibility of the systematic review.
| Overview of results
Our results provide an overview of the results of various studies that utilized different MR neuroimaging techniques to investigate MRI biomarkers and neuropsychological evaluation of hippocampal regions affected by ALS and ALS-FTD compared to healthy controls (HCs) or other control groups. In summary, 46 studies were eligible for additional evaluation. Studies were carried out between 2007 and 2023. The different MR neuroimaging techniques used in these studies include T1-weighted (T1-w) imaging, pseudocontinuous arterial spin labeling (PCASL), MRS, DTI, and resting-state fMRI (rs-fMRI). Most studies employed 3T MRI scanners, with a few using 1.5T and 4.7T devices. Most studies used 8-channel head coils; some used 4-, 12-, 32-, or 64-channel coils, and others did not report this information. The studies covered a wide range of participant populations, including patients with different ALS and FTD subtypes and those with genetic mutations such as C9orf72 expansions (C9+). Longitudinal studies were also conducted, with follow-up time points. In terms of techniques, T1-w was the most commonly employed method, used either alone (n = 19) or in combination with other techniques like DTI (n = 4), PCASL (n = 2), and fMRI (n = 1). DTI was also used frequently (n = 7), followed by fMRI (n = 5). MRS was less commonly used (n = 1). MRI-based biomarker findings in hippocampal and parahippocampal regions are summarized in Table 1.
| Structural MRI findings

Structural MRI studies reported hippocampal volume reductions involving the left and/or the right hippocampus. 45,49,56,62,65,73 In some cases, volume reductions were limited to particular subfields, such as CA4/dentate gyrus, 43,56,70 CA2/CA3, 54,56,71 and CA1. 60 Bede et al. 30 reported that C9 + ALS and C9 − ALS differ in whether hippocampal volume loss is bilateral or unilateral, while Westeneng et al. 65 focused on the amount of volume loss and reported that C9 + ALS patients had greater volume loss in the right hippocampus compared to C9 − ALS. Furthermore, one study reported bilateral hippocampal volume loss in C9 + ALS-FTD and observed subcortical GM atrophy in C9 + ALS-FTD patients, limited to the bilateral thalami, hippocampi, and right accumbens nucleus.
Bilateral hippocampal atrophy in ALS-FTD has been reported, 42,44,67 and the amount of atrophy is greater than in other ALS phenotypes, such as ALS-Plus or ALS. 44,67 One study, conducted by Machts et al., 67 reported that ALS-Plus showed significant bilateral hippocampal atrophy compared to HCs, especially in the head and body of the hippocampus. In another study, shape analysis of subcortical structures revealed progressive local atrophy, including in the hippocampus. 61 The volume reduction in the various parts of the hippocampus differs across disease stages, but the most significant aspect is the more considerable decrease in volume and the shift toward bilateral involvement of the hippocampus in the more advanced stages. 43,51 In line with previous studies, Christidi et al. 54 found that ALS patients with worse memory had a specific pattern of hippocampal atrophy in the left fimbria, both hippocampal tails, the right CA1, the right molecular layer, and the right GC-DG.

FIGURE 2 The targeted and adjacent regions examined in this systematic review (the right hippocampus, right parahippocampal cortex, and right entorhinal cortex, as analyzed using volumetric atlas-based analysis of 3D T1-weighted images with FreeSurfer software).

TABLE 1 Summary of MRI findings in hippocampal and parahippocampal regions.
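Several of the volumetric results above rely on FreeSurfer's hippocampal-subfield segmentation. As a reproducibility aid, the sketch below shows one plausible way to gather per-subject subfield volumes into a table; the output file name follows FreeSurfer 7 conventions (the segmentHA_T1.sh module) but should be treated as an assumption to verify against the installed version, and the paths and subject IDs are placeholders.

```python
from pathlib import Path
import csv

SUBJECTS_DIR = Path("/data/freesurfer_subjects")  # hypothetical SUBJECTS_DIR
# FreeSurfer 7's hippocampal module writes two-column text files
# (subfield name, volume in mm^3); the exact file name varies by version.
VOLUME_FILE = "mri/lh.hippoSfVolumes-T1.v21.txt"  # assumed v21 output name

def read_subfield_volumes(subject: str) -> dict:
    """Parse one subject's left-hippocampus subfield volume file."""
    volumes = {}
    with open(SUBJECTS_DIR / subject / VOLUME_FILE) as fh:
        for line in fh:
            name, value = line.split()
            volumes[name] = float(value)
    return volumes

subjects = ["sub-01", "sub-02"]           # placeholder subject IDs
fields = ["CA1", "GC-ML-DG", "fimbria"]   # labels used by recent FreeSurfer atlases
with open("hippo_subfields.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["subject"] + fields)
    for sub in subjects:
        vols = read_subfield_volumes(sub)
        writer.writerow([sub] + [vols.get(f, "") for f in fields])
```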
Age at symptom onset and genetic factors were also related to hippocampal atrophy. Ferraro et al. 46 found that older age at symptom onset was associated with greater frontotemporal cortical thinning, including of the parahippocampal cortices, while Ishaque et al. 59 reported that shorter survival in ALS patients was related to changes in the hippocampus and other extramotor regions.
TABLE 2 MR neuroimaging and neuropsychological findings in hippocampal and parahippocampal regions.
Finally, further analysis of diffusion data on proton position showed decreased return-to-origin probability (RTOP) in the PhG of ALS patients. 50
| fMRI findings
fMRI data revealed a decrease in FC between the bilateral hippocampus, the bilateral parahippocampal gyri, and the cerebellum in ALS patients compared to HCs. 47 Similarly, another study reported decreased FC in the bilateral hippocampus, bilateral anterior and posterior PhG, and posterior cingulate in ALS patients. 58 Additionally, Ma et al. 48 found a lower d-ReHo in the left rectus gyrus and the left PhG in patients with ALS compared to HCs.
Schulthess et al. 64 observed significantly decreased FC of the medial prefrontal cortex, a major node within the default mode/hippocampal network, in ALS patients compared to HCs.
Furthermore, patterns of increased FC were observed in the analysis of the default mode/hippocampal network in ALS patients.
Increased FC was observed in parahippocampal and parietal areas of the non-task-associated DMN, 72 between the left sensorimotor cortex (SMC) and the right PhG, and between the right SMC and the right PhG. 78 Zhu et al. 69 identified increased ALFF values in the right PhG in the sporadic ALS group.
In two studies using task-based fMRI, one reported significant differences between ALS patients and HCs in response to sad facial expressions, with reduced brain activity observed bilaterally in the hippocampus for the ALS patients. 66 Another reported that novelty-evoked hippocampal activity increased across 3 months in ALS patients, possibly reflecting the build-up of compensatory processes typically observed at the beginning of lesions; motor activity, in contrast, decreased during the same interval. 74
Perfusion MRI discriminated ALS patients from HCs based on CBF in the right hippocampus. 45 On the other hand, hypoperfusion in ALS-FTD-M is limited to the left PhG. 42 Using the MRS technique, Christidi et al. 40 reported several findings in the hippocampal regions affected by ALS. The study observed bilateral hippocampal changes in tNAA, tNAA/tCr, and tCho. Additionally, disease duration was positively associated with right hippocampal tCho and negatively related to right hippocampal Glu/tCr and left hippocampal inositol.
| Neuropsychological test performance and (para)hippocampal associations
Several studies have reported associations between neuropsychological test performance and hippocampal region metrics in patients with ALS. Christidi et al. 40 found that superior memory performance on the ECAS was associated with higher hippocampal tNAA/tCr bilaterally. Similarly, Ahmed et al. 44 reported that hippocampal volume was positively correlated with higher ACE-III total scores, and memory difficulties were negatively correlated with the volume of some areas, including the hippocampus. There are significant negative correlations between episodic memory and the metabolic value of the bilateral hippocampus and left PhG. 63 In addition, negative correlations between delayed recall and metabolic values of the left PhG were reported. 63 Bilateral hippocampal atrophy and anatomical changes were associated with learning, recall, recognition, 52 and memory impairment, 82 respectively. Also, left PhG thinning was associated with poorer learning performance. 52 Increased alexithymia (based on the higher total score and DIF sub-score of the TAS-20) in ALS patients was significantly and negatively correlated with the GMV of the prefrontal cortex, right superior temporal pole, and PhG. 83 In terms of disease progression, a positive correlation between ALSFRS-r and increased FC was reported between the left primary SMC and the right PhG and cerebellum, 78 a negative correlation between ALSFRS-r and higher hippocampal activation was found, 74 and delta ALSFRS negatively correlated with local shape distances in the right hippocampus. 81 Also, higher ALSFRS-r was associated with lower hippocampal tCho and higher tNAA/tCr. 40 Some other studies focused on disease progression. Strong correlations were found between disease progression rate and node degree in the right angular gyrus and hippocampus of ALS-FTD patients; 41 the correlations were negative in the right angular gyrus and positive in the right hippocampus. 41 Another study reported that the ALS progression rate was positively correlated with increased ALFF values in the right PhG. 69 Ultimately, Dieckmann et al. 43 found that decreasing bilateral hippocampal volume was associated with the parameter relative disease aggressiveness (rD50).
Additionally, several studies found correlations between hippocampal atrophy and memory performance in ALS patients. 67,73,84 ALS patients exhibited poor performance on neuropsychological tests (cognitive and executive tests) that correlated with the ALFF values in the PhG. 69 Furthermore, the correlations between neuropsychological test scores (MCST and FAB scores) and MD measures in the hippocampus highlight the role of the hippocampus in cognitive dysfunction in ALS patients. 75
| Hippocampal subfield involvement
Fimbria and HATA were particularly atrophic in the ALS-Low memory performance group, while HATA and CA2/3 were the most affected subfields in the ALS-High memory performance group. 54 The contrast between the neuropsychologically defined ALS-High and ALS-Low groups also revealed significant shape differences in the lateral aspect of the left hippocampus. 54 The CA1-2 hippocampal areas and dentate fascia, as well as the transentorhinal (TE) and entorhinal (EN) regions, were associated with memory dysfunction in ALS patients. 82
| DISCUSSION
The objective of the current study was to examine MRI biomarkers and neuropsychological evaluations of the impact of ALS on the hippocampal and parahippocampal regions (Figure 4).
Hippocampal and parahippocampal regions' involvement in ALS appears to be dynamic, with progressive local atrophy observed in some studies and correlations between disease progression rate and GM loss in specific regions of these areas. 52,70,76 Additionally, C9 + ALS-FTD was associated with more extensive hippocampal atrophy, 57 suggesting a genotype-phenotype relationship. According to recent research, the loss-of-function impact of C9orf72, combined with certain gain-of-function entities, is required to develop a severe FTD/ALS phenotype. 85 TDP-43 is a protein involved in RNA metabolism linked to the development of ALS and FTD. 86 Alzheimer's disease can occasionally result in neuronal death and gliosis in the hippocampus, a kind of TDP-43 pathology known as hippocampal sclerosis. 87 According to research, limbic-predominant age-related TDP-43 encephalopathy (LATE) is associated with a progressive amnestic state that resembles Alzheimer's symptoms. 88 Furthermore, LATE is a newly identified dementia that impairs memory and reasoning, like Alzheimer's disease, but with distinct underlying causes. 88,89 Aberrant TDP-43 protein clusters cause LATE, which is also implicated in other neurological disorders such as ALS and FTD. 88 Hippocampal atrophy in cases with LATE neuropathological change (NC) is more extensive than in patients with pure Alzheimer's disease, with stronger connections between hippocampal atrophy and LATE-NC with hippocampal sclerosis pathology. 90,91 LATE is a recently proposed term for a TDP-43 proteinopathy that mainly affects the medial temporal lobe of older individuals. 92 According to a recent molecular study, the amygdala and hippocampus are vulnerable to TDP-43 disease in elderly ALS patients. 93 As a result, it seems that TDP-43 is linked to hippocampal atrophy in ALS patients. However, further studies on the LATE-NC characteristics in ALS and ALS/FTD patients are required, particularly using MRI.
The peak age of onset for ALS is between 55 and 70 years, with a male predominance. 94 The study highlighted the potential influence of age at symptom onset 46 and genetic factors on hippocampal atrophy in ALS patients. 30,57,65 These findings suggest that different disease mechanisms can underlie the observed atrophy patterns.
Further research is needed to elucidate the relationship between genetic factors, age at onset, and hippocampal involvement in ALS.
Approximately 50% of ALS patients develop cognitive impairment over the course of the disease. 95 Worse memory performance in ALS patients was associated with volume reductions in various hippocampal subregions, 54 highlighting the relationship between hippocampal atrophy and cognitive decline. Some studies have shown that ALS is characterized by global volume loss and local atrophy in the CA1 area of the hippocampus, 54,60 which can serve as a neural correlate for the cognitive and behavioral deficits associated with ALS. The association between hippocampal atrophy and cognitive decline in patients with ALS underscores the importance of evaluating cognitive function in clinical settings, as hippocampal atrophy can help identify patients at risk for cognitive decline or dementia. Hippocampal atrophy in patients with shorter survival suggests it can also have prognostic value in ALS. 59,81-84 The hippocampus is essential in episodic memory, learning, and recall. 96,97 Also, the PhG, which is functionally and anatomically connected to the hippocampus, shows correlations with memory performance. 52,63,82,83 Regarding disease progression, the studies found mixed results. 40,41,43,63,64,74,78,81 Some reported a positive association between ALSFRS-r and hippocampal volume or functional connectivity, while others found a negative correlation. The discrepancies could be due to methodological differences, sample size, disease duration, or other factors. However, the overall findings suggest the involvement of the hippocampal region in ALS progression. The hippocampus could be a biomarker to monitor disease progression and predict prognosis in ALS patients.
The studies that evaluated hippocampal subfields reported that areas like CA1-2, dentate fascia, fimbria, and HATA were particularly affected in ALS patients with memory impairment. 54,82 The lateral aspect of the left hippocampus also showed significant shape differences between ALS patients with high and low memory performance. 54 Furthermore, subfield analysis can provide more information about hippocampal involvement in ALS patients with cognitive and memory dysfunction.
Most studies demonstrated increased MD in ALS patients compared to controls, which can suggest a loss of neuronal integrity in these regions. This finding was reported in seven studies, 55,61,62,75,77-80 and it involved both the hippocampal GM and the WM tracts connected to it, such as the cingulum bundle and the PhG. Additionally, FA and other diffusivity measures (RD and AD) were found to be altered in the hippocampal and parahippocampal regions, 30,41,58,61 further emphasizing the role of microstructural changes in these areas. An increase in FA in these WM tracts can indicate a compensatory mechanism or a selective vulnerability of different fiber populations in these tracts. Compensatory mechanisms in ALS can involve increased glycolysis, relaxation of synaptic inhibitory events, and faster motor unit firing. Demethylation of the D-loop region of mitochondrial DNA has been proposed as a compensatory mechanism for mitochondrial DNA (mtDNA) overexpression in carriers of ALS-linked SOD1 mutations. 98,99 However, the precise processes of these compensatory mechanisms and their influence on the course of ALS remain unknown.
Another common finding was a correlation between FA values and FC measures within the default mode/hippocampal network. 64 FC reflects the temporal synchronization of neural activity between brain regions, and this correlation involved the medial prefrontal cortex, a major node of the network. The correlation between FA and FC in this network can indicate a relationship between the structural and functional integrity of this network, which is involved in cognitive and emotional functions.
Two consistent findings were increases in AD and RD. 30,61 Increased AD in the hippocampus can indicate degeneration of the axons, which could lead to neuronal loss and atrophy in this region. Furthermore, increased RD in the hippocampus can indicate disruption of the myelin sheath around the axons, which could impair signal transmission and synaptic plasticity in this region.
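For orientation, the scalar metrics discussed in this section are standard functions of the diffusion-tensor eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$; the definitions below are the conventional ones rather than formulas taken from any single study cited here:

$$\mathrm{MD}=\bar\lambda=\frac{\lambda_1+\lambda_2+\lambda_3}{3},\qquad \mathrm{AD}=\lambda_1,\qquad \mathrm{RD}=\frac{\lambda_2+\lambda_3}{2},$$

$$\mathrm{FA}=\sqrt{\tfrac{3}{2}}\,\sqrt{\frac{(\lambda_1-\bar\lambda)^2+(\lambda_2-\bar\lambda)^2+(\lambda_3-\bar\lambda)^2}{\lambda_1^2+\lambda_2^2+\lambda_3^2}}.$$

These relations make the reported patterns easier to interpret: increases in AD or RD raise MD, while FA falls as the three eigenvalues become more similar.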
In ALS, the return-to-origin probability (RTOP) of water molecules can be used as a biomarker to assess tissue complexity. RTOP reflects the probability of water molecules returning to their original position after diffusion and is sensitive to tissue complexity. This finding was reported by Chen et al. 50 and involved the PhG. The decrease in RTOP in this region can indicate a reduction in tissue heterogeneity and complexity, which could reflect a loss of cellular structures and organization.
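As background on this metric, in q-space imaging RTOP is defined as the integral of the normalized diffusion signal over q-space, which by the Fourier relationship equals the ensemble-average propagator evaluated at zero net displacement; this textbook definition is given for orientation and is not taken from the cited study:

$$\mathrm{RTOP}=\int_{\mathbb{R}^3}E(\mathbf{q})\,d\mathbf{q}=P(\mathbf{R}=\mathbf{0}),$$

where $E(\mathbf{q})$ is the normalized q-space signal and $P$ the ensemble-average propagator. A lower RTOP thus means water molecules are less likely to remain near their starting position, consistent with the loss of restricting cellular structures described above.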
Based on fMRI findings, changes in FC, ReHo, and ALFF of brain activity in the hippocampal and PhG regions suggest that ALS affects not only motor function but also other cognitive and emotional processes. 47,48,58,64,66,69,72,74,78 The emotional processing differences that Aho-Özhan et al. 66 found suggest that ALS patients can have unique responses to emotional stimuli, which could be related to the observed alterations in hippocampal function and connectivity. Furthermore, the decreased FC in the hippocampal and PhG regions can indicate disrupted neural networks and potential neurodegeneration in ALS patients. 47,48,58 These results suggest that ALS has a broader impact on brain function beyond motor function.
Schulthess et al. 64 provided further evidence supporting alterations in default mode/hippocampal network connectivity in ALS patients, which could also be linked to cognitive and emotional impairments.
Our findings are supported by a recent study that used aberrant multimodal connectivity patterns and found that the regional-node structural-functional connectivity (SC-FC coupling) of limbic network (LN)-related brain regions, such as the hippocampus and PhG, was significantly altered. 100 Interestingly, some studies reported increased FC in certain regions, such as the PhG, 69,72,78 suggesting that the brain may attempt to compensate for dysfunctional networks by recruiting additional areas. This hypothesis is supported by the findings of Stoppel et al., 74 who reported increased novelty-evoked hippocampal activity across 3 months in ALS patients, potentially reflecting compensatory processes. Finally, these findings imply that ALS patients can recruit additional brain areas to compensate for dysfunctional networks, which may manifest as higher FC and activity in specific brain regions.
ASL is a PWI-MRI technique that non-invasively measures CBF in the brain. 101 ASL has been used to study perfusion changes in neurodegenerative disorders. Based on ASL findings, hypoperfusion in motor-onset ALS-FTD was confined to the left PhG, suggesting that this region may be particularly vulnerable in this specific patient group. 42 This finding could have potential implications for understanding the neuropathological processes that underlie motor-onset ALS-FTD and for developing targeted therapeutic interventions. Furthermore, significant discrimination between ALS patients and HCs based on CBF in the right hippocampus indicates that alterations in CBF within the right hippocampus could serve as a potential biomarker for the diagnosis of ALS and the monitoring of disease progression. 45 Other previous ASL studies reported disease severity associated with GM and motor neuron involvement, in line with our findings. 102,103 Kalra 104 reviews the literature on MRS findings in ALS, focusing on the motor and non-motor regions affected by the disease.
He demonstrated that the neurochemical change reflecting neuronal loss or dysfunction, a reduction in NAA, is most significant in the motor cortex and corticospinal tracts (CST). Other neurochemical changes observed include increased myo-inositol (mIns), a putative marker of gliosis.
MRS confirms that the involvement of non-motor regions such as the frontal lobes, thalamus, basal ganglia, and cingulum is consistent with the multi-system nature of MND/ALS. In line with our findings, Christidi et al. 40 found that metabolic alterations in the hippocampal region, specifically tNAA, tNAA/tCr, tCho, Glu/tCr, and inositol, could serve as valuable markers for ALS characterization. Furthermore, certain metabolite associations may be useful for monitoring disease progression and evaluating treatment efficacy.
| LIMITATIONS AND RECOMMENDATIONS
One of the main limitations of this review is the heterogeneity in methodology and patient populations between studies. The studies employed different MRI techniques, scanner strengths, acquisition parameters, and analysis methods, which can introduce variability in the results. The studies also included patients with different ALS subtypes, stages of disease, and genetic mutations, which limits the comparability of findings. Furthermore, some studies had small sample sizes, which can reduce the statistical power to detect differences and associations. Another limitation is the cross-sectional nature of most studies (only four were longitudinal). More longitudinal studies are needed to determine the temporality of changes in the hippocampal and parahippocampal regions relative to clinical changes in ALS. Subfield analysis of the hippocampus can provide valuable insights into the regions affected by ALS and associated with cognitive impairment; further research should aim to determine specific subfields that can serve as biomarkers for the monitoring and prognosis of the disease. An integrated analysis of multimodal imaging, combining structural and functional MRI with other modalities such as fMRI, DTI, MRS, ASL, and other neuroimaging methods, can yield a more comprehensive understanding of how the hippocampus and parahippocampal regions are affected in ALS. More research is also needed on the LATE-NC characteristics in patients with ALS and ALS/FTD, particularly using MRI; studies could evaluate whether LATE contributes to the observed hippocampal atrophy and whether it correlates with cognitive decline in these patients.

The analysis also revealed uneven global representation, with most contributions coming from selected countries. Specifically, Germany (n = 10), China (n = 6), Italy (n = 6), Ireland (n = 5), and the Netherlands (n = 4) collectively accounted for approximately 67% of the included articles. European nations exhibited the highest participation rate, comprising 71.7% (33 of the 46 studies). Prominent contributions came from Germany, Italy, Ireland, the Netherlands, Greece (n = 3), France (n = 3), and the United Kingdom (n = 2). The remaining countries contributed 1-2 articles each, including Japan, Australia, Brazil, Canada, India, and South Korea. Given the geographical concentration observed, we recommend future efforts to improve population heterogeneity through targeted recruitment across underrepresented world regions. Expanding diagnostic research globally will improve the generalizability of systematic reviews and provide a more comprehensive understanding of ALS epidemiology, particularly regarding genetic diversity. Moreover, investigating diverse populations and countries may reveal previously undiscovered disease characteristics. This geographical distribution analysis highlights the need for broader international representation in ALS imaging and neuropsychology research on the hippocampal and parahippocampal regions and related cognitive-behavioral impairments.
| CONCLUSIONS
The hippocampus and connected medial temporal lobe structures are implicated in memory impairment, functional decline, and disease progression in ALS. Hippocampal atrophy, disrupted connectivity, and altered metabolites correlate with poorer cognitive performance, functional measures, and faster disease progression. The findings highlight that specific hippocampal subregions, such as CA1-2, dentate gyrus, and fimbria, can be particularly vulnerable.
Ultimately, the hippocampus shows potential as a biomarker for disease monitoring, prognosis prediction, and treatment response assessment in ALS. Understanding the relationship between genetic factors, age at symptom onset, cognitive profiles, and hippocampal involvement can provide insights into the heterogeneous mechanisms underlying ALS and its clinical manifestations.
FIGURE 1 PRISMA flow diagram depicting article selection and exclusion.
FIGURE 4 Primary MRI and neuropsychological findings of hippocampal and parahippocampal regions in ALS patients.
Prevention for Micro- and Macro-Vascular Complications in Diabetic Patients
Introduction
One of the goals of long-term care for patients with diabetes mellitus (DM) is to prevent the development of micro- and macro-vascular complications (The International Diabetes Federation, 2011). To achieve this purpose, an adequate control of blood pressure (BP) as well as good glycaemic control is crucial (The International Diabetes Federation, 2011). The American Diabetes Association recommended that the BP goal should be lowered to 130/80 mmHg in the daytime clinic setting (The American Diabetes Association, 2002-2011). However, in 4733 patients with type 2 DM at high risk for cardiovascular events followed for a mean of 4.7 years, targeting a systolic casual/clinic BP (CBP) in the daytime of less than 120 mmHg, as compared with less than 140 mmHg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events (ACCORD Study Group, 2010). Further, in 16,893 patient-years of follow-up at 862 sites in 14 countries, tight control of systolic CBP in the daytime among patients with DM and cardiovascular disease, to achieve a systolic CBP of less than 130 mmHg and a diastolic CBP of less than 85 mmHg, was not associated with improved cardiovascular outcomes compared with usual control (Cooper-DeHoff et al, 2010). At present, the reasons for this difference are not clear. Recently, a discrepancy between screening BP by CBP measurement and ambulatory BP by ambulatory blood pressure monitoring (ABPM) has been noted. It has also been shown that in patients with essential hypertension, home BP (HBP) measurement in the morning has a stronger predictive power for mortality than CBP measurement in the daytime (Aihara et al, 1998, Ohkubo et al, 1998, Imai et al, 1999). Accordingly, the difference in results (ACCORD Study Group, 2010, Cooper-DeHoff et al, 2010) may be because BP was not evaluated at midnight or in the morning by ABPM or HBP measurements. To evaluate the usefulness of HBP measurement in the morning in patients with DM, we examined whether BP elevations on awakening in the morning detected by HBP were more predictive than those in the daytime detected by CBP for micro- and macro-vascular complications in patients with type 1 or 2 DM, as observed in patients with essential hypertension (Aihara et al, 1998, Ohkubo et al, 1998, Imai et al, 1999). Our cross-sectional studies have demonstrated that HBP measurements on awakening in the morning offer stronger predictive power for micro- and macro-vascular complications than CBP measurements in the daytime.
Table 1. Characteristics of patients with type 1 diabetes mellitus in a cross-sectional study. Data are means ± SD. The systolic BP (SBP) and diastolic BP (DBP) levels in all patients were measured at the clinic in the daytime and at home on awakening in the morning, respectively. CH, clinic hypertension; CN, clinic normotension; MH, morning hypertension; MN, morning normotension; Cr, creatinine; UERA, urinary excretion rate of albumin. **P<0.01 versus patients with CN; ‡P<0.01 versus patients with MN; †P<0.01 versus patients measured at the clinic in the daytime.

Method

Blood pressure
CBP
CBP levels were measured once, in a clinical setting during the daytime, at each clinic visit. HBP was measured once each morning, in the sitting position within 10 min after awakening, every day. For the CBP levels in the daytime, when patients with DM had a clinic systolic BP ≥130 mmHg and/or a clinic diastolic BP ≥85 mmHg, we classified these patients as having clinic hypertension (CH) according to the criteria of the WHO and International Society of Hypertension guidelines (Guidelines Subcommittee, 1999).

Table 2. Prevalence of micro- and macro-vascular events and medical treatment in the patients shown in Table 1.
HBP
When the HBP levels on awakening in the morning were systolic HBP ≥130 mmHg and/or diastolic HBP ≥85 mmHg, we classified these patients as having morning hypertension (MH). When these values were <130 mmHg systolic and <85 mmHg diastolic, we classified these patients as having clinic normotension (CN) or morning normotension (MN), respectively. All subjects were divided into two groups: with CH or MH and without CH or MH. Finally, we examined whether CBP in the daytime or HBP on awakening in the morning was more predictive of these events.
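To make the cut points above concrete, the following minimal sketch encodes the CH/CN and MH/MN rules; the class, variable names, and example readings are invented for illustration and are not taken from the study data. The example deliberately shows a patient who is normotensive at the clinic but hypertensive in the morning, the masked pattern this design is intended to detect.

```python
from dataclasses import dataclass

THRESH_SYS, THRESH_DIA = 130, 85  # mmHg cut points used in the study

@dataclass
class Patient:
    clinic_sbp: float   # daytime clinic systolic BP
    clinic_dbp: float   # daytime clinic diastolic BP
    morning_sbp: float  # home systolic BP on awakening
    morning_dbp: float  # home diastolic BP on awakening

def classify(sbp: float, dbp: float) -> str:
    """Hypertensive if systolic >= 130 mmHg and/or diastolic >= 85 mmHg."""
    return "hypertension" if sbp >= THRESH_SYS or dbp >= THRESH_DIA else "normotension"

p = Patient(clinic_sbp=128, clinic_dbp=82, morning_sbp=142, morning_dbp=88)
clinic_status = classify(p.clinic_sbp, p.clinic_dbp)     # -> CN ("normotension")
morning_status = classify(p.morning_sbp, p.morning_dbp)  # -> MH ("hypertension")
print(f"clinic: {clinic_status}, morning: {morning_status}")
```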
Micro- and macro-vascular complications
The microvascular complications examined in this study were nephropathy and retinopathy. Occurrence of nephropathy was evaluated every three months in the clinic setting from the beginning of the study, based on the urinary excretion rate of albumin (UERA), whereas occurrence of retinopathy was evaluated at least once every 6 months during the study. The macrovascular complications defined were coronary heart disease (CHD) and cerebrovascular disease (CVD), assessed by clinical presentation. The prevalence of these events at the beginning of the study was confirmed by medical history.
Glycaemic control and other variables
Glycaemic control was evaluated by HbA1c values (JDS: normal range 4.5-5.7%) (Kasezawa et al, 1987). Other variables, including serum concentrations of electrolytes and lipids, were also measured (Kamoi et al, 2002). The albumin concentration in random spot urine was measured by the latex agglutination photometric immunoassay method (Kamoi et al, 2002).
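For readers more familiar with NGSP-standardized HbA1c values, the published conversion from the JDS scale (given here as general background, not as part of the study's methods) is approximately:

$$\mathrm{HbA1c_{NGSP}}(\%) \approx 1.02 \times \mathrm{HbA1c_{JDS}}(\%) + 0.25,$$

so the JDS normal range of 4.5-5.7% corresponds to roughly 4.8-6.1% on the NGSP scale.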
CBP
The CBP level was measured by the patients themselves at the clinic in the daytime, in the left arm after a 5-min rest in a sitting position, using an automatic device based on the cuff-oscillometric method (FT-200; Parama-Tech, Fukuoka, Japan).
HBP
The HBP level was measured at home in the morning, within 10 min after awakening, by the patient or a family member, in the left arm in a sitting position. Semiautomatic devices based on cuff-oscillometric principles that generate a digital display of both systolic and diastolic BP were used. All devices met the criteria set by the Association for the Advancement of Medical Instrumentation. A standard arm cuff was used to measure both CBP and HBP levels.
Variable parameters
Venous samples were collected during each clinic visit and analysed, without fasting, for HbA1c levels and concentrations of total cholesterol (TC), low density lipoprotein cholesterol (LDL), high density lipoprotein cholesterol (HDL), triglyceride (TG), and creatinine. Microalbuminuria and clinical albuminuria were defined as UERA ≥30 and ≥300 μg/mg creatinine, respectively (The American Diabetes Association, 2002).
Data represent means ± SD. Morning and clinic blood pressures were measured at home on awakening in the morning and at the clinic in the daytime, respectively. *Numbers in parentheses represent the percentage of patients of each type among all subjects. CI: confidence interval. †P<0.05 versus patients with normotension; **P<0.01 versus patients measured at the clinic.

To compare the prevalence of micro- and macro-vascular complications in groups with and without hypertension, Yates' continuity-corrected χ2 test with a two-tailed P value was performed and odds ratios were calculated; if the prevalence of events was 0, 0.5 was added to all values before calculating the odds ratio, and 95% CIs were provided. Multiple logistic analyses were used to determine the contribution of the variables to the events. The correlation between HBP and CBP levels was calculated. In addition, receiver operating characteristic (ROC) curves for HBP and CBP with various endpoints were used to examine whether HBP levels in the morning and CBP levels in the daytime behave differently in allowing ascertainment of the true risk, and whether the 130/85 mmHg cut points perform better for HBP levels in the morning than for CBP levels in the daytime.
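As an illustration of the odds-ratio computation with the 0.5 correction described above, the sketch below uses made-up 2x2 counts; it is a minimal sketch of the stated procedure, not a reproduction of the study's analysis.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = events/exposed, b = non-events/exposed,
    c = events/unexposed, d = non-events/unexposed.
    Adds 0.5 to every cell if any cell is zero (Haldane correction),
    then returns the odds ratio with a Woolf logit 95% CI."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: nephropathy among MH versus MN patients
print(odds_ratio_ci(a=33, b=15, c=0, d=52))
```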
Table 7. Multivariate-adjusted odds ratios and 95% CIs of risk factors for nephropathy in patients with type 1 diabetes.
Endpoints and outcome measures
Differences in outcomes for each endpoint (death, and new or worsened micro- and macro-vascular complications) between hypertensive and normotensive patients, classified on the basis of HBP levels in the morning or CBP levels in the daytime, were assessed using survival curves calculated by the Kaplan-Meier method and compared by hazard ratios using the log-rank test. Within the previously defined survey time, the time until censoring or death (or occurrence of the event) was calculated for each endpoint.
Risk factor assessment for outcomes
In the longitudinal study, risk factors related to outcomes determined statistically by the log-rank test were assessed using hazard ratios from a Cox proportional hazards model. For outcomes of microvascular complications, risk factors were determined for new, worsened, or improved events. Omnibus tests were used to determine the appropriateness of the Cox proportional hazards modelling. Confounding factors used in this analysis were the variables associated with MH in the morning or CH in the daytime at baseline and additional therapy for each disease. The analysis was based on the first event of each participant, thereby allowing each participant to enter the Cox proportional hazards models only once. These analyses were performed using GraphPad Prism software (versions 3.02-5.01; GraphPad Software, San Diego, CA, USA), the Statistical Package for the Biosciences (SPBS; Winesteem Institute of Community Medicine, Tokyo, Japan) and Dr. SPSS II for Windows (SPSS Japan, Tokyo, Japan). A two-tailed value of P<0.05 was considered statistically significant.
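A minimal sketch of the Kaplan-Meier, log-rank, and Cox steps described above, written with the open-source lifelines package on synthetic data; the column names and values are invented for illustration, and the study itself used the packages listed above.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Synthetic follow-up data: time in years, event = 1 if the endpoint occurred,
# mh = 1 if sustained morning hypertension at baseline.
df = pd.DataFrame({
    "time":  [6.0, 2.1, 5.5, 3.2, 6.0, 1.4, 4.8, 6.0],
    "event": [0,   1,   1,   1,   0,   1,   0,   0],
    "mh":    [0,   1,   0,   1,   1,   1,   0,   0],
})

# Kaplan-Meier curves per group
kmf = KaplanMeierFitter()
for grp, sub in df.groupby("mh"):
    kmf.fit(sub["time"], event_observed=sub["event"], label=f"MH={grp}")

# Log-rank comparison between MH and MN groups
mh, mn = df[df.mh == 1], df[df.mh == 0]
res = logrank_test(mh["time"], mn["time"],
                   event_observed_A=mh["event"], event_observed_B=mn["event"])
print("log-rank p =", res.p_value)

# Cox proportional hazards model for the MH effect
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratio and p-value
```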
Table 8. Multivariate-adjusted odds ratios and 95% CIs of risk factors for nephropathy in patients with type 2 diabetes.
In a cross-sectional study

Type 1 diabetes mellitus
As shown in Figure 1, in the type 1 diabetic groups with both CH and CN, more kinds of anti-hypertensive medicines were administered after breakfast in the groups with MH than in the groups with MN. There were no significant differences in the prevalence of nephropathy and retinopathy between the two groups with CH and CN. In contrast, the prevalence of nephropathy (8 patients with microalbuminuria and 3 with clinical albuminuria) in the patients with MH was significantly higher than in those with MN (Table 2). The prevalence of proliferative retinopathy in the patients with MH was significantly higher than in those with MN, although there was no significant difference in all types of retinopathy between the two groups. There was no occurrence of CHD or CVD in either group. Specifically, systolic MH made a significant (r = 0.66, P = 0.001) contribution to the occurrence of nephropathy by multiple regression analysis, whereas the occurrence was not related to age, sex, duration of diabetes, BMI, HbA1c, or serum lipid concentrations, or to the use of different methods of insulin therapy and anti-hypertensive drugs. Meanwhile, the duration of diabetes made a significant (r = 0.4, P = 0.001) contribution to the occurrence of retinopathy (Table 7). No close relationships between systolic and diastolic HBP and systolic and diastolic CBP measurements were observed (morning systolic HBP = 0.28 versus systolic CBP = 0.07, P = 0.06; diastolic HBP = 0.25 versus diastolic CBP = 0.14, P = 0.005). The area under the ROC curve (AUC) of morning systolic HBP (0.99 ± 0.01) was significantly higher (P < 0.001) than that of systolic CBP (0.49 ± 0.10) for nephropathy (Figure 2). There was no statistical difference in AUC between them for the other events. For nephropathy, the sensitivities of the 130 mmHg threshold in morning and clinic systolic BP were 1.0 (95% CI 1.0-1.0) and 0.55 (0.23-0.83), respectively, whereas those of the 85 mmHg threshold in morning and clinic diastolic BP were 0.64 (0.31-0.89) and 0.55 (0.23-0.83), respectively. The specificities of the 130 mmHg threshold in morning and clinic systolic BP were 0.95 (0.84-0.99) and 0.48 (0.32-0.64), respectively (Figure 3), whereas those of the 85 mmHg threshold in morning and clinic diastolic BP were 0.14 (0.05-0.29) and 0.29 (0.16-0.45), respectively.
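The ROC comparison above can be reproduced in outline as follows, using scikit-learn for the AUC and simple counts for sensitivity and specificity at the fixed 130 mmHg cut point; the arrays are placeholder values, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder data: 1 = nephropathy, 0 = no nephropathy; scores are
# systolic BP values (mmHg), e.g., from morning HBP or daytime CBP.
y = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
morning_sbp = np.array([148, 139, 151, 118, 124, 127, 121, 144, 126, 119])

auc = roc_auc_score(y, morning_sbp)

# Sensitivity/specificity at the fixed 130 mmHg threshold
pred = (morning_sbp >= 130).astype(int)
tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(f"AUC={auc:.2f}, Se={sensitivity:.2f}, Sp={specificity:.2f}")
```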
Fig. 1. Various kinds of anti-hypertensive medicines in each group with CH or CN and MH or MN in patients with type 1 diabetes mellitus in the cross-sectional study. These anti-hypertensive medicines were administered after breakfast. CH, clinic hypertension; CN, clinic normotension; MH, morning hypertension; MN, morning normotension; ARB, angiotensin II receptor blocker; CCB, calcium channel blocker; ACE inhibitor, angiotensin-converting enzyme inhibitor.
Type 2 diabetes mellitus
As shown in Figure 4, in the type 2 diabetic groups with both CH and CN, more kinds of anti-hypertensive medicines were administered after breakfast in the groups with MH than with MN, as in patients with type 1 diabetes (Table 4). Comparing the characteristics of patients with and without CH, the following trends were noted. The prevalence of CH was four times that of CN. BMI in CH patients was slightly higher than in CN patients. In contrast, serum creatinine concentration and UERA in CH patients were significantly lower than in CN patients (Table 3). No significant differences in other variables were noted between the two groups (Table 4). A total of 48% of CH patients were being treated with anti-hypertensive drugs, compared with 44% of CN patients (Table 4). When we compared the prevalence of diabetic complications in the two groups, there were no significant differences in the prevalence of nephropathy, retinopathy, CHD, or CVD. However, the prevalence of clinical albuminuria in CH patients was lower than in CN patients (Table 2).
The CH patients were further divided into two groups: with and without MH (Table 3). BMI in MH patients was slightly higher than in MN patients. Systolic BP, diastolic BP, and UERA in MH patients were significantly higher than in MN patients. There were no significant differences in other variables between the two groups. Nephropathy was observed in 69% of MH patients, whereas there was no nephropathy in MN patients. The prevalences of retinopathy and CVD in MH patients were also significantly higher than in MN patients (Table 4). The prevalence of treatment with anti-hypertensive drugs was higher in MH than in MN patients (Table 4).
The CN patients were also divided into two groups: with and without MH (Table 3). The mean age, BMI, systolic HBP, and diastolic HBP in MH patients were significantly higher than in MN patients. Serum creatinine concentration and UERA were also higher in MH than in MN patients. No significant differences in other variables were shown between the two groups. However, the prevalence of nephropathy in MH patients was high (91%), whereas no nephropathy was observed in MN patients. The prevalences of retinopathy, CHD, and CVD in MH patients were also higher than in MN patients. More MH patients than MN patients were being treated with anti-hypertensive drugs (Table 4).
Comparing the characteristics of the two patient groups with and without MH, the following trends were noted (Table 3). Age, sex distribution, HbA1c levels, and lipid concentrations did not differ between the two groups. However, systolic and diastolic BP based on HBP in the morning were significantly higher in MH patients than in MN patients. Serum creatinine concentration and UERA were also higher in MH patients (Table 3). The prevalences of treatment with anti-hypertensive and anti-diabetic drugs were 3 and 1.5 times higher, respectively, in MH patients compared with MN patients (Table 4).
The prevalence of nephropathy in MH patients was 75%, whereas no nephropathy was noted in MN patients. The prevalence of retinopathy in MH patients was twice that found in MN patients, although there was no difference in the prevalences of non-proliferative and proliferative retinopathies between the two groups (Table 4). The prevalences of CHD and CVD in MH patients were four and six times higher, respectively, than in MN patients. Specifically, by multiple logistic analysis the prevalence of nephropathy in all subjects was highly associated (P < 0.001) with systolic MH, but not with age, sex, HbA1c, serum lipid concentrations other than LDL, or use of anti-diabetic drugs. However, the prevalence of nephropathy was associated with BMI, LDL concentration, serum creatinine concentration, and use of anti-hypertensive drugs, and was negatively associated with diastolic CBP (Table 8).
3.2 In a longitudinal study
3.2.1 Baseline characteristics of patients
In patients with type 2 DM, the baseline characteristics of patients classified as hypertensive or normotensive on the basis of HBP and CBP are shown in Table 5. Based on HBP, the prevalence of MH was double that of MN. Mean age, duration of disease, BMI, systolic and diastolic BP by both HBP and CBP, serum creatinine concentration, and UAER were also significantly higher with MH than with MN (Table 5). In MH patients, morning diastolic HBP was significantly lower than diastolic CBP. In MN patients, morning systolic and diastolic HBP were significantly lower than systolic and diastolic CBP, respectively. No significant differences were noted in other laboratory variables between the two groups. The prevalence of microvascular complications was significantly higher with MH than with MN, and the prevalence of nephropathy was about 9-fold higher with MH than with MN (Table 6), although no patient was on dialysis. The prevalence of macrovascular complications was also significantly higher with MH than with MN. Most patients showing MH received anti-hypertensive and anti-diabetic drugs. The prevalence of patients receiving anti-hypertensive drugs was 6-fold higher for MH than for MN. The prevalence of patients receiving anti-diabetic drugs appeared 1.5-fold higher with MH than with MN, although no significant difference was evident. The prevalences of use of anti-dyslipidemia and anti-hypercoagulation agents were also significantly higher with MH than with MN (Table 6), but these prevalences were lower than those for anti-hypertensive and anti-diabetic drugs.
On the basis of CBP, most baseline characteristics of CH and CN patients were similar to those of MH and MN patients, respectively (Table 5). However, no significant differences in mean age, duration of disease, serum creatinine concentration, or UAER, or in the prevalences of retinopathy and macrovascular complications, were noted between these patients. Meanwhile, mean LDL was significantly lower with CH than with CN. In patients with CN, morning systolic and diastolic HBP were significantly higher than systolic and diastolic CBP, respectively (Table 5).
Endpoints and outcome measures
Nine cumulative deaths (2.3%) were observed over the 6 years (Figure 6). All occurred in patients with sustained MH, whereas none occurred in patients with sustained MN (Table 9). The hazard ratio was significantly (5-fold) higher with sustained MH than with sustained MN.
Table 9. Primary and secondary outcomes in a longitudinal study.
Regarding the causes of death, in the MH group 3 patients died of cancer (brain, breast, or pancreas), 3 of CVD, 2 of CHD, and 1 of an unknown cause, while in the CH group 3 patients died of cancer (brain, breast, or pancreas), 1 of CVD, 1 of CHD, and 1 of an unknown cause, and in the CN group 2 patients died of CVD and 1 of CHD (Kamoi et al, 2010). As shown in Figure 7, new or worsened events of microvascular complications were observed in 72 patients (18%), including 36 with retinopathy and 59 with nephropathy, while improved events of microvascular complications were seen in 102 patients (25.5%), including 27 with retinopathy and 79 with nephropathy. New or worsened events of macrovascular complications were seen in 21 patients (5.3%), including 8 patients with myocardial infarction, 3 with heart failure, 1 with atrial fibrillation, 7 with cerebral infarction, and 2 with cerebral bleeding. These new or worsened cumulative events were also significantly more frequent with sustained MH than with sustained MN, whereas no significant difference was seen between sustained CH and CN.
In terms of macrovascular complications, cumulative events also occurred significantly more often with sustained CH (Table 9). Regarding the outcomes of the individual groups with normotension, white coat hypertension, masked hypertension, or sustained hypertension, we were unable to analyse their outcomes statistically as a cohort study, because the number of patients in each group was small.
Risk factor assessment for outcomes
In terms of death, macrovascular complications at baseline represented a significant risk factor for patients with sustained MH, as determined by a Cox proportional hazards model that was significantly (P < 0.001) appropriate according to Omnibus tests. Serum creatinine and UAER levels at baseline also represented significant confounding factors. However, as the hazard ratios for these parameters were 0.01 and 1.00, respectively, they represented negative or small associated risks (Table 10).
In terms of microvascular complications, MH at baseline on the basis of HBP was a significant risk factor related to new, worsened, or improved events, according to a Cox proportional hazards model that was significantly (P < 0.001) appropriate by Omnibus tests. Additional therapies for hypertension and DM also represented significant confounding factors, but displayed negative associations with this outcome (Table 10).
In terms of macrovascular complications, HbA1c and the presence of micro- and macrovascular complications at baseline in patients with sustained MH were significantly associated with this outcome, as determined by a Cox proportional hazards model that was significantly (P < 0.001) appropriate by Omnibus tests. Additional therapy for hypertension represented a significant negative confounding factor (Table 10). In patients with sustained CH on the basis of CBP, a Cox proportional hazards model was found to be significantly (P < 0.001) appropriate by Omnibus tests, and additional therapies for hypercoagulation and other conditions represented a significant confounding factor (P = 0.025; hazard ratio 5.71). No other significant risk factors were identified other than serum TG level at baseline (P = 0.025), for which the hazard ratio was 1.00. Additional therapy for hypertension also represented a significant confounding factor, but displayed a negative association with this outcome (P = 0.001; hazard ratio 0.10) (data not shown in the table).
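The Cox proportional hazards modelling referred to in this section can be sketched in the same spirit. The following is a minimal illustration using the lifelines package on simulated data, with hypothetical variable names; the package's overall model test plays the role of the Omnibus test reported above.

```python
# Minimal sketch of a Cox proportional hazards analysis of time to an outcome
# event, using the lifelines package. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "morning_ht": rng.integers(0, 2, n),               # sustained MH at baseline
    "hba1c": rng.normal(7.5, 1.2, n),
    "macrovascular_baseline": rng.integers(0, 2, n),
    "added_antihypertensive": rng.integers(0, 2, n),
})
df["time"] = rng.exponential(6.0, n).clip(0.1, 6.0)    # follow-up in years, max 6
df["event"] = rng.binomial(1, 0.15 + 0.25 * df["morning_ht"])  # 1 = event observed

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                         # hazard ratios, 95% CIs, P-values
print(cph.log_likelihood_ratio_test())      # overall model fit test
```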
Blood pressure (BP)
Over the past 100 years, BP has been measured in the clinic during the daytime, which has been called casual or clinic BP (CBP). As hypertension research and treatment methodologies have substantially advanced on the basis of CBP, the gold standard of BP measurement for practice and research has been CBP (Imai et al, 2004); that is, the evaluation of BP has been based on the value of CBP. However, alternatives to CBP were proposed soon after the introduction of BP measurement. Recent studies have noted a discrepancy between CBP and ABPM. It has also been shown that in patients with essential hypertension, HBP measurement in the morning has a stronger predictive power for mortality than daytime CBP measurement (Aihara et al, 1998, Ohkubo et al, 1998, Imai et al, 1999). Further, BP measurement using ABPM or HBP has revealed white coat hypertension, in which daytime CBP is high whereas daytime HBP is normal, and masked hypertension, in which daytime BP is normal but BP at night or in the morning is high (Pickering, 1992); the latter, which worsens outcomes for complications in patients, has attracted the attention of many researchers.
Table 10. Risk factors for each outcome of events in patients with sustained morning hypertension on the basis of HBP
In 1896, Riva-Rocci developed an indirect arm-cuff method for BP measurement, and in 1905, Korotkoff introduced the use of auscultation. Since then, the method of BP measurement with sphygmomanometers has remained essentially unchanged for 100 years. Nowadays, the oscillometric method is taking the place of the sphygmomanometer, as it offers a more favorable measurement environment.
ABPM by automated BP measurements
However, alternative methods to CBP have been proposed in the form of HBP. Many methods have been developed worldwide to evaluate HBP. One of these is ABPM, and there are many reports of results using ABPM over several decades (Imai et al, 2004). Clinically, Sokolow and colleagues developed the initial semiautomatic ABPM device in 1962. It consisted of a BP cuff that was manually inflated by the subject and a tape recorder on which the Korotkoff sounds were recorded. Now, ABPM provides automated measurements of arterial BP for 24 hours or longer. Most modern ABPM monitors use the oscillometric technique. The monitors are programmed to take readings at desired intervals, usually every 15 to 30 minutes, throughout the day and night. At the end of the recording, the readings are downloaded onto a computer. ABPM demonstrated the variability of BP during the daytime and its relatively poor correlation with CBP, and first showed that ABPM correlates more closely than CBP with the damage to the heart and arteries caused by hypertension. These studies also showed that ABPM improves the ability to predict risk. Nowadays, ABPM devices are reliable and quiet, and can be programmed to be fully automatic and worn with little discomfort. Recordings by ABPM demonstrate the well-known diurnal pattern of BP, with higher pressures in the afternoon, lower readings in the evening, a nadir during sleep, and the well-reported early morning surge. BP measurement by ABPM revealed the existence of white coat hypertension and masked hypertension (Pickering, 1992). Thus, ABPM provides BP information in relation to time.
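As an illustration of how such recordings are summarized, the following minimal sketch computes daytime and night-time means and the nocturnal decline from simulated half-hourly readings. The fixed 22:00-06:00 sleep window and the conventional 10% "dipper" cut-off are assumptions made for the example, not values taken from this chapter.

```python
# Minimal sketch: summarizing a 24-h ABPM recording into daytime and
# night-time means and the nocturnal decline, with a conventional 10%
# "dipper" classification. Readings are simulated half-hourly values.
from datetime import datetime, timedelta
import statistics

start = datetime(2011, 1, 1, 0, 0)
readings = []
for i in range(48):                                   # one reading every 30 min
    t = start + timedelta(minutes=30 * i)
    is_night = t.hour < 6 or t.hour >= 22             # assumed sleep window
    readings.append((t, (112 if is_night else 132) + (i % 5)))

day = [bp for t, bp in readings if not (t.hour < 6 or t.hour >= 22)]
night = [bp for t, bp in readings if t.hour < 6 or t.hour >= 22]
day_mean, night_mean = statistics.mean(day), statistics.mean(night)
dip = 100 * (day_mean - night_mean) / day_mean        # nocturnal decline, %
status = "dipper" if dip >= 10 else ("inverted dipper" if dip < 0 else "non-dipper")
print(f"day {day_mean:.1f} mmHg, night {night_mean:.1f} mmHg, "
      f"decline {dip:.1f}% -> {status}")
```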
HBP by self-BP measurement
Another method is self-BP measurement. In 1940, Ayman and Goldshine first reported the concept of "self-BP measurement" and demonstrated an apparent difference between CBP and self-measured BP. Initially, self-measurement was done using the auscultation method. In the 1970s, an electric device based on the microphone method was marketed, but it was not widely distributed because of its high price, mechanical difficulties, and the issue of the auscultation gap. The explosive distribution of HBP measurement devices since the 1980s has been mediated by the development of devices based on the cuff-oscillometric principle. The basic algorithm of this principle has been improved by procedures that correctly approximate the characteristic changes during phase I and phase V Korotkoff sounds, owing to electronic development. At present, the accuracy of an automatic device is determined by comparison with the auscultation method, and no other standard method is currently available for this purpose. Three types of electrical devices for HBP measurement are commercially available: the arm-cuff device, the wrist-cuff device, and the finger-cuff device. Ten million such electrical devices are produced each year in the Far East (including Japan, Korea, Taiwan, and China), representing 85% of world production; of those, 35% are wrist-cuff devices. Finger-cuff devices once commanded a considerable portion of the market share owing to their convenience and ease of use, but manufacturers have now decreased production of finger-cuff devices owing to technical problems and have greatly increased production of wrist-cuff devices. In Japan, wrist-cuff devices hold 30% of the market share. Wrist-cuff devices are much easier to handle and more portable, but they have serious shortcomings (Imai et al, 2004). The reference level for BP measurement is the right atrium. When the measurement site is 10 cm below (above) the right atrium, systolic and diastolic BP are measured 7 mmHg higher (lower) than at the level of the right atrium.
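The roughly 7 mmHg shift per 10 cm quoted above follows directly from the hydrostatic pressure of the blood column, ΔP = ρgh. A quick check using an approximate blood density of 1050 kg/m³ reproduces the figure.

```python
# Quick check of the hydrostatic offset quoted above: holding the cuff 10 cm
# below the right atrium adds about rho*g*h of pressure. A blood density of
# 1050 kg/m^3 is an approximation.
rho, g, h = 1050.0, 9.81, 0.10      # kg/m^3, m/s^2, m
dp_pascal = rho * g * h             # hydrostatic pressure difference in Pa
dp_mmhg = dp_pascal / 133.322       # 1 mmHg = 133.322 Pa
print(f"{dp_mmhg:.1f} mmHg")        # ~7.7 mmHg, consistent with ~7 mmHg per 10 cm
```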
Even after appropriate correction of the hydrostatic pressure, another issue remains concerning the anatomy of the wrist. At the wrist, the radial and ulnar arteries are surrounded by the radial bone, the ulnar bone, and several long tendons, including the palmaris longus tendon. Therefore, even a sufficient amount of cuff pressure over the arterial area does not necessarily occlude these arteries completely. As a result, wrist-cuff devices sometimes provide erroneous readings, especially for systolic BP. Therefore, arm-cuff devices based on the cuff-oscillometric method are recommended for HBP measurement (Imai et al, 2004), as endorsed by the guidelines of many hypertension societies (the European Society of Hypertension and the European Society of Cardiology, 2007; the American Heart Association, American Society of Hypertension, and Preventive Cardiovascular Nurses Association, 2008; the Japanese Society of Hypertension, 2009).
Differences between ABPM by automated-BP measurement and HBP by self-BP measurement
It is very important to understand the characteristics that distinguish ABPM from HBP. ABPM is measured under varying psychological and physiological conditions by automated BP measurement, while HBP is measured under relatively stable conditions by self-BP measurement. Although both ABPM and HBP can evaluate BP at night, the estimation of short-term BP by HBP is inadequate. Meanwhile, the estimation of long-term BP over more than 24 hours, including drug effects, by ABPM is inadequate and occasionally insufficient owing to regression to the mean, and the reproducibility of ABPM is poor because BP is measured under varying psychological and physiological conditions (Imai et al, 2004). Further, the costs of ABPM by automated BP measurement, including devices, are higher than those of HBP by self-BP measurement. However, we have confirmed that ABPM is sometimes better for checking the accuracy of the HBP measurement method.
Variation of BP by HBP in healthy subjects
There is no difference in BP by HBP, using an Omron device, between the left and right arms, either over a day or over one month. Further, there is no difference in left arm-cuff HBP between summer and winter.
Variation of BP by HBP in diabetic patients
As mentioned above, the variation of BP by HBP upon awakening measured with a wrist cuff in a diabetic patient is sometimes greater than that measured with an arm cuff. In diabetic patients, BP by HBP upon awakening is also sometimes increased by stress over a month. In these patients, a day-by-day coefficient of variation (CV) of HBP upon awakening of more than 10% over a month leads to more occurrences of micro- and macrovascular complications than a CV of less than 10% (Figure 5), as shown in a diabetic patient who had an acute myocardial infarction. These findings were demonstrated by the Ohasama study (Imai et al, 2004). Short-term BP variability is a risk factor for cardiovascular disease (Imai et al, 2004). Although short-term information is available from ABPM, information on day-by-day variability is obtained only with home BP measurement. The Ohasama study demonstrated that day-by-day variability reflects the risk of cardiovascular disease. Thus, home BP measurement can now replace ABPM (Imai et al, 2004).
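The day-by-day CV used here is simply the standard deviation of a month of morning readings divided by their mean. A minimal sketch with fabricated readings illustrates the computation and the 10% cut-off discussed above.

```python
# Day-by-day coefficient of variation (CV) of morning systolic HBP over a
# month: CV = 100 * SD / mean. The readings below are fabricated; the 10%
# cut-off is the one discussed in the text.
import statistics

morning_sbp = [128, 134, 141, 126, 152, 130, 138, 124, 147, 133,
               129, 155, 127, 140, 136, 125, 149, 131, 137, 128,
               144, 132, 126, 150, 135, 129, 142, 127, 139, 133]   # 30 mornings

cv = 100 * statistics.stdev(morning_sbp) / statistics.mean(morning_sbp)
print(f"day-by-day CV = {cv:.1f}% -> "
      f"{'more than 10%' if cv > 10 else 'less than 10%'}")
```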
Threshold
Subjects from the Ohasama population aged 40 years and over were followed up for an average of 10.6 years. When the relationship between BP level and stroke incidence was analysed by a Cox regression model adjusted for age, sex, and drug treatment, the study suggested that HBP has higher predictive ability than CBP. Linear regression analysis deduced that 140/90 mmHg for CBP corresponds to 125/80 mmHg for HBP, suggesting that the normative value of HBP is less than 125/80 mmHg (Imai et al, 2004). In our study, we used as thresholds a single HBP value upon awakening in the morning and the CBP value in the daytime, both at <130/85 mmHg, based on the criteria for CH in the 1999 WHO-International Society of Hypertension guidelines (Guidelines Subcommittee, 1999). These studies showed that the threshold of 130 mmHg for systolic BP upon awakening in the morning is significant for micro- and macrovascular complications (Figure 3), while diastolic HBP upon awakening continues to serve as a reference in assessing BP for the complications.
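Applying the common <130/85 mmHg threshold to both the single awakening HBP value and the daytime CBP value, as done in this study, reduces the classification to two simple comparisons. The following minimal sketch illustrates the logic with hypothetical readings.

```python
# Classification into MH/MN and CH/CN with the shared <130/85 mmHg threshold
# (the 1999 WHO-ISH criterion used in this study): a reading is hypertensive
# if systolic >= 130 mmHg or diastolic >= 85 mmHg. Example values are made up.
SBP_CUT, DBP_CUT = 130, 85

def hypertensive(systolic, diastolic):
    return systolic >= SBP_CUT or diastolic >= DBP_CUT

def classify(morning_bp, clinic_bp):
    morning = "MH" if hypertensive(*morning_bp) else "MN"
    clinic = "CH" if hypertensive(*clinic_bp) else "CN"
    return f"{morning}/{clinic}"

# High upon awakening but normal in the clinic (masked morning hypertension):
print(classify(morning_bp=(142, 88), clinic_bp=(124, 78)))   # -> MH/CN
```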
Fig. 5. Relationship between variation of BP by HBP measured upon awakening in the morning and vascular complications in diabetic patients, comparing CVs of less than 10% and more than 10% over a month. In diabetic patients, a CV of more than 10% was associated with more complications than a CV of less than 10%.
Recently, all guidelines have recommended a threshold for HBP 5-10 mmHg lower than that for CBP (Mancia et al, 2007, Pickering et al, 2008, Ogihara et al, 2009). The guidelines indicate that the target HBP goal for treatment is <130-135/85 mmHg in the morning (Mancia et al, 2007), and <135/85 mmHg, or <130/80 mmHg in high-risk patients, in the morning (Pickering et al, 2008). The Japanese Society of Hypertension defined the threshold of controlled BP as <135/85 mmHg for HBP in the morning (Ogihara et al, 2009).
4.2 In a cross-sectional study
4.2.1 In type 1 diabetes mellitus
As shown in Figure 1, in the type 1 diabetic groups with CH or CN, the anti-hypertensive medicines in the groups with MH were greater in number than in the groups with MN. The prevalence of nephropathy in patients with MH was significantly higher than in those without MH, even when they had CN (Table 2). In contrast, nephropathy was not observed in those without MH, even when they had CH. Specifically, nephropathy, including clinical albuminuria, was observed in patients with systolic MH but not in patients without MH. Analysis by ROC curves also indicates that home BP in the morning has stronger predictive power than clinic BP, especially for nephropathy (Figure 2). The cut point of 130 mmHg for morning systolic BP has higher sensitivity and higher specificity than that for clinic systolic BP (Figure 3). This finding indicates that nephropathy in type 1 diabetic patients may be strongly related to morning home BP rather than clinic BP (Kamoi et al, 2003). The reason may be explained by several factors, such as white coat hypertension, non-dipper hypertension, and the morning surge. In particular, an increase in nocturnal BP, as detected by ABPM, in type 1 diabetes is related to the development of microalbuminuria (Moore et al, 1992, Lurbe et al, 2003). These phenomena are thought to be caused by many neuroendocrine and haematological factors, especially autonomic neuropathy (Spallone et al, 1993, Lafferty et al, 2000, Torbjornsdotter et al, 2001). Although we did not measure 24-h ambulatory BP, the greater range in the relation between morning home BP and clinic BP may be partially explained by true and white coat hypertension, reverse-dipping hypertension, and the effects of treatment with anti-hypertensive drugs. In contrast, the prevalence of retinopathy in type 1 diabetic patients was not related to BP, including morning home BP, although the degree of retinopathy was strengthened by MH. The duration of diabetes contributed significantly to retinopathy. These findings support the hypothesis that sustained long-term hyperglycaemia is the strongest predictor of developing retinopathy and that high morning home BP accelerates retinopathy as well as nephropathy.
4.2.2 In type 2 diabetes mellitus
In the type 2 diabetic patients who were regularly treated with diet and exercise or with medications for hyperglycaemia and hypertension, we found that one half of CH patients had MN, whereas two thirds of CN patients had MH. The prevalences of nephropathy, retinopathy, CHD, and CVD in patients with MH were significantly higher than in patients without MH, even when they had CN. In contrast, the prevalence of these vascular disturbances was significantly lower in patients without MH than in patients with MH, even when they had CH. Specifically, nephropathy, including clinical albuminuria, was observed in patients with systolic MH but not in any patients without MH. The difference is not related to age, sex, BMI, HbA1c, serum lipid concentrations, or use of anti-diabetic and anti-hypertensive drugs. The findings of the present cross-sectional study indicate that micro- and macrovascular complications in type 2 diabetic patients may be strongly related to HBP in the morning rather than to CBP.
The reason underlying the relation of high morning HBP, rather than high CBP, to the vascular complications is not clearly determined by this study. However, several possibilities can be postulated. First, type 2 diabetic patients have a high prevalence of increased CBP but normal morning HBP (white coat hypertension) (Burgess et al, 1991, Puig et al, 1995). White coat hypertension seems to carry a low risk of vascular complications (Pickering, 1996, Nielson et al, 1997). Second, O'Brien et al (O'Brien et al, 1988) and Imai et al (Imai et al, 1990) reported that the nocturnal decline in BP in patients with essential hypertension is often diminished (non-dipper hypertension) and sometimes inverts to become a nocturnal elevation (inverted dipper hypertension). Non-dipper hypertension, particularly inverted dipper hypertension, accelerates vascular disturbances (Shimada et al, 1990, Okubo et al, 1997), including microalbuminuria (Opsahl et al, 1988). Many studies have reported that type 2 diabetic patients have non-dipper hypertension (Forgari et al, 1994, Spalone et al, 1993, Farmer et al, 1998, Sturrock et al, 2000, White, 2001, Aronson, 2001). Therefore, it seems that a blunted nocturnal decline and/or inverted dipper hypertension may cause micro- and macrovascular complications in type 2 diabetic patients. Third, a morning surge in BP may be related to these events. A number of reports indicate that the early morning surge in BP acts as a trigger for vascular events (White, 2001, Aronson, 2001), and most diabetic patients have the morning surge (Aronson, 2001). These phenomena in diabetic patients are considered to be caused by many neuroendocrine and haematological factors, including autonomic neuropathy, which may result in glomerular hyperfiltration, hypercoagulability, and hypofibrinolysis, promoting micro- and macrovascular disturbances. In fact, a high prevalence of these phenomena was observed in MH but not in MN. In addition, the severity of MH in CN patients tended to be greater than in CH patients. Moreover, the relation between MBP and CBP levels showed a greater range, indicating true and white coat hypertension, and the MBP level in some patients was higher than the corresponding CBP level, indicating that reverse-dipping hypertension might occur, although we did not measure 24-h ambulatory BP. It is hypothesized that treatment with anti-hypertensive drugs reduced daytime BP but did not restore the blunted nocturnal decline, did not decrease nocturnal hypertension, and could not attenuate the morning surge in BP (Spalone et al, 1993, Imai et al, 1999). The greater range in the relation between MBP and CBP, and the negative association between events of nephropathy and clinic DBP, may be partially explained by the effect of treatment with anti-hypertensive drugs, as hypothesized above.
Analysis by ROC curves also indicates that HBP has stronger predictive power than CBP, especially for nephropathy. The cut points of 130/85 mmHg have higher sensitivity for the morning measurement than for the clinic measurement (Figure 2), although the specificity of the 130 mmHg systolic cut point was lower for the morning measurement than for the clinic measurement. Accordingly, measurement of HBP in the morning is a useful method for detecting these phenomena, as indicated by the Ohasama study (Imai et al, 1990-2004, Okubo et al, 1995-1998), and high HBP levels upon awakening in the morning in type 2 diabetic patients may be related to micro- and macrovascular complications of diabetes.
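The ROC comparison described here can be sketched as follows. The code uses simulated data (not the study's) in which the morning measurement is deliberately generated to be more discriminative, and reads off sensitivity and specificity at the 130 mmHg cut point.

```python
# Sketch of the ROC comparison of morning versus clinic systolic BP as
# predictors of nephropathy, on simulated data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
n = 400
nephropathy = rng.integers(0, 2, n)
morning_sbp = rng.normal(125 + 15 * nephropathy, 12, n)   # more separable
clinic_sbp = rng.normal(128 + 6 * nephropathy, 12, n)     # less separable

for name, sbp in [("morning", morning_sbp), ("clinic", clinic_sbp)]:
    fpr, tpr, thresholds = roc_curve(nephropathy, sbp)
    i = np.argmin(np.abs(thresholds - 130))               # nearest cut to 130 mmHg
    print(f"{name}: AUC = {roc_auc_score(nephropathy, sbp):.2f}, "
          f"at ~130 mmHg sensitivity = {tpr[i]:.2f}, "
          f"specificity = {1 - fpr[i]:.2f}")
```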
All findings indicate that high BP levels upon awakening in the morning, obtained by self-measurement in type 2 diabetic patients, should be treated as hypertension.
4.3 In a longitudinal study
4.3.1 General
We analysed the influence of HBP upon awakening in the morning and of CBP in the daytime on the outcomes of events, including death, microvascular complications such as nephropathy and retinopathy, and macrovascular complications such as CHD and CVD, using data obtained over 6 years in a prospective, longitudinal study of type 2 diabetic patients. To clarify whether HBP or CBP provides the stronger predictive power for these outcomes, the 400 patients were classified as with or without hypertension based on HBP and CBP measurements at baseline, because, although the cross-sectional studies had demonstrated that HBP measurements upon awakening in the morning offer stronger predictive power for micro- and macrovascular complications in patients with type 1 and 2 DM than CBP measurements in the daytime (Kamoi et al, 2002-2003), MH may also be caused by micro- and macrovascular complications. All subjects were Japanese patients with type 2 diabetes. Subject characteristics were broadly similar to those described previously (Kamoi et al, 2002), except that patients with CH showed a higher prevalence of nephropathy than patients with CN.
Recently, all guidelines have recommended a threshold for HBP 5-10 mmHg lower than that for CBP (Mancia et al, 2007, Pickering et al, 2008, Ogihara et al, 2009), as mentioned above. In this study, the use of the same threshold for both methods, based on the criteria for CH in the 1999 WHO-International Society of Hypertension guidelines (WHO, 1999), resulted in MN patients being defined by a higher threshold and in more severe MH patients being selected on the basis of HBP. Nevertheless, the cumulative event of death was observed in sustained MH patients, but not in sustained MN patients. The United Kingdom Prospective Diabetes Study (UKPDS) reported a cumulative incidence of death of 12.4% (597 of 4801 patients with type 2 diabetes) over ten years (Adler et al, 2000); in the present study, the incidence was 2.3% over 6 years. Although the reason why the incidence in this study is lower than in UKPDS is unclear, the hazard ratio was significantly (5-fold) higher in sustained MH patients than in sustained MN patients, while no significant difference was seen between sustained CH and CN patients. In addition, cumulative events of new or worsened microvascular complications were significantly (2-fold) higher in sustained MH patients than in sustained MN patients, while no significant difference was seen between sustained CH and CN patients. The incidence of these events was about 50% higher in MH patients than in MN patients (19.2% vs. 14.9%), while the hazard ratio indicates that the risk of an event in MH patients was about twice the risk in MN patients; this may be explained by the follow-up time in MH patients being much shorter than that in MN patients. Furthermore, UKPDS reported a cumulative incidence of CHD of 12.5% (600 of 4801 patients with type 2 diabetes) over ten years; in the present study, the incidence was 5.3% over 6 years. Again, although the reason why the incidence in this study is lower than in UKPDS (Adler et al, 2000) is unclear, cumulative events of new or worsened macrovascular complications were significantly higher in sustained MH patients than in sustained MN patients, and significantly higher in sustained CH patients than in sustained CN patients. The present results indicate that cumulative events of death and of new or worsened micro- and macrovascular complications are more strongly related to sustained MH, although sustained CH is also related to them. In terms of death among sustained MH patients, the finding that the presence of macrovascular complications at baseline was a significant risk factor indicates that sustained MH may be a trigger for death among patients with macrovascular complications. For the event of new or worsened microvascular complications, the fact that MH at baseline was the only associated risk factor indicates that sustained MH also represents a strong contributor to new or worsened microvascular complications; the finding that additional therapy for hypertension suppressed the occurrence of new or worsened microvascular complications supports this view. For the event of new or worsened macrovascular complications, the identification of glycaemic control and the presence of micro- and macrovascular complications at baseline among patients with sustained MH as associated risk factors indicates that sustained MH, along with glycaemic control and the presence of micro- and macrovascular complications (American Diabetes Association, 2009), is an important risk factor. The finding that age, sex, serum creatinine, LDL, and proteinuria were not risk factors supports this view. Moreover, additional therapy for hypertension improved or prevented the macrovascular events, supporting this idea. Meanwhile, the finding that sustained CH was related to the macrovascular events is consistent with a previous report (Adler et al, 2000). Accordingly, not only sustained CH but also sustained MH is related to new or worsened macrovascular events. All findings indicate that events of death and of new or worsened micro- and macrovascular complications in type 2 diabetic patients are strongly related to sustained MH, irrespective of sustained CH, as demonstrated in the cross-sectional studies (Kamoi et al, 2002-2003) and in the Ohasama study (Okubo et al, 1998), and support the view that HBP has strong prognostic value, which appears to be superior to that of conventional CBP measurement. The different results of the ACCORD study and of Cooper-DeHoff et al may be clarified by evaluating BP with HBP measurement upon awakening.
Limitations of this study
A limitation of the longitudinal study was that the numbers of patients participating and of events occurring over the 6 years were heterogeneous and small, so we were unable to survey outcomes and compare differences among baseline groups of patients with MH, MN, CH, and CN as a cohort study. Further, there were no evening measurements, and no 24-hour BP monitoring, for comparison. Instead, we classified the 400 patients into those with or without hypertension based on HBP and CBP measurements and compared differences in cumulative events between sustained hypertensive and normotensive patients in each group. These patient classifications obviously overlapped. Accordingly, the censoring date depends on whether hypertension and normotension are defined according to CBP or HBP, and the same censoring time was not used in the two analyses. Furthermore, for ethical reasons, most patients received treatment with various anti-hypertensive agents and other medications during follow-up. Therefore, we were unable to examine outcomes without changing treatments from baseline over the 6 years of the study, or whether these drugs influenced the outcomes of events. At baseline, 49% of the subjects received anti-hypertensive treatment. Anti-hypertensive drugs are most likely prescribed on the basis of CBP. Therefore, one may argue that it is not appropriate to classify patients taking anti-hypertensive drugs and having a normal BP as normotensive, and that the untreated CBP in most of these patients would probably be in the hypertensive range. This may introduce a bias into the comparison between CBP and HBP. In particular, it would be clinically more informative to evaluate the prognostic value of both white coat hypertension and masked hypertension based on CBP and HBP among subjects with diabetes. However, we were unable to survey these outcomes statistically as a cohort study, because the number of patients in each category was small. One may also point out that the prognostic values of CBP and HBP should be assessed not only as categorical data but also as continuous variables. Analysis using continuous variables may be more informative, but in this study BP as a continuous variable showed high fluctuation and the numbers of participants were small; accordingly, as such a statistical analysis is complex, we did not undertake it in this study. By meta-analysis, compared with clinic BP monitoring alone, daytime systolic BP by home BP monitoring has the potential to overcome therapeutic inertia and lead to a small but significant reduction in systolic and diastolic BP. Hypertension control with home BP monitoring can be enhanced further when accompanied by plans to monitor and treat elevated BP, although there is no systematic review of morning BP by home BP measurement (Agarwal et al, 2011).
A mechanism underlying hypertension upon awakening related to vascular complications in patients with diabetes mellitus
As shown in Figure 8, when subjects awaken from sleep, their parasympathetic activity changes to sympathetic activity. Such changes upon awakening bring the day's greatest increases in activation of the renin-angiotensin-aldosterone-vasopressin system, the coagulation system, and oxidative stress, and the greatest decreases in activation of plasminogen activator inhibitor and the fibrinolytic system. These alterations are accompanied by the day's greatest constriction of the blood vessels, owing to the most decreased endothelial function (Figure 9) (Otto et al, 2004). In this state, hypertension may lead to vascular injury, resulting in vascular disturbances. Most patients with diabetes have hypertension when BP is measured by HBP upon awakening as well as by CBP in the daytime.
Fig. 9. Change in diameter of the brachial artery (%) upon awakening, after waking, and before sleeping, in the supine position, in healthy subjects evaluated by flow-mediated echography.
A reason why we measure BP upon awakening by HBP
More occurrences of CVD and CHD were reported in the morning and in the evening in Japanese people in 1994 (Sato et al, 1994), although the reason was unclear. In 2002, Stergiou et al showed that the incidence of vascular complications in patients with hypertension upon awakening from siesta in Greece was greater than in patients with normotension upon awakening (Stergiou et al, 2002) (Figure 10). Our previous studies demonstrated that the secretion of hormones related to BP is higher in the upright position than in the recumbent position (Kamoi et al, 1988). These findings indicate that the differences may be related to differences in parasympathetic and sympathetic nerve activity. Further, the increased hormone levels decline immediately afterwards and are biased by various factors, including the subject's own activities or assistance from others. In fact, morning HBP decreases immediately after awakening, and the second or third HBP measurement is lower than the first. Therefore, we chose a single HBP measurement taken first upon awakening, rather than BP at other points in the day, although some consider a single measurement too strict, since many researchers take the mean of several BP measurements, and most patients wish to pass urine after awakening. When patients cannot wait, I recommend a single BP measurement after passing urine. Most patients awaken in the early morning, but some who work at midnight awaken in the late morning; therefore, I recommend that patients measure BP upon awakening, whenever that occurs in the day. The method is simple and accurate for assessing HBP.
Usefulness of BP by HBP in a disaster
First, Kario et al, using ABPM, observed more occurrences of CVD and CHD in patients with MH, which increased over the several days after the Hanshin-Awaji earthquake of 1995 (Kario et al, 2002). The mechanism may involve sympathetic nerve activity, a view supported by the effect of administering α-blockers. They proposed that if people show MH by HBP or ABPM, administration of α-blockers is recommended, and this is needed for a few weeks after the disaster. To evaluate MH in the morning, BP must be measured by an HBP device or by ABPM. However, in Japan nowadays, people in public refuge houses have their BP measured in the daytime, but not upon awakening in the morning.
Our experience in the 2004 Mid-Niigata Prefecture Earthquake (Kamoi et al, 2006) was the same as that of Kario et al. Patients who measured HBP upon awakening in their own houses showed increased HBP within a few weeks after the earthquake, and patients who had suppressed HBP upon awakening by taking anti-hypertensive medicines for MH before the earthquake had no vascular complications (Figure 12), whereas people who did not measure HBP had many events of CHD, CVD, or dialysis during the 6 months after the earthquake (Kamoi et al, 2006). These findings suggest that it is important to control MH as well as CH during a disaster to prevent vascular complications, particularly nephropathy. However, our study showed that only one third of patients measured their HBP within three months after the shock. Although the reasons why they did not measure their HBP were not clarified by the study, it is known that some patients lost their HBP measurement equipment, some had their equipment destroyed, and some suffered from anxiety, in particular sleep disturbance, as a result of the devastation caused by the strong earthquake. In the public refuge houses, all patients had BP measured in the daytime, as in the report by Kario et al, but not upon awakening in the morning. Therefore, we strongly recommend developing a procedure for BP measurement upon awakening in the morning during a disaster, in the public refuge houses as well as in patients' homes, and educating individuals about appropriate adaptation mechanisms following a disaster, such as taking special care of themselves during the initial three months. Appropriate information about morning hypertension should be provided to all affected people using all possible means, including the mass media, to decrease the potential for adverse consequences.
Treatment methods for hypertension upon awakening, by HBP, in patients with diabetes mellitus
First, restriction of salt intake in diabetic patients is necessary to achieve better BP by HBP upon awakening, as in patients with daytime hypertension. Because hyperglycaemia causes increased urinary excretion of glucose via the convoluted tubule, reabsorption of sodium chloride from the convoluted tubule into the blood is increased. Hence, volume expansion of the blood and activation of the sympathetic nerves occur, leading to MH in the morning as well as CH in the daytime. Therefore, restriction of salt intake (less than 7.0 g/day) is useful for controlling MH. Second, however, this treatment alone is not effective in most diabetic patients. Some patients have MN on admission to hospital but MH at home; their sympathetic activity probably increases in daily life, producing MH upon awakening but CN in the daytime, as masked hypertension. Further, some patients have orthostatic hypotension due to nerve disturbances, showing hypotension by CBP in the daytime but MH by HBP upon awakening in the morning. In such patients, switching to bedtime administration of α-blockers is effective (Figure 11) (Kamoi et al, 2006). Third, when patients have CH in the daytime and MH upon awakening, administration of long-acting anti-hypertensive medicines after breakfast, as a conventional treatment for hypertension, is useful. Sometimes such administration is not effective in patients with MH; in that case, switching to bedtime administration is effective (Kamoi et al, 2006). Many studies on these administration methods are in progress worldwide. Fourth, we encounter many patients in whom, despite such treatments, therapy has not been effective for MH; in such diabetic patients, a disturbed biological clock may be involved (Figure 8). As it is difficult to treat such patients, the treatment methods remain incompletely resolved. In any case, controlling high BP upon awakening by the various methods prevents micro- and macrovascular complications.
Fig. 12. Effect of an earthquake (magnitude 7.0) on HBP and CBP upon awakening in a type 2 diabetic patient over 6 months. Increased BP upon awakening in the morning owing to the devastation of the earthquake continued for several months, whereas CBP was unchanged, even though the patient had received anti-hypertensive medicines before the earthquake occurred.
Conclusion
In conclusion, elevations of blood pressure on self-measurement upon awakening in the morning, as well as on clinic blood pressure measurement in the daytime, in type 1 and 2 diabetic patients are strongly related to microvascular complications, especially nephropathy, and control of morning hypertension may prevent the development of micro- and macrovascular complications in patients with diabetes mellitus.
Data are numbers. Odds ratios for the CH and CN groups were calculated. MDI; multiple daily insulin injections, CSII; continuous subcutaneous insulin infusion, CI; confidence interval.
Fig. 2. ROC analysis for nephropathy in patients with type 1 and 2 diabetes mellitus.
Fig. 3. Threshold of systolic HBP upon awakening for the prevalence of micro- and macrovascular events in patients with type 1 and 2 diabetes mellitus. The vertical arrow indicates the threshold value of systolic HBP.
Fig. 4. Various kinds of anti-hypertensive medicines in each group with CH or CN and MH or MN in patients with type 2 diabetes mellitus in a cross-sectional study. CH; clinic hypertension, CN; clinic normotension, MH; morning hypertension, MN; morning normotension, ARB; angiotensin II receptor blocker, CCB; calcium channel blocker, ACE inhibitor; angiotensin converting enzyme inhibitor
Fig. 6. Event-free survival curve of primary endpoints in patients with type 2 diabetes in a longitudinal study. Normo; MN and CN, mHT; MH, oHT; CH, HT; MH and CH
Fig. 7. New or worsened and improved events of micro- and macrovascular complications in patients with type 2 diabetes in a longitudinal study.
Fig. 10. Relationship between systolic blood pressure upon awakening from siesta in Greece and cerebrovascular disease (CVD).
Fig. 11. Effect of α-blocker administration at bedtime on nephropathy in type 2 diabetic patients. The patients received doxazosin at bedtime for 3 years for MH.
Table 3. Characteristics of patients with type 2 diabetes mellitus in a cross-sectional study
Table 4. Prevalence of micro- and macrovascular events and medical treatment in the patients shown in Table 3
Table 6. Prevalence of micro- and macrovascular events and medical treatment in the patients shown in Table 5
2.3 Statistical analysis
2.3.1 Baseline
All values are presented as means ± SD. Mean values were compared using the chi-square test or the unpaired Student's t test.